Jekyll2024-03-04T11:30:03+00:00/feed.xmlopenSUSE Kubic - RetiredRetired Kubernetes distribution & container-related technologies formerly built by the openSUSE communityKubic Project Wound Down2022-06-10T10:00:00+00:002022-06-10T10:00:00+00:00/blog/kubic-retired<h2 id="announcement">Announcement</h2>
<p>As <a href="https://lists.opensuse.org/archives/list/kubic@lists.opensuse.org/thread/23ODJTP4PLGC3HWFQNA2MD4ETFSNW4KV/">previously discussed</a> on the Kubic Project mailing lists, the Kubic Project is now officially wound down.</p>
<p>Kubic is no longer available for download, and will no longer be maintained.</p>
<p>openSUSE MicroOS, once an offshoot of the Kubic Project, will now take a more prominent role for those of us contributing who previously needed to split our attention between the two projects.</p>
<p>Users wishing to run Kubernetes workloads atop an openSUSE base are recommended to install <a href="https://microos.opensuse.org">openSUSE MicroOS</a> and then install <a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/#options-for-installation-with-script">k3s</a>.</p>
<p>Users who prefer the Kubernetes RPM packages and containers formerly offered by the Kubic Project may continue to use them, as they will be maintained by a new community maintainer going forward. Please understand this effort is entirely voluntary and on a ‘best effort’ basis. It is not expected to be as polished an experience as was previously offered under Kubic. Exact details regarding any installation/migration/upgrade steps using those packages will be communicated via the Kubic mailing list and the openSUSE wiki.</p>
<p>All of the ‘non-kubernetes’ official openSUSE containers that were jointly maintained by the MicroOS and Kubic Project teams will continue to be maintained and supported for MicroOS. You can get them, as always, directly from <a href="https://registry.opensuse.org/cgi-bin/cooverview?srch_term=project%3D%5EopenSUSE%3AContainers%3A+container%3Dopensuse%5C%2F%2F*">registry.opensuse.org</a>.</p>
<p>Thanks for everyone’s contributions over the years, and we look forward to seeing where we can take MicroOS with this more focused approach.</p>
<p><strong>The Kubic/MicroOS Team</strong></p>Richard BrownAnnouncementSwitching from wicked to NetworkManager2022-05-05T15:10:00+00:002022-05-05T15:10:00+00:00/blog/NetworkManager-Wicked<h2 id="intro">Intro</h2>
<p>NetworkManager has already been used for a long time by the majority of openSUSE
Tumbleweed installations, except for server setups, MicroOS and Kubic.
But more and more users requested NetworkManager for these flavours as well, since
wicked is missing some functionality (like 5G modem support) and there are k8s
network plugins which only support NetworkManager. And since MicroOS Desktop
was already using NetworkManager, it was a logical choice to switch completely.
So openSUSE MicroOS and openSUSE Kubic have been using NetworkManager as the default
instead of wicked for quite some time now.</p>
<h2 id="configuration-files">Configuration files</h2>
<p>As of today there are no plans to automatically switch a system from wicked to
NetworkManager. The reason is: depending on the configuration, it may be as
easy as just replacing wicked with NetworkManager and everything will continue
to work, or, in the worst case, everything needs to be re-created from scratch for
NetworkManager. There is no feature parity between these tools, so an automatic
conversion may not work in all cases.</p>
<p>The <code class="language-plaintext highlighter-rouge">/etc/sysconfig/network/ifcfg-*</code> files should be compatible, but it’s not
clear whether there are corner cases where they are incompatible. A migration
should be pretty easy in this case. But if a native wicked XML configuration
is in use, the configuration has to be migrated manually.</p>
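<p>Before migrating, it can help to check which configuration style a system uses. A minimal sketch (the helper function <code class="language-plaintext highlighter-rouge">check_network_config</code> is hypothetical; the path passed in the demo call is the standard sysconfig location on openSUSE):</p>

```shell
# Rough check of which wicked configuration style is present.
check_network_config() {
  sysconfdir="$1"    # normally /etc/sysconfig/network
  if ls "$sysconfdir"/ifcfg-* >/dev/null 2>&1; then
    echo "ifcfg"                 # sysconfig-style files, migration is simple
  else
    echo "none-or-native-xml"    # also check /etc/wicked/ for native XML
  fi
}
check_network_config /etc/sysconfig/network
```

<p>If the function reports <code class="language-plaintext highlighter-rouge">ifcfg</code>, the simple migration below should be enough; otherwise expect to re-create the configuration for NetworkManager by hand.</p>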
<h2 id="migration">Migration</h2>
<p>If the network configuration got created during installation or if only
<code class="language-plaintext highlighter-rouge">ifcfg-*</code> files are used, the switch to NetworkManager should be very
simple. Just replace wicked with NetworkManager:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># transactional-update shell
-> zypper in --no-recommends NetworkManager
-> rpm -e wicked wicked-service
-> systemctl enable --now NetworkManager
-> exit
# reboot
</code></pre></div></div>Thorsten KukukIntroMicroOS Remote Attestation with TPM and Keylime2021-11-08T12:20:00+00:002021-11-08T12:20:00+00:00/blog/MicroOS-Keylime-TPM<h2 id="introduction">Introduction</h2>
<p>During 2021 we have started to focus more on security for
<a href="https://microos.opensuse.org/">MicroOS</a>. By default MicroOS is a
fairly secure distribution: during development all the changes are
publicly reviewed, fixes (including CVEs) are integrated first (or at
the same time) in Tumbleweed, we have a read-only root filesystem and a tool
to recover old snapshots, and periodically the security team audits
some of the new components. Also, the move from AppArmor to SELinux
should help to standardize the security management.</p>
<p>But we really want to raise the bar where possible. For example,
we are starting to think about how to enable
<a href="https://en.opensuse.org/SDB:Ima_evm">IMA/EVM</a> properly in the
distribution, or what alternatives we have for <a href="https://0pointer.net/blog/unlocking-luks2-volumes-with-tpm2-fido2-pkcs11-security-hardware-on-systemd-248.html">full disk encryption
supported by a
TPM</a>.
There is also some evaluation of
<a href="https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/verity.html">dm-verity</a>
inside the new <a href="https://github.com/thkukuk/tiu">Transactional Image
Update</a> installer.</p>
<p>Another area where we are making progress in MicroOS is how to measure the
health of our systems, detect remotely when an unauthorized change has
been made (remote attestation), and act on it globally and as
fast as possible.</p>
<h3 id="tpm-as-a-root-of-trust">TPM as a root of trust</h3>
<p>Today all our devices (laptops, desktops, servers, phones or tablets)
include a cryptoprocessor known as a TPM (from the initials Trusted
Platform Module). Sometimes it is inside the CPU, but it can also be found
soldered onto the motherboard or implemented as software in the
firmware. Those co-processors are really cheap (and sadly slow), but
very useful when we want to design software that requires a hardware
based root of trust.</p>
<p>For example, imagine that we want to encrypt the disk but we do not
want to be asked for the password at the beginning of the boot. We
need some trusted component that can provide the password very
early in the boot process, and that the system can - somehow -
validate is the real deal and not some other agent trying to
impersonate it. A TPM provides mechanisms to do this validation, and,
if everything goes OK, to unseal the password to the kernel so it can
decrypt the disk.</p>
<p>This same role is required for many other operations, like accessing
a VPN that requires validating the combination of machine and
user. Because of how TPMs are designed, they can generate keys
that we can verify come from this specific TPM and no other.
This property is valuable to open access to the internal network,
for example.</p>
<p>Another activity where a root of trust is required is when we need to
validate the health of our systems via <em>measured boot</em>.</p>
<h3 id="measured-boot-and-remote-attestation">Measured Boot and remote attestation</h3>
<p>The general mechanism goes like this: during the boot process, a very
early piece of the firmware takes care of initializing some hardware
components and setting some clocks, and when it is done, before
delegating the execution to the next stage (maybe some early stage in
the UEFI firmware), it will load this next stage into memory,
calculate a hash of it (like SHA256), and communicate this
information to some piece of hardware that we trust (the TPM in this
case). This component can now delegate the boot process to this
second stage, which will do the same operation when it needs to move to
the third stage (load, measure and communicate the measurement to the
TPM), and this goes on until Grub2 enters into action and loads the
kernel.</p>
<p>If we do this we effectively have a measurement of every step in the
bootloader chain, from the earliest stage deep in the firmware until
(and including) the kernel. This process is known as “measured boot”.
And the TPM has a record of all those measurements!</p>
<p>Actually I am lying (well, sort of, as we will see a bit later).</p>
<p>TPMs are cheap and do not have a lot of space inside, so the TPM by
itself cannot hold the full record of measurements. Inside the TPM we
have some registers known as platform configuration registers (PCRs)
that have one feature: we can read them, but we cannot directly write
to them. To change the value of a register we have an operation known
as <em>extension</em>, which can be viewed as taking the current value of the
register, appending the value of the hash that we want to
extend it with (in this case the measurement done), and calculating the
hash of the resulting string (again, with something like SHA256). This newly
calculated value is the one that will be stored in the PCR instead.</p>
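<p>The <em>extend</em> operation can be simulated with standard command-line tools. A simplified sketch (the stage names are made up, and for simplicity we hash the hex strings, whereas a real TPM hashes the raw byte values):</p>

```shell
# Simplified simulation of extending one SHA-256 PCR:
# new_pcr = SHA256(old_pcr || measurement)
pcr=$(printf '%064d' 0)     # the PCR starts as 0x00..0 after reset
for stage in firmware bootloader kernel; do
  # fake "measurement": the hash of the stage name
  m=$(printf '%s' "$stage" | sha256sum | cut -d' ' -f1)
  # extend: hash the concatenation of the old value and the measurement
  pcr=$(printf '%s%s' "$pcr" "$m" | sha256sum | cut -d' ' -f1)
done
echo "$pcr"   # reproducible only by replaying every measurement in order
```

<p>Changing any single measurement, or the order of the measurements, produces a completely different final PCR value, which is exactly the property the attestation relies on.</p>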
<p>This <em>extend</em> operation means that the current value of the PCR
depends on all its previous values and on the values (measurements)
used before. So in order to replicate a value we need to know the
correct measurement of each stage of the boot chain (including the
initial value of the PCR after reset, which is usually 0x00..0).</p>
<p>The goal here is that once everything is booted, the user can request those
PCR values from the TPM via another operation named <em>quote</em>. This
operation returns a report signed by a key that only the TPM knows,
and that I can validate to see if, indeed, it comes from my TPM or not.
I can later compare the PCR values with the ones that I expect for my
current version of UEFI, Grub2 and the kernel. If they match I know for
sure that the boot chain has not been tampered with, and if they do not match
… well … we have been hacked (or we have an unauthorized update
somewhere that we need to check).</p>
<p>To be honest, comparing raw PCR values is hard. If we do not know all
the measurements of our boot chain, they are impossible to predict.</p>
<p>This is why for each measurement, from the UEFI until the kernel,
besides extending a PCR register in the TPM we are also storing in
memory a log of all those measurements. We register (in normal
memory) some extra bits of information used during the measurement,
like the PCR number and the value used for the extension. We can also
see in the log some signatures found (and expected if we are using
secure boot), text data (for example the kernel command line used in
Grub2), etc.</p>
<p>This log will be passed between stages until it reaches the kernel,
which will make it available to user space via the security file
system. This log is known as the <em>event log</em>.</p>
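<p>On a booted system the kernel exposes this event log via securityfs. A sketch of how to look at it (the path is for the first TPM; reading it typically requires root, and <code class="language-plaintext highlighter-rouge">tpm2_eventlog</code> comes with the tpm2-tools packages):</p>

```shell
# Location of the binary TPM event log for the first TPM (tpm0):
log=/sys/kernel/security/tpm0/binary_bios_measurements
if [ -r "$log" ]; then
  tpm2_eventlog "$log" | head -n 20   # pretty-print the first events
else
  echo "event log not readable (no TPM, or not running as root)"
fi
```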
<p>Another point here is that we can now ask, remotely from a different
machine, for the current values of the PCRs via a quote to the TPM, and for
the full content of the event log. With this information we can
attest remotely that the system has not been compromised during the
boot process. This is, of course, known as <em>remote attestation</em>.</p>
<h3 id="keylime">Keylime</h3>
<p>If we have a device with a TPM we can go to the BIOS / UEFI boot menu
and activate it. After that, every boot of this system will be
measured as described here, and we can do the attestation ourselves by
requesting a <em>quote</em> from the TPM (via the <code class="language-plaintext highlighter-rouge">tpm2.0-tools</code> package in MicroOS)
and comparing the values with the <em>event log</em>.</p>
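<p>For example, the current PCR values can be read like this (a sketch; <code class="language-plaintext highlighter-rouge">tpm2_pcrread</code> is part of tpm2-tools and needs access to the TPM device, here assumed to be <code class="language-plaintext highlighter-rouge">/dev/tpmrm0</code>):</p>

```shell
# Read the SHA-256 PCR bank, if a TPM 2.0 resource manager is present:
if [ -e /dev/tpmrm0 ] && command -v tpm2_pcrread >/dev/null 2>&1; then
  tpm2_pcrread sha256
else
  echo "no TPM 2.0 device or tpm2-tools not installed"
fi
```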
<p>But this cannot scale properly; we need a tool to help us do
this automatically.</p>
<p><a href="https://keylime.dev/">Keylime</a> is an open source project designed to
do exactly what we need: remote attestation of our devices, using a
TPM as the root of trust for all the measurements.</p>
<p>In Keylime we have several services available. The <em>agent</em> is the
service that needs to be installed on all of the nodes that we want to
monitor. This service is responsible for collecting some data (TPM
quotes, event log, IMA logs, etc) on demand.</p>
<p>This information is requested by another service known as the <em>verifier</em>,
which will validate this data based on some user-provided information.</p>
<p>For example, it will check that the PCR values are the expected ones.
It can also inspect the event log using a user-provided policy (as
Python code) that will search for certain signatures, look at the Grub
menu used for booting and compare some of the measurements that it
knows of (like the kernel one, for example).</p>
<p>Also, if IMA is enabled in the remote node, it can validate the hashes
of all the programs and services that we are running!</p>
<p>With Keylime we can deploy secrets to our nodes (like certificates or
keys), and we can execute user-defined actions when we detect an
unauthorized change in our nodes.</p>
<p>For example, if Keylime detects the execution of a program with a
different IMA hash than the one expected, we can execute a program
that will try to isolate this node from its peers, and will revoke
any access to the shared resources in the network, like databases.</p>
<p>The good news is that Keylime has been integrated in MicroOS via two
new system roles in YaST. During the installation we can now see two
new roles: one should be used for the nodes that we want to monitor
(<em>agent</em> role), and the other for the node that will take care of
collecting the information from the agents, doing the verification and
triggering the revocation actions when required (<em>verifier</em> role).</p>
<p>Those roles make it very easy to start doing remote attestation in
MicroOS today, and the process is fully documented.</p>
<h3 id="more-information">More information!</h3>
<p>We documented all this information (and more) in the <a href="https://en.opensuse.org/Portal:MicroOS/RemoteAttestation">MicroOS
portal</a>.
There you can find a more technical description, including how to
configure the <em>agent</em> services so that they can find the
<em>verifier</em>, how to enable IMA in the nodes and prepare a white list of
hashes, and how to write programs that can act when an intrusion has
been detected.</p>
<p>There are also some more details about the TPM and how to check if it
is present in the system and recognized by MicroOS.</p>
<p>Also, we recently had the SUSE Labs 2021 conference, and all the videos
have been
<a href="https://www.youtube.com/playlist?list=PL4ibkKyj5eYTDzuV0aYfUrFuXPC9GAmrs">published</a>,
including a talk about <a href="https://www.youtube.com/watch?v=6F2mxG4YRKg&list=PL4ibkKyj5eYTDzuV0aYfUrFuXPC9GAmrs&index=18">Keylime and TPM in the context of
MicroOS</a>
that you can check out. In the proceedings of this conference there is
also a
<a href="https://github.com/aplanas/keylime-suselabs/blob/main/keylime.pdf">paper</a>
that can be useful to complement this topic!</p>Alberto PlanasIntroductionKubic with Kubernetes 1.22.1 released2021-09-06T14:50:00+00:002021-09-06T14:50:00+00:00/blog/k8s-122<h2 id="announcement">Announcement</h2>
<p>The Kubic Project is proud to announce that snapshot 20210901 has been released containing Kubernetes 1.22.1.</p>
<p>Release Notes are available <a href="https://kubernetes.io/docs/setup/release/notes/#changes">HERE</a>.</p>
<h2 id="upgrade-steps">Upgrade Steps</h2>
<p>All newly deployed Kubic clusters will automatically be Kubernetes 1.22.1 from this point.</p>
<p>For existing clusters, please follow our new documentation on our wiki <a href="https://en.opensuse.org/Kubic:Upgrading_kubeadm_clusters">HERE</a>.</p>
<p>Thanks and have a lot of fun!</p>
<p><strong>The Kubic Team</strong></p>Richard BrownAnnouncementKubic with Kubernetes 1.21.0 released2021-04-28T14:50:00+00:002021-04-28T14:50:00+00:00/blog/k8s-121<h2 id="announcement">Announcement</h2>
<p>The Kubic Project is proud to announce that Snapshot 20210426 has been released containing Kubernetes 1.21.0.</p>
<p>Release Notes are available <a href="https://kubernetes.io/docs/setup/release/notes/#changes">HERE</a>.</p>
<h2 id="upgrade-steps">Upgrade Steps</h2>
<p>All newly deployed Kubic clusters will automatically be Kubernetes 1.21.0 from this point.</p>
<p>For existing clusters, please follow our new documentation on our wiki <a href="https://en.opensuse.org/Kubic:Upgrading_kubeadm_clusters">HERE</a>.</p>
<p>Thanks and have a lot of fun!</p>
<p><strong>The Kubic Team</strong></p>Richard BrownAnnouncementUsing Rancher and RKE with MicroOS and Kubic2021-02-08T12:20:00+00:002021-02-08T12:20:00+00:00/blog/MicroOS-Kubic-Rancher-RKE<h2 id="intro">Intro</h2>
<p>Since <a href="https://www.suse.com/en-en/news/suse-completes-rancher-acquisition/">SUSE acquired Rancher Labs</a>,
it’s time to explain how to run Rancher on MicroOS and how to import a Kubic cluster.</p>
<p>I used Rancher 2.5.5 for this; newer versions may have different requirements.</p>
<h3 id="rancher-with-microos">Rancher with MicroOS</h3>
<p>The good news is: Rancher works out of the box on MicroOS.</p>
<p>The necessary steps are:</p>
<ol>
<li>Install MicroOS as base OS (no Container Host system role is necessary)</li>
<li>Install docker: <code class="language-plaintext highlighter-rouge">transactional-update pkg install docker</code></li>
<li>Reboot: <code class="language-plaintext highlighter-rouge">systemctl reboot</code></li>
<li>Enable and start docker: <code class="language-plaintext highlighter-rouge">systemctl enable --now docker</code></li>
</ol>
<p>From here you can follow the standard <a href="https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/">Rancher documentation</a>
and install Rancher: <code class="language-plaintext highlighter-rouge">docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher</code></p>
<h3 id="rancher-with-rke-and-microos">Rancher with RKE and MicroOS</h3>
<p>Rancher offers the possibility to setup a new kubernetes cluster using
RKE on an existing, running OS. This section explains how to do that using
MicroOS as the host OS.</p>
<p>While in general, RKE works fine on MicroOS, there could be two pitfalls:</p>
<ol>
<li>MicroOS is using a read-only root filesystem while RKE tries to write to /usr/libexec/kubernetes</li>
<li>Rancher reports an error that the API is not reachable.</li>
</ol>
<p>The second problem is most likely a docker problem. I suggest starting with
openSUSE MicroOS Build 20210205 or newer; I have never seen this problem with
docker 20.10.3ce, introduced with this build. In my case, the reason for the
error message was that IP forwarding didn’t get fully enabled by
docker. Please make sure that IP forwarding is enabled for all devices:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># sysctl -a |grep \\.forward
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.docker0.forwarding = 1
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.lo.forwarding = 1
</code></pre></div></div>
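<p>If forwarding turns out to be disabled, it can be enabled persistently with a sysctl.d drop-in. A sketch (the helper function and the file name <code class="language-plaintext highlighter-rouge">90-forwarding.conf</code> are illustrative; writing to <code class="language-plaintext highlighter-rouge">/etc/sysctl.d</code> requires root):</p>

```shell
# Write a drop-in enabling IPv4 forwarding for all interfaces.
write_forwarding_conf() {
  dir="$1"                          # normally /etc/sysctl.d
  printf 'net.ipv4.conf.all.forwarding = 1\n' > "$dir/90-forwarding.conf"
}
write_forwarding_conf /tmp          # demo target; use /etc/sysctl.d for real
# sysctl --system                   # then reload all sysctl settings (root)
```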
<p>These are the steps to set up MicroOS for this:</p>
<ol>
<li>Install MicroOS as base OS (no Container Host system role is necessary)</li>
<li>Install docker: <code class="language-plaintext highlighter-rouge">transactional-update pkg install docker</code></li>
<li>Reboot: <code class="language-plaintext highlighter-rouge">systemctl reboot</code></li>
<li>Enable and start docker: <code class="language-plaintext highlighter-rouge">systemctl enable --now docker</code></li>
</ol>
<p>On the Rancher GUI select “Existing Nodes” under “Create a new Kubernetes cluster
With RKE and existing bare-metal servers or virtual machines” and follow the
documentation for <a href="https://rancher.com/docs/rke/latest/en/os/#flatcar-container-linux">Flatcar Container Linux</a>.</p>
<h3 id="rancher-with-kubic">Rancher with Kubic</h3>
<p>Importing an openSUSE Kubic cluster can be simple or difficult, depending
on which kubernetes version your cluster is running.
openSUSE Kubic currently comes with kubernetes 1.20.2 as the default. Rancher
only works with kubernetes up to 1.19.7; it does not work with 1.20.2 as of
today. So if you haven’t updated your cluster to 1.20.x yet, you can go to
<code class="language-plaintext highlighter-rouge">Register an existing Kubernetes cluster</code>, select <code class="language-plaintext highlighter-rouge">Other Cluster</code> and follow
the workflow.</p>
<p>At the end, Rancher will come up with two errors:</p>
<ol>
<li>Alert: Component controller-manager is unhealthy.</li>
<li>Alert: Component scheduler is unhealthy.</li>
</ol>
<p>You can ignore these errors: Rancher uses a deprecated interface, which kubeadm disables by default.</p>Thorsten KukukIntroKubic with Kubernetes 1.20.0 released2020-12-12T17:50:00+00:002020-12-12T17:50:00+00:00/blog/k8s-120<h2 id="announcement">Announcement</h2>
<p>The Kubic Project is proud to announce that Snapshot 20201211 has been released containing Kubernetes 1.20.0.</p>
<p>Release Notes are available <a href="https://kubernetes.io/docs/setup/release/notes/#changes">HERE</a>.</p>
<h2 id="upgrade-steps">Upgrade Steps</h2>
<p>All newly deployed Kubic clusters will automatically be Kubernetes 1.20.0 from this point.</p>
<p>For existing clusters, please follow our new documentation on our wiki <a href="https://en.opensuse.org/Kubic:Upgrading_kubeadm_clusters">HERE</a>.</p>
<p>Thanks and have a lot of fun!</p>
<p><strong>The Kubic Team</strong></p>Richard BrownAnnouncementMicroOS & Kubic: New Lighter Minimum Hardware Requirements2020-11-23T11:15:10+00:002020-11-23T11:15:10+00:00/blog/requirements<h2 id="you-spoke-we-heard">You Spoke, We Heard</h2>
<p>openSUSE MicroOS has been getting a <a href="https://hackaday.com/2020/11/14/microos-is-immutable-linux">significant</a> <a href="https://news.ycombinator.com/item?id=25094753">amount</a> of <a href="https://www.youtube.com/watch?v=Ve6ygUYobCw&feature=emb_title">great</a> attention lately. <br />
We’d like to thank everyone who has reviewed and commented on what we are doing. One bit of feedback we received <em>loud and clear</em> was that the Minimum Hardware requirement of 20 GB disk space was surprisingly large for an Operating System calling itself <strong>MicroOS</strong>. We agree! And so we’ve reviewed and retuned that requirement.</p>
<h2 id="new-minimum-storage-requirements">New Minimum Storage Requirements</h2>
<p>The New Minimum Supported Storage Requirements for MicroOS are:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">5 GB</code> for the read-only <code class="language-plaintext highlighter-rouge">/ (root)</code> partition, with 20GB as the recommended maximum size.</li>
<li><code class="language-plaintext highlighter-rouge">5 GB</code> for the read-write <code class="language-plaintext highlighter-rouge">/var</code> partition, with 40GB as the recommended size, or however large you require for your workloads.</li>
</ul>
<p>Please Note, a standard installation of the minimal <code class="language-plaintext highlighter-rouge">MicroOS system role</code> currently uses no more than</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">450 MB</code> with bare metal hardware support.</li>
<li><code class="language-plaintext highlighter-rouge">285 MB</code> without bare metal hardware support.</li>
</ul>
<p>Therefore these new lighter requirements still ensure that your MicroOS installations have plenty of room for many automated snapshots from <code class="language-plaintext highlighter-rouge">transactional-updates</code>. These changes will not compromise the promise that MicroOS can be updated and rolled back atomically without worry.</p>
<h2 id="microos-desktop-differences">MicroOS Desktop Differences</h2>
<p>The <a href="https://www.youtube.com/watch?v=cZLckDUDYjw">MicroOS Desktop</a>, which is currently in Alpha and being actively developed, has a subtly different minimum requirement, as a result of its different use case.</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">5 GB</code> for the read-only <code class="language-plaintext highlighter-rouge">/ (root)</code> partition, with at least 40GB recommended, or however large you require for your desktop.</li>
<li><code class="language-plaintext highlighter-rouge">/var</code> and <code class="language-plaintext highlighter-rouge">/home</code> are provided as read-write <code class="language-plaintext highlighter-rouge">noCoW</code> sub-volumes as part of the <code class="language-plaintext highlighter-rouge">/ (root)</code> partition for the storage of containers, flatpaks and user-data.</li>
</ul>
<h2 id="available-now">Available Now</h2>
<p>These changes have all been submitted to openSUSE:Factory, tested in openQA, and will soon be released as part of Snapshot version 20201121. They will soon be available for both <a href="https://en.opensuse.org/Portal:MicroOS/Downloads">MicroOS</a> and <a href="https://en.opensuse.org/Portal:Kubic/Downloads">Kubic</a> across all ISO, Cloud, and VM Images.</p>
<p>Thanks and have a lot of fun!</p>
<p><strong>The MicroOS & Kubic Team</strong></p>Richard BrownYou Spoke, We HeardKubic with Kubernetes 1.19.0 released2020-09-09T12:36:00+00:002020-09-09T12:36:00+00:00/blog/k8s-119<h2 id="announcement">Announcement</h2>
<p>The Kubic Project is proud to announce that Snapshot 20200907 has been released containing Kubernetes 1.19.0.</p>
<p>Release Notes are available <a href="https://kubernetes.io/docs/setup/release/notes/#changes">HERE</a>.</p>
<h2 id="upgrade-steps">Upgrade Steps</h2>
<p>All newly deployed Kubic clusters will automatically be Kubernetes 1.19.0 from this point.</p>
<p>For existing clusters, please follow our new documentation on our wiki <a href="https://en.opensuse.org/Kubic:Upgrading_kubeadm_clusters">HERE</a>.</p>
<p>Thanks and have a lot of fun!</p>
<p><strong>The Kubic Team</strong></p>Richard BrownAnnouncementNew default: tmpfs on /tmp2020-07-27T13:39:00+00:002020-07-27T13:39:00+00:00/blog/tmp_on_tmpfs<h2 id="intro">Intro</h2>
<p>We made an important change for our Container Host OS <a href="https://en.opensuse.org/Portal:MicroOS">openSUSE
MicroOS</a>, which our Kubernetes
platform <a href="https://kubic.opensuse.org">openSUSE Kubic</a> will inherit since it is
based on openSUSE MicroOS: we now use <code class="language-plaintext highlighter-rouge">tmpfs</code> for <code class="language-plaintext highlighter-rouge">/tmp</code>.</p>
<p><code class="language-plaintext highlighter-rouge">tmpfs</code> is a temporary filesystem that resides in memory. Mounting directories
as <code class="language-plaintext highlighter-rouge">tmpfs</code> can be an effective way of speeding up accesses to their files and
to ensure that their contents are automatically cleared upon reboot.</p>
<p>A fresh installation will use <code class="language-plaintext highlighter-rouge">tmpfs</code> for <code class="language-plaintext highlighter-rouge">/tmp</code> by default. Old installations
need to be converted to this manually, but it is still possible to switch
back to using disk space for <code class="language-plaintext highlighter-rouge">/tmp</code>. This is especially useful and important if
big files are stored in <code class="language-plaintext highlighter-rouge">/tmp</code>.</p>
<p>If temporary files or directories are needed below <code class="language-plaintext highlighter-rouge">/tmp</code>, they can be created
at boot by using <code class="language-plaintext highlighter-rouge">tmpfiles.d</code>.
But never store important files in <code class="language-plaintext highlighter-rouge">/tmp</code>; they will not survive the next
reboot.</p>
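<p>For example, a directory below <code class="language-plaintext highlighter-rouge">/tmp</code> can be re-created at every boot with a small <code class="language-plaintext highlighter-rouge">tmpfiles.d</code> snippet (a sketch; the file name and the <code class="language-plaintext highlighter-rouge">myapp</code> directory are hypothetical examples):</p>

```
# /etc/tmpfiles.d/myapp.conf
# d = create the directory if missing; mode 0750, owned by root,
# no automatic age-based cleanup ("-").
d /tmp/myapp 0750 root root -
```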
<h3 id="converting-old-installations-to-use-tmpfs">Converting old installations to use tmpfs</h3>
<p>As <code class="language-plaintext highlighter-rouge">tmpfs</code> will be mounted on top of <code class="language-plaintext highlighter-rouge">/tmp</code>, existing files will no longer be
accessible. The following steps will clean up <code class="language-plaintext highlighter-rouge">/tmp</code> and enable <code class="language-plaintext highlighter-rouge">tmpfs</code>:</p>
<ol>
<li>Backup all important files currently stored in <code class="language-plaintext highlighter-rouge">/tmp</code>!</li>
<li>Remove the line for <code class="language-plaintext highlighter-rouge">/tmp</code> from /etc/fstab</li>
<li>Remove all files in <code class="language-plaintext highlighter-rouge">/tmp</code></li>
<li>Reboot</li>
</ol>
<p>After reboot, <code class="language-plaintext highlighter-rouge">tmpfs</code> should be used for <code class="language-plaintext highlighter-rouge">/tmp</code>.</p>
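<p>Whether the switch worked can be checked with <code class="language-plaintext highlighter-rouge">findmnt</code> (part of util-linux):</p>

```shell
# Show the filesystem type backing /tmp; after the conversion this
# should print "tmpfs".
findmnt --target /tmp -n -o FSTYPE
```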
<h3 id="using-disk-space-for-tmp">Using disk space for <code class="language-plaintext highlighter-rouge">/tmp</code></h3>
<p>After creating a new btrfs subvolume for <code class="language-plaintext highlighter-rouge">/tmp</code> and adding this to
<code class="language-plaintext highlighter-rouge">/etc/fstab</code>, tmpfs will no longer be used for <code class="language-plaintext highlighter-rouge">/tmp</code>.</p>
<p>The easiest way to achieve this is to use mksubvolume from snapper 0.8.12 or
newer:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># transactional-update shell
transactional update # mksubvolume /tmp
transactional update # exit
# systemctl reboot
</code></pre></div></div>
<p>Afterwards, all files are stored again on the disk and will survive a reboot.</p>
<h2 id="future-plans">Future plans</h2>
<p>In the future we plan to harden the system by default even more, e.g. by
marking <code class="language-plaintext highlighter-rouge">/tmp</code> and other write-able parts of the filesystem with <code class="language-plaintext highlighter-rouge">noexec</code>.</p>Thorsten KukukIntro