SUSE Linux Enterprise Server 12 SP3
Release Notes #
This document provides guidance and an overview to high level general features and updates for SUSE Linux Enterprise Server 12 SP3. Besides architecture or product-specific information, it also describes the capabilities and limitations of SUSE Linux Enterprise Server 12 SP3.
General documentation can be found at: https://documentation.suse.com/sles/12-SP3/.
- 1 About the Release Notes
- 2 SUSE Linux Enterprise Server
- 2.1 Interoperability and Hardware Support
- 2.2 Support and Life Cycle
- 2.3 What Is New?
- 2.4 Documentation and Other Information
- 2.5 How to Obtain Source Code
- 2.6 Support Statement for SUSE Linux Enterprise Server
- 2.7 General Support
- 2.8 Software Requiring Specific Contracts
- 2.9 Technology Previews
- 2.10 Modules, Extensions, and Related Products
- 2.11 Security, Standards, and Certification
- 3 Installation and Upgrade
- 4 Architecture Independent Information
- 5 AMD64/Intel 64 (x86_64) Specific Information
- 6 POWER (ppc64le) Specific Information
- 6.1 Support for ibmvnic Networking Driver
- 6.2 QEMU-virtualized PReP Partition
- 6.3 512 TB Virtual Address Space on POWER
- 6.4 kdump: Shorter Time to Filter and Save /proc/vmcore
- 6.5 Parameter crashkernel Is Now Used for fadump Memory Reservation
- 6.6 Encryption Improvements Using Hardware Optimizations
- 6.7 Ceph Client Support on IBM Z and POWER
- 6.8 Memory Reservation Support for fadump in YaST
- 6.9 Speed of ibmveth Interface Not Reported Accurately
- 7 IBM Z (s390x) Specific Information
- 8 ARM 64-Bit (AArch64) Specific Information
- 8.1 Boot and Driver Enablement for Raspberry Pi 3 Model B
- 8.2 Raspberry Pi 3 Shows Blurry HDMI Output on Some Monitors
- 8.3 AppliedMicro X-C1 Server Development Platform (Mustang) Firmware Requirements
- 8.4 New System-on-Chip Driver Enablement
- 8.5 Support for OpenDataPlane on Cavium ThunderX and Octeon TX Platforms
- 8.6 KVM on AArch64
- 8.7 Toolchain Module Enabled in Default Installation
- 9 Packages and Functionality Changes
- 10 Technical Information
- 11 Legal Notices
- 12 Colophon
1 About the Release Notes #
These Release Notes are identical across all architectures, and the most recent version is always available online at https://www.suse.com/releasenotes/.
Some entries may be listed twice if they are important and belong to more than one section.
Release notes usually only list changes that happened between two subsequent releases. Certain important entries from the release notes documents of previous product versions are repeated. To make these entries easier to identify, they contain a note to that effect.
However, repeated entries are provided as a courtesy only. Therefore, if you are skipping one or more service packs, check the release notes of the skipped service packs as well. If you are only reading the release notes of the current release, you could miss important changes.
2 SUSE Linux Enterprise Server #
SUSE Linux Enterprise Server is a highly reliable, scalable, and secure server operating system, built to power mission-critical workloads in both physical and virtual environments. It is an affordable, interoperable, and manageable open source foundation. With it, enterprises can cost-effectively deliver core business services, enable secure networks, and simplify the management of their heterogeneous IT infrastructure, maximizing efficiency and value.
The only enterprise Linux recommended by Microsoft and SAP, SUSE Linux Enterprise Server is optimized to deliver high-performance mission-critical services, as well as edge-of-network and web infrastructure workloads.
2.1 Interoperability and Hardware Support #
Designed for interoperability, SUSE Linux Enterprise Server integrates into classical Unix as well as Windows environments, supports open standard interfaces for systems management, and has been certified for IPv6 compatibility.
This modular, general purpose operating system runs on four processor architectures and is available with optional extensions that provide advanced capabilities for tasks such as real time computing and high availability clustering.
SUSE Linux Enterprise Server is optimized to run as a high performing guest on leading hypervisors and supports an unlimited number of virtual machines per physical system with a single subscription, making it the perfect guest operating system for virtual computing.
2.2 Support and Life Cycle #
SUSE Linux Enterprise Server is backed by award-winning support from SUSE, an established technology leader with a proven history of delivering enterprise-quality support services.
SUSE Linux Enterprise Server 12 has a 13-year life cycle, with 10 years of General Support and 3 years of Extended Support. The current version (SP3) will be fully maintained and supported until 6 months after the release of SUSE Linux Enterprise Server 12 SP4.
If you need additional time to design, validate, and test your upgrade plans, Long Term Service Pack Support can extend the support you receive by an additional 12 to 36 months in twelve-month increments, providing a total of 3 to 5 years of support on any given service pack.
For more information, check our Support Policy page https://www.suse.com/support/policy.html or the Long Term Service Pack Support Page https://www.suse.com/support/programs/long-term-service-pack-support.html.
2.3 What Is New? #
SUSE Linux Enterprise Server 12 introduces many innovative changes compared to SUSE Linux Enterprise Server 11. Here are some of the highlights:
Robustness on administrative errors and improved management capabilities with full system rollback based on Btrfs as the default file system for the operating system partition and the Snapper technology of SUSE.
An overhaul of the installer introduces a new workflow that allows you to register your system and receive all available maintenance updates as part of the installation.
SUSE Linux Enterprise Server Modules offer a choice of supplemental packages, ranging from tools for Web Development and Scripting, through a Cloud Management module, all the way to a sneak preview of upcoming management tooling called Advanced Systems Management. Modules are part of your SUSE Linux Enterprise Server subscription, are technically delivered as online repositories, and differ from the base of SUSE Linux Enterprise Server only by their life cycle. For more information about modules, see Section 2.10.1, “Available Modules”.
New core technologies like systemd (replacing the time-honored System V-based init process) and Wicked (introducing a modern, dynamic network configuration infrastructure).
The open-source database system MariaDB is fully supported now.
Support for open-vm-tools together with VMware for better integration into VMware-based hypervisor environments.
Linux Containers are integrated into the virtualization management infrastructure (libvirt). Docker is provided as a fully supported technology. For more details, see https://www.suse.com/promo/sle/docker/.
Support for the AArch64 architecture (64-bit ARMv8) and the 64-bit Little-Endian variant of the IBM POWER architecture. Additionally, we continue to support the Intel 64/AMD64 and IBM Z architectures.
GNOME 3.20 gives users a modern desktop environment with a choice of several different look and feel options, including a special SUSE Linux Enterprise Classic mode for easier migration from earlier SUSE Linux Enterprise Desktop environments.
For users wishing to use the full range of productivity applications of a Desktop with their SUSE Linux Enterprise Server, we are now offering SUSE Linux Enterprise Workstation Extension (requires a SUSE Linux Enterprise Desktop subscription).
Integration with the new SUSE Customer Center, the new central web portal from SUSE to manage Subscriptions, Entitlements, and provide access to Support.
If you are upgrading from a previous SUSE Linux Enterprise Server release, you should review at least the following sections:
2.4 Documentation and Other Information #
2.4.1 Available on the Product Media #
Read the READMEs on the media.
Get the detailed change log information about a particular package from the RPM (where `<FILENAME>.rpm` is the name of the RPM): `rpm --changelog -qp <FILENAME>.rpm`
Check the `ChangeLog` file in the top level of the media for a chronological log of all changes made to the updated packages.
Find more information in the `docu` directory of the media of SUSE Linux Enterprise Server 12 SP3. This directory includes PDF versions of the SUSE Linux Enterprise Server 12 SP3 Installation Quick Start and Deployment Guides.
Documentation (if installed) is available below the `/usr/share/doc/` directory of an installed system.
2.4.2 Externally Provided Documentation #
https://documentation.suse.com/sles/12-SP3/ contains additional or updated documentation for SUSE Linux Enterprise Server 12 SP3.
Find a collection of White Papers in the SUSE Linux Enterprise Server Resource Library at https://www.suse.com/products/server/resource-library.
2.5 How to Obtain Source Code #
This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at https://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to mailto:sle_source_request@suse.com or as otherwise instructed at https://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.
2.6 Support Statement for SUSE Linux Enterprise Server #
To receive support, you need an appropriate subscription with SUSE. For more information, see https://www.suse.com/products/server/services-and-support/.
The following definitions apply:
- L1
Problem determination, which means technical support designed to provide compatibility information, usage support, ongoing maintenance, information gathering and basic troubleshooting using available documentation.
- L2
Problem isolation, which means technical support designed to analyze data, reproduce customer problems, isolate problem area and provide a resolution for problems not resolved by Level 1 or alternatively prepare for Level 3.
- L3
Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.
For contracted customers and partners, SUSE Linux Enterprise Server 12 SP3 and its Modules are delivered with L3 support for all packages, except the following:
Technology Previews, see Section 2.9.1, “Technology Previews for All Architectures”
sound, graphics, fonts and artwork
packages that require an additional customer contract, see Section 2.8, “Software Requiring Specific Contracts”
packages provided as part of the Software Development Kit (SDK)
SUSE will only support the usage of original (that is, unchanged and un-recompiled) packages.
2.7 General Support #
To learn about supported kernel, virtualization, and file system features, as well as supported Java versions, see Section 10, “Technical Information”.
2.8 Software Requiring Specific Contracts #
The following packages require additional support contracts to be obtained by the customer in order to receive full support:
PostgreSQL Database
LibreOffice
2.9 Technology Previews #
Technology previews are packages, stacks, or features delivered by SUSE which are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are included for your convenience and give you a chance to test new technologies within an enterprise environment.
Whether a technology preview becomes a fully supported technology later depends on customer and market feedback. Technology previews can be dropped at any time and SUSE does not commit to providing a supported version of such technologies in the future.
Give your SUSE representative feedback, including your experience and use case.
2.9.1 Technology Previews for All Architectures #
2.9.1.1 Support for KVM Guests Using NVDIMM Devices #
As a technology preview, KVM guests can now use NVDIMM devices.
2.9.1.2 QEMU: NVDIMM and Persistent Memory #
As a technology preview, QEMU now supports NVDIMM. To use NVDIMM, create a memory device with `model=nvdimm`. This functionality can be used directly with the `qemu` command line tool or via `libvirt`. However, it is not yet exposed through `virt-manager`.
NVDIMM supports two access modes:
PMEM: NVDIMM is mapped into the CPU's address space, so that the CPU can directly access it like normal memory.
BLK: NVDIMM is used as a block device, which avoids occupying the CPU address space.
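As an illustration, a guest with an emulated NVDIMM device can be started from the command line roughly as follows. This is a sketch only: the backing file path, sizes, and guest image name are assumptions, and exact option support depends on the QEMU version shipped.

```shell
# Enable NVDIMM support on the machine, reserve memory slots and a maximum
# memory size, back the device with a file, and plug it in as model=nvdimm.
qemu-system-x86_64 \
  -machine pc,nvdimm=on \
  -m 4G,slots=2,maxmem=8G \
  -object memory-backend-file,id=mem1,share=on,mem-path=/var/lib/libvirt/images/nvdimm0,size=1G \
  -device nvdimm,id=nvdimm1,memdev=mem1 \
  -drive file=guest.img,format=qcow2
```

Inside the guest, the device then appears as an NVDIMM that can be managed with the usual tools.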
2.9.1.3 KVM Nested Virtualization #
KVM Nested Virtualization is available in SLE 12 as a technology preview. For more information about nested virtualization, see nested-vmx.txt (https://github.com/torvalds/linux/blob/master/Documentation/virtual/kvm/nested-vmx.txt).
2.9.2 Technology Previews for IBM Z (s390x) #
2.9.2.1 Exploitation of Shared Memory Communications #
As a technology preview, SLES 12 SP3 enables communication through shared memory segments with the 10 GB Ethernet RoCE card:
Support for the networking card itself is included in the kernel.
The package `smc-tools` contains additional user-space tools.
This technology should only be used in a trusted network infrastructure.
2.9.3 Technology Previews for POWER (ppc64le) #
2.9.3.1 Support for KVM #
With SLES 12 SP3, KVM is now available as a technology preview on OpenPower S822LC systems running OPAL firmware.
2.9.3.2 Inclusion of IBM TPM 2.0 Stack #
IBM has developed a TPM 2.0 TSS stack that can exist and be used in parallel to the Intel TPM 2.0 stack.
It is not clear at this time which of them will be the preferable solution on all TPM supporting platforms.
The general guideline of SUSE Linux Enterprise is having one preferred tool to do the job.
The IBM TPM 2.0 stack is shipped as a Technology Preview in addition to the supported Intel TPM 2.0 stack.
2.10 Modules, Extensions, and Related Products #
This section comprises information about modules and extensions for SUSE Linux Enterprise Server 12 SP3. Modules and extensions add parts or functionality to the system.
2.10.1 Available Modules #
Modules are fully supported parts of SUSE Linux Enterprise Server with a different life cycle and update timeline. They are a set of packages, have a clearly defined scope and are delivered via an online channel only. Release notes for modules are contained in this document, see Section 9.5, “Modules”.
The following modules are available for SUSE Linux Enterprise Server 12 SP3:
Name | Content | Life Cycle |
---|---|---|
Advanced Systems Management Module | CFEngine, Puppet, Salt and the Machinery tool | Frequent releases |
Containers Module | Docker, tools, prepackaged images | Frequent releases |
HPC Module | Tools and libraries related to High Performance Computing (HPC) | Frequent releases |
Legacy Module* | Sendmail, old IMAP stack, old Java, … | Until September/October 2017 (except for ksh ) |
Public Cloud Module | Public cloud initialization code and tools | Frequent releases |
Toolchain Module | GNU Compiler Collection (GCC) | Yearly delivery |
Web and Scripting Module | PHP, Python, Ruby on Rails | 3 years, ~18 months overlap |
* Module is not available for the AArch64 architecture.
For more information about the life cycle of packages contained in modules, see https://scc.suse.com/docs/lifecycle/sle/12/modules.
2.10.2 Available Extensions #
Extensions add extra functionality to the system and require their own registration key, usually at additional cost. Extensions are delivered via an online channel or physical media. In many cases, extensions have their own release notes documents that are available from https://www.suse.com/releasenotes/.
The following extensions are available for SUSE Linux Enterprise Server 12 SP3:
SUSE Linux Enterprise Live Patching: https://www.suse.com/products/live-patching
SUSE Linux Enterprise High Availability Extension: https://www.suse.com/products/highavailability
Geo Clustering for SUSE Linux Enterprise High Availability Extension: https://www.suse.com/products/highavailability/geo-clustering
SUSE Linux Enterprise Real Time: https://www.suse.com/products/realtime
SUSE Linux Enterprise Workstation Extension: https://www.suse.com/products/workstation-extension
Additionally, there are the following extensions which are not covered by SUSE support agreements, available at no additional cost and without an extra registration key:
SUSE Package Hub: https://packagehub.suse.com/
SUSE Linux Enterprise Software Development Kit
2.10.3 Derived and Related Products #
This sections lists derived and related products. In many cases, these products have their own release notes documents that are available from https://www.suse.com/releasenotes/.
SUSE Enterprise Storage: https://www.suse.com/products/suse-enterprise-storage
SUSE Linux Enterprise Desktop: https://www.suse.com/products/desktop
SUSE Linux Enterprise Server for SAP Applications: https://www.suse.com/products/sles-for-sap
SUSE Manager: https://www.suse.com/products/suse-manager
SUSE OpenStack Cloud: https://www.suse.com/products/suse-openstack-cloud
2.11 Security, Standards, and Certification #
SUSE Linux Enterprise Server 12 SP3 has been submitted to the relevant certification bodies.
For more information about certification, see https://www.suse.com/security/certificates.html.
3 Installation and Upgrade #
SUSE Linux Enterprise Server can be deployed in several ways:
Physical machine
Virtual host
Virtual machine
System containers
Application containers
3.1 Installation #
This section includes information related to the initial installation of SUSE Linux Enterprise Server 12 SP3. For information about installing, see Deployment Guide at https://documentation.suse.com/sles/12-SP3/html/SLES-all/book-sle-deployment.html.
3.1.1 FCoE Storage Does Not Work with Cavium or QLogic Storage Controllers with FCoE Offload #
On a default installation of SLES 12 SP3, there is no support for FCoE storage on systems that use Cavium or QLogic storage controllers with support for FCoE offload.
SUSE has created a kISO (Kernel Update ISO) which can be downloaded from https://drivers.suse.com/suse/installer-update/sle-12-sp3-x86_64/1.0/install-readme.html.
For more information about kISOs in general, see https://www.suse.com/communities/blog/kiso-kernel-update-iso/.
3.1.2 Installing Systems from Online Repositories #
To install SLES, you need the installation media. If you also mirror the repositories, for example with SMT, this means that effectively you need to download all packages twice: once as a part of the media and additionally from the online repository.
For such scenarios, we provide packages named `tftpboot-installation-*` in the product repositories. These packages include an installer prepared for a network boot environment (PXE).
3.1.3 Network Interfaces Configured via linuxrc Take Precedence #
This entry has appeared in a previous release notes document.
For some configurations with many network interfaces, it can take several hours until all network interfaces are initialized (see https://bugzilla.suse.com/show_bug.cgi?id=988157). In such cases, the installation is blocked. SLE 12 SP1 and earlier did not offer a workaround for this behavior.
Starting with SLE 12 SP2, you can speed up interactive installations on systems with many network interfaces by configuring them via linuxrc. When a network interface is configured via linuxrc, YaST will not perform automatic DHCP configuration for any interface. Instead, YaST will continue to use the configuration from linuxrc.
To configure a particular interface via linuxrc, add the following to the boot command line before starting the installation:
ifcfg=eth0=dhcp
In the parameter, replace `eth0` with the name of the appropriate network interface. The `ifcfg` option can be used multiple times.
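For example, a boot command line could configure one interface via DHCP and another with a static address. The interface names and addresses below are examples only; check the linuxrc documentation for the exact static syntax (here assumed to be ip/prefix,gateway,nameserver):

```
ifcfg=eth0=dhcp ifcfg=eth1=192.168.1.100/24,192.168.1.1,192.168.1.116
```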
3.1.4 Warning When Enabling Snapshots on Small Root File Systems #
Btrfs file system snapshots take up extra disk space. Previous versions of SLE did not check during installation whether a custom root file system size was appropriate for enabling snapshots.
For Btrfs root file systems with snapshotting, the SLE installer now verifies that the size of the file system at least matches the value of `root_base` from the product's `control.xml`. For example, for a default SLES installation, the root file system size is 12 GB. If the file system is smaller, the installer displays a warning, which can be ignored.
3.1.5 SMT: Upgrading Database Schema and Engine #
This entry has appeared in a previous release notes document.
SMT 12 comes with a new database schema and is standardized on the InnoDB database back-end.
In order to upgrade SMT 11 SPx to SMT 12, it is necessary that SMT 11 is configured against SCC (SUSE Customer Center) before initializing the upgrade of SLES and SMT to version 12 SP1 or newer. If the host is upgraded to SLES 12 SP1 or newer without switching to SCC first, the installed SMT instance will no longer work.
Only SMT 11 SP3 can be configured against SCC. Older versions need to be upgraded to version 11 SP3 first.
Whether the schema or database engine must be upgraded is checked during the package upgrade and displayed as an update notification. Back up your database before doing the database upgrade. Both the schema and database engine upgrades are performed by the utility `/usr/bin/smt-schema-upgrade` (which can be called directly or via `systemctl start smt-schema-upgrade`) or are done automatically after an `smt.target` restart (computer reboot or `systemctl restart smt.target`). However, manual database tuning is required for optimal performance.
For details, see https://mariadb.com/kb/en/mariadb/converting-tables-from-myisam-to-innodb/#non-index-issues.
3.1.6 SMT Supports SCC Exclusively #
This entry has appeared in a previous release notes document.
Support for NCC (Novell Customer Center) was removed from SMT. SMT can still serve SLE 11 clients, but must be configured to receive updates from SCC.
Before migrating from SMT 11 SP3, SMT must be reconfigured against SCC. Migration from older versions of SMT is not possible.
3.1.7 Installing with LVM2, Without a Separate /boot Partition #
This entry has appeared in a previous release notes document.
SUSE Linux Enterprise 12 and newer generally supports installation with linear LVM2 without a separate `/boot` partition, for example to use Btrfs as the root file system and achieve full system snapshot and rollback.
However, this setup is only supported under the following conditions:
Only linear LVM2 setups are supported.
There must be enough space in the partitioning "label" (the partition table) for the grub2 bootloader first stage files. If the installation of the grub2 bootloader fails, you will have to create a new partition table. CAVEAT: Creating a new partition table destroys all data on the given disk!
For a migration from an existing SUSE Linux Enterprise 11 system with LVM2 to SUSE Linux Enterprise 12 or newer, the `/boot` partition must be preserved.
3.2 Upgrade-Related Notes #
This section includes upgrade-related information for SUSE Linux Enterprise Server 12 SP3. For information about general preparations and supported upgrade methods and paths, see the documentation at https://documentation.suse.com/sles/12-SP3/html/SLES-all/cha-update-sle.html.
3.2.1 Product Registration Changes for HPC Customers #
For SUSE Linux Enterprise 12, there was a High Performance Computing subscription named "SUSE Linux Enterprise Server for HPC" (SLES for HPC). With SLE 15, this subscription does not exist anymore and has been replaced. The equivalent subscription is named "SUSE Linux Enterprise High Performance Computing" (SLE-HPC) and requires a different license key. Because of this requirement, a SLES for HPC 12 system will by default upgrade to a regular "SUSE Linux Enterprise Server".
To properly upgrade a SLES for HPC system to SLE-HPC, the system needs to be converted to SLE-HPC first. SUSE provides a tool that simplifies this step by performing the product conversion and switching to the SLE-HPC subscription. However, the tool does not perform the upgrade itself.
When run without extra parameters, the script assumes that the SLES for HPC subscription is valid and not expired. If the subscription has expired, you need to provide a valid registration key for SLE-HPC.
The script reads the current set of registered modules and extensions and after the system has been converted to SLE-HPC, it tries to add them again.
Important: Providing a Registration Key to the Conversion Script
The script cannot restore the previous registration state if the supplied registration key is incorrect or invalid.
To install the script, run `zypper in switch_sles_sle-hpc`.
Execute the script from the command line as `root`:
switch_sles_sle-hpc -e <REGISTRATION_EMAIL> -r <NEW_REGISTRATION_KEY>
The parameters `-e` and `-r` are only required if the previous registration has expired; otherwise they are optional. To run the script in batch mode, add the option `-y`. It answers all questions with yes.
For more information, see the man page `switch_sles_sle-hpc(8)` and `README.SUSE`.
3.2.2 FreeRADIUS Configuration Needs to Be Merged Manually #
When upgrading a SLES installation that includes `freeradius-server` and a non-standard `/etc/raddb/radiusd.conf` configuration file to SLES 12 SP3, make sure to manually merge the new `radiusd.conf` configuration section into the custom configuration before running the FreeRADIUS server.
In particular, pay attention to the parameter `correct_escapes`: The default behavior did not change, but the new default `/etc/raddb/policy.d/filter` only functions with the setting `correct_escapes=true`.
3.2.3 Error on Migration From SP2 to SP3 When HPC Module Is Selected #
When the High Performance Computing module is selected, the following error message may be encountered during Migration from SLES 12 SP2 to SLES 12 SP3:
Can't get available migrations from server: SUSE::Connect::ApiError: The requested products '' are not activated on the system. '/usr/lib/zypper/commands/zypper-migration' exited with status 1
The problem can be resolved by re-registering the HPC module using the following two commands:
rpm -e sle-module-hpc-release-POOL sle-module-hpc-release
SUSEConnect -p sle-module-hpc/12/x86_64
These commands can also be performed before migration as a preventive measure.
3.2.4 Automatic Log Rotation Will Be Disabled After Upgrade #
If the package `logrotate` was installed or updated before `systemd-presets-branding-SLE`, automatic log rotation will be disabled after the upgrade to SLES 12 SP3.
Enable the `logrotate` systemd timer manually. To do so, run the following commands as root:
systemctl enable logrotate.timer
systemctl restart logrotate.timer
3.2.5 Online Migration with Live Patching Enabled #
The SLES online migration process reports package conflicts when Live Patching is enabled and the kernel is being upgraded. This applies when crossing the boundary between two Service Packs.
To prevent the conflicts, before starting the migration, execute the following as a super user:
zypper rm $(rpm -qa kgraft-patch-*)
3.2.6 Online Migration: Checking the Status of Registered Products #
It is common that during the lifecycle of a system installation, registered extensions and modules are removed from the system without also deactivating them on the registration server.
To prevent errors and unexpected behavior during an online migration, the status of installed products needs to be checked before the migration to allow reinstalling or deactivating products.
A new step has been added to the online migration workflow. It checks for registered products that are not currently installed on the system and allows:
Trying to install the products from the available repositories (Install).
Deactivating the products in SCC (Deactivate).
3.2.7 Updating Registration Status After Rollback #
This entry has appeared in a previous release notes document.
When performing a service pack migration, it is necessary to change the configuration on the registration server to provide access to the new repositories. If the migration process is interrupted or reverted (via restoring from a backup or snapshot), the information on the registration server is inconsistent with the status of the system. This may lead to you being prevented from accessing update repositories or to wrong repositories being used on the client.
When a rollback is done via Snapper, the system will notify the registration server to ensure access to the correct repositories is set up during the boot process. If the system was restored any other way or the communication with the registration server failed for any reason (for example, because the server was not accessible due to network issues), trigger the rollback on the client manually by calling `snapper rollback`.
We suggest always checking that the correct repositories are set up on the system, especially after refreshing the services using `zypper ref -s`.
3.2.8 /tmp Cleanup from sysconfig Automatically Migrated into systemd Configuration #
This entry has appeared in a previous release notes document.
By default, systemd cleans temporary directories daily, and systemd does not honor sysconfig settings in `/etc/sysconfig/cron` such as `TMP_DIRS_TO_CLEAR`. Thus, sysconfig settings need to be changed to avoid data loss or unwanted behavior.
When updating to SLE 12 or newer, the variables in `/etc/sysconfig/cron` will be automatically migrated into an appropriate systemd configuration (see `/etc/tmpfiles.d/tmp.conf`). The following variables are affected:
MAX_DAYS_IN_TMP, MAX_DAYS_IN_LONG_TMP, TMP_DIRS_TO_CLEAR, LONG_TMP_DIRS_TO_CLEAR, CLEAR_TMP_DIRS_AT_BOOTUP, OWNER_TO_KEEP_IN_TMP
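To illustrate the migration, a hypothetical sysconfig setting and its systemd-tmpfiles counterpart could look as follows. The values are examples only; the actual generated file may differ on your system.

```
# /etc/sysconfig/cron (before migration, example values):
MAX_DAYS_IN_TMP="10"
TMP_DIRS_TO_CLEAR="/tmp"

# Migrated equivalent in /etc/tmpfiles.d/tmp.conf:
# manage /tmp and clean entries not accessed for 10 days
d /tmp 1777 root root 10d
```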
3.3 For More Information #
For more information, see Section 4, “Architecture Independent Information” and the sections relating to your respective hardware architecture.
4 Architecture Independent Information #
Information in this section pertains to all architectures supported by SUSE Linux Enterprise Server 12 SP3.
4.1 Kernel #
4.1.1 Unprivileged eBPF usage has been disabled #
A large number of security issues were found and fixed in the Extended Berkeley Packet Filter (eBPF) code. To reduce the attack surface, its usage has been restricted to privileged users only.
Privileged users include `root`. On newer versions of the Linux kernel, programs with the `CAP_BPF` capability can still use eBPF as-is.
To check the current state, read the value of the `/proc/sys/kernel/unprivileged_bpf_disabled` parameter. A value of 0 means unprivileged use is enabled, a value of 2 means only privileged users may use eBPF.
This setting can be changed by the `root` user:
to enable it temporarily for all users, run the command `sysctl kernel.unprivileged_bpf_disabled=0`
to enable it permanently, add `kernel.unprivileged_bpf_disabled=0` to the `/etc/sysctl.conf` file.
4.1.2 Support for Scalable MCA (SMCA) #
As more functionality is added to hardware beginning with family 0x17, being able to track it requires an enhanced approach to MCA.
SLE 12 SP3 now supports AMD's Scalable MCA (SMCA). SMCA is a specification which enriches the error information logged by the hardware to allow for improved error handling, better diagnosability, and future scalability.
4.1.3 Update Repositories for kGraft Live Patching Are Now Specific to Service Packs #
Starting with SLE 12 SP3, the update repositories supplying kernel patches that can be applied using kGraft are split up by Service Pack version. This allows for easier maintenance and reduces the chance of complications during Service Pack upgrades.
4.1.4 Support for Intel Kaby Lake Processors #
SLE 12 SP3 now contains support for Intel processors from the generation code-named Kaby Lake.
4.1.5 Support for Intel Xeon Phi Knights Landing Coprocessors #
SLE 12 SP3 now supports Intel Xeon Phi coprocessors from the product line code-named Knights Landing.
4.1.6 NVDIMM: Support for Device DAX (Direct Access) #
SLE 12 SP3 now supports Device DAX. Device DAX is the device-centric analogue of File System DAX: It allows memory ranges to be allocated and mapped without the need for an intervening file system. This feature can improve the performance of both KVM guests and databases such as MSSQL that use raw I/O access to NVDIMM.
4.2 Kernel Modules #
An important requirement for every enterprise operating system is the level of support available for specific environments. Kernel modules are the most relevant connector between hardware (“controllers”) and the operating system.
For more information about the handling of kernel modules, see the SUSE Linux Enterprise Administration Guide.
4.2.1 Support for Matrox G200eH3 Graphics Chips #
SLE 12 SP3 includes a driver to enable Matrox G200eH3 graphics chips that will be used in HPE Gen10 servers.
4.2.2 hpwdt Driver (HPE Watchdog) Has Been Updated #
SLE 12 SP3 includes an updated version of the HPE watchdog driver
hpwdt
to enable support for the upcoming HPE Gen10
Servers.
4.2.3 Direct Access to Files in Non-Volatile DIMMs #
This entry has appeared in a previous release notes document.
The page cache is usually used to buffer reads and writes to files. It is also used to provide the pages which are mapped into userspace by a call to mmap.
For block devices that are memory-like, the page cache pages would be
unnecessary copies of the original storage.
The Direct Access (DAX) kernel code avoids the extra copy by directly reading from and writing to the storage device. For file mappings, the storage device is mapped directly into userspace. This functionality is implemented in the XFS and Ext4 file systems.
Non-volatile DIMMs can be "partitioned" into so-called namespaces which are then exposed as block devices by the Linux kernel. Each namespace can be configured in several modes. Although DAX functionality is available for file systems on top of namespaces in either raw or memory mode, SUSE does not support use of the DAX feature in file systems on top of raw mode namespaces, because raw mode namespaces have unexpected quirks and DAX support on them is likely to be removed entirely in future releases.
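As an illustrative sketch (the device name /dev/pmem0 is an assumption, and the commands require NVDIMM hardware plus the ndctl tool, so they cannot be run elsewhere), a supported File System DAX setup could look like this:

```shell
# Create a namespace in memory mode -- the supported basis for
# File System DAX; raw mode namespaces are not supported for DAX.
ndctl create-namespace --mode=memory

# The namespace is exposed as a block device such as /dev/pmem0
# (assumed name). Create an XFS or Ext4 file system on it and mount
# it with the dax option so reads and writes bypass the page cache:
mkfs.xfs /dev/pmem0
mount -o dax /dev/pmem0 /mnt
```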
4.3 Security #
4.3.1 SELinux Enablement #
This entry has appeared in a previous release notes document.
SELinux capabilities have been added to SUSE Linux Enterprise Server (in addition to other frameworks, such as AppArmor). While SELinux is not enabled by default, customers can run SELinux with SUSE Linux Enterprise Server if they choose to.
SELinux Enablement includes the following:
The kernel ships with SELinux support.
We will apply SELinux patches to all “common” userland packages.
The libraries required for SELinux (libselinux, libsepol, libsemanage, etc.) have been added to SUSE Linux Enterprise.
Quality Assurance is performed with SELinux disabled, to make sure that SELinux patches do not break the default delivery and the majority of packages.
The SELinux-specific tools are shipped as part of the default distribution delivery.
SELinux policies are not provided by SUSE. Supported policies may be available from the repositories in the future.
Customers and Partners who have an interest in using SELinux in their solutions are encouraged to contact SUSE to evaluate their necessary level of support and how support and services for their specific SELinux policies will be granted.
By enabling SELinux in our code base, we add community code to offer customers the option to use SELinux without replacing significant parts of the distribution.
4.3.2 TPM-Capable UEFI Bootloader #
SLES 12 SP3 has TPM support in the bootloader used on UEFI systems.
4.4 Networking #
4.4.1 Support for the IDNA2008 Standard for Internationalized Domain Names #
The original method for implementing Internationalized Domain Names was IDNA2003. This has been replaced by the IDNA2008 standard, the use of which is mandatory for some top-level domains.
The network utilities wget and curl have been updated to support IDNA2008 through the use of libidn2. This update also affects consumers of the libcurl library.
4.4.2 No Support for Samba as Active Directory-Style Domain Controller #
This entry has appeared in a previous release notes document.
The version of Samba shipped with SLE 12 GA and newer does not include support to operate as an Active Directory-style domain controller. This functionality is currently disabled, as it lacks integration with system-wide MIT Kerberos.
4.4.3 New GeoIP Database Sources #
The GeoIP databases allow approximate geolocation of users by their IP address. In the past, the company MaxMind made such data available for free in its GeoLite Legacy databases. On January 2, 2019, MaxMind discontinued the GeoLite Legacy databases, now offering only the newer GeoLite2 databases for download. To comply with new data protection regulations, since December 30, 2019, GeoLite2 database users are required to comply with an additional usage license. This change means users now need to register for a MaxMind account and obtain a license key to download GeoLite2 databases. For more information about these changes, see the MaxMind blog (https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases/).
SLES includes the GeoIP package of tools that are only compatible with GeoLite Legacy databases. As an update for SLES 12 SP3, we introduce the following new packages to deal with the changes to the GeoLite service:
geoipupdate: The official MaxMind tool for downloading GeoLite2 databases. To use this tool, set up the configuration file with your MaxMind account details. This configuration file can also be generated on the MaxMind web page. For more information, see https://dev.maxmind.com/geoip/geoip2/geolite2/.
geolite2legacy: A script for converting GeoLite2 CSV data to the GeoLite Legacy format.
geoipupdate-legacy: A convenience script that downloads GeoLite2 data, converts it to the GeoLite Legacy format, and stores it in /var/lib/GeoIP. With this script, applications developed for use with the legacy geoip-fetch tool will continue to work.
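A minimal geoipupdate configuration, typically placed in /etc/GeoIP.conf, might look like the following sketch; the account ID and license key are placeholders you must obtain from your MaxMind account:

```shell
# /etc/GeoIP.conf -- sketch with placeholder credentials
AccountID 999999
LicenseKey YOUR_LICENSE_KEY
EditionIDs GeoLite2-Country GeoLite2-City
```

Afterwards, running geoipupdate downloads the listed editions.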
4.5 Systems Management #
4.5.1 Salt Has Been Updated to Version 3000 #
Salt has been upgraded to upstream version 3000, plus a number of patches, backports, and enhancements by SUSE. In particular, fixes for CVE-2020-11651 and CVE-2020-11652 are included in our release.
As part of this upgrade, cryptography is now managed by the Python-M2Crypto library (which is itself based on the well-known OpenSSL library).
We intend to regularly upgrade Salt to more recent versions.
For more details about changes in your manually-created Salt states, see the Salt 3000 upstream release notes (https://docs.saltstack.com/en/latest/topics/releases/3000.html).
Salt 3000 is the last version of Salt which will support the old syntax of the cmd.run module.
4.5.2 System Clone AutoYaST XML Reflects Btrfs Snapshot State #
In previous versions of SLE 12, when using yast clone_system, AutoYaST would always enable snapshots for Btrfs volumes, regardless of whether they were enabled on the original system.
Starting with SLE 12 SP3, yast clone_system will now create an AutoYaST XML file that accurately reflects the snapshot state of Btrfs volumes.
4.5.3 "Register Extensions or Modules Again" Has Been Removed from YaST #
The button Register Extensions or Modules Again has been removed from the YaST registration module.
This option was redundant: It is still possible to register modules or extensions again with a different SCC account or using a different registration server (SCC or SMT).
Additionally, the option to filter out beta versions is now only visible if the server provides beta versions, otherwise the check box is hidden.
4.5.4 The YaST Module for SSH Server Configuration Has Been Removed #
The YaST module for configuring an SSH server, which was present in SLE 11, is not part of SLE 12. It does not have any direct successor.
The module SSH Server only supported configuring a small subset of all SSH server capabilities. Therefore, the functionality of the module can be replaced by using a combination of two YaST modules: the /etc/sysconfig Editor and the Services Manager. This also applies to system configuration via AutoYaST.
4.5.5 Sudo Has Been Updated from 1.8.10p3 to 1.8.19p2 #
Sudo has been updated from version 1.8.10p3 to 1.8.19p2. This update fixes many bugs and security vulnerabilities and also brings several enhancements. For more information, read the changelog file in /usr/share/doc/packages/sudo/NEWS.
4.5.6 YaST: Default Auto-Refresh Status for Local Repositories Is "Off" #
In previous versions of SLE 12, when installing from a USB drive or external disk, the repository linking to the installation media was set to auto-refresh. This means that when the USB drive or the external disk had been removed and you tried to work with YaST or Zypper, you were asked to insert the external medium again.
In the YaST version shipped with SLE 12 SP3, we have changed the default auto-refresh status for local repositories (USB drives, hard disks, or dir://) to off, which avoids checking the now usually unnecessary repository.
4.5.7 All Snapper Commands Support the Option --no-dbus #
Normally, the snapper command-line tool uses DBus to connect to snapperd, which does most of the actual work. This allows non-root users to work with Snapper.
However, there are situations when using DBus is not possible, for example, when chrooted in the rescue system or when DBus itself is broken after an update. This can limit the usefulness of Snapper as a disaster recovery tool. Therefore, some Snapper commands already supported the --no-dbus option, bypassing DBus and snapperd.
In the version of Snapper shipped with SLE 12 SP3, all Snapper commands support the --no-dbus option.
4.5.8 blogd Boot Log Daemon Available as an Alternative to Plymouth #
The blogd boot log daemon (packages blog and blog-plymouth) can be used as a replacement for Plymouth in situations where a splash screen or usage of a frame buffer is unwanted. blogd is also a Plymouth agent. That means it can handle requests for a password prompt by the system password service of systemd.
The blogd daemon writes out boot log messages to every terminal device used for /dev/console and to the log file /var/log/boot.log. When halting or rebooting the system, it moves the log file to /var/log/boot.old and appends all log messages up to the point at which the file systems become unavailable.
4.5.9 ntp 4.2.8 #
This entry has appeared in a previous release notes document.
ntp was updated to version 4.2.8.
The ntp server ntpd does not synchronize with its peers anymore, and the peers are specified by their host name in /etc/ntp.conf. The output of ntpq --peers lists IP numbers of the remote servers instead of their host names. Name resolution for the affected hosts works otherwise.
Parameter changes #
The meaning of some parameters for the sntp command-line tool has changed, or the parameters have been dropped; for example, sntp -s is now sntp -S. Please review any sntp usage in your own scripts for required changes.
After having been deprecated for several years, ntpdc is now disabled by default for security reasons. It can be re-enabled by adding the line enable mode7 to /etc/ntp.conf, but preferably ntpq should be used instead.
4.5.10 Support for Setting Kdump Low-Memory and High-Memory Allocation on the YaST Command Line #
In the past, YaST supported setting high-memory and low-memory amounts for the kernel parameter crashkernel only from the ncurses or Qt interfaces.
You can now set these memory amounts on the command line too. To do so, use, for example, yast kdump startup enable alloc_mem=256,768. The first number represents the low-memory amount, the second number represents the high-memory amount. Therefore, the example is equivalent to setting crashkernel=256,low crashkernel=768,high on the kernel command line.
4.5.11 Salt Configuration with AutoYaST #
With SLE 12 SP3, it is possible to configure Salt clients using AutoYaST. To use this feature, you need the package salt-minion, which is not available in the standard SLES product. However, you can install this dependency from the SLE Module Advanced Systems Management.
4.5.12 Zypper Option --plus-content Has Been Enhanced #
The zypper option --plus-content was enhanced to also allow specifying disabled repositories by name or alias. Additionally, it can now be used with the zypper refresh command to refresh either specified or all disabled repositories without the need to enable them.
4.5.13 YaST: iSCSI Authentication Has Been Redesigned #
In the past, the user interface for iSCSI authentication offered by YaST was not optimal. Additionally, not every option was explained in the help.
In SLE 12 SP3, the YaST module iSCSI Initiator and Target comes with the following enhancements:
Clearer terminology:
For discovery sessions, No Authentication is now called No Discovery Authentication.
For login sessions, Use Authentication is now called Use Login Authentication, whereas No Authentication is now called No Login Authentication.
Incoming Authentication is now called Authentication by Initiators on the initiator side, whereas it is called Authentication by Targets on the target side.
Outgoing Authentication is now called Authentication by Targets on the initiator side, whereas it is called Authentication by Initiators on the target side.
No Login Authentication can now be used to log in to targets without authentication.
The help now explains password options.
4.5.14 systemd Daemon #
This entry has appeared in a previous release notes document.
SLE 12 has moved to systemd, a new way of managing services. For more information, see the SUSE Linux Enterprise Admin Guide, Section The systemd Daemon (https://documentation.suse.com/sles/12-SP3/).
4.6 Storage #
4.6.1 Compatibility of Newly Created XFS File Systems With SLE 11 #
This entry has appeared in a previous release notes document.
XFS file systems created with the default settings of SLES 12 SP2 and later cannot be used with SLE 11 installations.
In SLE 12 SP2 and later, by default, XFS file systems are created with the option ftype=1 that changes the superblock format. Among other things, this helps accommodate Docker. However, this option is incompatible with SLE 11.
To create a SLE 11-compatible XFS file system, use the parameter ftype=0. For example, to format an empty device, run:
mkfs.xfs -m crc=0 -n ftype=0 [DEVICE]
4.6.2 Automatic Cleanup of Snapshots Created by Rollbacks #
In SLES 12 SP2 and before, you had to manually delete snapshots created by rollbacks at an appropriate time to avoid filling up the storage.
Starting with SLE 12 SP3, this process has been automated. During a rollback, Snapper sets the cleanup algorithm "number" for the snapshot corresponding to the previous default subvolume and for the backup snapshot of the previous default subvolume.
For more information, see http://snapper.io/2017/05/10/automatic-cleanup-after-rollback.html.
4.6.3 Establishing an NVMe-over-Fabrics Connection #
To be able to establish an NVMe-over-Fabrics connection with the Linux kernel provided with the SLE 12 SP3 media, you need to delete or rename the file /etc/nvme/hostid.
To restore this file when the kernel update that fixes this issue is released, generate a new host ID by running:
uuidgen > /etc/nvme/hostid
4.6.4 Root File System Conversion to Btrfs Not Supported #
This entry has appeared in a previous release notes document.
In-place conversion of an existing Ext2/Ext3/Ext4 or ReiserFS file system is supported for data mount points, provided it is not the root file system and the file system has at least 20 % free space available.
SUSE does not recommend or support in-place conversion of OS root file systems. In-place conversion to Btrfs of root file systems requires manual subvolume configuration and additional configuration changes that are not automatically applied for all use cases.
To ensure data integrity and the highest level of customer satisfaction, when upgrading, maintain existing root file systems. Alternatively, reinstall the entire operating system.
4.6.5 /var/cache on an Own Subvolume for Snapshots and Rollback #
This entry has appeared in a previous release notes document.
/var/cache contains very volatile data, like the Zypper cache with RPM packages in different versions for each update. As a result of storing data that is mostly redundant but highly volatile, the amount of disk space a snapshot occupies can increase very fast.
To solve this, move /var/cache to a separate subvolume. On fresh installations of SLE 12 SP2 or newer, this is done automatically. To convert an existing root file system, perform the following steps:
Find out the device name (/dev/sda2, /dev/sda3, etc.) of the root file system: df /
Identify the parent subvolume of all the other subvolumes. For SLE 12 installations, this is a subvolume named @. To check if you have a @ subvolume, use: btrfs subvolume list / | grep '@'. If the output of this command is empty, you do not have a subvolume named @. In that case, you may be able to proceed with subvolume ID 5, which was used in older versions of SLE.
Now mount the requisite subvolume. If you have a @ subvolume, mount it to a temporary mount point: mount <root_device> -o subvol=@ /mnt. If you don't have a @ subvolume, mount subvolume ID 5 instead: mount <root_device> -o subvolid=5 /mnt
/mnt/var/cache can already exist and could be the same directory as /var/cache. To avoid data loss, move it: mv /mnt/var/cache /mnt/var/cache.old
In either case, create a new subvolume: btrfs subvol create /mnt/var/cache
If there is now a directory /var/cache.old, move it to the new location: mv /var/cache.old/* /mnt/var/cache. If that is not the case, instead do: mv /var/cache/* /mnt/var/cache/
Optionally, remove /mnt/var/cache.old: rm -rf /mnt/var/cache.old
Unmount the subvolume from the temporary mount point: umount /mnt
Add an entry to /etc/fstab for the new /var/cache subvolume. Use an existing subvolume as a template to copy from. Make sure to leave the UUID untouched (this is the root file system's UUID) and change the subvolume name and its mount point consistently to /var/cache.
Mount the new subvolume as specified in /etc/fstab: mount /var/cache
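The steps above can be condensed into a script. This is a hedged sketch, not SUSE tooling: it assumes a @ subvolume exists, ROOT_DEV is a placeholder for your actual root device, and the run wrapper only prints the commands (dry run) instead of executing them, so you can review before acting:

```shell
#!/bin/sh
# Dry-run sketch of moving /var/cache to its own Btrfs subvolume.
# ROOT_DEV is a placeholder; determine the real device with `df /`.
ROOT_DEV=/dev/sda2

run() { echo "+ $*"; }   # print each command instead of executing it

run mount "$ROOT_DEV" -o subvol=@ /mnt        # mount the @ subvolume
run mv /mnt/var/cache /mnt/var/cache.old      # preserve existing data
run btrfs subvol create /mnt/var/cache        # create the new subvolume
run mv /mnt/var/cache.old/* /mnt/var/cache    # move data back
run rm -rf /mnt/var/cache.old                 # optional cleanup
run umount /mnt
# Finally, add a /var/cache entry to /etc/fstab by hand, then:
run mount /var/cache
```

Removing the run wrapper executes the commands for real; do that only after verifying each step against your own layout.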
4.6.6 Support for Arbitrary Btrfs Subvolume Structure in AutoYaST #
To set up a system with a non-default Btrfs subvolume structure with AutoYaST, you can now specify an arbitrary Btrfs subvolume structure in autoinst.xml.
4.6.7 Snapper: Cleanup Rules Based on Fill Level #
This entry has appeared in a previous release notes document.
Some programs do not respect the special disk space characteristics of a Btrfs file system containing snapshots. This can result in unexpected situations where no free space is left on a Btrfs file system.
Snapper can watch the disk space of snapshots that have automatic cleanup enabled and can try to keep the amount of disk space used below a threshold.
If snapshots are enabled, the feature is enabled for the root file system by default on new installations.
For existing installations, the system administrator must enable quota and set limits for the cleanup algorithm to use this new feature. This can be done using the following commands:
snapper setup-quota
snapper set-config NUMBER_LIMIT=2-10 NUMBER_LIMIT_IMPORTANT=4-10
For more information, see the man pages of snapper and snapper-configs.
4.7 Virtualization #
4.7.1 Supported Offline Migration Scenarios #
The following host operating system combinations will be fully supported (L3) for migrating guests from one host to another for SLES 12 SP3:
SLES 12 GA to SLES 12 SP3
SLES 12 SP1 to SLES 12 SP3
SLES 12 SP2 to SLES 12 SP3
4.7.2 SUSE Virtual Machine Driver Pack 2.5 #
SUSE Linux Enterprise Virtual Machine Driver Pack is a set of paravirtualized device drivers for Microsoft Windows operating systems. These drivers improve the performance of unmodified Windows guest operating systems that are run in virtual environments created using Xen or KVM hypervisors with SUSE Linux Enterprise Server 11 SP4 and SUSE Linux Enterprise Server 12 SP3. Paravirtualized device drivers are installed in virtual machine instances of operating systems and represent hardware and functionality similar to the underlying physical hardware used by the system virtualization software layer.
SLE now comes with SUSE Linux Enterprise Virtual Machine Driver Pack 2.5.
4.7.3 KVM #
4.7.3.1 KVM Now Supports up to 288 vCPUs #
KVM now supports up to 288 vCPUs in a virtual machine.
4.7.3.2 Support for AVIC (Advanced Virtual Interrupt Controller) #
In the past, LAPIC (Local Advanced Programmable Interrupt Controller) interrupts on AMD processors had to be virtualized in software, which did not yield optimal performance.
The version of KVM shipped with SLE 12 SP3 can use AVIC (Advanced Virtual Interrupt Controller), a hardware feature in recent AMD processors, to provide a virtualized LAPIC to the guest. This improves the virtualization performance.
AVIC is a set of components to present a virtualized LAPIC to guests, thus allowing most LAPIC accesses and interrupt delivery to the guests directly. The AVIC architecture also leverages the existing IOMMU interrupt redirection mechanism to deliver peripheral device interrupts to guests directly.
4.8 Miscellaneous #
4.8.1 Virtual Users Support in vsftpd #
Previously, this functionality was provided by the pam_userdb module that was part of the general pam package. This module has been removed and the functionality is now provided as part of the pam-extra package.
4.8.2 GNOME: Support for Chinese, Japanese, Korean Installed and Configured Automatically #
When first logging in to GNOME on SLES 12 SP3 with the Workstation Extension or SLED 12 SP3, gnome-initial-setup will ask Chinese, Japanese, and Korean users for their preferred input method.
Because gnome-initial-setup is set up to run directly after the first login, it is also set up to not run before the GDM interface starts. This behavior is configured in the GDM configuration file /etc/gdm/custom.conf with the line InitialSetupEnable=False. Do not change this setting, otherwise a system without a normal user will not be able to provide the expected GDM log-in window.
5 AMD64/Intel 64 (x86_64) Specific Information #
Information in this section pertains to the version of SUSE Linux Enterprise Server 12 SP3 for the AMD64/Intel 64 architectures.
5.1 System and Vendor Specific Information #
5.1.1 Intel* Omni-Path Architecture (OPA) Host Software #
Intel Omni-Path Architecture (OPA) host software is fully supported in SUSE Linux Enterprise Server 12 SP3.
Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment.
For instructions on installing Intel Omni-Path Architecture documentation, see https://www.intel.com/content/dam/support/us/en/documents/network-and-i-o/fabric-products/Intel_OP_Software_SLES_12_3_RN_J71758.pdf.
5.1.2 Support for Both TPM 1.2 and 2.0 #
Over recent years, TPM 2.0 variants have become more common. Because TPM 2.0 uses a different API, it requires new libraries, tools, and bootloader support.
SLE 12 SP2 and also SP3 provide equal support for TPM 1.2 and TPM 2.0 utilities and booting.
6 POWER (ppc64le) Specific Information #
Information in this section pertains to the version of SUSE Linux Enterprise Server 12 SP3 for the POWER architecture.
6.1 Support for ibmvnic Networking Driver #
The kernel device driver ibmvnic provides support for vNIC (virtual Network Interface Controller), a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management on IBM POWER systems. It is an efficient, high-performance technology.
When combined with SR-IOV NIC, it provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead resulting in lower latencies and fewer server resources (CPU, memory) required for network virtualization.
For a detailed support statement of ibmvnic in SLES, see https://www.suse.com/support/kb/doc/?id=7023703.
6.2 QEMU-virtualized PReP Partition #
On POWER, the PReP partition which contains the bootloader has no unique identifier other than the serial number of the disk on which it was created. When virtualizing with QEMU, no disk serial number is provided unless you explicitly specify one.
This means that when running under QEMU, the PReP partition of an installation does not have any unique identification. In consequence, the partition name can change when a disk is added or removed from the virtual machine or when the storage configuration otherwise changes. This can lead to system errors when reinstalling or updating the bootloader.
If you expect the storage configuration of a QEMU virtual machine on POWER to change over the lifetime of the installation, we recommend sidestepping this issue: Before the initial installation, assign a unique serial number to each disk in a QEMU virtual machine.
6.3 512 TB Virtual Address Space on POWER #
Certain workloads require a large virtual address space for a single process.
The virtual address space limit has been increased from 64 TB to 512 TB on the POWER architecture. To maintain compatibility with older software and hardware, processes are limited to 128 TB virtual address space unless they explicitly map memory above 128 TB.
This functionality is supported starting with kernel 4.4.103. We always recommend using the latest kernel update to get fixes for any issues found over the product lifetime.
6.4 kdump: Shorter Time to Filter and Save /proc/vmcore #
The updated makedumpfile tool shipped with SLES 12 SP3 supports multithreading. This can be leveraged to reduce the time spent in the capture kernel by following these steps:
Set KDUMP_CPUS=[CPUS] in the file /etc/sysconfig/kdump. Replace [CPUS] with the number of CPUs to use in the Kdump kernel.
Set MAKEDUMPFILE_OPTIONS="--num-threads [CPUS-1]". Using one CPU less than there are total active CPUs can improve performance.
Set KDUMPTOOL_FLAGS=NOSPLIT in the file /etc/sysconfig/kdump.
Restart kdump.service.
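Put together, the resulting /etc/sysconfig/kdump settings could look like the following sketch; the CPU count of 4 is an assumed example value, not a recommendation:

```shell
# /etc/sysconfig/kdump -- example values, adjust to your machine
KDUMP_CPUS=4
MAKEDUMPFILE_OPTIONS="--num-threads 3"   # one less than KDUMP_CPUS
KDUMPTOOL_FLAGS=NOSPLIT
```

Afterwards, apply the change with systemctl restart kdump.service.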
6.5 Parameter crashkernel Is Now Used for fadump Memory Reservation #
Starting with SLE 12 SP3, to reserve memory for fadump, use the crashkernel parameter instead of the deprecated parameter fadump_reserve_mem. The offset for fadump is calculated in the kernel. Therefore, if you provide an offset in the parameter crashkernel=, it will be ignored.
6.6 Encryption Improvements Using Hardware Optimizations #
The performance of kernel XTS mode on POWER platforms has been improved in SLES 12 SP3 by exploiting instruction set enhancements. On POWER8, it now runs up to 20 times faster than in SLES 12 SP2. Kernel CBC and CTR modes were already optimized in a previous release.
To ensure that your kernel is using the accelerated POWER kernel crypto implementations, verify that the module vmx_crypto has been loaded:
lsmod | grep vmx_crypto
6.7 Ceph Client Support on IBM Z and POWER #
On SLES 12 SP2 and SLES 12 SP3, IBM Z and POWER machines can now function as SUSE Enterprise Storage (Ceph) clients.
This support is possible because the kernels for IBM Z and POWER now have the relevant modules for CephFS and RBD enabled. The Ceph client RPMs for IBM Z and POWER are included in SLE 12 SP3. Additionally, the QEMU packages for IBM Z and POWER are now built against librbd.
6.8 Memory Reservation Support for fadump in YaST #
Memory to be reserved for firmware-assisted dumps (also known as fadump, available on the POWER architecture) can now be specified in the Kdump module of YaST.
6.9 Speed of ibmveth Interface Not Reported Accurately #
The ibmveth interface is a paravirtualized interface. When communicating between LPARs within the same system, the interface's speed is limited only by the system's CPU and memory bandwidth. When the virtual Ethernet is bridged to a physical network, the interface's speed is limited by the speed of that physical network.
Unfortunately, the ibmveth driver has no way of determining automatically whether it is bridged to a physical network and what the speed of that link is. ibmveth therefore reports its speed as a fixed value of 1 Gb/s, which in many cases will be inaccurate.
To determine the actual speed of the interface, use a benchmark.
7 IBM Z (s390x) Specific Information #
Information in this section pertains to the version of SUSE Linux Enterprise Server 12 SP3 for the IBM Z architecture. For more information, see https://www.ibm.com/developerworks/linux/linux390/documentation_novell_suse.html
IBM zEnterprise 196 (z196) and IBM zEnterprise 114 (z114) are further on referred to as z196 and z114.
7.1 Hardware #
7.1.1 Support for New Hardware Instructions in Toolchain #
Support for new hardware instructions in binutils, GCC, and GDB is available through the Toolchain Module.
7.2 Virtualization #
7.2.1 qeth Device Driver Has Accelerated set_rx_mode Implementation #
Improved initialization of qeth network devices in layer 2 and layer 3 allows for faster booting of Linux instances.
7.3 Storage #
7.3.1 parted Augmented with Partitioning Functionality as Provided by IBM Z Tools #
The partitioning utility parted now includes partitioning functionality for FBA and ECKD DASDs. This brings parted up to par with the functionality provided by IBM Z tools.
7.3.2 DASD Channel Path-Aware Error Recovery #
The DASD driver can now exclude paths from normal operation if other channel paths are available.
7.3.3 New dasdfmt Quick Format Mode #
With the new quick format mode, you can define DASD volumes with a pre-formatted track layout. This significantly reduces the deployment time of DASD volumes.
7.3.4 Ceph Client Support on IBM Z and POWER #
On SLES 12 SP2 and SLES 12 SP3, IBM Z and POWER machines can now function as SUSE Enterprise Storage (Ceph) clients.
This support is possible because the kernels for IBM Z and POWER now have the relevant modules for CephFS and RBD enabled. The Ceph client RPMs for IBM Z and POWER are included in SLE 12 SP3. Additionally, the QEMU packages for IBM Z and POWER are now built against librbd.
7.3.5 GPFS Partition Type in fdasd #
The new partition type "GPFS" in the fdasd tool supports fast identification and handles partitions that contain GPFS Network Shared Disks.
7.3.6 LUN Scanning Enabled by Default #
This entry has appeared in a previous release notes document.
Unlike in SLES 11, LUN scanning is enabled by default in SLES 12 and newer. Instead of having a user-maintained whitelist of FibreChannel/SCSI disks that are brought online to the guest, the system now polls all targets on a fabric. This is especially helpful on systems with hundreds of zFCP disks and exclusive zoning.
However, on systems with few disks and an open fabric, this can lead to long boot times or access to inappropriate disks. It can also lead to difficulties offlining and removing disks.
To disable LUN scanning, set the boot parameter zfcp.allow_lun_scan=0.
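On IBM Z, kernel parameters are typically set in the bootloader configuration. A sketch of disabling LUN scanning in /etc/zipl.conf could look like the following (the section name and the other parameter values are placeholders; keep your existing parameters and only append the zfcp option), followed by running zipl to rewrite the boot record:

```shell
# /etc/zipl.conf excerpt -- sketch with placeholder values
[SLES12]
    image = /boot/image
    ramdisk = /boot/initrd
    parameters = "root=/dev/dasda1 zfcp.allow_lun_scan=0"
```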
For LUN scanning to work properly, the minimum storage firmware levels are:
- DS8000 Code Bundle Level 64.0.175.0
- DS6000 Code Bundle Level 6.2.2.108
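As a sketch, the boot parameter can be appended to the default kernel command line in the GRUB 2 defaults file. The file name and variable below are the usual GRUB 2 locations; the example works on a temporary copy so it can be tried safely:

```shell
# Sketch: add zfcp.allow_lun_scan=0 to the kernel command line.
# A temporary copy stands in for /etc/default/grub here.
conf=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$conf"

# Append the parameter inside the quoted default command line.
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT=".*\)"/\1 zfcp.allow_lun_scan=0"/' "$conf"
result=$(grep '^GRUB_CMDLINE_LINUX_DEFAULT' "$conf")
echo "$result"
rm -f "$conf"

# On the real file, regenerate the bootloader configuration afterwards:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```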
7.4 Network #
7.4.1 snIPL: Hardening #
Secure connections for snIPL enable Linux to remotely handle a greater variety of environments.
7.5 Security #
7.5.1 libica with DRBG Random Number Generation #
The package libica now includes a DRBG (Deterministic Random Bit Generator) that is compliant with the updated security specification NIST SP 800-90A for pseudo-random number generation.
7.5.2 Toleration Support for New Cryptography Hardware #
SLES 12 SP3 includes support for using new cryptography hardware in toleration mode. This allows performing cryptographic operations as on older hardware, which eases migration to new hardware.
7.6 Reliability, Availability, Serviceability (RAS) #
7.6.1 Stable PCI Identifiers Using UIDs #
To maintain persistent configurations for PCI devices, SLES 12 SP3 now provides stable and unique identifiers for PCI functions for as long as the I/O configuration (IOCDS and HCD) remains stable.
7.6.2 Hardware Breakpoint Support in GDB #
When code needs to be treated as read-only, software breakpoints cannot be used. GDB can now use hardware breakpoints for debugging.
7.7 Performance #
7.7.1 Support for 2 GB Memory Pages #
Applications with huge memory sets can use 2 GB large memory pages for improved memory handling.
7.7.2 Extended CPU Topology to Support Drawers #
Addressing CPUs across drawers improves scheduling and performance analysis on IBM z Systems z13 and later hardware.
8 ARM 64-Bit (AArch64) Specific Information #
Information in this section pertains to the version of SUSE Linux Enterprise Server 12 SP3 for the AArch64 architecture.
8.1 Boot and Driver Enablement for Raspberry Pi 3 Model B #
The Raspberry Pi 3 Model B is a single-board computer based on the Broadcom BCM2837 chipset.
SUSE provides a preconfigured image, SUSE Linux Enterprise Server for ARM 12 SP3 for the Raspberry Pi, and a Kiwi template to derive custom appliances, kiwi-templates-SLES12-RPi. The following sections describe requirements and support limitations of custom and modified images.
Boot Requirements#
To boot SUSE Linux Enterprise Server 12 SP3 on the Raspberry Pi 3 Model B, a special MBR partitioning scheme must be used. Firmware files for the VideoCore IV processor must be installed into a FAT-formatted partition (type 0x0C) that should be mounted as /boot/efi (alternatively /boot/vc if separate from /boot/efi). To provide a UEFI-compatible environment for booting, as on regular server systems, configure the Raspberry Pi firmware to load the U-Boot bootloader. U-Boot in turn can then boot GRUB either from a FAT-/Ext4-formatted SD card or USB device (Btrfs is not supported by U-Boot) or via network.
Driver Enablement#
Not all connectors or functions of the Raspberry Pi are supported in this version:
VideoCore IV GPU 3D acceleration needs to remain disabled. The SUSE image provides a configuration file to that effect.
Built-in audio is not supported, neither via HDMI nor via audio jack.
The CSI camera connector (for example, the Raspberry Pi Camera Module) is not supported.
The DSI display connector (for example, the Raspberry Pi Touch Display) is not supported.
Expansion Boards#
The Raspberry Pi 3 Model B offers a 40-pin General Purpose I/O connector, with multiple software-configurable functions such as UART, I²C and SPI. This pin mux configuration along with any external devices attached to the pins is defined in the Device Tree which is passed by the bootloader to the kernel.
In the SUSE image, HATs and other expansion boards attached to the GPIO connector are not enabled by default and SUSE does not provide support for their use. However, insofar as drivers for pin functions and for attached chipsets are included in SUSE Linux Enterprise, they can be used. SUSE does not provide support for making changes to the Device Tree but successful changes will not affect the support status of the operating system itself. Be aware that errors in the Device Tree can stop the system from booting successfully or can even damage the hardware.
In SUSE Linux Enterprise Server for ARM 12 SP3 for the Raspberry Pi, the Device Tree is provided by the U-Boot bootloader (not by the Raspberry Pi firmware), and Device Tree Overlays are not supported in this version of U-Boot.
The recommended way to override the Device Tree in SUSE Linux Enterprise Server for ARM 12 SP3 for the Raspberry Pi is to place a customized bcm2837-rpi-3-b.dtb file into one of the directories U-Boot searches on the second partition of the boot medium (U-Boot environment variable efi_dtb_prefixes). For example, /boot/dtb/bcm2837-rpi-3-b.dtb from within the system. This requires the second partition to be readable by U-Boot, hence in the SUSE Linux Enterprise Server for ARM 12 SP3 for the Raspberry Pi image that partition does not use Btrfs. The source package u-boot-rpi3 includes the corresponding Device Tree sources.
For convenience, you can also access the current Flat Device Tree binary as /sys/firmware/fdt and use the Device Tree Compiler tool (dtc, which is not part of SUSE Linux Enterprise Server for ARM 12 SP3) to convert the Device Tree binary to source. To generate a suitable bcm2837-rpi-3-b.dtb file:
- Modify the obtained sources as needed, adding or changing Device Tree nodes according to the Device Tree Bindings in the kernel documentation. Do not forget about pinctrl settings!
- Compile them into the Flat Device Tree binary format.
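The decompile-edit-recompile cycle can be sketched as follows. dtc is not part of SUSE Linux Enterprise Server for ARM 12 SP3, so the commands are guarded and only run if the tool has been installed from another source; the file names match the recommendation above:

```shell
# Sketch: convert the live Flat Device Tree to source, edit it, and
# compile it back into a binary that U-Boot can load. Only runs if the
# dtc tool is installed (it is not shipped with SLES 12 SP3 for ARM).
if command -v dtc >/dev/null 2>&1; then
    # Dump the live Flat Device Tree to source form.
    dtc -I dtb -O dts -o rpi.dts /sys/firmware/fdt
    # ... edit rpi.dts here: add or change nodes, including pinctrl ...
    # Compile the edited source back into the binary format.
    dtc -I dts -O dtb -o bcm2837-rpi-3-b.dtb rpi.dts
fi
```

The resulting file would then be copied to /boot/dtb/bcm2837-rpi-3-b.dtb as described above.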
Note: Procedural Changes
This recommendation is expected to change for future versions.
For More Information#
For more information on how to get started, see the SUSE Best Practices documentation for the Raspberry Pi at https://documentation.suse.com/sbp/all/html/SLES12SP3-rpiquick/index.html.
8.2 Raspberry Pi 3 Shows Blurry HDMI Output on Some Monitors #
On some HDMI monitors, the Raspberry Pi will show blurry output on the screen. You may also see a thin purple line at the edge of the screen.
To work around this issue, perform the following steps:
- Blacklist the vc4 kernel module: Add the line blacklist vc4 to /etc/modprobe.d/50-blacklist.conf.
- Delete the kernel mode setting configuration: rm /etc/X11/xorg.conf.d/20-kms.conf
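The two steps above can be sketched as shell commands. The sketch runs against a scratch directory so it can be tried without touching the real system; drop the $root prefix (and run as root) to apply it for real:

```shell
# Sketch of the vc4 workaround against a scratch directory.
root=$(mktemp -d)
mkdir -p "$root/etc/modprobe.d" "$root/etc/X11/xorg.conf.d"
touch "$root/etc/X11/xorg.conf.d/20-kms.conf"

# Step 1: blacklist the vc4 kernel module.
echo 'blacklist vc4' >> "$root/etc/modprobe.d/50-blacklist.conf"

# Step 2: delete the kernel mode setting configuration.
rm -f "$root/etc/X11/xorg.conf.d/20-kms.conf"
```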
8.3 AppliedMicro X-C1 Server Development Platform (Mustang) Firmware Requirements #
In between SUSE Linux Enterprise Server 12 SP2 and SP3, some
AppliedMicro X-Gene drivers and the corresponding Device Tree bindings
were changed in an incompatible way. X-C1 devices that successfully boot
SUSE Linux Enterprise Server 12 SP2 may be unable to install SP3 without
changes. Symptoms include a crash in the
mdio-xgene
network driver.
The updated X-Gene drivers in SP3 require the Device Tree provided by the vendor's firmware version 3.06.25 or later. To install SLES 12 SP3, first ensure that the AppliedMicro TianoCore bootloader firmware is updated according to the instructions provided by the vendor. For any questions about obtaining and upgrading this firmware, contact the hardware vendor.
After updating the firmware, it may no longer be possible to run SLES 12 SP2 unless the firmware is downgraded again.
8.4 New System-on-Chip Driver Enablement #
Drivers for the following additional System-on-Chip platforms have been enabled in the SP3 kernel:
AppliedMicro X-Gene 3
Cavium ThunderX2 CN99xx
HiSilicon Hi1616
Marvell Armada 7K/8K
Qualcomm Centriq 2400 series
Rockchip RK3399
8.5 Support for OpenDataPlane on Cavium ThunderX and Octeon TX Platforms #
This release supports OpenDataPlane (ODP) API version 1.11.0.0, also known as Monarch LTS.
Platform Compatibility#
This release is compatible with generic AArch64 platforms and the following Cavium platforms:
ThunderX (CN88XX)
Octeon TX (CN81XX, CN83XX)
System Requirements#
Your system needs to meet certain requirements before ODP can be used. The general requirements are as follows:
- CUnit shared library in the root file system (needed for running unit tests and a proper configure run)
- vfio, thunder-nicpf, and BGX modules loaded into the kernel (typically, no driver needs to be loaded, since all mentioned modules are compiled into the kernel image)
- Hugetlbfs must be mounted with a considerable number of pages added to it (a minimum of 256 MB of memory). ODP ThunderX uses huge pages for maximum performance by eliminating TLB misses. For some hardware-related cases, physically contiguous memory is needed. Therefore, the ODP ThunderX memory allocator tries to allocate contiguous memory areas. If the ODP application has startup problems, we recommend increasing the huge page pool by adding more pages than required.
- NIC VFs need to be bound to the VFIO framework
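The huge page requirement above can be inspected as sketched below. The commented commands show how the pool would be grown (as root); the page count is purely illustrative (256 pages of 2 MiB each = 512 MiB):

```shell
# Sketch: inspect the current huge page pool on any Linux system.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo

# To grow the pool and mount hugetlbfs (as root), one would run
# something like the following; values are examples only:
#   echo 256 > /proc/sys/vm/nr_hugepages
#   mount -t hugetlbfs nodev /dev/hugepages

# Count how many HugePages_Total lines /proc/meminfo reports (one).
hp=$(grep -c 'HugePages_Total' /proc/meminfo)
```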
8.6 KVM on AArch64 #
This entry has appeared in a previous release notes document.
KVM virtualization has been enabled and is supported on some system-on-chip platforms for mutually agreed-upon partner-specific use cases. It is only supported on partner certified hardware and firmware. Not all QEMU options and backends are available on AArch64. The same statement is applicable for other virtualization tools shipped on AArch64.
8.7 Toolchain Module Enabled in Default Installation #
This entry has appeared in a previous release notes document.
The system compiler (gcc4.8) is not supported on the AArch64 architecture. To work around this issue, you previously had to enable the Toolchain module manually and use the GCC version from that module.
On AArch64, the Toolchain Module is now automatically pre-selected after registering SLES during installation. This makes the latest SLE compilers available on all installations. You now only need to make sure to also use that compiler.
Important: When Using AutoYaST, Make Sure to Enable Toolchain Module
Be aware that when using AutoYaST to install, you have to explicitly add the Toolchain module into the XML installation profile.
9 Packages and Functionality Changes #
This section comprises changes to packages, such as additions, updates, removals and changes to the package layout of software. It also contains information about modules available for SUSE Linux Enterprise Server. For information about changes to package management tools, such as Zypper or RPM, see Section 4.5, “Systems Management”.
9.1 New Packages #
9.1.1 Icinga Monitoring Server Shipped as Part of SUSE Manager #
This entry has appeared in a previous release notes document.
Fully supported packages of the Icinga monitoring server for SUSE Linux Enterprise Server 12 are available with a SUSE Manager subscription. Icinga is compatible with Nagios, the monitoring server that was previously included.
For more information about Icinga, see the SUSE Manager documentation at https://www.suse.com/documentation/suse-manager-3/singlehtml/book_suma_advanced_topics_31/book_suma_advanced_topics_31.html#advanced.topics.monitoring.with.icinga.
9.2 Updated Packages #
9.2.1 LibreOffice Has Been Updated to Version 6.4 #
LibreOffice has been updated to the new major version 6.4. For information about major changes, see the LibreOffice 6.4 release notes at https://wiki.documentfoundation.org/ReleaseNotes/6.4.
9.2.2 PostgreSQL Has Been Upgraded to Version 10 #
SLES 12 SP4 and SLES 15 ship with PostgreSQL 10 by default. To enable an upgrade path for customers, SLE 12 SP3 now includes PostgreSQL 10 in addition to PostgreSQL 9.6 (the version that was originally shipped).
To upgrade a PostgreSQL server installation from an older version, the database files need to be converted to the new version.
Important: PostgreSQL Upgrade Needs to Be Performed Before Upgrade to New SLES Version
Neither SLES 12 SP4 nor SLES 15 include PostgreSQL 9.6. However, availability of PostgreSQL 9.6 is a requirement for performing the database upgrade to the PostgreSQL 10 format. Therefore, you must upgrade the database to the PostgreSQL 10 format before upgrading to the desired new SLES version.
Major New Features#
The following major new features are included in PostgreSQL 10:
Logical replication: a publish/subscribe framework for distributing data
Declarative table partitioning: convenience in dividing your data
Improved query parallelism: speed up analyses
Quorum commit for synchronous replication: distribute data with confidence
SCRAM-SHA-256 authentication: more secure data access
PostgreSQL 10 also brings an important change to the versioning scheme that is used for PostgreSQL: It now follows the format major.minor. This means that minor releases of PostgreSQL 10 are for example 10.1, 10.2, ... and the next major release will be 11. Previously, both the parts of the version number were significant for the major version. For example, PostgreSQL 9.3 and PostgreSQL 9.4 were different major versions.
For the full PostgreSQL 10 release notes, see https://www.postgresql.org/docs/10/release-10.html (https://www.postgresql.org/docs/10/release-10.html).
Upgrading#
Before starting the migration, make sure the following preconditions are fulfilled:
- The packages of your current PostgreSQL version must have been upgraded to their latest maintenance update.
- The packages of the new PostgreSQL major version need to be installed. For SLE 12, this means installing postgresql10-server and all the packages it depends on. Because pg_upgrade is contained in the package postgresql10-contrib, this package must be installed as well, at least until the migration is done.
- Unless pg_upgrade is used in link mode, the server must have enough free disk space to temporarily hold a copy of the database files. If the database instance was installed in the default location, the needed space can be determined by running the following command as root: du -hs /var/lib/pgsql/data. If there is little disk space available, run the VACUUM FULL SQL command on each database in the PostgreSQL instance that you want to migrate. This command can take very long.
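Once the preconditions are met, a migration run might look like the following sketch. The bindir paths follow the versioned SLE package layout; the new data directory name is an example, and both database servers must be stopped before running the command as the postgres user:

```shell
# Hypothetical pg_upgrade invocation; directory names are examples.
pg_upgrade \
    --old-bindir=/usr/lib/postgresql96/bin \
    --new-bindir=/usr/lib/postgresql10/bin \
    --old-datadir=/var/lib/pgsql/data \
    --new-datadir=/var/lib/pgsql/data10

# Add --link to avoid copying the data files (no extra disk space is
# needed, but the old cluster becomes unusable once the new one starts).
```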
Upstream documentation about pg_upgrade, including step-by-step instructions for performing a database migration, can be found locally at file:///usr/share/doc/packages/postgresql10/html/pgupgrade.html (if the postgresql10-docs package is installed) or online at https://www.postgresql.org/docs/10/pgupgrade.html. The online documentation explains how you can install PostgreSQL from the upstream sources (which is not necessary on SLE) and also uses other directory names (/usr/local instead of the update-alternatives based path as described above).
9.2.3 GnuTLS Has Been Updated to Version 3.3 #
Some programs require GnuTLS version 3.3 or newer to work.
The upgrade from GnuTLS 3.2 to GnuTLS 3.3 does not change the major version of libgnutls28, so existing programs will continue to work.
The library libgnutls-xssl.so was not used by other programs and has been removed.
9.2.4 Postfix Has Been Updated to Version 3.2.0 #
Postfix version 2.x is going out of support in the near future.
In SUSE Linux Enterprise 12 SP3, we have upgraded Postfix to version 3.2.0 (from Postfix 2.11.8 in SUSE Linux Enterprise 12 SP2). For information about major changes in the new version of Postfix, see the release announcements on the Postfix Web site.
9.2.5 Open vSwitch Has Been Updated to Version 2.7.0 #
Open vSwitch has been updated to 2.7.0.
Important changes include:
Various OpenFlow bug fixes
Improved support for OpenFlow
Support for new OpenFlow extensions
Performance improvements
Support for IPsec tunnels has been removed
Changes relating to DPDK:
Support for DPDK 16.11
Support for jumbo frames
Support for rx checksum offload
Support for port hotplugging
For more detailed information about changes between version 2.6.0 and 2.7.0, see https://github.com/openvswitch/ovs/blob/master/NEWS (https://github.com/openvswitch/ovs/blob/master/NEWS).
9.2.6 Upgrading PostgreSQL Installations from 9.1 to 9.4 #
This entry has appeared in a previous release notes document.
To upgrade a PostgreSQL server installation from version 9.1 to 9.4, the database files need to be converted to the new version.
Note: System Upgrade from SLE 11
On SLE 12, there are no PostgreSQL 8.4 or 9.1 packages. This means you must first migrate PostgreSQL from 8.4 or 9.1 to 9.4 on SLE 11 before upgrading the system from SLE 11 to SLE 12.
Newer versions of PostgreSQL come with the pg_upgrade tool that simplifies and speeds up the migration of a PostgreSQL installation to a new version. Formerly, it was necessary to dump and restore the database files, which was much slower.
To work, pg_upgrade needs to have the server binaries of both versions available. To allow this, we had to change the way PostgreSQL is packaged as well as the naming of the packages, so that two or more versions of PostgreSQL can be installed in parallel.
Starting with version 9.1, PostgreSQL package names on SUSE Linux Enterprise products contain numbers indicating the major version. In PostgreSQL terms, the major version consists of the first two components of the version number, for example, 9.1, 9.3, and 9.4. So, the packages for PostgreSQL 9.3 are named postgresql93, postgresql93-server, etc. Inside the packages, the files were moved from their standard location to a versioned location such as /usr/lib/postgresql93/bin or /usr/lib/postgresql94/bin. This avoids file conflicts if multiple packages are installed in parallel. The update-alternatives mechanism creates and maintains symbolic links that cause one version (by default the highest installed version) to re-appear in the standard locations. By default, database data is stored under /var/lib/pgsql/data on SUSE Linux Enterprise.
The following preconditions have to be fulfilled before data migration can be started:
- If not already done, the packages of the old PostgreSQL version (9.3) must be upgraded to the latest release through a maintenance update.
- The packages of the new PostgreSQL major version need to be installed. For SLE 12, this means installing postgresql94-server and all the packages it depends on. Because pg_upgrade is contained in the package postgresql94-contrib, this package must be installed as well, at least until the migration is done.
- Unless pg_upgrade is used in link mode, the server must have enough free disk space to temporarily hold a copy of the database files. If the database instance was installed in the default location, the needed space can be determined by running the following command as root: du -hs /var/lib/pgsql/data. If space is tight, it might help to run the VACUUM FULL SQL command on each database in the PostgreSQL instance to be migrated; this might take very long.
Upstream documentation about pg_upgrade, including step-by-step instructions for performing a database migration, can be found locally at file:///usr/share/doc/packages/postgresql94/html/pgupgrade.html (if the postgresql94-docs package is installed) or online at http://www.postgresql.org/docs/9.4/static/pgupgrade.html. The online documentation explains how you can install PostgreSQL from the upstream sources (which is not necessary on SLE) and also uses other directory names (/usr/local instead of the update-alternatives based path as described above).
For background information about the inner workings of pg_upgrade and a performance comparison with the old dump and restore method, see http://momjian.us/main/writings/pgsql/pg_upgrade.pdf.
9.2.7 MariaDB Replaces MySQL #
This entry has appeared in a previous release notes document.
MariaDB is a backward-compatible replacement for MySQL.
If you update from SLE 11 to SLE 12 or later, it is advisable to do a manual backup before the system update. This can help if a start of the database has issues with the storage engine's on-disk layout.
After the update to SLE 12 or later, a manual step is required to actually get the database running (this way you quickly see if something goes wrong):
touch /var/lib/mysql/.force_upgrade
rcmysql start
# => redirecting to systemctl start mysql.service
rcmysql status
# => Checking for service MySQL:
# => ...
9.3 Removed and Deprecated Functionality #
9.3.1 libcgroup1 Removed From SLE 12 SP4 and Later #
Most functionality of libcgroup1 is also provided by systemd. In fact, the cgroup handling of libcgroup1 can conflict with that of systemd.
Starting with SLE 12 SP4, libcgroup1 has been removed. Migrate to the equivalent functionality in systemd.
For more information, see https://www.suse.com/support/kb/doc/?id=7018741.
9.3.2 Docker Compose Has Been Removed from the Containers Module #
Docker Compose is not supported as a part of SUSE Linux Enterprise Server 12. While it was temporarily included as a Technology Preview, testing showed that the technology was not ready for enterprise use.
SUSE's focus is on Kubernetes which provides better value in terms of features, extensibility, stability and performance.
9.3.3 Nagios Monitoring Server Has Been Removed #
This entry has appeared in a previous release notes document.
The Nagios monitoring server has been removed from SLES 12.
When upgrading to SLES 12 or later, installed Nagios configuration may be removed. Therefore, we recommend creating backups of the Nagios configuration before the upgrade.
9.3.4 Packages and Features to Be Removed in the Future #
9.3.4.1 Use /etc/os-release Instead of /etc/SuSE-release #
This entry has appeared in a previous release notes document.
Starting with SLE 12, the /etc/SuSE-release file has been deprecated. Do not use it to identify a SUSE Linux Enterprise system anymore. This file will be removed in a future Service Pack or release.
To determine the release, use the file /etc/os-release instead. This file is a cross-distribution standard to identify Linux systems. For more information about the syntax, see the os-release man page (man os-release).
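Because /etc/os-release consists of shell-style KEY="value" assignments, it can simply be sourced from a script. The sketch below works on a sample file so it runs anywhere; point at /etc/os-release on a real system:

```shell
# Sketch: parse the cross-distribution os-release format by sourcing it.
# A sample file is used here instead of the real /etc/os-release.
cat > os-release.sample <<'EOF'
NAME="SLES"
VERSION="12-SP3"
VERSION_ID="12.3"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
ID="sles"
EOF

. ./os-release.sample          # the file is valid shell assignments
echo "$PRETTY_NAME"            # -> SUSE Linux Enterprise Server 12 SP3
rm -f os-release.sample
```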
9.4 Changes in Packaging and Delivery #
9.4.1 OFED-related Packages Replaced by Packages From New Upstream #
In SLE 12 SP2 and earlier, the OFED (OpenFabric Enterprise Distribution) stack came directly from OFED.
Since the release of SLES 12 SP2, most of this stack has been upstreamed to the Linux RDMA project. This has resulted in an influx of contributions to the project and much improved source.
With SLE 12 SP3, we have updated the OFED stack to the version from the new upstream. This has brought the following package changes:
- The package rdma is now called rdma-core.
- All -rdmav2 libraries (providers for specific RDMA hardware) are integrated into the libibverbs package.
- libibverbs itself is in the libibverbs1 package.
- mlx4 and mlx5 are still shipped as separate packages, under the names libmlx4-1 and libmlx5-1, as they can be used standalone.
- libibcm-devel, libibumad-devel, librdmacm-devel, and libibverbs-devel are all provided by the rdma-core-devel package.
- The static libraries are not provided anymore.
9.4.2 Support for Intel OPA Fabrics Moved to mvapich2-psm2 Package #
This entry has appeared in a previous release notes document.
The version of the package mvapich2-psm originally shipped with SLES 12 SP2 and SLES 12 SP3 exclusively supported Intel Omni-Path Architecture (OPA) fabrics. In SLES 12 SP1 and earlier, this package supported the use of Intel True Scale fabrics instead.
This issue is fixed by a maintenance update providing an additional package named mvapich2-psm2 which only supports Intel OPA, whereas the original package mvapich2-psm only supports Intel True Scale fabrics again.
If you are currently using mvapich2-psm together with Intel OPA fabrics, make sure to switch to the new package mvapich2-psm2 after this maintenance update.
9.4.3 Kernel Firmware Only Shipped as Part of the kernel-firmware Package #
In past releases, the kernel-default package used to contain firmware for in-kernel drivers.
Starting with SLES 12 SP3, such firmware is now delivered as part of the package kernel-firmware.
9.5 Modules #
This section contains information about important changes to modules. For more information about available modules, see Section 2.10.1, “Available Modules”.
9.5.1 Support for New Hardware Instructions in Toolchain #
Support for new hardware instructions in binutils, GCC, and GDB is available through the Toolchain Module.
9.5.2 libgcrypt11 Available from the Legacy Module #
The Legacy module now provides a package for libgcrypt11. This enables running applications built on SLES 11 against libgcrypt11 on SLES 12.
10 Technical Information #
This section contains information about system limits, a number of technical changes and enhancements for the experienced user.
When talking about CPUs, we use the following terminology:
- CPU Socket
The visible physical entity, as it is typically mounted to a motherboard or an equivalent.
- CPU Core
The (usually not visible) physical entity as reported by the CPU vendor.
On IBM Z, this is equivalent to an IFL.
- Logical CPU
This is what the Linux Kernel recognizes as a "CPU".
We avoid the word "thread" (which is sometimes used), as the word "thread" would also become ambiguous subsequently.
- Virtual CPU
A logical CPU as seen from within a Virtual Machine.
10.1 Kernel Limits #
This table summarizes the various limits which exist in our recent kernels and utilities (if related) for SUSE Linux Enterprise Server 12 SP3.
SLES 12 SP3 (Linux 4.4) | AMD64/Intel 64 (x86_64) | IBM Z (s390x) | POWER (ppc64le) | AArch64 (ARMv8) |
---|---|---|---|---|
CPU bits |
64 |
64 |
64 |
64 |
Maximum number of logical CPUs |
8192 |
256 |
2048 |
128 |
Maximum amount of RAM (theoretical/certified) |
> 1 PiB/64 TiB |
10 TiB/256 GiB |
1 PiB/64 TiB |
256 TiB/n.a. |
Maximum amount of user space/kernel space |
128 TiB/128 TiB |
n.a. |
512 TiB 1 / 2 EiB |
256 TiB/128 TiB |
Maximum amount of swap space |
Up to 29 * 64 GB (x86_64) or 30 * 64 GB (other architectures) | |||
Maximum number of processes |
1048576 | |||
Maximum number of threads per process |
Upper limit depends on memory and other parameters (tested with more than 120,000)2 | |||
Maximum size per block device |
Up to 8 EiB | |||
FD_SETSIZE |
1024 |
1 By default, the userspace memory limit on the POWER architecture is 128 TiB. However, you can explicitly request mmaps up to 512 TiB.
2 The total number of all processes and all threads on a system may not be higher than the “maximum number of processes”.
10.2 KVM Limits #
SLES 12 SP3 Virtual Machine (VM) | Limits |
---|---|
Maximum VMs per host |
Unlimited (total number of virtual CPUs in all guests being no greater than 8 times the number of CPU cores in the host) |
Maximum Virtual CPUs per VM |
288 |
Maximum Memory per VM |
4 TiB |
Virtual Host Server (VHS) limits are identical to those of SUSE Linux Enterprise Server.
10.3 Xen Limits #
Since SUSE Linux Enterprise Server 11 SP2, we removed the 32-bit hypervisor as a virtualization host. 32-bit virtual guests are not affected and are fully supported with the provided 64-bit hypervisor.
SLES 12 SP3 Virtual Machine (VM) | Limits |
---|---|
Maximum number of virtual CPUs per VM |
64 |
Maximum amount of memory per VM |
16 GiB x86_32, 511 GiB x86_64 |
SLES 12 SP3 Virtual Host Server (VHS) | Limits |
---|---|
Maximum number of physical CPUs |
256 |
Maximum number of virtual CPUs |
256 |
Maximum amount of physical memory |
5 TiB |
Maximum amount of Dom0 physical memory |
500 GiB |
Maximum number of block devices |
12,000 SCSI logical units |
PV: Paravirtualization
FV: Full virtualization
For more information about acronyms, see the virtualization documentation provided at https://documentation.suse.com/sles/12-SP3/.
10.4 File Systems #
10.4.1 Unsupported Ext4 Features #
The following Ext4 features are experimental and unsupported:
bigalloc
metadata checksumming
10.4.2 Comparison of Supported File Systems #
SUSE Linux Enterprise was the first enterprise Linux distribution to support journaling file systems and logical volume managers back in 2000. Later, we introduced XFS to Linux, which today is seen as the primary work horse for large-scale file systems, systems with heavy load and multiple parallel reading and writing operations. With SUSE Linux Enterprise 12, we went the next step of innovation and started using the copy-on-write file system Btrfs as the default for the operating system, to support system snapshots and rollback.
+ supported |
– unsupported |
Feature | Btrfs | XFS | Ext4 | OCFS 2 1 | ReiserFS 2 |
---|---|---|---|---|---|
Support in products |
SLE |
SLE |
SLE |
SLE HA |
SLE |
Data/metadata journaling |
N/A 3 |
– / + |
+ / + |
– / + |
– / + |
Journal internal/external |
N/A 3 |
+ / + |
+ / + |
+ / – |
+ / + |
Journal checksumming |
N/A 3 |
+ |
+ |
+ |
– |
Subvolumes |
+ |
– |
– |
– |
– |
Offline extend/shrink |
+ / + |
– / – |
+ / + |
+ / – 4 |
+ / – |
Online extend/shrink |
+ / + |
+ / – |
+ / – |
– / – |
+ / – |
Inode allocation map |
B-tree |
B+-tree |
table |
B-tree |
u. B*-tree |
Sparse files |
+ |
+ |
+ |
+ |
+ |
Tail packing |
– |
– |
– |
– |
+ |
Small files stored inline |
+ (in metadata) |
– |
+ (in inode) |
+ (in inode) |
+ (in metadata) |
Defragmentation |
+ |
+ |
+ |
– |
– |
Extended file attributes/ACLs |
+ / + |
+ / + |
+ / + |
+ / + |
+ / + |
User/group quotas |
– / – |
+ / + |
+ / + |
+ / + |
+ / + |
Project quotas |
– |
+ |
+ |
– |
– |
Subvolume quotas |
+ |
N/A |
N/A |
N/A |
N/A |
Data dump/restore |
– |
+ |
– |
– |
– |
Block size default |
4 KiB 5 | ||||
Maximum file system size |
16 EiB |
8 EiB |
1 EiB |
4 PiB |
16 TiB |
Maximum file size |
16 EiB |
8 EiB |
1 EiB |
4 PiB |
1 EiB |
1 OCFS 2 is fully supported as part of the SUSE Linux Enterprise High Availability Extension.
2 ReiserFS is supported for existing file systems. The creation of new ReiserFS file systems is discouraged.
3 Btrfs is a copy-on-write file system. Instead of journaling changes before writing them in-place, it writes them to a new location and then links the new location in. Until the last write, the changes are not “committed”. Because of the nature of the file system, quotas are implemented based on subvolumes (qgroups).
4 To extend an OCFS 2 file system, the cluster must be online but the file system itself must be unmounted.
5 The block size default varies with different host architectures. 64 KiB is used on POWER, 4 KiB on other systems. The actual size used can be checked with the command getconf PAGE_SIZE.
Additional Notes#
The maximum file size above can be larger than the file system's actual size because of the use of sparse blocks. All standard file systems on SUSE Linux Enterprise Server have LFS (Large File Support), which gives a theoretical maximum file size of 2^63 bytes.
The numbers in the above table assume that the file systems are using a 4 KiB block size which is the most common standard. When using different block sizes, the results are different.
In this document: 1024 Bytes = 1 KiB; 1024 KiB = 1 MiB; 1024 MiB = 1 GiB; 1024 GiB = 1 TiB; 1024 TiB = 1 PiB; 1024 PiB = 1 EiB. See also http://physics.nist.gov/cuu/Units/binary.html.
NFSv4 with IPv6 is only supported for the client side. An NFSv4 server with IPv6 is not supported.
The version of Samba shipped with SUSE Linux Enterprise Server 12 SP3 delivers integration with Windows Active Directory domains. In addition, we provide the clustered version of Samba as part of SUSE Linux Enterprise High Availability Extension 12 SP3.
Some file system features are available in SUSE Linux Enterprise Server 12 SP3 but are not supported by SUSE. By default, the file system drivers in SUSE Linux Enterprise Server 12 SP3 will refuse mounting file systems that use unsupported features (in particular, in read-write mode). To enable unsupported features, set the module parameter allow_unsupported=1 in /etc/modprobe.d or write the value 1 to /sys/module/MODULE_NAME/parameters/allow_unsupported. However, note that setting this option will render your kernel and thus your system unsupported.
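Both ways of setting the parameter can be sketched as follows. MODULE_NAME is a placeholder for the actual file system module, and the sketch works against a scratch directory; drop the $root prefix (and run as root) to apply it for real:

```shell
# Sketch: enable unsupported file system features (MODULE_NAME is a
# placeholder). Remember that this renders the kernel unsupported.
root=$(mktemp -d)
mkdir -p "$root/etc/modprobe.d"

# Persistent: a modprobe options file.
echo 'options MODULE_NAME allow_unsupported=1' \
    > "$root/etc/modprobe.d/10-unsupported-modules.conf"

# At runtime, one would instead write to sysfs:
#   echo 1 > /sys/module/MODULE_NAME/parameters/allow_unsupported

cat "$root/etc/modprobe.d/10-unsupported-modules.conf"
```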
10.4.3 Supported Btrfs Features #
The following table lists supported and unsupported Btrfs features across multiple SLES versions.
+ supported |
– unsupported |
Feature | SLES 11 SP4 | SLES 12 GA | SLES 12 SP1 | SLES 12 SP2 | SLES 12 SP3 |
---|---|---|---|---|---|
Copy on Write | + | + | + | + | + |
Snapshots/Subvolumes | + | + | + | + | + |
Metadata Integrity | + | + | + | + | + |
Data Integrity | + | + | + | + | + |
Online Metadata Scrubbing | + | + | + | + | + |
Automatic Defragmentation | – | – | – | – | – |
Manual Defragmentation | + | + | + | + | + |
In-band Deduplication | – | – | – | – | – |
Out-of-band Deduplication | + | + | + | + | + |
Quota Groups | + | + | + | + | + |
Metadata Duplication | + | + | + | + | + |
Multiple Devices | – | + | + | + | + |
RAID 0 | – | + | + | + | + |
RAID 1 | – | + | + | + | + |
RAID 10 | – | + | + | + | + |
RAID 5 | – | – | – | – | – |
RAID 6 | – | – | – | – | – |
Hot Add/Remove | – | + | + | + | + |
Device Replace | – | – | – | – | – |
Seeding Devices | – | – | – | – | – |
Compression | – | – | + | + | + |
Big Metadata Blocks | – | + | + | + | + |
Skinny Metadata | – | + | + | + | + |
Send Without File Data | – | + | + | + | + |
Send/Receive | – | – | – | + | + |
Inode Cache | – | – | – | – | – |
Fallocate with Hole Punch | – | – | – | + | + |
10.5 Supported Java Versions #
The following table lists Java implementations available in SUSE Linux Enterprise Server 12 SP3:
Name (Package Name) | Version | Part of SUSE Linux Enterprise Server | Support |
---|---|---|---|
OpenJDK (java-1_8_0-openjdk) | 1.8.0 | SLES | SUSE, L3 |
OpenJDK (java-1_7_0-openjdk) | 1.7.0 | SLES | SUSE, L3 |
IBM Java (java-1_8_0-ibm) | 1.8.0 | SLES | External only |
IBM Java (java-1_7_1-ibm) | 1.7.1 | SLES | External only |
IBM Java (java-1_6_0-ibm) | 1.6.0 | Legacy Module | External only |
11 Legal Notices #
SUSE makes no representations or warranties with respect to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to revise this publication and to make changes to its content, at any time, without the obligation to notify any person or entity of such revisions or changes.
Further, SUSE makes no representations or warranties with respect to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to make changes to any and all parts of SUSE software, at any time, without any obligation to notify any person or entity of such changes.
Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classifications to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical/biological weaponry end uses. Refer to https://www.suse.com/company/legal/ for more information on exporting SUSE software. SUSE assumes no responsibility for your failure to obtain any necessary export approvals.
Copyright © 2010–2021 SUSE LLC. This release notes document is licensed under a Creative Commons Attribution-NoDerivs 3.0 United States License (CC-BY-ND-3.0 US, http://creativecommons.org/licenses/by-nd/3.0/us/).
SUSE has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at https://www.suse.com/company/legal/ and one or more additional patents or pending patent applications in the U.S. and other countries.
For SUSE trademarks, see SUSE Trademark and Service Mark list (https://www.suse.com/company/legal/). All third-party trademarks are the property of their respective owners.
12 Colophon #
Thanks for using SUSE Linux Enterprise Server in your business.
The SUSE Linux Enterprise Server Team.