Version 11.2.0.52 (2014-07-08)
Abstract
These release notes are generic for all products of our SUSE Linux Enterprise Server 11 product line. Some parts may not apply to a particular architecture or product. Where this is not obvious, the specific architectures or products are explicitly listed.
Installation Quick Start and Deployment Guides can be found in the docu language directories on the media. Documentation (if installed) is available below the /usr/share/doc/ directory of an installed system.
This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at http://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, Novell will mail a copy of the source code. Requests should be sent by e-mail to sle_source_request@novell.com or as otherwise instructed at http://www.suse.com/download-linux/source-code.html. Novell may charge a reasonable fee to recover distribution costs.
SUSE Linux Enterprise Server is a highly reliable, scalable, and secure server operating system, built to power mission-critical workloads in both physical and virtual environments. It is an affordable, interoperable, and manageable open source foundation. With it, enterprises can cost-effectively deliver core business services, enable secure networks, and simplify the management of their heterogeneous IT infrastructure, maximizing efficiency and value.
The only enterprise Linux recommended by Microsoft and SAP, SUSE Linux Enterprise Server is optimized to deliver high-performance mission-critical services, as well as edge of network, and web infrastructure workloads.
Designed for interoperability, SUSE Linux Enterprise Server integrates into classical Unix as well as Windows environments, supports open standard CIM interfaces for systems management, and has been certified for IPv6 compatibility.
This modular, general purpose operating system runs on five processor architectures and is available with optional extensions that provide advanced capabilities for tasks such as real time computing and high availability clustering.
SUSE Linux Enterprise Server is optimized to run as a high performing guest on leading hypervisors and supports an unlimited number of virtual machines per physical system with a single subscription, making it the perfect guest operating system for virtual computing.
SUSE Linux Enterprise Server is backed by award-winning support from SUSE, an established technology leader with a proven history of delivering enterprise-quality support services.
With the release of SUSE Linux Enterprise Server 11 Service Pack 2 the former SUSE Linux Enterprise Server 11 Service Pack 1 enters the 6 month migration window, during which time SUSE will continue to provide security updates and full support. At the end of the six-month parallel support period, on 2012-08-31, support for SUSE Linux Enterprise Server 11 Service Pack 1 will be discontinued. Long Term Service Pack Support (LTSS) for SUSE Linux Enterprise Server 11 Service Pack 1 is available as a separate option.
For users upgrading from a previous SUSE Linux Enterprise Server release, it is recommended to review:
These Release Notes are identical across all architectures, and the most recent version is always available online at http://www.suse.com/releasenotes/. Some entries are listed twice, if they are important and belong to more than one section.
To receive support, customers need an appropriate subscription with SUSE; for more information, see http://www.suse.com/products/server/services-and-support/.
The following definitions apply:
Level 1: Problem determination, which means technical support designed to provide compatibility information, usage support, on-going maintenance, information gathering and basic troubleshooting using available documentation.
Level 2: Problem isolation, which means technical support designed to analyze data, duplicate customer problems, isolate problem area and provide resolution for problems not resolved by Level 1 or alternatively prepare for Level 3.
Level 3: Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.
For contracted customers and partners, SUSE Linux Enterprise Server 11 will be delivered with L3 support for all packages, except the following:
technology previews
sound, graphics, fonts and artwork
packages that require an additional customer contract
packages provided as part of the Software Development Kit (SDK)
SUSE will only support the usage of original (i.e., unchanged and not recompiled) packages.
Btrfs is a copy-on-write (CoW) general purpose file system. Based on the CoW functionality, btrfs provides snapshotting. Beyond that, data and metadata checksums improve the reliability of the file system. Btrfs is highly scalable and also supports online shrinking to adapt to real-life environments. On appropriate storage devices, btrfs also supports the TRIM command.
Support
With SUSE Linux Enterprise 11 SP2, the btrfs file system joins ext3, reiserfs, xfs and ocfs2 as commercially supported file systems. Each file system offers distinct advantages. While the installation default is ext3, we recommend xfs when maximizing data performance is desired, and btrfs as a root file system when snapshotting and rollback capabilities are required. Btrfs is supported as a root file system (i.e. the file system for the operating system) across all architectures of SUSE Linux Enterprise 11 SP2. Customers are advised to use the YaST partitioner (or AutoYaST) to build their systems: YaST will prepare the btrfs file system for use with subvolumes and snapshots. Snapshots will be automatically enabled for the root file system using SUSE's snapper infrastructure. For more information about snapper, its integration into ZYpp and YaST, and the YaST snapper module, see the SUSE Linux Enterprise documentation.
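For illustration, the following snapper commands show typical snapshot handling on such a root file system (the description text and snapshot numbers are placeholders):
snapper create --description "before config change"   # create a snapshot
snapper list                                          # list existing snapshots
snapper diff 1..2                                     # show changes between snapshots 1 and 2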
Migration from "ext" File Systems to btrfs
Migration from existing "ext" file systems (ext2, ext3, ext4) is supported "offline" and "in place". Calling "btrfs-convert [device]" converts the file system. This is an offline process which needs at least 15% free space on the device, but is applied in place. To roll back, call "btrfs-convert -r [device]". Caveat: when rolling back, all data added after the conversion to btrfs will be lost; in other words, the rollback is complete, not partial.
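A minimal sketch of a conversion and rollback (the device /dev/sdb1 and mount point are placeholders; the file system must be unmounted and have at least 15% free space):
umount /dev/sdb1
btrfs-convert /dev/sdb1       # offline, in-place conversion to btrfs
mount /dev/sdb1 /mnt          # verify the converted file system
umount /dev/sdb1
btrfs-convert -r /dev/sdb1    # roll back; data added after the conversion is lost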
RAID
Btrfs is supported on top of MD (multiple devices) and DM (device mapper) configurations. Please use the YaST partitioner to achieve a proper setup. Multivolume/RAID with btrfs is not supported yet and will be enabled with a future maintenance update.
Future Plans
We are planning to announce support for btrfs' built-in multi volume handling and RAID in a later version of SUSE Linux Enterprise.
Starting with SUSE Linux Enterprise 12, we are planning to implement bootloader support for /boot on btrfs.
Compression and Encryption functionality for btrfs is currently under development and will be supported once the development has matured.
We are committed to actively working on the btrfs file system with the community, and we will keep customers and partners informed about progress and experience in terms of scalability and performance. This may also apply to cloud and cloud storage infrastructures.
Online Check and Repair Functionality
Check and repair functionality ("scrub") is available as part of the btrfs command line tools. "Scrub" aims to verify data and metadata, assuming the tree structures are fine. "Scrub" can (and should) be run periodically on a mounted file system: it runs as a background process during normal operation.
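A minimal sketch of running a scrub on a mounted btrfs file system (the mount point / is a placeholder):
btrfs scrub start /    # start verification as a background process
btrfs scrub status /   # show progress and any checksum errors found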
The tool "fsck.btrfs" tool will soon be available in the SUSE Linux Enterprise update repositories.
Capacity Planning
If you are planning to use btrfs with its snapshot capability, it is advisable to reserve twice as much disk space as the standard storage proposal. This is automatically done by the YaST2 partitioner for the root file system.
Hard Link Limitation
In order to provide a more robust file system, btrfs incorporates back references for all file names, eliminating the classic "lost+found" directory added during recovery. A temporary limitation of this approach affects the number of hard links in a single directory that link to the same file. The limitation is dynamic based on the length of the file names used. A realistic average is approximately 150 hard links. When using 255 character file names, the limit is 14 links. We intend to raise the limitation to a more usable limit of 65535 links in a future maintenance update.
Other Limitations
At the moment, btrfs is not supported as a seed device.
For More Information
For more information about btrfs, see the SUSE Linux Enterprise 11 documentation.
Tomcat6 and related packages are fully supported on the Intel/AMD x86 (32bit), AMD64/Intel64, IBM POWER, and IBM System z architectures.
The SELinux subsystem is supported. Arbitrary SELinux policies running on SLES are not supported, though. Customers and partners who have an interest in using SELinux in their solutions are encouraged to contact SUSE to evaluate the level of support that is needed, and how support and services for the specific SELinux policies will be granted.
The following packages require additional support contracts to be obtained by the customer in order to receive full support:
BEA Java (Itanium only)
MySQL Database
PostgreSQL Database
WebSphere CE Application Server
Technology previews are packages, stacks, or features delivered by SUSE. These features are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are mainly included for customer convenience and give customers a chance to test new technologies within an enterprise environment.
Whether a technical preview will be moved to a fully supported package later depends on customer and market feedback. A technical preview does not automatically result in support at a later point in time. Technical previews can be dropped at any time, and SUSE is not committed to providing a technical preview later in the product cycle.
Please give your SUSE representative feedback, including your experience and use case. Alternatively, use the Novell Requirements Portal at http://www.novell.com/rms.
The Linux Kernel swaps out rarely accessed memory pages in order to use freed memory pages as cache to speed up file system operations, for instance during backup operations.
Some Enterprise applications, such as SAP solutions, use large amounts of memory for accelerated access to business data. Parts of this memory are very seldom accessed. When a user request needs to access paged-out memory, the response time is poor. It is even worse when a SAP solution running on Java incurs a Java garbage collection: the system starts heavy page-in activity (disk I/O) and suffers poor response times for an extended period of time.
The pagecache_limit feature is a technology preview in SUSE Linux Enterprise Server 11 SP1 and SP2, and is only supported for SUSE Linux Enterprise Server for SAP Applications 11 SP1 and later.
For SUSE Linux Enterprise Server 12 we expect an upstream solution based on Control Groups.
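A minimal sketch of using the feature, assuming the vm.pagecache_limit_mb sysctl is available on your kernel (as on SUSE Linux Enterprise Server for SAP Applications):
sysctl vm.pagecache_limit_mb            # show the current limit (0 = disabled)
sysctl -w vm.pagecache_limit_mb=1024    # cap the page cache at roughly 1 GiB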
Hot-add memory is currently only supported on the following hardware:
IBM eServer xSeries x260, single node x460, x3800, x3850, single node x3950,
certified systems based on recent Intel Xeon Architecture,
certified systems based on recent Intel IPF Architecture,
all IBM servers and blades with POWER5, POWER6, or POWER7 processors and recent firmware.
If your specific machine is not listed, please call SUSE support to confirm whether or not your machine has been successfully tested. Also, regularly check our maintenance update information, which will explicitly mention the general availability of this feature.
Restriction on using IBM eHCA InfiniBand adapters in conjunction with hot-add memory on IBM System p:
The current eHCA Device Driver will prevent dynamic memory operations on a partition as long as the driver is loaded. If the driver is unloaded prior to the operation and then loaded again afterwards, adapter initialization may fail. A Partition Shutdown / Activate sequence on the HMC may be needed to recover from this situation.
The Internet Storage Naming Service (iSNS) package is by design suitable for secure internal networks only. SUSE will continue to work with the community on improving security.
It is possible to run SUSE Linux Enterprise Server 11 on a shared read-only root file system. A read-only root setup consists of the read-only root file system, a scratch and a state file system. The /etc/rwtab file defines which files and directories on the read-only root file system are replaced by which files on the state and scratch file systems for each system instance. The readonlyroot kernel command line option enables read-only root mode; the state= and scratch= kernel command line options determine the devices on which the state and scratch file systems are located.
In order to set up a system with a read-only root file system, set up a scratch file system, set up a file system to use for storing persistent per-instance state, adjust /etc/rwtab as needed, add the appropriate kernel command line options to your boot loader configuration, replace /etc/mtab with a symlink to /proc/mounts as described below, and (re)boot the system.
To replace /etc/mtab with the appropriate symlink, call:
ln -sf /proc/mounts /etc/mtab
See the rwtab(5) manual page for further details and http://www.redbooks.ibm.com/abstracts/redp4322.html for limitations on System z.
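A hypothetical /etc/rwtab excerpt (entry types and paths are illustrative only; see rwtab(5) for the authoritative format):
dirs /var/run
files /etc/resolv.conf
empty /tmp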
With SUSE Linux Enterprise 11 SP2 we introduce Linux Kernel 3.0. This kernel is a direct successor of the Linux kernel 2.6 series, thus all applications run without change. However, some applications or installation programs are broken in that they literally check for version "2.6", thus failing to accept the compatibility of our kernel.
We provide two mechanisms to encourage applications to recognize the kernel 3.0 in SUSE Linux Enterprise 11 SP2 as a Linux kernel 2.6 compatible system:
Use the uname26 command line tool to start a single application in a 2.6 context. Usage is as easy as typing uname26 [PROGRAM]. More information can be found in the manpage of "setarch".
Some database systems and enterprise business applications expect processes and tasks to run under a specific user name (not root). The Pluggable Authentication Modules (PAM) stack in SUSE Linux Enterprise allows putting a user into a 2.6 context. To achieve this, add the user name to the file /etc/security/uname26.conf. For more information, see the manpage for "pam_unix2". Caveat: We do not support running the "root" user in a 2.6 context.
If you are running SAP applications, have a look at SAP Note #1310037 for more information on running SAP applications within a Kernel 2.6 compatibility environment.
Known Issues
The current version of the LSI MegaCLI utility needs to be run with the 2.6 personality using the "uname26" tool.
The current version of the IBM Online SAS/SATA Hard Disk Drive Update Program needs to be run with a uname26 personality.
This feature addresses the issue that eth0 does not map to em1 (as labeled on server chassis), when a server has multiple network adapters.
This issue is solved for Dell hardware, which has the corresponding BIOS support, by renaming onboard network interfaces to em[1234], which maps to Embedded NIC[1234] as labeled on server chassis. (em stands for ethernet-on-motherboard.)
The renaming will be done by using the biosdevname utility.
biosdevname is automatically installed and used if YaST2 detects hardware suitable for use with biosdevname. biosdevname can be disabled during installation by using "biosdevname=0" on the kernel command line. The usage of biosdevname can be enforced on any hardware with "biosdevname=1". If the BIOS has no support, no network interface names are renamed.
SUSE Linux Enterprise Server 11 SP2 is available immediately for use on Amazon Web Services EC2. For more information about Amazon EC2 Running SUSE Linux Enterprise Server, please visit http://aws.amazon.com/suse
SUSE Linux Enterprise Server can be deployed in three ways:
Physical Machine,
Virtual Host,
Virtual Machine in paravirtualized environments.
CJK (Chinese, Japanese, and Korean) languages do not work properly during text-mode installation if the framebuffer is not used (Text Mode selected in boot loader).
There are three alternatives to resolve this issue:
Use English or some other non-CJK language for installation, then switch to the CJK language later on a running system.
Use your CJK language during installation, but do not choose "Text Mode" in the boot loader; select one of the other VGA modes instead. Select the CJK language of your choice, add textmode=1 to the boot loader command line, and start the installation.
Use graphical installation (or install remotely via SSH or VNC).
Booting from harddisks larger than 2 TiB in non-UEFI mode (but with GPT partition table) fails.
To successfully use harddisks larger than 2 TiB in non-UEFI mode, but with GPT partition table (i.e., grub bootloader), consider one of the following options:
Use a 4k sector harddisk in 4k mode (in this case, the 2 TiB limit will become a 16 TiB limit).
Use a separate /boot partition. This partition must be one of the first 3 partitions and end below the 2 TiB limit.
Switch from legacy mode to UEFI mode, if this is an option for you.
The installer uses persistent device names by default. If you plan to add storage devices to your system after the installation, we strongly recommend you use persistent device names for all storage devices.
To switch to persistent device names on a system that has already been installed, start the YaST2 partitioner and, for each partition, change the mount option to a persistent device name. Also adjust /boot/grub/menu.lst and /boot/grub/device.map according to your needs.
This needs to be done before adding new storage devices.
For further information, see the “Storage Administration Guide” about "Device Name Persistence".
If booting over iSCSI, iBFT information cannot be parsed when booting via native UEFI. The system should be configured to boot in legacy mode if iSCSI booting using iBFT is required.
To use iSCSI disks during installation, add the following parameter to the boot option line: withiscsi=1
During installation, an additional screen provides the option to attach iSCSI disks to the system and use them in the installation process.
Booting from an iSCSI server on i386, x86_64 and ppc64 is supported if iSCSI-enabled firmware is used.
QLogic iSCSI Expansion Card for IBM BladeCenter provides both Ethernet and iSCSI functions. Some parts on the card are shared by both functions. The current qla3xxx (Ethernet) and qla4xxx (iSCSI) drivers support Ethernet and iSCSI function individually. In contrast to previous SLES releases, using both functions at the same time is now supported.
If you happen to use brokenmodules=qla3xxx or brokenmodules=qla4xxx before upgrading to SLES 11 SP2, these options can be removed.
EDD information (in /sys/firmware/edd/<device>) is used by default to identify your storage devices.
EDD Requirements:
BIOS provides full EDD information (found in /sys/firmware/edd/<device>)
Disks are signed with a unique MBR signature (found in /sys/firmware/edd/<device>/mbr_signature).
Add edd=off to the kernel parameters to disable EDD.
For automatic installation with AutoYaST in an LPAR, the parmfile used for such an installation must have blank characters at the beginning and at the end of each line (the first line does not need to start with a blank). The number of characters in one line should not exceed 80.
Adding DASD or zFCP disks is not only possible during the installation workflow, but also when the installation proposal is shown. To add disks at this stage, click on the expert tab of the proposal and scroll down; the DASD and/or zFCP entry is shown there. These added disks are not displayed in the partitioner automatically. To make the disks visible in the partitioner, rescan the devices from the partitioner's expert menu. This may reset any previously entered information.
If you want to carry out a network installation via the IBM eHEA Ethernet Adapter on POWER systems, no huge (16GB) pages may be assigned to the partition during installation.
For more information, see Chapter 11, Infrastructure, Package and Architecture Specific Information.
Third-party builds of Lustre 2.1 kernel modules required modifications to the previously shipped SUSE kernel, thus breaking the support chain.
To allow the build of kernel modules for Lustre 2.1 by 3rd parties without breaking the support chain for the SUSE Kernel, the needed hooks for Lustre were added to the shipped kernel.
This change does not include Lustre modules or packages, nor support.
On systems with large memory, frequent access to the Translation Lookaside Buffer (TLB) may slow down the system significantly.
Transparent huge pages thus are of most use on systems with very large (128GB or more) memory, and help to drive performance. In SUSE Linux Enterprise, THP is enabled by default where it is expected to give a performance boost to a large number of workloads.
There are cases where THP may regress performance, particularly when under memory pressure due to pages being reclaimed in an effort to promote to huge pages. It is also possible that performance will suffer on CPUs with a limited number of huge page TLB entries for workloads that sparsely reference large amounts of memory. If necessary, THP can be disabled via the sysfs file "/sys/kernel/mm/transparent_hugepage/enabled", which accepts one of the values "always", "madvise", or "never".
To disable THP via sysfs and confirm it is disabled, do the following as root:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
Recent servers provide a small non-volatile storage area. A recent pstore feature can save kernel crash logs there if the pstore file system is mounted as follows:
mkdir /dev/pstore
mount -t pstore pstore /dev/pstore
The crash can then be examined on the next reboot. By default, the kernel is not able to determine the non-volatile storage area (unless UEFI is used). This is because this information is exposed via APEI (ACPI Platform Error Interface) tables, which are not parsed by default.
Besides the BIOS having to export the relevant information, the kernel needs the following boot parameter:
apei_enable
Then the kernel is able to detect the non-volatile storage area.
Limiting the maximum CPU usage of a group or VM and ensuring that CPU resources are limited to what the user has paid for.
Providing consistent and repeatable VM performance in a cloud environment.
CFS bandwidth control can be used to set a hard limit on the CPU usage of a group or VM. With this it becomes possible to limit a group's or VM's maximum CPU usage to, say, 0.5 CPUs or 2 CPUs. The bandwidth is specified as quota/period, where a group will not be allowed to consume more than 'quota' milliseconds worth of CPU time in every 'period' interval. If a group's or VM's CPU usage exceeds the limit, it will be throttled until its quota gets refreshed.
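A minimal sketch using the cgroup cpu controller directly (the mount point /sys/fs/cgroup/cpu and the group name "limited" are assumptions; adjust to where the cpu controller is mounted on your system):
mkdir /sys/fs/cgroup/cpu/limited
echo 100000 > /sys/fs/cgroup/cpu/limited/cpu.cfs_period_us   # 100 ms period
echo 50000 > /sys/fs/cgroup/cpu/limited/cpu.cfs_quota_us     # 50 ms quota, i.e. 0.5 CPUs
echo $$ > /sys/fs/cgroup/cpu/limited/tasks                   # move the current shell into the group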
Right now only the memory usage with respect to a cgroup can be controlled. The swap space used by the tasks within a cgroup cannot be controlled.
Cgroup swap control provides a way to control the memory+swap usage with respect to a cgroup. This feature can be enabled by passing the kernel boot parameter "swapaccount=1" and disabled by passing "swapaccount=0". If the feature is enabled, an interface named "memory.memsw.limit_in_bytes" will be present under the memory controller. The value assigned to this control interface specifies the total memory+swap usage limit for that particular cgroup.
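A minimal sketch, assuming the kernel was booted with swapaccount=1 and the memory controller is mounted at /sys/fs/cgroup/memory (path and limit values are placeholders; memory.limit_in_bytes must be set first and must not exceed the memsw limit):
mkdir /sys/fs/cgroup/memory/capped
echo 512M > /sys/fs/cgroup/memory/capped/memory.limit_in_bytes        # RAM limit
echo 1G > /sys/fs/cgroup/memory/capped/memory.memsw.limit_in_bytes    # RAM+swap limit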
GCC 4.3.4
glibc 2.11.1
Linux kernel 3.0.10
perl 5.10
php 5.3
python 2.6.0
ruby 1.8.7
To take advantage of the Real Time Extension, the extension must be at the same version as the base SUSE Linux Enterprise Server. An updated version of the SUSE Linux Enterprise Real Time Extension is provided after the release of SUSE Linux Enterprise Server.
Note: In the following text, version numbers do not necessarily give the final patch and security status of an application, as SUSE may have added additional patches to the specific version of an application.
In SUSE Linux Enterprise 11 Service Pack 1 and earlier releases, the Tomcat servlet container has been provided as part of the Software Development Kit. We learned that our customers demand full runtime support for this infrastructure.
Starting with SUSE Linux Enterprise Server 11 Service Pack 2, Tomcat6 and related packages are part of the Server product. Based on customer and partner feedback, we fully support this on the architectures Intel/AMD x86 (32bit), AMD64/Intel64, IBM POWER, and IBM System z.
The following packages are affected: tomcat6, tomcat6-servlet-2_5-api, tomcat6-webapps, tomcat6-docs-webapp, tomcat6-admin-webapps, tomcat6-lib, tomcat6-jsp-2_1-api, libtcnative-1-0, apache2-mod_jk, jakarta-taglibs-standard, jakarta-commons-collections, jakarta-commons-dbcp, jakarta-commons-pool, jakarta-commons-httpclient3, jakarta-commons-beanutils, jakarta-commons-codec, jakarta-commons-collections, jakarta-commons-collections-tomcat5, jakarta-commons-daemon, jakarta-commons-dbcp-tomcat5, jakarta-commons-digester, jakarta-commons-discovery, jakarta-commons-el, jakarta-commons-fileupload, jakarta-commons-io, jakarta-commons-lang, jakarta-commons-launcher, jakarta-commons-logging, jakarta-commons-modeler, jakarta-commons-pool-tomcat5, jakarta-commons-validator, tomcat6-javadoc, jakarta-taglibs-standard-javadoc, jakarta-commons-*-javadoc, tomcat_apparmor, ant, ant-junit, ant-trax, and mx4j.
With the changes in the printer market that have happened since SUSE Linux Enterprise 11 SP1 was released, it is highly probable that parts of HPLIP are outdated.
The version upgrade to HPLIP version 3.11.5 keeps SUSE Linux Enterprise 11 SP2 up-to-date with regard to HP printer and all-in-one devices.
When multiple domains reside in virtual hosts on the same IP address, only the first domain can be served for secure Web browsing; other domains are prevented from using secure communications. Many servers in virtual hosting environments circumvent this by using a wrong certificate, which causes the browser to warn the user.
An extension to TLS called Server Name Indication (SNI) addresses this issue by sending the name of the virtual domain as part of the TLS negotiation. This enables the server to "switch" to the correct virtual domain early and present the browser with the certificate containing the correct CN. Apache version 2.2.12 adds server support for the SNI extension.
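A hypothetical Apache 2.2.12+ configuration sketch with two SNI-based virtual hosts on one IP address (host names and certificate paths are placeholders):
NameVirtualHost *:443
<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl.crt/example.com.crt
    SSLCertificateKeyFile /etc/apache2/ssl.key/example.com.key
</VirtualHost>
<VirtualHost *:443>
    ServerName www.example.org
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl.crt/example.org.crt
    SSLCertificateKeyFile /etc/apache2/ssl.key/example.org.key
</VirtualHost>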
GNOME 2.28
GNOME was updated and uses PulseAudio for sound.
KDE 4.3.5
KDE was updated.
X.org 7.4
With SP2 LDAP clients default to a stricter default setting for certificate verification. For that to work correctly, the CA certificate used to sign the LDAP server's certificate needs to be available on the client's file system. The YaST LDAP client module was enhanced to provide a way to download the CA certificate from a URL or to configure a file or directory from which the LDAP client should load the CA certificate.
When updating from an SP1 system, this setting is not enabled automatically. To enable it, start the YaST LDAP client configuration wizard and configure a valid CA certificate to verify your LDAP server's certificate. Then make sure that /etc/openldap/ldap.conf either contains no TLS_REQCERT setting or sets it to "demand" or "hard".
For details, see the ldap.conf(5) man page.
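A minimal /etc/openldap/ldap.conf fragment illustrating the strict setting (the CA certificate path is a placeholder):
TLS_CACERT /etc/ssl/certs/ldap-ca.pem
TLS_REQCERT demand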
There is no single standard for Access Control Lists (ACL) in Linux and Unix beyond the simple user/group/others-rwx flags. One option for finer control are so-called "Draft Posix ACLs", which were never formally standardized by Posix. Another is the NFSv4 ACLs, which were designed to be part of the NFSv4 network file system with the goal of making something that provided reasonable compatibility between Posix systems (like Linux) and WIN32 systems (like Microsoft Windows).
It turned out that NFSv4 ACLs are not sufficient to correctly implement Draft Posix ACLs. Thus no attempt has been made to map ACL accesses on an NFSv4 client (using e.g. setfacl). Therefore, when using NFSv4, Draft Posix ACLs cannot be used even in emulation. NFSv4 ACLs need to be used directly; i.e., while setfacl can work on NFSv3, it cannot work on NFSv4.
To allow NFSv4 ACLs to be used on an NFSv4 file system we provide the "nfs4-acl-tools" package, which contains:
nfs4_getfacl
nfs4_setfacl
nfs4_editfacl
These operate in a generally similar way to getfacl and setfacl for examining and modifying NFSv4 ACLs.
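A minimal sketch of inspecting and extending an NFSv4 ACL (the file path and the principal alice@example.com are placeholders):
nfs4_getfacl /mnt/nfs4/report.txt
nfs4_setfacl -a A::alice@example.com:rw /mnt/nfs4/report.txt   # grant read/write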
Note: This can only be effective if the file system on the NFS server provides full support for NFSv4 ACLs. Any limitation imposed by the server will affect programs running on the client in that some particular combinations of Access Control Entries (ACEs) may not be possible.
A future release of Linux may support "richacls", which are designed to provide access to NFSv4 ACLs in a way that is more integrated with other file systems. If and when these become available, we will need to transition from using nfs4-acl-tools towards support tools coming with "richacls".
The System Security Services Daemon (sssd) was added to SLE 11 SP2 to provide an alternative method to retrieve user and group information from LDAP directories and to perform authentication through LDAP or Kerberos. It is provided as an alternative to the nss_ldap and pam_ldap (or pam_krb5) modules. Compared to those modules, sssd offers some advantages:
due to its daemon-based architecture, possible symbol conflicts between different implementations of LDAP client libraries are avoided
offline authentication is supported (disabled by default)
built-in support for Kerberos authentication (no separate PAM module needed)
With SLE 11 SP2, the YaST2 ldap-client module can be used to set up sssd for LDAP (and/or Kerberos) authentication. The YaST ldap-client module can also be used to switch from an nss_ldap/pam_ldap based setup to sssd and back.
Some additional notes:
sssd requires transport layer encryption to be in place when using LDAP based authentication (e.g., LDAPS or StartTLS),
sssd currently only supports the passwd, shadow, and group NSS databases
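A hypothetical minimal /etc/sssd/sssd.conf for LDAP authentication with StartTLS (server URI, search base, and CA path are placeholders):
[sssd]
config_file_version = 2
services = nss, pam
domains = LDAP

[domain/LDAP]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
ldap_id_use_start_tls = true
ldap_tls_cacert = /etc/ssl/certs/ca.pem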
After a new installation of SLES 11 SP2, this feature is enabled if the mail system is configured to use amavis.
When updating from SLES 11 SP1, this feature must be enabled by editing /etc/mail/spamassassin/v312.pre: the comment sign (#) must be removed from the last line. Before:
#loadplugin Mail::SpamAssassin::Plugin::DKIM
After:
loadplugin Mail::SpamAssassin::Plugin::DKIM
The common PAM configuration files (/etc/pam.d/common-*) are now created and managed with pam-config.
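For example, a PAM module can be added to or removed from the common files with pam-config (the module choice is illustrative):
pam-config -a --mkhomedir   # add pam_mkhomedir to the common PAM configuration
pam-config -d --mkhomedir   # remove it again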
In addition to AppArmor, SELinux capabilities have been added to SUSE Linux Enterprise Server. While these capabilities are not enabled by default, customers can run SELinux with SUSE Linux Enterprise Server if they choose to.
What does SELinux enablement mean?
The kernel ships with SELinux support.
We will apply SELinux patches to all “common” userland packages.
The libraries required for SELinux (libselinux, libsepol, libsemanage, etc.) have been added to openSUSE and SUSE Linux Enterprise.
Quality Assurance is performed with SELinux disabled—to make sure that SELinux patches do not break the default delivery and the majority of packages.
The SELinux specific tools are shipped as part of the default distribution delivery.
Arbitrary SELinux policies running on SLES are not supported, though, and we will not be shipping any SELinux policies in the distribution. Reference and minimal policies may be available from the repositories at some future point.
Customers and partners who have an interest in using SELinux in their solutions are encouraged to contact SUSE to evaluate the level of support that is needed, and how support and services for the specific SELinux policies will be granted.
By enabling SELinux in our codebase, we add community code to offer customers the option to use SELinux without replacing significant parts of the distribution.
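As an illustration (an assumption, not an official SUSE recommendation): booting with SELinux enabled in permissive mode typically uses kernel command line parameters such as:
security=selinux selinux=1 enforcing=0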
SUSE Linux Enterprise Server 11 comes with support for Trusted Computing technology. To enable your system's TPM chip, make sure that the "security chip" option in your BIOS is selected. TPM support is entirely passive, meaning that measurements are being performed, but no action is taken based on any TPM-related activity. TPM chips manufactured by Infineon, NSC and Atmel are supported, in addition to the virtual TPM device for Xen.
The corresponding kernel drivers are not loaded automatically. To find them, enter:
find /lib/modules -type f -name "tpm*.ko"
and load the kernel modules for your system manually or via MODULES_LOADED_ON_BOOT in /etc/sysconfig/kernel.
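A minimal sketch of loading a TPM driver manually (tpm_tis is a common choice; pick the module matching your chip):
modprobe tpm_tis
To load it at boot time, add it to MODULES_LOADED_ON_BOOT="tpm_tis" in /etc/sysconfig/kernel.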
If ownership of your TPM chip has been taken and the chip is configured in Linux and available for use, you may read PCRs from /sys/devices/*/*/pcrs.
The tpm-tools package contains utilities to administer your TPM chip, and the trousers package provides tcsd, the daemon that allows userland programs to communicate with the TPM driver in the Linux kernel. tcsd can be enabled as a service for the runlevels of your choice.
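A minimal sketch of enabling tcsd using the SLES init-script conventions (default runlevels apply):
chkconfig tcsd on   # enable the service for its default runlevels
rctcsd start        # start the daemon now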
To implement a trusted ("measured") boot path, use the package trustedgrub instead of the grub package as your bootloader. The trustedgrub bootloader does not display any graphical representation of a boot menu for informational reasons.
SUSE Linux Enterprise Server has successfully completed the USGv6 test program designated by NIST that provides a proof of compliance to IPv6 specifications outlined in current industry standards for common network products.
As an IPv6 Consortium member and contributor, Novell/SUSE has worked successfully with the University of New Hampshire InterOperability Laboratory (UNH-IOL) to verify compliance with IPv6 specifications. The UNH-IOL offers ISO/IEC 17025 accredited testing designed specifically for the USGv6 test program. As of March 2012, SUSE Linux Enterprise Server 11 SP1 has successfully completed USGv6 testing at the UNH-IOL. Testing for subsequent releases of SUSE Linux Enterprise Server is in progress, and current and future results will be listed at http://www.iol.unh.edu/services/testing/ipv6/usgv6tested.php?company=105&type=#eqplist.
SUSE Linux Enterprise Server can be installed in an IPv6 environment and run IPv6 applications. When installing via network, do not forget to boot with "ipv6=1" (accept v4 and v6) or "ipv6only=1" (only v6) on the kernel command line.
For more information, see the Deployment Guide and also Section 13.6, “IPv6 Implementation and Compliance”.
Support for traceroute over TCP.
FCoE is an implementation of the Fibre Channel over Ethernet working draft. Fibre Channel over Ethernet is the encapsulation of Fibre Channel frames in Ethernet packets. It allows users with a FCF (Fibre Channel over Ethernet Forwarder) to access their existing Fibre Channel storage using an Ethernet adapter. When leveraging DCB's PFC technology to provide a loss-less environment, FCoE can run SAN and LAN traffic over the same link.
Data Center Bridging (DCB) is a collection of Ethernet enhancements designed to allow network traffic with differing requirements (e.g., highly reliable, no drops vs. best effort vs. low latency) to operate and coexist on Ethernet. Current DCB features are:
Enhanced Transmission Selection (aka Priority Grouping) to provide a framework for assigning bandwidth guarantees to traffic classes.
Priority-based Flow Control (PFC) provides a flow control mechanism which can work independently for each 802.1p priority.
Congestion Notification provides a mechanism for end-to-end congestion control for protocols, which do not have built-in congestion management.
The YaST module "FCoE Client Configuration" is a tool to configure FCoE capable network interfaces. During the installation workflow the FCoE configuration can be started on 'Disk Activation' screen. The FCoE interface can be configured and the connected disk will be available for installation.
The FCoE configuration should be automatically offered if the BIOS has activated FCoE. If not, add "withfcoe=1" to the kernel command line.
This feature addresses the issue that eth0 does not map to em1 (as labeled on server chassis), when a server has multiple network adapters.
This issue is solved for Dell hardware, which has the corresponding BIOS support, by renaming onboard network interfaces to em[1234], which maps to Embedded NIC[1234] as labeled on server chassis. (em stands for ethernet-on-motherboard.)
The renaming will be done by using the biosdevname utility.
biosdevname is automatically installed and used if YaST2 detects hardware suitable for use with biosdevname. biosdevname can be disabled during installation by using "biosdevname=0" on the kernel command line. The usage of biosdevname can be enforced on any hardware with "biosdevname=1". If the BIOS has no support, no network interface names are renamed.
SUSE Linux Enterprise Server 11 SP2 supports "system containers" with the LXC (LinuX Container) infrastructure to achieve soft partitioning of large physical systems. In this infrastructure, instances of SLES 11 SP2 run within a host instance of SLES 11 SP2. In other words: unlike with a hypervisor, all instances share one Linux kernel, but every instance has its own "init" process.
While the host system has access to the guest instances and their file systems, the guest instances do not see the host or the other guests other than via the network or explicitly shared storage (if configured). Thus, Linux Containers should not be used as the primary or only security measure around or between highly secure environments.
For more information about LXC, see the SUSE Linux Enterprise 11 documentation.
LXC now comes with support for network gateway detection. This feature will prevent a container from starting if the network configuration of the container is incorrect. For instance, you must make sure that the network address of the container is within the host IP range if it was set up as bridged on the host. You might need to specify the netmask of the container network address (using the syntax "lxc.network.ipv4 = X.Y.Z.T/cidr") if the netmask is not the network class default netmask. When using DHCP to assign a container network address, ensure "lxc.network.ipv4 = 0.0.0.0" is used in your configuration template; see the sketch after this paragraph.
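A hypothetical configuration template fragment with an explicit netmask and gateway (addresses and bridge name are placeholders):
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.10/24
lxc.network.ipv4.gateway = 192.168.1.1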
Previously, a container would have been started but the network would not have been working properly. Now a container will refuse to start and print an error message stating that the gateway could not be set up. For containers created before this update, we recommend running rcnetwork restart to reestablish the container network connection.
After installing the LXC maintenance update, we recommend clearing the LXC SLES cache template (stored by default in /var/cache/lxc/sles/rootfs-*) to ensure changes in the SLES template are available in newly created containers.
For containers created before the update, we recommend installing the packages "supportconfig", "sysconfig", and "iputils" using zypper.
SUSE Linux Enterprise Server 11 provides an improved update stack and the new command line tool zypper to manage the repositories and install or update packages.
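Typical zypper invocations (the package name is a placeholder):
zypper lr                     # list configured repositories
zypper refresh                # refresh repository metadata
zypper install some-package   # install a package
zypper update                 # update installed packages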
SUSE Linux Enterprise Server provides CIM/WBEM enablement with the SFCB CIMOM.
The following CIM providers are available:
cmpi-pywbem-base
cmpi-pywbem-power-management (DSP1027)
cmpi-pywbem-software (DSP1023)
libvirt-cim (DSP1041, DSP1043, DSP1045, DSP1057, DSP1059, DSP1076, DSP1081)
sblim-cmpi-base
sblim-cmpi-dhcp
sblim-cmpi-ethport_profile (DSP1014)
sblim-cmpi-fsvol
sblim-cmpi-network
sblim-cmpi-nfsv3
sblim-cmpi-nfsv4
sblim-cmpi-sysfs
sblim-gather-provider
smis-providers
sblim-cmpi-dns
sblim-cmpi-samba
sblim-cmpi-smbios
The WS-Management protocol is supported via Openwsman, providing client (package: openwsman-client) and server (package: openwsman-server) implementations.
This allows for interoperable management with the Windows 'winrm' stack.
WebYaST is an easy to use, web-based administration tool targeted at casual Linux administrators.
SUSE Linux Enterprise Server 11 SP2 adds WebYaST via an online software repository. After successful registration you can install and start WebYaST by following these steps:
Enable online repositories:
zypper mr -e SLE11-WebYaST-SP2-Pool
zypper mr -e SLE11-WebYaST-SP2-Updates
Install via pattern:
zypper in -t pattern WebYaST-UI WebYaST-Service
Open firewall ports:
SuSEfirewall2 open EXT TCP 54984
SuSEfirewall2 restart
Start services:
rccollectd start
rcyastws start
rcyastwc start
The last command will display the URL to connect to with a Web browser.
YaST wagon offers to perform the on-line migration from SLES 11 SP1 to SLES 11 SP2. Wagon provides two possibilities to perform the migration:
Minimal migration updates installed packages to the newer versions provided by SLES 11 SP2. It makes sure that your system is updated to SLES 11 SP2, but does not assure that after the migration your system is fully up-to-date.
Full migration performs, in addition to the minimal migration, the application of all patches relevant to your system. This assures that your system is fully up-to-date, up to the latest possible patch level.
To make sure that your system has all security patches installed and therefore is not vulnerable, either run the full migration, or make sure to run on-line update after performing the minimal migration.
With SUSE Linux Enterprise Server 11, the default file system in new installations has been changed from ReiserFS to ext3. A public statement can be found at http://www.suse.com/products/server/technical-information/#FileSystem .
SUSE supports the Linux Foundation's Carrier Grade Linux (CGL) specification. SUSE Linux Enterprise 11 meets the latest CGL 4.0 standard, and is CGL registered. For more information, see http://www.suse.com/products/server/cgl/.
Hot-add memory and CPU is supported and tested for both 32-bit and 64-bit systems when running vSphere 4.1 or newer. For more information, see the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility/detail.php?device_cat=software&device_id=11287~16&release_id=24.
Updated bnx2 driver to version 2.0.4
Updated bnx2x driver to version 1.52.1-7
Updated e100 driver to version 3.5.24-k2
Updated tg3 driver to version 3.106
Added bna driver for Brocade 10Gbit LAN card in version 2.1.2.1
Updated bfa driver to version 2.1.2.1
Updated qla3xxx driver to version 2.03.00-k5
Updated sky2 driver to version 1.25
This new ixgbe driver version adds support for the following devices:
82599EB 10 Gigabit Network Connection
82599EB 10 Gigabit TN Network Connection
X540-AT2 Ethernet Controller 10 Gigabit
82599 10 Gigabit Dual Port Backplane Connection with FCoE
82599 10 Gigabit Dual Port Network Connection with FCoE
82599EB 10 Gigabit SFP+ Network Connection
82599 10 Gigabit Dual Port Network Connection
This is a new virtual function driver added for SR-IOV support with the Intel ixgbe 10 Gigabit devices.
Added support for the following devices:
82580 Gigabit Network Connection
82580 Gigabit Fiber Network Connection
82580 Gigabit Backplane Connection
82580 Gigabit SFP Connection
82580 Gigabit Network Connection
I350 Gigabit Network Connection
I350 Gigabit Fiber Network Connection
I350 Gigabit Backplane Connection
I350 Gigabit Connection
82576 Gigabit Network Connection
82580 Gigabit Fiber Network Connection
This Service Pack adds SR-IOV support for the Intel(R) I350 devices.
This new version of the e1000e driver adds support for the following devices:
82567LM Gigabit Network Connection
82574L Gigabit Network Connection
82567V-3 Gigabit Network Connection
82579LM Gigabit Network Connection
82579V Gigabit Network Connection
82583V Gigabit Network Connection
82567V-4 Gigabit Network Connection
82566DC-2 Gigabit Network Connection
The Chelsio T4 adapter with the cxgb4, cxgb4i, and iw_cxgb4 drivers supports the 10GbE NIC, iSCSI, and iWARP functions respectively. IBM Power systems support Enhanced Error Handling (EEH) and hotplug removal. When hotplug operations are performed on a running adapter, a crash, hang, or failure to remove the adapter may occur.
A permanent solution in the device drivers is being investigated but may not be ready in time for GM. Until the maintenance driver is released, it is necessary to unload all of the cxgb4, cxgb4i, and iw_cxgb4 drivers prior to running any of the hotplug commands such as 'drmgr -r'.
Once the drivers are unloaded, the adapter can be hotplug moved to another partition or removed from the system as necessary.
The bna 3.0.2.2 driver supports all Brocade FC/FCOE adapters. Below is a list of adapter models with corresponding PCIIDs:
PCIID                   Model
1657:0014:1657:0014     1010 10Gbps single port CNA - LL
1657:0014:1657:0014     1020 10Gbps dual port CNA - LL
1657:0014:1657:0014     1007 10Gbps dual port CNA - LL
1657:0014:1657:0014     1741 10Gbps dual port CNA - LL
1657:0022:1657:0023     1860 10Gbps CNA - LL
1657:0022:1657:0023     1860 10Gbps NIC - LL
Firmware Download: The latest firmware package for the 3.0.2.2 bna driver can be found at http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page; click version v3.0.2.0, "Linux Adapter Firmware package for RHEL 6.2, SLES 11SP2".
Configuration and Management utility download: The latest driver configuration & management utility for 3.0.2.2 bna driver can be found at http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page and then click version v3.0.2.0, "Linux Adapter Util package for RHEL 6.2, SLES 11SP2".
Documentation: The latest Administrator's Guide, Installation and Reference Manual, Troubleshooting Guide, and Release Notes for the corresponding out-of-box driver can be found at http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page; use the following inbox and out-of-box driver version mapping to find the corresponding documentation:
Inbox Version    Out-of-box Version
v3.0.2.2         v3.0.0.0
Support: For general product and support info, go to the Brocade website at http://www.brocade.com/services-support/index.page.
For QLogic 82XX-based CNAs running a 4.7.x firmware version, update the firmware to the latest version from the QLogic website, or to whatever version is recommended by the OEM.
SP2 scans for the functions on a PCI device in a new way using ARI. This can cause some of the functions on the Broadcom 57712 adapter to be missing after upgrading to SP2.
Contact your system vendor to receive the latest firmware for the 57712 adapter that resolves this issue. Alternatively, upgrading to kernel 3.0.26 and adding the boot parameter 'pci=noari' will allow all the functions on the 57712 adapter to become visible under SLES 11 SP2.
Updated qla2xxx to version 8.03.01.04.11.1-k8
Updated qla4xxx to version v5.01.00.00.11.01-k13
Updated megaraid_mbox driver to version 2.20.5.1
Updated megaraid_sas to version 4.27
Updated MPT Fusion to version 4.22.00.00
Updated mpt2sas driver to version 04.100.01.02
Updated lpfc driver to version 8.3.5.7
Added bnx2i driver for Broadcom NetXtreme II in version 2.1.1
Updated bfa driver to version 2.1.2.1
The enic driver was updated to version 1.4.2 to support newer Cisco UCS systems. This update also replaces LRO (Large Receive Offload) with GRO (Generic Receive Offload).
Rapid Storage Technology enterprise 3.0 for Linux allows users to install to and boot from Intel BIOS initialized software RAID. New features supported with this version include Disk Coercion, Email Alerting, RAID5 XOR, Hot Spare Disk, Read Patrol, Online Capacity Expansion, RAID Level Migrations, Check Pointing, Smart Alerting, Expanded Stripe Size, SAS and SATA drive roaming, and Auto Rebuild.
This service pack includes the proper upstream md raid userspace (mdadm/mdmon) software raid utilities to ensure full feature functionality including install/boot support.
The Intel 6 Series/C200 Series Chipset Platform Controller Hub (PCH) for mainstream Servers requires the isci driver for the Intel SAS Controller Unit (SCU).
This service pack includes the official SCU "isci.c" driver to ensure full SCU support including install/boot support.
Instructions to setup iSCSI initiator over DCB:
The iSCSI initiator will automatically set packet priority based on the DCB iSCSI application priority in effect on the egress interface. The priority is set once at session establishment. If the DCB priority is to be changed, it will be necessary to reestablish the session to apply the changed priority.
Because the priority is set based on the egress interface, the priority cannot be set until the egress interface is known. This means that by default, the initial TCP packets to establish the session will not have a priority set, but subsequent packets will. If a session is bound to an interface, then the priority associated with that interface will be used even for the initial packet exchange. If a routing change results in a different egress interface being used, the same priority will continue to be used unless or until the session is re-established.
It is specifically recommended to bind to a VLAN interface. This allows the DCB-iSCSI priority to be communicated in the VLAN header. Without a VLAN header to convey the priority, the priority will only affect packet scheduling within the host. Commands such as the following demonstrate binding to a VLAN interface:
iscsiadm -m iface -I iface3 --op=new
iscsiadm -m iface -I iface3 --op=update -n iface.net_ifacename -v eth3.3260
By binding to the interface, every packet will carry the correct priority.
Make sure that the app TLV for iSCSI is enabled on the system, and that the switch port is configured to use the iscsi-default CEE map and has lldp iscsi-priority-bits 0x10 set:
For example, to configure the switchport on Brocade:
no cee
cee iscsi-default
lldp iscsi-priority-bits 0x10
This sets iSCSI to use priority 4. Assuming that the host is willing (will accept DCB configuration from the switch), iSCSI should then operate at priority 4.
The following will set the app tlv in CEE mode from the host:
dcbtool sc ethX app:1 e:1 a:1 w:1 appcfg:10
To enable app tlv in IEEE mode from the host:
lldptool -T -i eth2 -V APP app=4,2,3260
Note that the 3260 above is the well-known port number for iSCSI. The iSCSI app priority is always communicated using the well-known port number and will be used even if iSCSI has been configured to operate on a non-standard port. A non-standard port number is never used to determine the iSCSI initiator priority.
In most deployments, the DCB parameters are managed from the switch; in that case, the only host-side configuration needed is turning DCB on.
FCoE target setup:
For more information, see http://www.open-fcoe.org.
Open-iSCSI support has been added to the QLogic iSCSI qla4xxx driver in SUSE Linux Enterprise Server 11 Service Pack 2. Using iscsiadm, the following features are supported for qla4xxx:
Network configuration
iSCSI Target management enabling Discovery, Login/Logout of iSCSI targets
For more details, see Open-iSCSI README at http://www.open-iscsi.org/docs/README.
Note: IOCTL support in qla4xxx has been dropped, hence QLogic applications are not supported with this inbox driver. This is being targeted for a future release. A qla4xxx driver compatible with QLogic applications can also be obtained from the QLogic Web site.
Using bnx2fc driver for installation:
Broadcom's NetXtreme II 57712 device provides networking as well as storage functionality. Boot from SAN on this device is supported over an FCoE network using the bnx2fc driver. Add "withfcoe=1" to the boot option line. Since the DCBX protocol is offloaded and performed by the device firmware, the 'dcb' feature should be turned off during installation when prompted.
Note that FCoE boot from SAN on Broadcom 10G devices is only supported using the bnx2fc driver. Boot from SAN using the software fcoe driver is not supported.
For detailed information, refer to "Broadcom NetXtreme II(tm) Network Adapter User Guide".
Using iSCSI Disks When Installing:
Note: The installer for SLES 11 SP2 now supports iSCSI installation using the software iSCSI method and the native Broadcom offload method on Broadcom NetXtreme II devices.
To use Broadcom offload iSCSI during install, the iSCSI option ROM on the Broadcom device must be set to HBA mode. Refer to "iSCSI Boot Broadcom NetXtreme II(tm) Network Adapter User Guide" for detailed information on iSCSI install/boot for Broadcom devices.
To use software iSCSI install, disable HBA mode in the Broadcom iSCSI option ROM.
Storage Drivers:
Added bnx2i driver for Broadcom NetXtreme II in version 2.7.0.3
Added new bnx2fc driver for Broadcom NetXtreme II 57712
bnx2fc is an FCoE offload driver that uses the open-fcoe stack and fcoe-utils. Note that SLES 11 SP2 only supports offload FCoE on the NetXtreme II 57712. Refer to Documentation/scsi/bnx2fc.txt in the Linux kernel source for driver usage information.
The bfa 3.0.2.2 driver supports all Brocade FC/FCOE adapters. Below is a list of adapter models with corresponding PCIIDs:
PCIID                   Model
1657:0013:1657:0014     425 4Gbps dual port FC HBA
1657:0013:1657:0014     825 8Gbps PCIe dual port FC HBA
1657:0013:103c:1742     HP 82B 8Gbps PCIe dual port FC HBA
1657:0013:103c:1744     HP 42B 4Gbps dual port FC HBA
1657:0017:1657:0014     415 4Gbps single port FC HBA
1657:0017:1657:0014     815 8Gbps single port FC HBA
1657:0017:103c:1741     HP 41B 4Gbps single port FC HBA
1657:0017:103c:1743     HP 81B 8Gbps single port FC HBA
1657:0021:103c:1779     804 8Gbps FC HBA for HP Bladesystem c-class
1657:0014:1657:0014     1010 10Gbps single port CNA - FCOE
1657:0014:1657:0014     1020 10Gbps dual port CNA - FCOE
1657:0014:1657:0014     1007 10Gbps dual port CNA - FCOE
1657:0014:1657:0014     1741 10Gbps dual port CNA - FCOE
1657:0022:1657:0024     1860 16Gbps FC HBA
1657:0022:1657:0022     1860 10Gbps CNA - FCOE
Firmware Download: The latest Firmware package for the 3.0.2.2 bfa driver can be found at http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page, then click version v3.0.2.0, "Linux Adapter Firmware package for RHEL 6.2, SLES 11SP2".
Configuration and Management Utility Download: The latest driver configuration and management utility for 3.0.2.2 bfa driver can be found at http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page, then click version v3.0.2.0 "Linux Adapter Firmware package for RHEL 6.2, SLES 11SP2".
Documentation: The latest Administrator's Guide, Installation and Reference Manual, Troubleshooting Guide, and Release Notes for the corresponding out-of-box driver can be found at http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page; use the following inbox and out-of-box driver version mapping to find the corresponding documentation:
Inbox Version    Out-of-box Version
v3.0.2.2         v3.0.0.0
Support: For general product and support info, go to the Brocade website at http://www.brocade.com/services-support/index.page.
1. Once the link is up, LLDP queries QoS to get the new PFC and immediately sends an FCoE-incapable event, which is correct.
2. After negotiating with the neighbor, an LLDP frame with an unrecognized IEEE DCBX version is received, so the link is declared CEE-incapable, and an FCoE-capable event with PFC = 0 is sent to the FCoE kernel code.
3. The neighbor then adjusts its version to match our CEE version; now the right DCBX TLV is found in the incoming LLDP frame, and the link is declared CEE-capable. At this point no FCoE-capable event is sent again, because it was already sent in step 2.
To solve this, upgrade the switch firmware to v6.4.3 or above.
Updated CIFS to version 1.74
Updated intel-i810 driver
Added X11 driver for AMD Geode LX 2D (xorg-x11-driver-video-amd)
Updated X11 driver for Radeon cards
Updated XFS and DMAPI driver
Updated Wacom driver to version 1.46
USB 3.0 is the third major revision of the USB standard, bringing faster data transfer and increased power savings. More and more USB 3.0 consumer products are being launched in the market. Intel started to support USB 3.0 with the Intel(R) 7 Series/C216 Chipset Family.
This SP introduces support for USB 3.0 by adding patches for xHCI (eXtensible Host Controller Interface), USB 3.0 hub support and USB 3.0 support for Intel(R) 7 Series/C216 Chipset Family.
Processor graphics are provided in the 2nd Generation Intel(R) Core™ i7/i5/i3 processor family.
This service pack adds support for the processor graphics in the 2nd Generation Intel(R) Core™ i7/i5/i3 processor family by updating the required kernel module, xserver, xf86-video-intel driver, Mesa and dri driver.
Firefox was updated to version 24 ESR.
This update also brings updates of Mozilla NSPR and Mozilla NSS libraries. Mozilla NSS libraries contain cryptographic enhancements, including TLS 1.2 support.
It comes with PDF.js, which now replaces the Acroread PDF plugin.
gawk, as delivered in SUSE Linux Enterprise 11 SP1, has low performance with respect to multibyte string operations.
Carefully considering the changes from 3.1.6 to 3.1.8, we decided that a version upgrade will significantly help in other areas as well. Find below the list of important changes:
The zero flag no longer applies to %c and %s.
Failure to open a socket is no longer a fatal error.
The ' flag (%'d) is now just ignored on systems that cannot support it.
Gawk now handles multibyte strings better in [s]printf with field widths and such.
A getline from a directory is no longer fatal; instead it returns -1.
Several bugfixes for gdb version 7.1 accumulated, and upstream gdb gained better support for some languages (e.g., Fortran and C++). Backporting those changes to gdb 7.1 was not worthwhile.
gdb was therefore updated to version 7.3.
Added support for installation from an NFSv4 server.
Updated binutils to version 2.21.1
Updated bluez to version 4.51
Updated clamav to version 0.97.3
Updated crash to version 5.1.9
Updated dhcp to version 4.2.3.P2
Updated gdb to version 7.3
Updated hplip to version 3.11.10
Updated ipsec-tools to version 0.7.3
Updated IBM Java 1.4.2 (java-1_4_2-ibm) to SR13 FP11
Updated IBM Java 1.6.0 (java-1_6_0-ibm) to SR9.3
Updated libcgroup1 to version 0.37.1
Updated libcmpiutil to version 0.5.6
Updated libelf to version 0.8.12
Updated QT4 (libqt4) to version 4.6.3
Updated libvirt to version 0.9.6
Updated libvirt-cim to version 0.5.12
Updated mdadm to version 3.2.2
Updated module-init-tools to version 3.11.1
Updated MozillaFirefox to version 10
Added mt_st version 0.9b
Added netlabel version 0.19
Updated numactl to version 2.0.7
Updated openCryptoki to version 2.4
Updated openldap2 to version 2.4.26
Added openvas version 3.0
Added perf: Performance Counters For Linux
Added perl-WWW-Curl version 4.09
Added rng-tools: Support daemon for hardware random device
Updated sblim-cim-client2 to version 2.1.3
Updated sblim-cmpi-base to version 1.6.1
Updated sblim-cmpi-fsvol to version 1.5.0
Updated sblim-cmpi-network to version 1.4.0
Updated sblim-cmpi-nfsv3 to version 1.1.0
Updated sblim-cmpi-nfsv4 to version 1.1.0
Updated sblim-cmpi-params to version 1.3.0
Updated sblim-cmpi-sysfs to version 1.2.0
Updated sblim-gather to version 2.2.0
Updated sblim-sfcb to version 1.3.11
Updated sblim-sfcc to version 2.2.1
Updated sblim-wbemcli to version 1.6.1
Updated strongswan to version 4.4.0
Added stunnel version 4.36
Updated virt-viewer to version 0.4.1
Updated virt-manager to version 0.9.0
Updated kvm to version 0.15.1
Updated Xen (xen) to version 4.1.2
Updated dcbd to version 0.9.24
Updated e2fsprogs to version 1.41.9
Updated iprutils to version 2.3.7
Updated iscsitarget to version 1.4.20
Updated nfs-utils to version 1.2.3 for improved IPv6 support
Added apport, a tool to collect data automatically from crashed processes
Performance Co-Pilot (package pcp) was updated to version 3.6.10. This update obsoletes libpcp.so.2. As a result, any in-house or third-party applications developed using libpcp.so.2 will need to be re-based against libpcp.so.3. The new library and corresponding development header files are provided as part of the libpcp3-3.6.10 and pcp-devel-3.6.10 packages.
SUSE provides a Software Development Kit (SDK) for SUSE Linux Enterprise 11 Service Pack 2. This SDK contains libraries, development environments, and tools, organized in the following patterns:
C/C++ Development
Certification
Documentation Tools
GNOME Development
Java Development
KDE Development
Linux Kernel Development
Programming Libraries
.NET Development
Miscellaneous
Perl Development
Python Development
Qt 4 Development
Ruby on Rails Development
Ruby Development
Version Control Systems
Web Development
YaST Development
This section includes update-related information for this release.
With SUSE Linux Enterprise 11 SP2 we introduce Linux Kernel 3.0. This kernel is a direct successor of the Linux kernel 2.6 series, thus all applications run without change. However, some applications or installation programs check for the version string "2.6" literally and thus fail, even though our kernel is compatible.
We provide two mechanisms to encourage applications to recognize the kernel 3.0 in SUSE Linux Enterprise 11 SP2 as a Linux kernel 2.6 compatible system:
Use the uname26 command line tool to start a single application in a 2.6 context. Usage is as easy as typing uname26 [PROGRAM]. More information can be found in the manpage of "setarch".
Some database systems and enterprise business applications expect processes and tasks to run under a specific user name (not root). The Pluggable Authentication Modules (PAM) stack in SUSE Linux Enterprise allows putting a user into a 2.6 context. To achieve this, add the user name to the file /etc/security/uname26.conf. For more information, see the manpage for "pam_unix2". Caveat: We do not support running the "root" user in a 2.6 context.
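For example, to put a hypothetical database user "dbadmin" into the 2.6 context (the user name is a placeholder):

echo "dbadmin" >> /etc/security/uname26.conf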
If you are running SAP applications, have a look at SAP Note #1310037 for more information on running SAP applications within a Kernel 2.6 compatibility environment.
Known Issues
The current version of the LSI MegaCLI utility needs to be run with a 2.6 personality using the "uname26" tool.
The current version of the IBM Online SAS/SATA Hard Disk Drive Update Program needs to be run with a uname26 personality.
To upgrade a PostgreSQL server installation from version 8.3 to 9.1, the database files need to be converted to the new version.
Newer versions of PostgreSQL come with the pg_upgrade tool, which simplifies and speeds up the migration of a PostgreSQL installation to a new version. Formerly, a dump and restore was needed, which was much slower.
pg_upgrade needs to have the server binaries of both versions available. To allow this, we had to change the way PostgreSQL is packaged, as well as the naming of the packages, so that two or more versions of PostgreSQL can be installed in parallel.
Starting with version 9.1, PostgreSQL package names contain numbers indicating the major version. In PostgreSQL terms, the major version consists of the first two components of the version number, i.e. 8.3, 8.4, 9.0, or 9.1. So, the packages for PostgreSQL 9.1 are named postgresql91, postgresql91-server, etc. Inside the packages, the files were moved from their standard locations to a versioned location such as /usr/lib/postgresql83/bin or /usr/lib/postgresql91/bin to avoid file conflicts when packages are installed in parallel. The update-alternatives mechanism creates and maintains symbolic links that cause one version (by default the highest installed version) to re-appear in the standard locations. By default, database data are stored under /var/lib/pgsql/data on SUSE Linux.
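To check which installed version the standard locations currently point to, the update-alternatives links can be inspected. This is a sketch; the alternative name "postgresql" is an assumption and may differ on your system:

update-alternatives --display postgresql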
The following preconditions have to be fulfilled before data migration can be started:
If not already done, the packages of the old PostgreSQL version must be upgraded to the new packaging scheme through a maintenance update. For SLE 11 this means installing the patch that upgrades PostgreSQL from version 8.3.14 to 8.3.19 or higher.
The packages of the new PostgreSQL major version need to be installed. For SLE 11 this means installing postgresql91-server and all the packages it depends on. As pg_upgrade is contained in postgresql91-contrib, that package has to be installed as well, at least until the migration is done.
Unless pg_upgrade is used in link mode, the server must have enough free disk space to temporarily hold a copy of the database files. If the database instance was installed in the default location, the needed space can be determined by running the following command as root: "du -hs /var/lib/pgsql/data". If space is tight, it might help to run the "VACUUM FULL" SQL command on each database in the instance to be migrated, but be aware that it might take very long.
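Once these preconditions are fulfilled, a pg_upgrade run under the versioned layout described above might look like the following sketch. The new data directory name and the invocation as the postgres user are assumptions; follow the upstream documentation referenced below for the authoritative procedure:

su - postgres -c "pg_upgrade \
  --old-bindir=/usr/lib/postgresql83/bin \
  --new-bindir=/usr/lib/postgresql91/bin \
  --old-datadir=/var/lib/pgsql/data \
  --new-datadir=/var/lib/pgsql/data91"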
Upstream documentation about pg_upgrade, including step-by-step instructions for performing a database migration, can be found under file:///usr/share/doc/packages/postgresql91/html/pgupgrade.html (if the postgresql91-docs package is installed), or online under http://www.postgresql.org/docs/9.1/static/pgupgrade.html. NOTE: The online documentation starts by explaining how to install PostgreSQL from the upstream sources (which is not necessary on SLE) and also uses other directory names (/usr/local instead of the update-alternatives based path as described above).
For background information about the inner workings of pg_upgrade and a performance comparison with the old dump and restore method, see http://momjian.us/main/writings/pgsql/pg_upgrade.pdf.
For an automated upgrade from SLES 10 SP4 or SLES 11 SP1 using AutoYaST see the Deployment Guide, Part "Automated Installations". The Deployment Guide is part of the system documentation that comes with the product.
The online migration from SP1 to SP2 is supported via the "YaST wagon" module.
Online migration from SP1 to SP2 is not supported if debuginfo packages are installed.
To migrate the system to the Service Pack 2 level with zypper, proceed as follows:
Open a root shell.
Run zypper ref -s to refresh all services and repositories.
Run zypper up -t patch to install package management updates.
Now it is possible to install all available updates for SLES/SLED 11 SP1; run zypper up -t patch again.
Now the installed products contain information about distribution upgrades and which migration products should be installed to perform the migration. Read the migration product information from /etc/products.d/*.prod and install them.
Enter the following command:
grep '<product' /etc/products.d/*.prod
A sample output could be as follows:
<product>sle-sdk-SP2-migration</product>
<product>SUSE_SLES-SP2-migration</product>
Install these migration products (example):
zypper in -t product sle-sdk-SP2-migration SUSE_SLES-SP2-migration
Run suse_register -d 2 -L /root/.suse_register.log to register the products in order to get the corresponding SP2 Update repositories.
Run zypper ref -s to refresh services and repositories.
Check the repositories using zypper lr. Only if needed, disable repositories manually (note that the SP1-Pool and SP1-Updates repos need to stay enabled!) and enable the new SP2 (SP2-Core, SP2-Updates) repositories:
zypper mr --disable <repo-alias>
zypper mr --enable <repo-alias>
Then perform a distribution upgrade by entering the following command:
zypper dup --from SLES11-SP2-Core --from SLES11-SP2-Updates \
  --from SLE11-WebYaST-SP2-Pool --from SLE11-WebYaST-SP2-Updates
Add more SP2 catalogs here if needed, e.g. in case addon products are installed.
zypper will report that it will delete the migration product and update the main products. Confirm the message to continue updating the RPM packages.
To do a full update, run zypper patch.
After the upgrade is finished, register the new products again:
suse_register -d 2 -L /root/.suse_register.log
Reboot the system.
Migration is supported from SUSE Linux Enterprise Server 10 SP4 via bootable media (incl. PXE boot).
There are supported ways to upgrade from SLES 10 GA and SPx or SLES 11 GA to SLES 11 SP2, which may require intermediate upgrade steps:
SLES 10 GA -> SLES 10 SP1 -> SLES 10 SP2 -> SLES 10 SP3 -> SLES 10 SP4 -> SLES 11 SP2, or
SLES 11 GA -> SLES 11 SP1 -> SLES 11 SP2
The upgrade or the automated migration from SLES 10 to SLES 11 SP2 may fail if the root file system of the machine is located on iSCSI because of missing boot options.
There are two approaches to solve it, if you are using AutoYaST (adjust IP addresses and hostnames according to your environment!):
Use as boot options:
withiscsi=1 autoupgrade=1
autoyast=http://myserver/autoupgrade.xml
Then, in the dialog of the iSCSI initiator, configure the iSCSI device.
After successful configuration of the iSCSI device, YaST will find the installed system for the upgrade.
Add or modify the <iscsi-client> section in your autoupgrade.xml as follows:
<iscsi-client>
  <initiatorname>iqn.2012-01.com.example:initiator-example</initiatorname>
  <targets config:type="list">
    <listentry>
      <authmethod>None</authmethod>
      <iface>default</iface>
      <portal>10.10.42.84:3260</portal>
      <startup>onboot</startup>
      <target>iqn.2000-05.com.example:disk01-example</target>
    </listentry>
  </targets>
  <version>1.0</version>
</iscsi-client>
Then, run the automated upgrade with these boot options:
autoupgrade=1
autoyast=http://myserver/autoupgrade.xml
With SUSE Linux Enterprise Server 11 the kernel RPMs are split in different parts:
kernel-flavor-base
Very reduced hardware support, intended to be used in virtual machine images.
kernel-flavor
Extends the base package; contains all supported kernel modules.
kernel-flavor-extra
All other kernel modules which may be useful but are not supported. This package will not be installed by default.
SUSE Linux Enterprise Server uses tickless timers. This can be disabled by adding nohz=off as a boot option.
SUSE Linux Enterprise Server will no longer contain any development packages, with the exception of some core development packages necessary to compile kernel modules. Development packages are available in the SUSE Linux Enterprise Software Development Kit.
The man command now asks which manual page the user wants to see if manual pages with the same name exist in different sections. The user is expected to type the section number to make this manual page visible.
If you want to revert to the previously used method, set MAN_POSIXLY_CORRECT=1 in a shell initialization file such as ~/.bashrc.
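For example, to make the setting permanent for future shell sessions:

echo 'export MAN_POSIXLY_CORRECT=1' >> ~/.bashrc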
The YaST LDAP Server module no longer stores the configuration of the LDAP server in the file /etc/openldap/slapd.conf. It uses OpenLDAP's dynamic configuration backend, which stores the configuration in an LDAP database itself. That database consists of a set of .ldif files in the directory /etc/openldap/slapd.d. Usually, you should not need to access those files directly. To access the configuration, you can either use the yast2-ldap-server module or any capable LDAP client (e.g., ldapmodify, ldapsearch). For details on the dynamic configuration of OpenLDAP, refer to the OpenLDAP Administration Guide.
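For example, the complete dynamic configuration can be dumped with ldapsearch. This sketch assumes root access via the local ldapi socket with SASL EXTERNAL authentication, which may not be enabled in every setup:

ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config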
This release of SUSE Linux Enterprise Server ships with AppArmor. The AppArmor intrusion prevention framework builds a firewall around your applications by limiting the access to files, directories, and POSIX capabilities to the minimum required for normal operation. AppArmor protection can be enabled via the AppArmor control panel, located in YaST under Security and Users. For detailed information about using AppArmor, see the documentation in /usr/share/doc/packages/apparmor-docs.
The AppArmor profiles included with SUSE Linux have been developed with our best efforts to reproduce how most users use their software. The profiles provided work unmodified for many users, but some users may find our profiles too restrictive for their environments.
If you discover that some of your applications do not function as expected, you may need to use the AppArmor Update Profile Wizard in YaST (or the aa-logprof(8) command line utility) to update your AppArmor profiles. Place all your profiles into learning mode with the following command:

aa-complain /etc/apparmor.d/*
When a program generates many complaints, the system's performance is degraded. To mitigate this, we recommend periodically running the Update Profile Wizard (or aa-logprof(8)) to update your profiles even if you choose to leave them in learning mode. This reduces the number of learning events logged to disk, which improves the performance of the system.
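A typical tuning cycle might look like the following sketch (the profile name is hypothetical):

aa-logprof                                  # fold logged learning events into the profiles
aa-enforce /etc/apparmor.d/usr.sbin.myapp   # switch a finished profile back to enforce mode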
Note: Before updating, check the configuration of your boot loader to ensure that it is not configured to modify any system areas (MBR, setting the active partition, or similar). This will reduce the number of system areas that you need to restore after the update.
Updating a system where an alternative boot loader (not grub) or an additional boot loader is installed in the MBR (Master Boot Record) might override the MBR and place grub as the primary boot loader into the system.
In this case, we recommend the following: First back up your data. Then either do a fresh installation and restore your data, or run the update nevertheless and restore the affected system areas (in particular, the MBR). It is always recommended to keep data separated from the system software. In other words, /home, /srv, and other volumes containing data should be on separate partitions, volume groups, or logical volumes. The YaST partitioning module will propose doing this.
Other update strategies (except booting the install media) are safe if the boot loader is configured properly. However, the other strategies are not available if you update from SUSE Linux Enterprise Server 10.
During the upgrade to SUSE Linux Enterprise Server 11 MySQL is also upgraded to the latest version. To complete this migration you may have to upgrade your data as described in the MySQL documentation.
SuSEfirewall2 is enabled by default, which means you cannot log in from remote systems. This also interferes with network browsing and multicast applications, such as SLP and Samba ("Network Neighborhood"). You can fine-tune the firewall settings using YaST.
Windows 2008 VMs (32/64/R2) and newer running on Xen with VMDP 1.7 drivers need a VMDP update to 2.0. The currently available VMDP 1.7 drivers do not recognize the new viridian cpuid strings and therefore fail to load. The VM will still be able to boot and function using the emulated qemu devices. The updated VMDP 2.0 drivers have been enhanced to also recognize the viridian cpuid strings and are backward compatible with pre-SLES 11 SP2 hosts.
Windows 2003 VMs do not need a VMDP driver update, because they do not use the viridian enhancements.
With VMDP 2.0, hibernate is no longer supported for Windows VMs running on Xen.
We have improved the network configuration: If you install SUSE Linux Enterprise Server 11 SP2 and configure Xen, you get a bridged setup through YaST.
However, if you upgrade from SUSE Linux Enterprise Server 10 SP4 to SUSE Linux Enterprise Server 11 SP2, the upgrade does not configure the bridged setup automatically.
To start the bridge proposal for networking, start the "YaST Control Center", choose "Virtualization", then "Install Hypervisor and Tools". Alternatively, call yast2 xen on the command line.
The configuration of the LILO boot loader on the x86 and x86_64 architecture via YaST or AutoYaST is deprecated, and not supported anymore. For more information, see Novell TID 7003226 http://www.novell.com/support/documentLink.do?externalID=7003226.
SUSE Linux Enterprise Server 10 and SUSE Linux Enterprise Server 11 set net.ipv4.conf.all.rp_filter = 1 in /etc/sysctl.conf with the intention of enabling reverse path filtering. However, the kernel failed to enable reverse path filtering by default in these products, as intended.
Since SLES 11 SP1, this bug is fixed. Most simple single-homed unicast server setups will not notice a change, but the fix may cause issues for applications that relied on reverse path filtering being disabled (e.g., multicast routing or multi-homed servers).
For more details, see http://ifup.org/2011/02/03/reverse-path-filter-rp_filter-by-example/.
Starting with SUSE Linux Enterprise Server 11 Service Pack 1, the configuration files for recompiling the kernel were moved into their own sub-package. Each such package contains only the configuration for one kernel type ("flavor"), such as default or desktop.
The multi-volume tape dump support will be removed from zipl and zgetdump. The reason for this decision is that current tape cartridges have hundreds of gigabytes of capacity, so multi-volume support is no longer needed.
To prepare the move of novfs into an external repository together with NCL, the novfs kernel module is dropped from the SLES media. Customers can find the new novfs and NCL packages (novfs-kmp and novell-client) on the SUSE Linux Enterprise Desktop media.
In SUSE Linux Enterprise (up to version 11 SP2) we provided "rpcbind", which is compatible with portmap. "rpcbind" now provides full IPv6 support. Thus, support for portmap ended with the release of SLE 11 SP3.
With SP2 we are switching from xpdf-tools to poppler-tools for PDF rendering. poppler-tools is based on xpdf-tools, but is more stable and better maintained, and it is a seamless replacement.
L3 support for Openswan is scheduled to expire. This decision is driven by the fact that Openswan development has stalled substantially and there are no tangible signs that this will change in the future.
In contrast, the strongSwan project is active and able to deliver a complete implementation of current standards. Compared to Openswan, all relevant features are available in the strongSwan package; in addition, strongSwan is the only complete open source implementation of the RFC 5996 IKEv2 standard, whereas Openswan only implements a small mandatory subset. For now and the foreseeable future, only strongSwan qualifies as an enterprise-ready solution for encrypted TCP/IP connectivity.
IBM Java 1.4.2 is supported with SUSE Linux Enterprise Server 11 specifically for migration purposes. We will, however, remove support for this specific Java version with SUSE Linux Enterprise Server 11 SP3 and SUSE Linux Enterprise Server 12. We recommend upgrading your environments.
Intel Active Management (IAMT) drivers have been removed from SUSE Linux Enterprise because they are incompatible and no longer maintained. Refer to the Intel documentation on how to access newer versions of IAMT drivers for SUSE Linux Enterprise.
Based on significant customer demand, we are shipping PHP 5.3 parallel to PHP 5.2 with SUSE Linux Enterprise 11 SP2.
PHP 5.2 is deprecated though, and has been removed with SLE 11 SP3.
To facilitate the migration of an ext4 file system to another, supported file system, the SLE 11 SP2 kernel now contains a fully supported ext4 file system module, which provides solely read-only access to the file system.
If read-write access to an ext4 file system is still required, you may install the ext4-writeable KMP (kernel module package). This package is available in the online repository "SLES11-Extras" and contains a kernel module that provides read-write access to an ext4 file system. Be aware that this kernel module is unsupported.
ext4 is not supported for the installation of the SUSE Linux Enterprise operating system files.
With SUSE Linux Enterprise 11 SP2 we support offline migration from ext4 to the supported btrfs filesystem.
The following packages were removed with the release of SUSE Linux Enterprise Server 11 Service Pack 2:
hyper-v-kmp
The 32-bit Xen hypervisor as a virtualization host is not supported anymore. 32-bit virtual guests are not affected and fully supported with the provided 64-bit hypervisor.
The following packages were removed with the release of SUSE Linux Enterprise Server 11 Service Pack 1:
The brocade-bfa kernel module is now part of the main kernel package.
The enic kernel module is now part of the main kernel package.
The fnic kernel module is now part of the main kernel package.
The KVM kernel modules are now part of the main kernel package.
The following packages were removed with the major release of SUSE Linux Enterprise Server 11:
The JFS file system is no longer supported and the utilities have been removed from the distribution.
Replaced with LVM2.
The mapped-base functionality, which is used by 32-bit applications that need a larger dynamic data space (such as database management systems), has been replaced with flexmap.
The following packages and features are deprecated and will be removed with the next Service Pack or major release of SUSE Linux Enterprise Server:
The reiserfs file system is fully supported for the lifetime of SUSE Linux Enterprise Server 11 specifically for migration purposes. We will however remove support for creating new reiserfs file systems starting with SUSE Linux Enterprise Server 12.
The sendmail package is deprecated and might be discontinued with SUSE Linux Enterprise Server 12.
The lprng package is deprecated and will be discontinued with SUSE Linux Enterprise Server 12.
The dhcpv6 package is deprecated and will be discontinued with SUSE Linux Enterprise Server 12.
The qt3 package is deprecated and will be discontinued with SUSE Linux Enterprise Server 12.
syslog-ng will be replaced with rsyslog.
The smpppd package is deprecated and will be discontinued with one of the next Service Packs or SUSE Linux Enterprise Server 12.
The raw block devices (major 162) are deprecated and will be discontinued with one of the next Service Packs or SUSE Linux Enterprise Server 12.
Remote systems can now be served with xrdp. Windows clients are able to administer such servers.
Find the AppArmor Configuration module now in the "Security and Users" section of the YaST Control Center.
The YaST Repair Tool as available from the boot medium does not detect pseudo devices like /dev/btrfs and writes a warning about missing partitions instead. You should skip the repair of such a device, because for pseudo devices the availability of a partition is not expected.
Effective on 2009-01-13, provisional registrations have been disabled in the Novell Customer Center. Registering an instance of SUSE Linux Enterprise Server or Open Enterprise Server (OES) products now requires a valid, entitled activation code. Evaluation codes for reviews or proofs of concept can be obtained from the product pages and from the download pages on novell.com.
If a device is registered without a code at setup time, a provisional code is assigned to it by Novell Customer Center (NCC), and it will be entered in your NCC list of devices. No update repositories are assigned to the device at this time.
Once you are ready to assign a code to the device, start the YaST Novell Customer Center registration module and replace the un-entitled provisional code that NCC generated with the appropriate one to fully entitle the device and activate the related update repositories.
Operation under the Subscription Management Tool (SMT) package and registration proxy is not affected. Registration against SMT will assign codes automatically from your default pool in NCC until all entitlements have been assigned. Registering additional devices once the pool is depleted will result in the new device being assigned a provisional code (with local access to updates). The SMT server will notify the administrator that these new devices need to be entitled.
The minimal pattern provided in YaST's Software Selection dialog targets experienced customers and should be used as a base for your own specific software selections.
Do not expect a minimal pattern to provide a useful basis for your business needs without installing additional software.
This pattern does not include any dump or logging tools. To fully support your configuration, Novell Technical Services (NTS) will request installation of all tools needed for further analysis in case of a support request.
Intel's AES-NI is a new set of Single Instruction Multiple Data (SIMD) instructions introduced in Intel(R) processors since 2009. These instructions enable fast and secure data encryption and decryption using the Advanced Encryption Standard (AES), defined by FIPS Publication number 197.
This service pack adds patches to OpenSSL to support Intel's AES-NI.
Problem (Abstract)
Java applications that use synchronization extensively might perform poorly on Linux systems that include the Completely Fair Scheduler. If you encounter this problem, there are two possible workarounds.
Symptom
You may observe extremely high CPU usage by your Java application and very slow progress through synchronized blocks. The application may appear to hang due to the slow progress.
Cause
The Completely Fair Scheduler (CFS) was adopted into the mainline Linux kernel as of release 2.6.23. The CFS algorithm is different from previous Linux releases. It might change the performance properties of some applications. In particular, CFS implements sched_yield() differently, making it more likely that a thread that yields will be given CPU time regardless. More information on CFS can be found here: "Multiprocessing with the Completely Fair Scheduler", http://www.ibm.com/developerworks/linux/library/l-cfs/?ca=dgrlnxw06CFC4Linux
The new behavior of sched_yield() might adversely affect the performance of synchronization in the IBM JVM.
Environment
This problem may affect IBM JDK 5.0 and 6.0 (all versions) running on Linux kernels that include the Completely Fair Scheduler, including Linux kernel 2.6.27 in SUSE Linux Enterprise Server 11.
Resolving the Problem
If you observe poor performance of your Java application, there are two possible workarounds:
Either invoke the JVM with the additional argument "-Xthr:minimizeUserCPU",
or configure the Linux kernel to use the more backward-compatible heuristic for sched_yield() by setting the sched_compat_yield tunable kernel property to 1. For example:
echo "1" > /proc/sys/kernel/sched_compat_yield
You should not use these workarounds unless you are experiencing poor performance.
Simple database engines like Berkeley DB use memory mappings (mmap(2)) to manipulate database files. When the mapped memory is modified, those changes need to be written back to disk. In SUSE Linux Enterprise 11, the kernel includes modified mapped memory in its calculations for deciding when to start background writeback and when to throttle processes which modify additional memory. (In previous versions, mapped dirty pages were not accounted for and the amount of modified memory could exceed the overall limit defined.) This can lead to a decrease in performance; the fix is to increase the overall limit.
The maximum amount of dirty memory is 40% in SUSE Linux Enterprise 11 by default. This value is chosen for average workloads, so that enough memory remains available for other uses. The following settings may be relevant when tuning for database workloads:
vm.dirty_ratio
Maximum percentage of dirty system memory (default 40).
vm.dirty_background_ratio
Percentage of dirty system memory at which background writeback will start (default 10).
vm.dirty_expire_centisecs
Duration after which dirty system memory is considered old enough to be eligible for background writeback (in centiseconds).
These limits can be observed or modified with the sysctl utility (see sysctl(1) and sysctl.conf(5)).
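For example, the limits can be inspected and raised at runtime; the values below are illustrative, not tuning recommendations:

sysctl vm.dirty_ratio vm.dirty_background_ratio
sysctl -w vm.dirty_ratio=60 vm.dirty_background_ratio=20
# To make the change persistent, add the same keys to /etc/sysctl.conf.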
The host protected area (HPA) is an area of a hard drive that is not normally visible to an operating system; it is usually used by system vendors to store recovery data. The Linux kernel offers mechanisms to make the host protected area visible to the OS.
SUSE Linux Enterprise defaults to the host protected area being visible.
In rare cases this might be an unwanted setup (for example when using some RAID solutions etc.). In that case please use the option "Keep HPA" during installation or boot an already installed system using this kernel parameter:
libata.ignore_hpa=0
Note: Changing the handling of the host protected area for already installed systems may lead to data loss and should therefore be done with caution.
Future SUSE Linux Enterprise releases will change the default to honor the host protected area.
Setting permissions/ownership on multipath devices is becoming a problem, as raw devices are now deprecated in the Linux kernel and in database systems such as Oracle. Setting permissions on raw devices is straightforward, as you can write udev rules for that. Doing the same for multipath devices is challenging, since all you have at the udev level is dm-X as the device name; the associated WWID is not known.
To set permissions/ownership on multipath devices, copy the file /usr/share/doc/packages/device-mapper/12-dm-permissions.rules to /etc/udev/rules.d and adapt it to your needs. This file has four parts for different device types: PLAIN DM, LVM, ENCRYPTED, MULTIPATH. Add the parameters suitable to your environment there. Changes to udev rules might only become active after a reboot of the system.
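A sketch of the procedure; the rule shown is illustrative, and the exact match keys and values should follow the comments in the MULTIPATH section of the copied file:

cp /usr/share/doc/packages/device-mapper/12-dm-permissions.rules /etc/udev/rules.d/
# Then edit the MULTIPATH section, for example:
# ENV{DM_UUID}=="mpath-?*", OWNER="oracle", GROUP="dba", MODE="660"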
Some storage devices, e.g. IBM DS4K, require special handling for path failover and failback. In SUSE Linux Enterprise Server 10 SP2, dm layer served as hardware handler.
One drawback of this implementation was that the underlying SCSI layer did not know about the existence of the hardware handler. Hence, during device probing, SCSI would send I/O on the passive path, which would fail after a timeout and also print extraneous error messages in the console.
In SUSE Linux Enterprise Server 11, this problem is resolved by moving the hardware handler to the SCSI layer, hence the term SCSI Hardware Handler. These handlers are modules created under the SCSI directory in the Linux Kernel.
In SUSE Linux Enterprise Server 11, there are four SCSI Hardware Handlers: scsi_dh_alua, scsi_dh_rdac, scsi_dh_hp_sw, and scsi_dh_emc.
These modules need to be included in the initrd image so that SCSI knows about the special handling during probe time itself.
To do so, carry out the following steps:
Add the device handler modules to the INITRD_MODULES variable in /etc/sysconfig/kernel.
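For example (an illustrative excerpt; which handler modules you add depends on your storage hardware):

# /etc/sysconfig/kernel
INITRD_MODULES="ata_piix ext3 scsi_dh_alua scsi_dh_rdac"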
Create a new initrd with:
mkinitrd -k /boot/vmlinux-<flavour> \
  -i /boot/initrd-<flavour>-scsi_dh \
  -M /boot/System.map-<flavour>
Update the grub.conf/lilo.conf/yaboot.conf file with the newly built initrd.
Reboot.
The system time of a guest will drift several seconds per day.
To maintain an accurate system time, it is recommended to run ntpd in the guest. The ntpd daemon can be configured with the YaST "NTP Client" module. In addition to such a configuration, the following two variables must be set manually to "yes" in /etc/sysconfig/ntp:
NTPD_FORCE_SYNC_ON_STARTUP="yes"
NTPD_FORCE_SYNC_HWCLOCK_ON_STARTUP="yes"
SLES 11 SP2 has a newer block device driver, which presents all configured virtual disks as SCSI devices. Disks that used to appear as /dev/hda in SLES 11 SP1 will from now on appear as /dev/sda.
The Windows Server Manager GUI allows taking snapshots of a Hyper-V guest. After a snapshot is taken, the guest will fail to reboot. By default, the guest's root file system is referenced by the serial number of the virtual disk. This serial number changes with each snapshot. Since the guest expects the initial serial number, booting will fail.
The solution is to either delete all snapshots using the Windows GUI, or configure the guest to mount partitions by file system UUID. This change can be made with the YaST partitioner and boot loader configurator.
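For example, an /etc/fstab entry referencing the root file system by UUID instead of a device path; the UUID is a placeholder, determine the real one with blkid:

# /etc/fstab
UUID=<filesystem-uuid>  /  ext3  defaults  1 1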
Installing a guest hosted on Windows 8 Server may fail when a large virtual disk image (larger than 50 GB) in .vhdx format is assigned to the guest. To work around this issue, use either virtual disk images with a fixed size, or create the dynamically sized disk image using Powershell.
The .vhd and .vhdx images are sparse files. When a dynamic .vhdx is created with a maximum size of 127 GB, the initial size is about 256 KB. Because the default block size for .vhdx files is 32 MB, writing one 512 byte sector will result in a 32 MB section of the sparse file being allocated. When ext3 is allocating the MBR, the super block, the backup super blocks, inodes, directories, etc., space is being allocated in the sparse file. Because of ext3's suboptimal IO, how the data structures are laid out on disk, and the default block size, a large portion of the .vhdx file is allocated just by formatting. The workaround is to create a .vhdx file with a 1 MB block size rather than the default 32 MB.
Changing the block size in the UI is not implemented. It can only be changed when the VHDx file is created through Powershell. To create a VHD with a modified block size, use this Powershell script (all in one line):
New-VHD -Path C:\MyVHDs\test.vhdx -SizeBytes (127GB) -Dynamic -BlockSizeBytes (1MB) -VHDFormat vhdx
ntp was updated to version 4.2.8.
The ntp server ntpd does not synchronize with its peers anymore when the peers are specified by their host name in /etc/ntp.conf. The output of ntpq --peers lists the IP addresses of the remote servers instead of their host names, although name resolution for the affected hosts otherwise works.
Parameter changes
The meaning of some parameters for the sntp command line tool has changed, or the parameters have been dropped; for example, sntp -s is now sntp -S. Review any sntp usage in your own scripts for required changes.
After having been deprecated for several years, ntpdc is now disabled by default for security reasons. It can be re-enabled by adding the line enable mode7 to /etc/ntp.conf, but preferably ntpq should be used instead.
Firefox was updated to version 24 ESR.
This update also brings updates of Mozilla NSPR and Mozilla NSS libraries. Mozilla NSS libraries contain cryptographic enhancements, including TLS 1.2 support.
It comes with PDF.js, which now replaces the Acroread PDF plugin.
To support video and stream processing, the v4l tools and gstreamer-plugins were added.
New Packages (compared with SLES 11 SP1 GA):
apache2-mod_jk
apache2-mod_php53
augeas-lenses
axis
bcel
cft
cifs-utils
classpathx-mail
compat-libldap-2_3-0
cpupower
ebtables
ecj
gnu-jaf
haveged
hyper-v
intel-SINIT
iotop
iscsitarget-kmp-trace
jakarta-commons-beanutils
jakarta-commons-beanutils-javadoc
jakarta-commons-codec
jakarta-commons-collections
jakarta-commons-collections-javadoc
jakarta-commons-collections-tomcat5
jakarta-commons-daemon
jakarta-commons-daemon-javadoc
jakarta-commons-dbcp
jakarta-commons-dbcp-javadoc
jakarta-commons-dbcp-tomcat5
jakarta-commons-digester
jakarta-commons-digester-javadoc
jakarta-commons-discovery
jakarta-commons-discovery-javadoc
jakarta-commons-el
jakarta-commons-el-javadoc
jakarta-commons-fileupload
jakarta-commons-fileupload-javadoc
jakarta-commons-httpclient3
jakarta-commons-io
jakarta-commons-lang
jakarta-commons-launcher
jakarta-commons-launcher-javadoc
jakarta-commons-logging
jakarta-commons-logging-javadoc
jakarta-commons-modeler
jakarta-commons-modeler-javadoc
jakarta-commons-pool
jakarta-commons-pool-javadoc
jakarta-commons-pool-tomcat5
jakarta-commons-validator
jakarta-commons-validator-javadoc
jakarta-taglibs-standard
jakarta-taglibs-standard-javadoc
java-1_6_0-ibm-plugin
kernel-firmware
ledmon
libcap-ng0
libcap-ng-utils
libcares2
libcollection2
libcxgb4-rdmav2
libdhash1
libgcc46
libgomp46
libgudev-1_0-0
libibmad5
libibumad3
libica-2_1_0
libini_config2
libjack0
libldb1
liblzma5
libnetcontrol0
libnewt0_52
liborc-0_4-0
libpath_utils1
libraw1394-11
libref_array1
librelp0
libservicelog-1_1-1
libsnapper1
libstdc++46
libtalloc2
libtcnative-1-0
libtevent0
libudev0
libv4l
libv4l1-0
libv4l2-0
libv4lconvert0
libvirt-client
libyajl1
lio-mibs
lio-utils
log4j
makedumpfile
mcstrans
mozilla-kde4-integration
mozilla-xulrunner192
mozilla-xulrunner192-gnome
mozilla-xulrunner192-translations
mx4j
mx4j-javadoc
mx4j-manual
mysql-tools
nagios-nrpe
nagios-nrpe-doc
nagios-plugins-nrpe
netcat-openbsd
network-autoconfig
newt
nfs4-acl-tools
ofed-kmp-trace
openssl-ibmpkcs11
oracleasm-kmp-trace
oro
perl-apparmor
perl-Authen-SASL
perl-Config-General
perl-Convert-BinHex
perl-IO-Socket-INET6
perl-NetAddr-IP
perl-Net-XMPP
perl-Socket6
perl-Unicode-String
perl-XML-Stream
php53
php53-bcmath
php53-bz2
php53-calendar
php53-ctype
php53-curl
php53-dba
php53-dom
php53-exif
php53-fastcgi
php53-fileinfo
php53-ftp
php53-gd
php53-gettext
php53-gmp
php53-iconv
php53-intl
php53-json
php53-ldap
php53-mbstring
php53-mcrypt
php53-mysql
php53-odbc
php53-openssl
php53-pcntl
php53-pdo
php53-pear
php53-pgsql
php53-pspell
php53-shmop
php53-snmp
php53-soap
php53-suhosin
php53-sysvmsg
php53-sysvsem
php53-sysvshm
php53-tokenizer
php53-wddx
php53-xmlreader
php53-xmlrpc
php53-xmlwriter
php53-xsl
php53-zip
php53-zlib
python-argparse
python-augeas
python-curl
python-dmidecode
python-ethtool
python-lxml
python-newt
python-setools
python-sssd-config
regexp
regexp-javadoc
rhnlib
rsyslog-diag-tools
rsyslog-doc
rsyslog-module-gssapi
rsyslog-module-gtls
rsyslog-module-mysql
rsyslog-module-pgsql
rsyslog-module-relp
rsyslog-module-snmp
rsyslog-module-udpspoof
ruby-ffi
ruby-rb-inotify
ruby-rpm
sces-client
sendxmpp
servletapi5
setools-console
setools-gui
setools-java
setools-libs
setools-tcl
sles-autoyast_en-pdf
sles-deployment_ko
sles-deployment_ko-pdf
sles-hardening_en-pdf
sles-kvm_en-pdf
sles-lxcquick_en-pdf
sles-tuning_en-pdf
snapper
snapper-zypp-plugin
spacewalk-check
spacewalk-client-setup
spacewalk-client-tools
spacewalksd
squashfs
squid3
squidGuard
squidGuard-doc
sssd
sssd-tools
subscription-tools
suse-ami-tools
suseRegisterInfo
tboot
tomcat6
tomcat6-admin-webapps
tomcat6-docs-webapp
tomcat6-javadoc
tomcat6-jsp-2_1-api
tomcat6-lib
tomcat6-servlet-2_5-api
tomcat6-webapps
tomcat_apparmor
translation-update
translation-update-ar
translation-update-cs
translation-update-da
translation-update-de
translation-update-es
translation-update-fi
translation-update-fr
translation-update-hu
translation-update-it
translation-update-ja
translation-update-ko
translation-update-nb
translation-update-nl
translation-update-pl
translation-update-pt
translation-update-pt_BR
translation-update-ru
translation-update-sv
translation-update-zh_CN
translation-update-zh_TW
usb_modeswitch
usb_modeswitch-data
wsdl4j
xml-commons
xorg-x11-server-dmx
xorg-x11-server-rdp
xrdp
xz
xz-lang
yast2-fcoe-client
yast2-rdp
yast2-snapper
zypper-log
zypp-plugin-python
zypp-plugin-spacewalk
Removed Packages (compared with SLES 11 SP1 GA):
brocade-bna-kmp-default
brocade-bna-kmp-pae
brocade-bna-kmp-xen
btrfs-kmp-default
btrfs-kmp-pae
btrfs-kmp-xen
cifs-mount
cxgb3-firmware
ext4dev-kmp-default
ext4dev-kmp-pae
ext4dev-kmp-ppc64
ext4dev-kmp-xen
hyper-v-kmp-default
hyper-v-kmp-pae
intel-iamt
itrace
itrace-kmp-ppc64
iwl1000-ucode
iwl3945-ucode
iwl4965-ucode
iwl5000-ucode
iwl5150-ucode
iwl6000-ucode
libgcc43
libgomp43
libstdc++43
libstdc++43-doc
libtalloc1
libvolume_id1
lsvpd
mozilla-xulrunner190
mozilla-xulrunner190-gnomevfs
mozilla-xulrunner190-translations
mozilla-xulrunner191
mozilla-xulrunner191-gnomevfs
mozilla-xulrunner191-translations
NetworkManager-kde
perl-libapparmor
qlogic-firmware
s390-32
sles-manuals_en-pdf
systemtap-client
xpdf-tools
With global IPv4 addresses getting scarce, the switch to IPv6 is inevitable and needs compatible software. Squid2 does not support IPv6.
Squid version 3.1 has been added, which provides native IPv6 support.
The configuration file /etc/squid/squid.conf has changed in an incompatible manner, some options do not exist anymore, others are not backward compatible. For complete details on changes, refer to the Squid 3.1 release notes at http://www.squid-cache.org/Versions/v3/3.1/RELEASENOTES.html.
With SUSE Linux Enterprise 11 SP3, Squid 2.7 packages will be deprecated and unsupported.
For some locales the date format has changed from the ISO date (YYYY-MM-DD) to a locale date format. Most significantly, this applies to the American English locale (LANG=en_US.UTF-8). This means SUSE Linux Enterprise 11 SP2 will output the date as SUSE Linux Enterprise 10 did and thus ensures long-term backwards compatibility. To keep the ISO date, set the environment variable:

export TIME_STYLE=long-iso
With the SUSE Linux Enterprise High Availability Extension 11, SUSE offers the most modern open source High Availability Stack for Mission Critical environments.
While this functionality is welcomed in most environments, it requires about 1% of memory: the allocation is done at boot time and uses 40 bytes per 4 KiB page, which amounts to about 1% of total memory.
In virtualized environments, specifically but not exclusively on s390x systems, this may lead to a higher basic memory consumption: e.g., a 20GiB host with 200 x 1GiB guests consumes 10% of the real memory.
This memory is not swappable by Linux itself, but the guest cgroup memory is pageable by a z/VM host on an s390x system and might be swappable on other hypervisors as well.
Cgroup memory support is activated by default, but it can be deactivated by adding the kernel parameter cgroup_disable=memory. A reboot is required to deactivate or activate this setting.
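For example, appended to the kernel line in /boot/grub/menu.lst (kernel version and root device are placeholders):

kernel /boot/vmlinuz-<version>-default root=/dev/<rootdev> cgroup_disable=memory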
Up to SLE 11 GA, the kernel development files (.config, Module.symvers, etc.) for all flavors were packaged in a single kernel-syms package. Starting with SLE 11 SP1, these files are packaged in individual kernel-$flavor-devel packages, allowing KMPs to be built for only the required kernel flavors. For compatibility with existing spec files, the kernel-syms package still exists and depends on the individual kernel-$flavor-devel packages.
Hot-plugging a device (network, disk) works fine for a KVM guest on a SLES 11 host since SP1. However, migrating the same guest with the hotplugged device (available on the destination host) fails.
SLES 11 SP1 and later support hot-plugging a device into a KVM guest, but migrating the guest with the hot-plugged device is not supported and is expected to fail.
OpenSSH now makes use of cryptographic hardware acceleration. As a result, the transfer of large quantities of data through an ssh connection is considerably faster. As an additional benefit, the CPU of a system with cryptographic hardware will see a significant reduction in load.
To allow a specific user (“joe”) to mount removable media, run the following command as root:
polkit-auth --user joe \
  --grant org.freedesktop.hal.storage.mount-removable
To allow all locally logged in users on the active console to mount removable media, run the following commands as root:
echo 'org.freedesktop.hal.storage.mount-removable no:no:yes' \
  >> /etc/polkit-default-privs.local
/sbin/set_polkit_default_privs
The Apache Web server offers HTTPS protocol support via mod_ssl, which in turn uses the openssl shared libraries. SUSE Linux Enterprise Server 11 SP2 and SP3 come with openssl version 0.9.8j. This openssl version supports TLS versions up to and including TLSv1.0; support for newer TLS versions like 1.1 or 1.2 is missing.
Recent recommendations encourage the use of TLSv1.2, specifically to support Perfect Forward Secrecy. To overcome this limitation, the SUSE Linux Enterprise Server 11 SP2, SP3, and SP4 are supplied with upgrades to recent versions of the mozilla-nss package and with the package apache2-mod_nss, which makes use of mozilla-nss for TLSv1.2 support for the Apache Web server.
An additional mod_nss module is supplied for apache2, which can coexist with all existing libraries and apache2 modules. This module uses the Mozilla Network Security Services (NSS) library, which supports the TLS 1.1 and TLS 1.2 protocols. It is not a drop-in replacement; configuration and certificate storage are different. It can coexist with mod_ssl if necessary.
The package includes a sample configuration and a README-SUSE.txt for setup guidance.
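A minimal sketch of enabling the module; the a2enmod module name is an assumption, so see README-SUSE.txt for the authoritative steps:

zypper in apache2-mod_nss
a2enmod nss          # register the module with apache2
rcapache2 restart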
The open-iscsi version used in SLES 11 SP2 was 2.0.872. After discussion with the open-iscsi maintainer, it was determined that upgrading the SUSE version to 2.0.873, the latest stable version at the time this investigation began, was the best approach.
This also allows us to support an updated iscsiuio package from Broadcom, which they have requested.
This stable version, with SUSE patches added, was deemed safe to use, partly because it is already present in SLES 11 SP3.
Open-iscsi was updated from version 2.0.872 to 2.0.873, which is the latest upstream stable version and fully supports IPv6. It has been successfully tested for IPv6 compliance as well as core functionality, and is backwards compatible with the previous version.
The cachefilesd package has been included with a SLE 11 SP2 maintenance update.
The cachefilesd user-space daemon manages persistent disk-based caching of files that are used by network file systems such as NFS. cachefilesd can help reduce the load on the network and on the server, because some of the network file access requests are served by the local cache.
The DNS Server Bind has been updated to the long term supported version 9.9 for longer stability going forward. In version 9.9, the commands 'dnssec-makekeyset' and 'dnssec-signkey' are not available anymore.
DNSSEC tools provided by Bind 9.2.4 are not compatible with Bind 9.9 and later and have been replaced where applicable. Specifically, DNSSEC-bis functionality removes the need for dnssec-signkey(1M) and dnssec-makekeyset(1M); dnssec-keygen(1M) and dnssec-signzone(1M) now provide alternative functionality.
For more information, see TID 7012684 (https://www.suse.com/support/kb/doc.php?id=7012684).
Support for NFS 4.1 is now available.
The parameter NFS4_SERVER_MINOR_VERSION is now available in /etc/sysconfig/nfs for setting the supported minor version of NFS 4.
Mounting NFS volumes locally on the exporting server is not supported on SUSE Linux Enterprise systems, as is the case on all enterprise-class Linux systems.
There is a reported problem that the Mellanox ConnectX2 Ethernet adapter does not trigger the automatic load of the mlx4_en adapter driver. If you experience problems with the mlx4_en driver not loading automatically when a Mellanox ConnectX2 interface is available, create the file mlx4.conf in the directory /etc/modprobe.d with the following content:
install mlx4_core /sbin/modprobe --ignore-install mlx4_core \ && /sbin/modprobe mlx4_en
If upgrading from SP1 to SP2 on a system with an ATI Radeon ES1000 video chip, there may be issues with the color palette when running Xorg. To avoid this issue, regenerate a new xorg.conf file after the installation with:
sax2 -a -r
This will allow the Xorg vesa driver to control the video chip.
SUSE Linux Enterprise 11 (x86, x86_64, and IA64) uses the Myri10GE driver from the mainline Linux kernel. The driver requires a firmware file to be present, which is not delivered with SUSE Linux Enterprise 11.
Download the required firmware at http://www.myricom.com.
This Service Pack will ensure support for the following new Intel processors:
The 2nd Generation Intel(R) Core™ i7/i5/i3 processor family
The 3rd Generation Intel(R) Core™ processor family
Intel(R) Xeon(R) processor E3-1200 series
Intel(R) Xeon(R) processors E5-4600/2600/2400/1600 series
Future planned Intel(R) Xeon(R) processor code named Ivy Bridge
Intel(R) Platforms based on Intel(R) Xeon(R) Processor E5-4600/2600/2400/1600 and Intel(R) C600 chipset product family will introduce PCI Express Gen3.
This Service Pack adds support for PCI Express Gen3 (ID-based Ordering, Latency Tolerance Reporting, Optimized Buffer Flush/Fill (OBFF)).
This Service Pack adds support for the following Intel(R) platforms:
Intel(R) platforms based on Intel(R) Xeon(R) Processor E3-1200 and Intel(R) C200 chipset product family.
Intel(R) platforms based on Intel(R) Xeon(R) Processor E5-4600/2600/2400/1600 and Intel(R) C600 chipset product family.
Intel(R) TXT provides a solution for protecting IT infrastructure against software-based attacks within a server or PC at startup.
This Service Pack adds basic support for Intel(R) TXT by adding patches to the kernel and integrating tboot.
This hardware flaw ("AMD Erratum #121") is described in "Revision Guide for AMD Athlon 64 and AMD Opteron Processors" (http://support.amd.com/us/Processor_TechDocs/25759.pdf):
The following 130nm and 90nm (DDR1-only) AMD processors are subject to this erratum:
First-generation AMD-Opteron(tm) single and dual core processors in either 939 or 940 packages:
AMD Opteron(tm) 100-Series Processors
AMD Opteron(tm) 200-Series Processors
AMD Opteron(tm) 800-Series Processors
AMD Athlon(tm) processors in either 754, 939 or 940 packages
AMD Sempron(tm) processor in either 754 or 939 packages
AMD Turion(tm) Mobile Technology in 754 package
This issue does not affect Intel processors.
(End quoted text.)
As this is a hardware flaw, it is not fixable except by upgrading your hardware to a newer revision, not allowing untrusted 64-bit guest systems, or accepting that someone may halt your machine. The impact of this flaw is that a malicious PV guest user can halt the host system.
The SUSE Xen updates address this by disabling the boot of Xen guest systems. The host will boot, it just will not start guests. In other words: if the update is installed on the AMD64 hardware listed above, guests will no longer boot by default.
To re-enable booting, the "allow_unsafe" option needs to be added to XEN_APPEND in /etc/sysconfig/bootloader as follows:
XEN_APPEND="allow_unsafe"
Due to limitations in the legacy x86/x86_64 BIOS implementations, booting from devices larger than 2 TiB is technically not possible using legacy partition tables (DOS MBR).
Since SUSE Linux Enterprise Server 11 Service Pack 1 we support installation and boot using uEFI on the x86_64 architecture and certified hardware.
Depending on the workload, i586 and i686 machines with 16 GB to 48 GB of memory can run into instabilities. Machines with more than 48 GB of memory are not supported at all. Lower the usable memory with the mem= kernel boot option.
In such memory scenarios, we strongly recommend using an x86-64 system with 64-bit SUSE Linux Enterprise Server and running the (32-bit) x86 applications on it.
When running SLES on an x86 machine, the kernel can only address 896 MB of memory directly. In some cases, the pressure on this memory zone increases linearly with hardware resources such as the number of CPUs, the amount of physical memory, the number of LUNs and disks, the use of multipath, etc.
To work around this issue, we recommend running an x86_64 kernel on such large server machines.
When installing SUSE Linux Enterprise Server 11 on an HS12 system with a "NetXen Incorporated BladeCenter-H 10 Gigabit Ethernet High Speed Daughter Card", the boot parameter pcie_aspm=off should be added.
Ethernet interfaces on some hardware do not get enumerated in a way that matches the marking on the chassis.
The hpilo driver is included in SUSE Linux Enterprise Server 11. Therefore, no hp-ilo package will be provided in the Linux ProLiant Support Pack for SUSE Linux Enterprise Server 11.
For more details, see Novell TID 7002735.
The desktop in SUSE Linux Enterprise Server 11 now recognizes the HP High Performance Mouse for iLO Remote Console and is configured to accept and process events from it. For the desktop mouse and the HP High Performance Mouse to stay synchronized, it is necessary to turn off mouse acceleration. As a result, the HP iLO2 High-Performance mouse (hpmouse) package is no longer needed with SUSE Linux Enterprise Server 11 once one of the following options is implemented.
In a terminal, run xset m 1; this setting will not survive a reset of the desktop.
(Gnome) In a terminal, run gconf-editor and go to desktop->gnome->peripherals->mouse. Edit the "motion acceleration" field to be 1.
(KDE) Open "Personal Settings (Configure Desktop)" in the menu and go to "Computer Administration->Keyboard&Mouse->Mouse->Advanced" and change "Pointer Acceleration" to 1.
(Gnome) In a terminal run "gnome-mouse-properties" and adjust the "Pointer Speed" slide scale until the HP High Performance Mouse and the desktop mouse run at the same speed across the screen. The recommended adjustment is close to the middle, slightly on the "Slow" side.
After acceleration is turned off, sync the desktop mouse and the ILO mouse by moving to the edges and top of the desktop to line them up in the vertical and horizontal directions. Also if the HP High Performance Mouse is disabled, pressing the <Ctrl> key will stop the desktop mouse and allow easier synching of the two pointers.
For more details, see Novell TID 7002735.
32-bit (x86) compatibility libraries like "libstdc++-libc6.2-2.so.3" were available on x86_64 in the package "compat-32-bit" with SUSE Linux Enterprise Server 9 and 10, and are also available on the SUSE Linux Enterprise Desktop 11 medium (compat-32-bit-2009.1.19), but they are not included in SUSE Linux Enterprise Server 11.
Background
The respective libraries were deprecated back in 2001 and shipped in the compatibility package with the release of SUSE Linux Enterprise Server 9 in 2004. The package was still shipped with SUSE Linux Enterprise Server 10 to provide a longer transition period for applications requiring it.
With the release of SUSE Linux Enterprise Server 11 the compatibility package is no longer supported.
Solution
In an effort to enable a longer transition period for applications still requiring this package, it has been moved to the unsupported "Extras" channel. This channel is visible on every SUSE Linux Enterprise Server 11 system that has been registered with the Novell Customer Center. It is also mirrored via SMT alongside the supported and maintained SUSE Linux Enterprise Server 11 channels.
Packages in the "Extras" channel are not supported or maintained.
The compatibility package is part of SUSE Linux Enterprise Desktop 11 due to a policy difference with respect to deprecation and deprecated packages as compared to SUSE Linux Enterprise Server 11.
We encourage customers to work with SUSE and SUSE's partners to resolve dependencies on these old libraries.
Example: the libpcap0-devel-32-bit package was available in the Software Development Kit 10, but is missing from the Software Development Kit 11
Background
SUSE supports running 32-bit applications on 64-bit architectures; respective runtime libraries are provided with SUSE Linux Enterprise Server 11 and fully supported. With SUSE Linux Enterprise 10 we also provided 32-bit devel packages on the 64-bit Software Development Kit. Having 32-bit devel packages and 64-bit devel packages installed in parallel may lead to side-effects during the build process. Thus with SUSE Linux Enterprise 11 we started to remove some (but not yet all) of the 32-bit devel packages from the 64-bit Software Development Kit.
Solution
With the development tools provided in the Software Development Kit 11, customers and partners have two options to build 32-bit packages in a 64-bit environment (see below). Beyond that, SUSE's appliance offerings provide powerful environments for software building, packaging and delivery.
Use the "build" tool, which creates a chroot environment for building packages.
The Software Development Kit contains the software used for the Open Build Service. Here the abstraction is provided by virtualization.
vmw_balloon driver has been added to SLE11-SP2. The driver is supported externally and the currently available version is 1.2.1.2-k. The driver is available only for i386 and x86_64 architectures.
Existing methods of exporting a file system from host to the guest include NFS and CIFS, which were not designed with virtualized environments in mind. There is need for a mechanism that provides faster access to exported file systems by exploiting the fact that the guest (client) is running on the same physical hardware as the host (server) that is exporting the file system.
SUSE Linux Enterprise Server 11 SP2 provides VirtFS, which is a new way to export file systems from the host and mount it on the QEMU/KVM guest. VirtFS exploits virtio infrastructure provided by QEMU and hence provides the guest fast access to the exported file system. Conceptually, VirtFS is similar to running NFS server on the host and NFS mounting the exported file system on the guest. For more information about using VirtFS, refer to QEMU wiki at http://wiki.qemu.org/Documentation/9psetup.
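As a minimal sketch following the QEMU 9p documentation referenced above (the export path, disk image, and mount tag are illustrative), a host directory is exported on the qemu-kvm command line and mounted inside the guest:
# host: export /srv/export under the tag "hostshare"
qemu-kvm -m 1024 -fsdev local,id=exportfs,path=/srv/export,security_model=mapped -device virtio-9p-pci,fsdev=exportfs,mount_tag=hostshare disk0.raw
# guest: mount the exported file system via virtio
mount -t 9p -o trans=virtio hostshare /mnt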
SUSE Linux Enterprise Server 11 SP2 is available immediately for use on Amazon Web Services EC2. For more information about Amazon EC2 Running SUSE Linux Enterprise Server, please visit http://aws.amazon.com/suse
With SLE 11 SP2, we removed the 32-bit hypervisor as a virtualization host. 32-bit virtual guests are not affected and are fully supported with the provided 64-bit hypervisor.
If you had a boot entry for it when upgrading from SP1, you need to manually remove it.
Since SUSE Linux Enterprise Server 11 SP1, KVM is fully supported on the x86_64 architecture. KVM is designed around hardware virtualization features included in both AMD (AMD-V) and Intel (VT-x) CPUs produced within the past few years, as well as other virtualization features in even more recent PC chipsets and PCI devices, for example, device assignment using IOMMU and SR-IOV.
The following websites identify processors that support hardware virtualization:
The KVM kernel modules will not load if the basic hardware virtualization features are not present and enabled in the BIOS. If KVM does not start, please check the BIOS settings.
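A quick way to check whether the CPU advertises the required features is to look for the vmx (Intel) or svm (AMD) flags, for example:
egrep '(vmx|svm)' /proc/cpuinfo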
KVM allows for memory overcommit and disk space overcommit. It is up to the user to understand the impact of doing so. Hard errors resulting from exceeding available resources will result in guest failures. CPU overcommit is supported but carries performance implications.
KVM supports a number of storage caching strategies which may be employed when configuring a guest VM. There are important data integrity and performance implications when choosing a caching mode. As an example, cache=writeback is not as safe as cache=none. See the online "SUSE Linux Enterprise Server Virtualization with KVM" documentation for details.
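For illustration (the image path is hypothetical), the caching mode is selected per disk with the cache= property of the -drive option:
qemu-kvm -m 1024 -drive file=/var/lib/kvm/images/vm1/disk0.raw,if=virtio,cache=none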
The following guest operating systems are supported:
Starting with SLES 11 SP2, Windows guest operating systems are fully supported on the KVM hypervisor, in addition to Xen. For the best experience, we recommend using WHQL-certified virtio drivers, which are part of SLE VMDP.
SUSE Linux Enterprise Server 11 SP1 and SP2 as fully virtualized. The following virtualization aware drivers are available: kvm-clock, virtio-net, virtio-block, virtio-balloon
SUSE Linux Enterprise Server 10 SP3 and SP4 as fully virtualized. The following virtualization aware drivers are available: kvm-clock, virtio-net, virtio-block, virtio-balloon
SUSE Linux Enterprise Server 9 SP4 as fully virtualized. For the 32-bit kernel, specify clock=pmtmr on the Linux boot line; for the 64-bit kernel, specify ignore_lost_ticks on the Linux boot line.
For more information, see /usr/share/doc/packages/kvm/kvm-supported.txt.
VMware, SUSE and the community improved the kernel infrastructure in a way that VMI is no longer necessary. Starting with SUSE Linux Enterprise Server 11 SP1, the separate VMI kernel flavor is obsolete and therefore has been dropped from the media. When upgrading the system, it will be automatically replaced by the PAE kernel flavor. The PAE kernel provides all features, which were included in the separate VMI kernel flavor.
Unless the hardware supports Pause Loop Exiting (Intel) or Pause Intercept Filter (AMD), fully virtualized guests with CPU overcommit in place might become unresponsive or hang under heavy load.
Paravirtualized guests work flawlessly with CPU overcommit under heavy load.
This issue is currently being worked on.
When installing SUSE Linux Enterprise Server 11 on IBM System X x3850/x3950 with ATI Radeon 7000/VE video cards, the boot parameter 'vga=0x317' needs to be added to avoid video corruption during the installation process.
Graphical environment (X11) in Xen is not supported on IBM System X x3850/x3950 with ATI Radeon 7000/VE video cards.
In a few cases, following the installation of Xen, the hypervisor does not boot into the graphical environment. To work around this issue, modify /boot/grub/menu.lst and replace vga=<number> with vga=mode-<number>. For example, if the setting for your native kernel is vga=0x317, then for Xen you will need to use vga=mode-0x317.
Paravirtualized (PV) DomUs usually receive the time from the hypervisor. If you want to run "ntp" in PV DomUs, the DomU must be decoupled from the Dom0's time. At runtime, this is done with:
echo 1 > /proc/sys/xen/independent_wallclock
To set this at boot time:
either append "independent_wallclock=1" to the kernel command line in the DomU's grub configuration file,
or append "xen.independent_wallclock = 1" to /etc/sysctl.conf in the DomU.
If you encounter time synchronization issues with Paravirtualized Domains, we encourage you to use NTP.
Today's businesses have come to rely on the uninterrupted availability of platforms and services, so the "Reliability", "Availability" and "Serviceability" (RAS) features have been growing more and more critical to the real-time, always-on enterprise environment. Adding physical processors and memory to a running system without shutting down the operating system or powering down the system is supported on Intel(R) Xeon(R) Processor 7500 series-based platforms.
This Service Pack adds proper support for the mentioned platforms.
While the number of LUNs for a running system is virtually unlimited, we suggest not having more than 64 LUNs online while installing the system, to reduce the time to initialize and scan the devices and thus reduce the time to install the system in general.
IBM Power 7 systems running firmware 7.2.0 SP1 or later along with version 2.2.0.11-FP24 SP01 or later of the Virtual I/O Server and HMC v7r7.2.0 or later include support for long term suspension of logical partitions. Logical partitions can be suspended and resumed from the HMC. All I/O resources must be virtual I/O resources at the time of suspending. Once suspended, the memory and processor resources associated with the suspended logical partition are free to be used by other logical partitions.
SLES 11 SP2 has been enhanced to support logical partition suspend and resume.
The kernel is able to capture the most recent oops or panic report from the dmesg buffer into NVRAM, where it can be examined after reboot.
PowerVM release 7.4 includes a new memory optimization feature called Active Memory Deduplication. This feature applies to logical partitions which are assigned to an Active Memory Sharing (AMS) pool. With Active Memory Deduplication, the PowerVM Hypervisor automatically detects memory pages in the pool that have identical contents, and remaps those pages to a single physical page, freeing up the duplicate pages for other purposes in the AMS pool.
SLES 11 SP2 has been enhanced to provide the PowerVM Hypervisor with page hints to indicate which pages are good candidates for merging. This feature is automatically enabled in the kernel for AMS LPARs with Active Memory Deduplication enabled. Statistics on page merging are available through the amsstat utility.
The virtual fibre channel for IBM Power systems has been updated to support the 5729 PCIe 4-Port 8Gb FC adapter.
The ppc64-specific instruction tracing tool, ITrace, is no longer available.
The IBM Power Virtual Ethernet driver (ibmveth) has been updated with various performance enhancements, including support for IPv6 checksum offload.
A shared storage pool is a server based storage virtualization that is clustered and is an extension of existing storage virtualization on the Virtual I/O Sever for IBM Power systems. Support for shared storage pools requires the latest Virtual I/O Server software, which can be obtained from http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html.
SLES 11 SP2 adds multipath support for virtual disks backed by shared storage pools.
Previous versions of GCC limited the size of the TOC to 64kB. Options like -mminimal-toc and the linker automatic multiple TOC section support extended the effective size of the TOC, but some very large programs required source changes to break up large functions in order to compile and link.
PowerPC64 GCC now supports -mcmodel=small, -mcmodel=medium and -mcmodel=large. The latter two generate code for a 2G TOC. -mcmodel=medium optimizes accesses to local data but limits the total size of all data sections to 2G; in most cases it gives a speed improvement over -mminimal-toc and may even give a speed improvement over the default -mcmodel=small. The linker supports mixing of object files compiled with any of these options.
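A minimal sketch (file names are illustrative) of compiling and linking a large program with the medium code model:
gcc -mcmodel=medium -O2 -c big_module.c
gcc -mcmodel=medium -o bigapp big_module.o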
All POWER3, POWER4, PPC970 and RS64-based models that were supported by SUSE Linux Enterprise Server 9 are no longer supported.
With SUSE Linux Enterprise Server 11 the bootfile DVD1/suseboot/inst64 can not be booted directly via network anymore, because its size is larger than 12MB. To load the installation kernel via network, copy the files yaboot.ibm, yaboot.cnf, and inst64 from the DVD1/suseboot directory to the TFTP server. Rename the yaboot.cnf file to yaboot.conf. yaboot can also load config files for specific Ethernet MAC addresses. Use a name like yaboot.conf-01-23-45-ab-cd-ef to match a MAC address. An example yaboot.conf for TFTP booting looks like this:
default=sles11
timeout=100
image[64-bit]=inst64
  label=sles11
  append="quiet install=nfs://hostname/exported/sles11dir"
Huge Page Memory (16GB pages, enabled via HMC) is supported by the Linux kernel, but special kernel parameters must be used to enable this support. Boot with the parameters "hugepagesz=16G hugepages=N" in order to use the 16GB huge pages, where N is the number of 16GB pages assigned to the partition via the HMC. The number of 16GB huge pages available can not be changed once the partition is booted. Also, there are some restrictions if huge pages are assigned to a partition in combination with eHEA / eHCA adapters:
IBM eHEA Ethernet Adapter:
The eHEA module will fail to initialize any eHEA ports if huge pages are assigned to the partition and Huge Page kernel parameters are missing. Thus, no huge pages should be assigned to the partition during a network installation. To support huge pages after installation, the huge page kernel parameters need to be added to the boot loader configuration before huge pages are assigned to the partition.
IBM eHCA InfiniBand Adapter:
The current eHCA device driver is not compatible with huge pages. If huge pages are assigned to a partition, the device driver will fail to initialize any eHCA adapters assigned to the partition.
The installation on a vscsi client will fail with old versions of the AIX VIO server. Please upgrade the AIX VIO server to version 1.5.2.1-FP-11.1 or later.
After installing SLES 11 SP1 on an iSCSI target, the system boots properly, network is up and the iSCSI root device is found as expected. The install completes (firstboot part) as usual. However, at the end of firstboot, the network is shut down before the root file system is unmounted, leading to read failures accessing the root (iSCSI) device; the system hangs.
Solution: reboot the system.
Customers using SLES 9 or SLES 10 to serve Virtual SCSI to other LPARs, using the ibmvscsis driver, who wish to migrate from these releases, should consider migrating to the IBM Virtual I/O server. The IBM Virtual I/O server supports all the IBM PowerVM virtual I/O features and also provides integration with the Virtual I/O management capabilities of the HMC. It can be downloaded from: http://www14.software.ibm.com/webapp/set2/sas/f/vios/download/home.html
When using IBM Power Virtual Fibre Channel devices utilizing N-Port ID Virtualization, the Virtual I/O Server may need to be updated in order to function correctly. Linux requires VIOS 2.1, Fixpack 20.1, and the LinuxNPIV I-Fix for this feature to work properly. These updates can be downloaded from: http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html
When using virtual tape devices served by an AIX VIO server, the Virtual I/O Server may need to be updated in order to function correctly. The latest updates can be downloaded from: http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html
For more information about IBM Virtual I/O Server, see http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html.
The Chelsio hardware supports a ~16K packet size (the exact value depends on the system configuration). It is recommended that you set the parameter MaxRecvDataSegmentLength in /etc/iscsid.conf to 8192.
For the cxgb3i driver to work properly, this parameter needs to be set to 8192.
In order to use the cxgb3i offload engine, the cxgb3i module needs to be loaded manually after open-iscsi has been started.
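For example (assuming the standard SLES init script name for the iSCSI initiator), the sequence would be:
/etc/init.d/open-iscsi start
modprobe cxgb3i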
For additional information, refer to /usr/src/linux/Documentation/scsi/cxgb3i.txt in the kernel source tree.
When attempting to netboot yaboot, users may see the following error message:
Can't claim memory for TFTP download (01800000 @ 01800000-04200000)
and the netboot will stop and immediately display the yaboot "boot:" prompt. Use the following steps to work around the problem.
Reboot the system and at the IBM splash screen select '8' to get to an Open Firmware prompt "0>"
At the Open Firmware prompt, type the following commands:
setenv load-base 4000
setenv real-base c00000
dev /packages/gui obe
The last command will take the system back to the IBM splash screen, and the netboot can be attempted again.
If you do a remote installation in text mode, but want to connect to the machine later in graphical mode, be sure to set the default runlevel to 5 via YaST. Otherwise xdm/kdm/gdm might not be started.
To disable SDP on IBM hardware, set SDP=no in openib.conf so that SDP is not loaded by default. After you have set this value in openib.conf, run openibd restart or reboot the system for the setting to take effect.
If your system is configured as an NFS over RDMA server, the system may hang during a shutdown if a remote system has an active NFS over RDMA mount. To avoid this problem, prior to shutting down the system, run "openibd stop"; run it in the background, because the command will hang and otherwise block the console:
/etc/init.d/openibd stop &
A shutdown can now be run cleanly.
Note: the steps to configure and start NFS over RDMA are as follows:
On the server system:
Add an entry to the file /etc/exports, for example:
/home 192.168.0.34/255.255.255.0(fsid=0,rw,async,insecure,no_root_squash)
As the root user run the commands:
/etc/init.d/nfsserver start
echo rdma 20049 > /proc/fs/nfsd/portlist
On the client system:
Run the command: modprobe xprtrdma.
Mount the remote file system using the command /sbin/mount.nfs. Specify the IP address of the IP-over-IB network interface (ib0, ib1, ...) of the server and the options proto=rdma,port=20049, for example:
/sbin/mount.nfs 192.168.0.64:/home /mnt -o proto=rdma,port=20049,nolock
See http://www.ibm.com/developerworks/linux/linux390/documentation_novell_suse.html for more information.
IBM zEnterprise 196 (z196) and IBM zEnterprise 114 (z114) are referred to below as z196 and z114.
Performance improvement through exploitation of new System z196 processor instructions by binutils and alternate GCC on the SDK. This feature will be active when the GNU assembler is invoked with -march=z196.
User space processes can delay I/O operations until all pending requests against the common I/O layer have been completed, e.g., a user process wants to wait until a device is usable after a CP ATTACH command.
Updated version with several corrections: previous versions of libdfp exhibited minor bugs in printf_dfp and strtod[32|64|128], inconsistencies with POSIX with regard to classification functions, dfp header override include order problems, and missing classification function exports.
Fast shutdown and resume of Linux for System z in z/VM and LPAR.
Suspend to disk allows fast suspend (freeze) of a system and resume work where it stopped.
Two new fields in /proc/sysinfo now export the contents of Capacity-Change Reason (CCR) and Capacity-Adjustment Indication (CAI) of SYSIB 1.1.1 introduced by the IBM zEnterprise. They provide additional information for enhanced problem analysis.
The System z196 and z114 hardware adds another level to the CPU cache hierarchy. Enhancements to the Linux scheduler allow more efficient task scheduling, which increases cache hits and therefore overall performance.
z10 has added more complexity for memory accesses and a faster processor. Prefetch instructions can be used to enhance memory access operations such as copying memory, zeroing out memory, and predictable loops, resulting in increased performance and better exploitation of the System z hardware. This requires the System z optimizations from GCC 4.6 (available on the SDK).
In raw-track access mode, the DASD device driver accesses full ECKD tracks, including record zero and the count and key data fields. With this mode, Linux can access an ECKD device regardless of the track layout. In particular, the device does not need to be formatted for Linux. This includes Linux ECKD disks that are used with LVM, Linux ECKD disks that are used directly, and z/OS ECKD disks.
This feature provides a new kernel device driver for receiving z/VM CP special messages (SMSG) and delivering these messages to user space as udev events (uevents). The device driver registers with the existing CP special message device driver to only receive messages starting with "APP". The created uevents contain message sender and content as environmental data.
A CMS minidisk can be mounted to Linux (cmsfs-fuse). The files on the minidisk can then be accessed by common Linux tools. Text files and configuration files can be accessed and automatically converted from EBCDIC to ASCII without, e.g., the restriction of shutting down Linux before access. cmsfs-fuse support for CMS file systems is limited to EDF; other CMS file systems like SFS, CFS and BFS are not supported. This feature is used, e.g., to provide config data and personalization to a Linux guest in a HA/DR scenario (machine, LPAR, guest name, IP address data, etc.).
This feature enhances snIPL to take a remote SCSI dump using the snIPL interface.
Large scale server consolidation requires a way to deal with limited memory resources. Ideally this is done by the hypervisor or by optimizing the individual guest in terms of memory utilization. 'cpuplugd' has a rule based scheme to control the size of the CMM1 memory balloon. An enhanced default set of rules allows the administrator to define a virtual machine with a larger memory size and have cpuplugd deal with the surplus automatically.
snIPL offers command line support for remote system management of LPARs and z/VM. This feature offers socket-based (AF_INET) remote system management of z/VM 6 guests with snipl and stonith if SMAPI support is available.
Live guest relocation (LGR) with z/VM 6.2 on SLES 11 SP2 requires z/VM service applied, especially with Collaborative Memory Management (CMMA) active (cmma=on). Apply z/VM APAR VM65134.
Storage servers may provide solid state disks, which are transparent in use to the DASD device driver. A new flag in the device characteristics will show if a device is a solid state disk. The device characteristics are already exported per ioctl and can be read as binary data with the dasdview tool.
The DASD device driver tolerates dynamic Parallel Access Volume (PAV) changes for base PAV. PAV changes in the hardware configuration are detected and the mapping of base and alias devices in Linux is adjusted accordingly. The user is informed about the change by a kernel message with log level info.
Enables the DASD device driver to generate multi-track High Performance FICON (zHPF) requests. If the storage system supports multi-track High Performance FICON requests, data can be read from or written to more than one track per request to enhance I/O performance.
Logging I/O subchannel status information: a Linux interface for the store-I/O-operation-status-and-initiate-logging (SIOSL) CHSC command and its exploitation by the FCP device driver. It enhances the service toolset for determining field scenarios without interrupting operation, and can be used to synchronize log gathering between the operating system and the channel firmware.
This feature prevents unintentional write requests and subsequent I/O errors, by detecting if a z/VM attached device is read-only using the z/VM DIAG 210 interface and setting the respective Linux block device to read-only as well.
This provides a new sysfs interface to specify the timeout for missing interrupts for standard I/O operations. The default value for this timeout was 300 seconds for standard ECKD and FBA I/O operations and 50 seconds for DIAG I/O operations. For ECKD devices, the timeout value provided by the storage server is now used as the default instead of the generic 300 seconds. The timeout value can be read and set through the new DASD device attribute 'expires' in sysfs.
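For illustration (the device bus-ID 0.0.7000 is hypothetical), the attribute can be read and set like this:
cat /sys/bus/ccw/devices/0.0.7000/expires
echo 120 > /sys/bus/ccw/devices/0.0.7000/expires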
Allows the DASD device driver to determine the reservation status of a given DASD in relation to the current Linux instance.
Service and control of data flow between adapter and storage device was improved by introducing the zFCP specific part of the enhanced SCSI standard for E2E data consistency checking.
This feature introduces OSA adapter support for the checksum calculations which TCP and UDP use to ensure data integrity. Offloading this calculation to the OSA adapter (HW) will reduce the processor load compared to the current implementation where it is done in SW.
This feature enhances the qethconf tool by providing improved information messages.
zEnterprise Unified Resource Manager is responsible for OSX- and OSM-setup. It defines MAC-addresses for OSX and OSM devices. The qeth driver retrieves those MAC-addresses during activation of OSX and OSM devices. They must not be changed afterwards. This means the YaST-created ifcfg-files must not contain an LLADDR-definition.
Remove the LLADDR entry from the ifcfg configuration file for an OSX- or OSM-device.
This feature adapts qeth to the standard Linux kernel network interface: NAPI. The qdio interface is extended to allow direct processing of inbound data in qeth. Using NAPI, the device driver can disable interrupts to reduce CPU load under high network traffic. It provides increased throughput and less CPU consumption for high speed network connections.
This feature enhances the qeth driver with a meaningful message for the case that an OSA connection fails due to an active OLM connection on the shared OSA adapter. OLM may be activated by z/OS on an OSA-Express3 adapter, which reduces the number of allowed concurrent connections if the adapter is used in shared mode.
This feature adds IPv6 support to the qetharp tool for inspection and modification of the ARP cache of HiperSockets (real and virtual) operated in layer 3 mode.
For real HiperSockets, it queries and shows the IPv6 addresses; for guest LAN HiperSockets, it queries and shows the IPv6-to-MAC address mappings.
This feature exploits OSA support for VLAN tagging and null tagging (VLAN ID 0 can be used in tags). Such frames can carry priority information and improve the communication capabilities with z/OS.
This feature adds IPv6 support to the qetharp tool for inspection and modification of the ARP cache of OSA cards or HiperSockets (real and virtual) operated in layer 3 mode.
On rare occasions, HiperSockets devices in layer 2 mode may remain in softsetup state when configured via YaST.
Perform ifup manually.
OSA devices in layer 2 mode remain in softsetup state when "Set default MAC address" is used in YaST.
Do not select "Set default MAC address" in YaST. If the default MAC address got selected in YaST, remove the line LLADDR='00:00:00:00:00:00' from the ifcfg file in /etc/sysconfig/network.
Deleting: An ARP entry that is part of a shared OSA should not be deleted from the ARP cache.
Current behavior: an ARP entry that is part of a shared OSA is deleted from the ARP cache.
Purging: purging should remove all remote entries that are not part of the shared OSA.
Current behavior: it only flushes the remote entries that are not part of the shared OSA the first time. If the user then pings one of the purged IP addresses, the entry gets added back to the ARP cache; if the user runs purge a second time, that particular entry is not removed from the ARP cache again.
This feature adds support to the kernel and libica to exploit new algorithms from Message Security Assist (CPACF) extension 4.
This feature extends the support for hardware acceleration of RSA encryption and decryption from 2048-bit keys to the new maximum of 4096-bit keys in the zcrypt Linux device driver. This new support allows a zEnterprise Crypto Express3 card to handle RSA modular exponentiation operations with 4096-bit RSA keys in ME (Modulus Exponent) and CRT (Chinese Remainder Theorem) format.
Exploit z196 hardware accelerated crypto algorithms and Elliptic Curve cryptography features of the IBM PCIe Cryptographic Coprocessor.
Added support for new CPACF algorithms in z196, AES-CTR mode for key lengths 128, 192 and 256. Also added support for Elliptic Curve crypto for customers with the IBM PCIe Cryptographic Coprocessor.
The existing data execution protection for Linux on System z relies on the System z hardware to distinguish instructions and data through the secondary memory space mode. As of System z10, new load-relative-long instructions do not make this distinction. As a consequence, applications that have been compiled for System z10 or later fail when running with the existing data execution protection.
Therefore, data execution protection for Linux on System z has been removed.
For FCP subchannels running in NPIV mode, this feature allows the Linux SCSI midlayer to scan and automatically attach SCSI devices that are available for the NPIV WWPN. To enable automated LUN scanning, boot with:
zfcp.allow_lun_scan=1
The manual configuration of LUNs in zfcp is now only required for non-NPIV FCP subchannels. With this feature, the behavior of zfcp in NPIV mode is now similar to that of all other Linux SCSI drivers.
This feature provides the kernel infrastructure needed for a Linux tool called "hyptop" which provides a dynamic real-time view of a System z hypervisor environment. It works with either the z/VM or the LPAR hypervisor. Depending on the available data it shows for example CPU and memory consumption of active LPARs or z/VM guests. It provides a curses based user interface similar to the popular Linux "top" command.
Upgrading from SUSE Linux Enterprise Server 11 SP1 to SP2 does not preserve the qdio performance statistics under /proc/qdio_perf. The corresponding file /sys/bus/ccw/qdio_performance_stats is also removed. SP2 adds support for qdio performance statistics by device. These statistics are located under <debugfs mount point>/qdio/<device bus id>/statistics. Writing 1 to the statistics file of a qdio device starts the collection of performance data for that device. Writing 0 to the statistics file of a qdio device stops the collection of performance data for that device. By default the statistics are disabled. For more information, see Chapter 8 of "Device Drivers, Features, and Commands on SUSE Linux Enterprise Server 11 SP2".
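For example, assuming debugfs is mounted at /sys/kernel/debug and a hypothetical device bus-ID of 0.0.f500, collection is switched on and off with:
echo 1 > /sys/kernel/debug/qdio/0.0.f500/statistics
echo 0 > /sys/kernel/debug/qdio/0.0.f500/statistics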
This feature records breaking-event-addresses for user space processes using the PER-3 facility introduced with z10. There is one restriction in regard to the useable address range for the user space program. Any breaking-event in the range from 0 to 8MB will not be recorded. Useful for application development.
This feature provides four extensions to the chreipl tool: a) add support to re-IPL from device-mapper devices, including mirror devices and multipath devices, b) add support to re-IPL from named saved systems (NSS), c) add support to specify additional kernel parameters for the next re-IPL, d) add "auto target" support. This improves the usability experience, by enhancing and simplifying the interface to setup how and what to reboot.
This feature relaxes the need for a default address for the initial ramdisk on the boot device. The address is now calculated depending on the locations of the other components. If the user provides an initrd_addr, that address is used. If the user does not provide an initrd_addr, then instead of a fixed value (0x800000) a suitable calculated value is used.
This feature adds support for automatic menu generation to IBM's zipl package.
Improves cio resume handling to cope with devices that were attached on different subchannels prior to the suspend operation.
This feature improves handling of unit checks reported during CIO-internal operations. Control units such as the DS8000 storage server are using Unit Checks as a means to inform Linux of events which may affect the operational state of the devices provided.
Enhancements in the common I/O layer (CIO) that enable Linux in LPAR installation to handle dynamic IODF changes in the channel-path related setup and changed capabilities of channel paths, eg. the number of inbound/outbound queues of an OSA adapter or the maximum transmission unit.
Improves the DASD error recovery procedures used in the early phases of IPL and DASD device initialization with additional error recovery procedures.
Allows specifying a policy for the DASD device driver behavior in case of a lost device reservation. The policy can be specified via the new DASD sysfs attribute reservation_policy. Possible values are: ignore, fail.
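As a sketch (the device bus-ID is hypothetical), the policy can be changed through sysfs:
echo fail > /sys/bus/ccw/devices/0.0.7000/reservation_policy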
This feature provides tooling of a configurable time delay (activation of this trigger). A new keyword DELAY_MINUTES is introduced in the dumpconf configuration file. Using this keyword the activation of dumpconf can be delayed in order to prevent potential re-IPL loops.
Linux on System z kernel crash dumps have traditionally not been in ELF core format. We now have infrastructure to convert the Linux on System z dumps to ELF core format. 'makedumpfile' can be used to compress system dumps by filtering out memory pages like free, user space or cache pages that are not necessary for dump analysis. Additionally, the 'crash' utility has been enhanced to read compressed/filtered s390x dumpfiles generated by 'makedumpfile'.
The multi-volume tape dump support has been removed from zipl and zgetdump. The reason for this decision is that current tape cartridges have hundreds of gigabytes of capacity, so multi-volume support is no longer needed.
This feature integrates a new tool 'ttyrun', which safely starts getty programs and prevents re-spawns through the init program if a terminal is not available. This is very useful when integrated in inittab. Depending on your setup, Linux on System z might or might not provide a particular terminal or console.
This feature enables collective problem analysis through consolidated dumps of software and hardware. A command can be used to generate qeth/qdio trace data as well as trigger the internal dump of an OSA device.
This feature enables dynamic changes in the GDPS environment definition to avoid possible failures from manual or non-applied changes. GDPS was changed to retrieve CPC and LPAR information dynamically; with the new function, GDPS is now able to always reset exactly the LPAR in which the OS is running.
I/O statistics gathering is an essential tool for performance analysis and problem determination. Various parts of the infrastructure have been improved to allow better serviceability by introducing enhanced trace support for FCP.
zPXE provides a function similar to PXE boot on x86/x86-64: a parameter-driven executable retrieves the installation source and instance-specific parameters from a specified network location, automatically downloads the respective kernel, initrd, and parameter files for that instance, and starts an automated (or manual) installation.
It is very useful for customers to erase sensitive data like security keys and other confidential data in the dumpfile before sending it to the support team for analysis.
makedumpfile has been updated to version 1.4.0, which introduces a filter config file where, using filter commands, the user can specify the kernel data symbols and their members that need to be filtered out while creating the dumpfile. The syntax for the filter commands is provided in the makedumpfile.conf(5) man page.
s390x kernel dumps may now be filtered by the makedumpfile tool, and the crash dump analysis tool must be able to analyze these filtered dumps.
The crash dump analysis tool was modified to recognize Linux on System z dumps filtered by makedumpfile.
This feature delivers optimized default settings for several qeth parameters. For details, see 'Device Drivers, Features, and Commands on SUSE Linux Enterprise Server 11 SP2', chapter 8, 'Setting up the qeth device driver'.
Depending on the usage of mutexes, thread scheduling, and the status of the physical and virtual processors, additional information provided to the scheduler allows for more efficient and less costly decisions, optimizing processor cycles. The status of a thread owning a locked mutex is examined, and waiting threads are not scheduled unless the first is scheduled on a virtual and physical processor.
cpuplugd is supposed to optimize processor utilization if the workload does not need the full capacity. The latest Linux scheduler is optimized to achieve the same result without the cost-intensive operation of CPU plug and unplug. If this use case does not apply, it is advisable to disable cpuplugd by default.
With SLES 11 SP1, openSSL compresses data before encryption, with impact on throughput (down) and CPU load (up) on platforms with cryptographic hardware. The behavior is now adjustable with the environment variable "OPENSSL_NO_DEFAULT_ZLIB", depending on customer requirements. Set this environment variable per application or in a global config file.
To exploit new IBM System z architecture capabilities during the lifecycle of SUSE Linux Enterprise Server 11, support for machines of the types z900, z990, z800 and z890 is deprecated in this release. SUSE plans to introduce an architecture level set (ALS) earliest with SUSE Linux Enterprise Server 11 Service Pack 1 (SP1) and latest with SP2. After ALS, SUSE Linux Enterprise Server 11 will only execute on z9 or newer processors.
With SUSE Linux Enterprise Server 11 GA, only machines of type z9 or newer are supported.
When developing software, we recommend switching gcc to z9/z10 optimization (see the example below):
install gcc
install the gcc-z9 package (it changes the gcc options to -march=z9-109 -mtune=z10)
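Independent of the wrapper package, the optimization can also be requested explicitly (the source file name is illustrative):
gcc -march=z9-109 -mtune=z10 -O2 -c app.c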
For LUN Scanning to work properly, the minimum storage firmware level should be:
DS8000 Code Bundle Level 64.0.175.0
DS6000 Code Bundle Level 6.2.2.108
Large Page support allows processes to allocate process memory in chunks of 1 MiB instead of 4 KiB. This works through the hugetlbfs.
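A minimal sketch of enabling large pages via hugetlbfs (the page count and mount point are illustrative):
echo 64 > /proc/sys/vm/nr_hugepages
mkdir -p /dev/hugepages
mount -t hugetlbfs none /dev/hugepages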
SLES 11 SP2 supports CMM2 Lite for optimized memory usage and to handle memory overcommitment via memory page state transitions based on "stable" and "unused" memory pages of z/VM guests, using the existing arch_alloc_page and arch_free_page callbacks.
Bugfixes
This Service Pack contains all the latest bugfixes for each package released via the maintenance Web since the GA version.
Security Fixes
This Service Pack contains all the latest security fixes for each package released via the maintenance Web since the GA version.
Program Temporary Fixes
This Service Pack contains all the PTFs (Program Temporary Fix) for each package released via the maintenance Web since the GA version which were suitable for integration into the maintained common codebase.
This section contains information about system limits, a number of technical changes and enhancements for the experienced user.
When talking about CPUs we are following this terminology:
CPU Socket: The visible physical entity, as it is typically mounted to a motherboard or an equivalent.
CPU Core: The (usually not visible) physical entity as reported by the CPU vendor. On System z this is equivalent to an IFL.
Logical CPU: This is what the Linux Kernel recognizes as a "CPU". We avoid the word "thread" (which is sometimes used), as the word "thread" would also become ambiguous subsequently.
Virtual CPU: A logical CPU as seen from within a Virtual Machine.
http://www.suse.com/products/server/technical-information/#Kernel
This table summarizes the various limits which exist in our recent kernels and utilities (if related) for SUSE Linux Enterprise Server 11.
SLES 11 (3.0) | x86 | ia64 | x86_64 | s390x | ppc64 |
---|---|---|---|---|---|
CPU bits | 32 | 64 | 64 | 64 | 64 |
max. # Logical CPUs | 32 | 4096 | 4096 | 64 | 1024 |
max. RAM (theoretical / certified) | 64/16 GiB | 1 PiB/8+ TiB | 64 TiB/16 TiB | 4 TiB/256 GiB | 1 PiB/512 GiB |
max. user-/kernelspace | 3/1 GiB | 2 EiB/φ | 128 TiB/128 TiB | φ/φ | 2 TiB/2 EiB |
max. swap space | up to 29 * 64 GB (i386 and x86_64) or 30 * 64 GB (other architectures) |||||
max. #processes | 1048576 |||||
max. #threads per process | tested with more than 120000; maximum limit depends on memory and other parameters |||||
max. size per block device | up to 16 TiB, and up to 8 EiB on all 64-bit architectures |||||
FD_SETSIZE | 1024 |||||
KVM limits:

Guest RAM size | 512 GiB |
Virtual CPUs per guest | 64 |
Maximum number of NICs per guest | 8 |
Block devices per guest | 4 emulated, 20 para-virtual |
Maximum number of guests | Limit is defined as the total number of vCPUs in all guests being no greater than eight times the number of CPU cores in the host. |
SLES 11 SP2 | x86 |
---|---|
CPU bits | 64 |
Logical CPUs (Xen Hypervisor) | 255 |
Virtual CPUs per VM | 32 |
Maximum supported memory (Xen Hypervisor) | 2 TiB |
Maximum supported memory (Dom0) | 512 GiB |
Virtual memory per VM | 128 MiB-256 GiB |
Total virtual devices per host | 2048 |
Maximum number of NICs per host | 8 |
Maximum number of vNICs per guest | 8 |
Maximum number of guests per host | 128 |
In Xen 4.1, the hypervisor bundled with SUSE Linux Enterprise Server 11 SP2, dom0 is able to see and handle a maximum of 512 logical CPUs. The hypervisor itself, however, can access up to 256 logical CPUs and schedule those for the VMs.
With SUSE Linux Enterprise Server 11 SP2, we removed the 32-bit hypervisor as a virtualization host. 32-bit virtual guests are not affected and are fully supported with the provided 64-bit hypervisor.
https://www.suse.com/products/server/technical-information/#FileSystem
SUSE Linux Enterprise was the first enterprise Linux distribution to support journaling file systems and logical volume managers back in 2000. Today, we have customers running XFS and ReiserFS with more than 8TiB in one file system, and our own SUSE Linux Enterprise engineering team is using all 3 major Linux journaling file systems for all its servers.
We are excited to add the OCFS2 cluster file system to the range of supported file systems in SUSE Linux Enterprise.
We propose to use XFS for large-scale file systems, on systems with heavy load and multiple parallel read- and write-operations (e.g., for file serving with Samba, NFS, etc.). XFS has been developed for such conditions, while typical desktop use (single write or read) will not necessarily benefit from its capabilities.
Due to technical limitations (of the bootloader), we do not support XFS being used for /boot.
Feature | Ext 3 | Reiserfs 3.6 | XFS | Btrfs * | OCFS 2 ** |
---|---|---|---|---|---|
Data/Metadata Journaling | •/• | ○/• | ○/• | n/a * | ○/• |
Journal internal/external | •/• | •/• | •/• | n/a * | •/○ |
Offline extend/shrink | •/• | •/• | ○/○ | ○/○ | •/○ |
Online extend/shrink | •/○ | •/○ | •/○ | •/• | •/○ |
Sparse Files | • | • | • | • | • |
Tail Packing | ○ | • | ○ | • | ○ |
Defrag | ○ | ○ | • | • | ○ |
Extended Attributes/Access Control Lists | •/• | •/• | •/• | •/• | •/• |
Quotas | • | • | • | ^ | • |
Dump/Restore | • | ○ | • | ○ | ○ |
Blocksize default | 4KiB | 4KiB | 4KiB | 4KiB | 4KiB |
max. File System Size | 16 TiB | 16 TiB | 8 EiB | 16 EiB | 16 TiB |
max. Filesize | 2 TiB | 1 EiB | 8 EiB | 16 EiB | 1 EiB |

* Btrfs is supported in SUSE Linux Enterprise Server 11 Service Pack 2. Btrfs is a copy-on-write logging-style file system: rather than journaling changes before writing them in-place, it writes them to a new location and then links it in. Until the last write, the new changes are not "committed". Due to the nature of the file system, quotas (^) will be implemented based on subvolumes in a future release.

** OCFS2 is fully supported as part of the SUSE Linux Enterprise High Availability Extension.
The maximum file size above can be larger than the file system's actual size due to usage of sparse blocks. Note that unless a file system comes with large file support (LFS), the maximum file size on a 32-bit system is 2 GB (2^31 bytes). Currently all of our standard file systems (including ext3 and ReiserFS) have LFS, which gives a maximum file size of 2^63 bytes in theory. The numbers in the above tables assume that the file systems are using 4 KiB block size. When using different block sizes, the results are different, but 4 KiB reflects the most common standard.
In this document: 1024 Bytes = 1 KiB; 1024 KiB = 1 MiB; 1024 MiB = 1 GiB; 1024 GiB = 1 TiB; 1024 TiB = 1 PiB; 1024 PiB = 1 EiB. See also http://physics.nist.gov/cuu/Units/binary.html.
NFSv4 with IPv6 is only supported on the client side. An NFSv4 server with IPv6 is not supported.
This version of Samba delivers integration with Windows 7 Active Directory Domains. In addition we provide the clustered version of Samba as part of SUSE Linux Enterprise High Availability 11 SP 1.
Btrfs is a copy-on-write (CoW) general purpose file system. Based on the CoW functionality, btrfs provides snapshotting. Beyond that, data and metadata checksums improve the reliability of the file system. btrfs is highly scalable, but also supports online shrinking to adapt to real-life environments. On appropriate storage devices btrfs also supports the TRIM command.
Support
With SUSE Linux Enterprise 11 SP2, the btrfs file system joins ext3, reiserfs, xfs and ocfs2 as commercially supported file systems. Each file system offers distinct advantages. While the installation default is ext3, we recommend xfs when maximizing data performance is desired, and btrfs as a root file system when snapshotting and rollback capabilities are required. Btrfs is supported as a root file system (i.e. the file system for the operating system) across all architectures of SUSE Linux Enterprise 11 SP2. Customers are advised to use the YaST partitioner (or AutoYaST) to build their systems: YaST will prepare the btrfs file system for use with subvolumes and snapshots. Snapshots will be automatically enabled for the root file system using SUSE's snapper infrastructure. For more information about snapper, its integration into ZYpp and YaST, and the YaST snapper module, see the SUSE Linux Enterprise documentation.
Migration from "ext" File Systems to btrfs
Migration from existing "ext" file systems (ext2, ext3, ext4) is supported "offline" and "in place". Calling "btrfs-convert [device]" will convert the file system. This is an offline process which needs at least 15% free space on the device, but is applied in place. Roll back: calling "btrfs-convert -r [device]" will roll back. Caveat: when rolling back, all data added after the conversion to btrfs will be lost; in other words, the roll back is complete, not partial. See the example below.
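For example (the device name is hypothetical):
btrfs-convert /dev/sdb1      # convert the existing ext file system in place
btrfs-convert -r /dev/sdb1   # roll back; data added after conversion is lost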
RAID
Btrfs is supported on top of MD (multiple devices) and DM (device mapper) configurations. Please use the YaST partitioner to achieve a proper setup. Multivolume/RAID with btrfs is not supported yet and will be enabled with a future maintenance update.
Future Plans
We are planning to announce support for btrfs' built-in multi volume handling and RAID in a later version of SUSE Linux Enterprise.
Starting with SUSE Linux Enterprise 12, we are planning to implement bootloader support for /boot on btrfs.
Compression and Encryption functionality for btrfs is currently under development and will be supported once the development has matured.
We are committed to actively working on the btrfs file system with the community, and we keep customers and partners informed about progress and experience in terms of scalability and performance. This may also apply to cloud and cloud storage infrastructures.
Online Check and Repair Functionality
Check and repair functionality ("scrub") is available as part of the btrfs command line tools. "Scrub" is aimed to verify data and metadata assuming the tree structures are fine. "Scrub" can (and should) be run periodically on a mounted file system: it runs as a background process during normal operation.
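Assuming the btrfs command line tools provide the scrub subcommand as described above, a scrub of a mounted file system can, for example, be started and monitored with:
btrfs scrub start /
btrfs scrub status /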
The tool "fsck.btrfs" tool will soon be available in the SUSE Linux Enterprise update repositories.
Capacity Planning
If you are planning to use btrfs with its snapshot capability, it is advisable to reserve twice as much disk space as the standard storage proposal. This is automatically done by the YaST2 partitioner for the root file system.
Hard Link Limitation
In order to provide a more robust file system, btrfs incorporates back references for all file names, eliminating the classic "lost+found" directory added during recovery. A temporary limitation of this approach affects the number of hard links in a single directory that link to the same file. The limitation is dynamic based on the length of the file names used. A realistic average is approximately 150 hard links. When using 255 character file names, the limit is 14 links. We intend to raise the limitation to a more usable limit of 65535 links in a future maintenance update.
Other Limitations
At the moment, btrfs is not supported as a seed device.
For More Information
For more information about btrfs, see the SUSE Linux Enterprise 11 documentation.
An important requirement for every enterprise operating system is the level of support a customer receives for his environment. Kernel modules are the most relevant connector between hardware ("controllers") and the operating system. Every kernel module in SUSE Linux Enterprise Server 11 has a 'supported' flag with three possible values: "yes", "external", "" (empty, not set, "unsupported").
The following rules apply:
All modules of a self-recompiled kernel are by default marked as unsupported.
Kernel Modules supported by SUSE partners and delivered using SUSE's Partner Linux Driver process are marked "external".
If the "supported"
flag is not set, loading this
module will taint the kernel. Tainted kernels are not supported. To
avoid this, not supported Kernel modules are included in an extra RPM
(kernel-<flavor>-extra) and will not be loaded by default
("flavor"=default|smp|xen|...). In addition, these unsupported modules
are not available in the installer, and the package
kernel-$flavor-extra is not on the SUSE Linux Enterprise Server media.
Kernel Modules not provided under a license compatible to the license
of the Linux kernel will also taint the kernel; see
/usr/src/linux/Documentation/sysctl/kernel.txt
and the state of /proc/sys/kernel/tainted
.
Technical Background
Linux Kernel
The value of /proc/sys/kernel/unsupported defaults to 2 on SUSE Linux Enterprise Server 11 ("do not warn in syslog when loading unsupported modules"). This default is used in the installer as well as in the installed system. See /usr/src/linux/Documentation/sysctl/kernel.txt for more information.
modprobe
The modprobe utility for checking module dependencies and loading modules appropriately checks the value of the "supported" flag. If the value is "yes" or "external", the module will be loaded, otherwise it will not. For information on how to override this behavior, see below.
Note: SUSE does not generally support removing of storage modules via modprobe -r.
Working with Unsupported Modules
While the general supportability is important, situations may occur where loading an unsupported module is required (e.g., for testing or debugging purposes, or if your hardware vendor provides a hotfix):
You can override the default by changing the variable allow_unsupported_modules in /etc/modprobe.d/unsupported-modules and setting the value to "1".
If you only want to try loading a module once, the --allow-unsupported-modules command-line switch can be used with modprobe. (For more information, see man modprobe.)
During installation, unsupported modules may be added through driver update disks, and they will be loaded.
To enforce loading of unsupported modules during boot and afterwards, use the kernel command line option oem-modules.
While installing and initializing the module-init-tools package, the kernel flag "TAINT_NO_SUPPORT" (/proc/sys/kernel/tainted) will be evaluated. If the kernel is already tainted, allow_unsupported_modules will be enabled. This prevents unsupported modules from failing in the system being installed. (If no unsupported modules are present during installation and the other special kernel command line option is not used, the default will still be to disallow unsupported modules.)
If you install unsupported modules after the initial installation and want to enable those modules to be loaded during system boot, do not forget to run depmod and mkinitrd, as shown below.
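For example:
depmod -a
mkinitrd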
Remember that loading and running unsupported modules will make the kernel and the whole system unsupported by SUSE.
SUSE Linux Enterprise Server 11 is compliant to IPv6 Logo Phase 2. However, when running the respective tests, you may see some tests failing. For various reasons, we cannot enable all the configuration options by default, which are necessary to pass all the tests. For details, see below.
Section 3: RFC 4862 - IPv6 Stateless Address Autoconfiguration
Some tests fail because of the default DAD handling in Linux: disabling the complete interface is possible, but it is not the default behavior, because, security-wise, this might open a DoS attack vector (a malicious node on a network could shut down the complete segment). This is still conforming to RFC 4862: the shutdown of the interface is a "should", not a mandatory ("must") rule.
The Linux kernel allows you to change the default behavior with a sysctl parameter. To do this on SUSE Linux Enterprise Server 11, you need to make the following changes in configuration:
Add ipv6 to the modules loaded early on boot: edit /etc/sysconfig/kernel and add ipv6 to MODULES_LOADED_ON_BOOT, e.g., MODULES_LOADED_ON_BOOT="ipv6". This is needed for the second change to work; if ipv6 is not loaded early enough, setting the sysctl fails.
Add the following lines to /etc/sysctl.conf:
## shutdown IPV6 on MAC based duplicate address detection
net.ipv6.conf.default.accept_dad = 2
net.ipv6.conf.all.accept_dad = 2
net.ipv6.conf.eth0.accept_dad = 2
net.ipv6.conf.eth1.accept_dad = 2
Note: if you use other interfaces (e.g., eth2), please modify the lines. With these changes, all tests for RFC 4862 should pass.
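To apply the new settings from /etc/sysctl.conf without a reboot, they can be loaded with:
sysctl -p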
Section 4: RFC 1981 - Path MTU Discovery for IPv6
Test v6LC.4.1.10: Multicast Destination - One Router
Test v6LC.4.1.11: Multicast Destination - Two Routers
On these two tests, ping6 needs to be told to allow defragmentation of multicast packets. Newer ping6 versions have this disabled by default. Use: ping6 -M want <other parameters>. See man ping6 for more information.
Enable IPv6 in YaST for SCTP Support
SCTP is dependent on IPv6, so in order to successfully insert the SCTP module, IPv6 must be enabled in YaST. This allows for the IPv6 module to be automatically inserted when modprobe sctp is called.
Ensure all your logs go through permanent local storage or the network. For example, putting /var/log on a tmpfs file system means that the logs will not survive a reboot. This limits your ability, and that of SUSE, to analyze log files in case of a support request.
Exceptions are configurations where you save log files via syslog on a remote log server and permanently store the log files on the log server. Note: not all log files can be redirected to a remote log server (e.g., yast-logs, boot logs and others); if these files are not available, it may be very hard for support to effectively diagnose issues and support the system.
The libica package contains the interface library routines used by IBM modules to interface with IBM Cryptographic Hardware (ICA). Starting with SLES 11 SP1, libica is provided in the s390x distribution in three flavors of packages: libica-1_3_9, libica-2_0_2, and libica-2_1_0 providing libica versions 1.3.9, 2.0.2, and 2.1.0 respectively.
libica 1.3.9 is provided for compatibility reasons with legacy hardware present e.g. in the ppc64 architecture. For s390x users it is always recommended to use the new libica 2.1.0 library since it supports all newer s390x hardware, larger key sizes and is backwards compatible with any ICA device driver in the s390x architecture.
You may choose to continue using libica 1.3.9 or 2.0.2 if you do not have newer cryptographic hardware to exploit or wish to continue using custom applications that do not support the libica 2.1.0 library yet. Both openCryptoki and openssl-ibmca, the two main exploiters of the libica interface, are provided in SLES 11 SP2 with support for the newer libica 2.1.0 library.
YaST writes the MAC address for layer 2 devices only if they are of the card_types:
OSD_100
OSD_1000
OSD_10GIG
OSD_FE_LANE
OSD_GbE_LANE
OSD_Express
By design, YaST does not write the MAC address for devices of the types:
HiperSockets
GuestLAN/VSWITCH QDIO
OSM
OSX
The script modify_resolvconf is removed in favor of a more versatile script called netconfig. This new script handles specific network settings from multiple sources more flexibly and transparently. See the documentation and man-page of netconfig for more information.
Memory cgroups are now disabled for machines where they cause memory exhaustion and crashes. Namely, X86 32-bit systems with PAE support and more than 8G in any memory node have this feature disabled.
The mcelog package logs and parses/translates Machine Check Exceptions (MCE) on hardware errors, including memory errors. Formerly this was done by a cron job executed hourly; now hardware errors are processed immediately by an mcelog daemon.
However, the mcelog service is not enabled by default; as a result, memory and CPU errors are not logged by default either. In addition, mcelog now also handles predictive bad page offlining and automatic core offlining when cache errors occur.
The service can be enabled either via the YaST runlevel editor or on the command line with:
chkconfig mcelog on
rcmcelog start
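To confirm the daemon is running afterwards, the standard init script status action can be used (an illustrative check):
rcmcelog status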
If you are not satisfied with the locale system defaults, change the settings in ~/.i18n. Entries in ~/.i18n override system defaults from /etc/sysconfig/language. Use the same variable names, but without the RC_ namespace prefixes; for example, use LANG instead of RC_LANG. For more information about locales in general, see "Language and Country-Specific Settings" in the Administration Guide.
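A minimal ~/.i18n sketch, assuming a user who wants a German locale but English program messages (the values are purely illustrative):
LANG="de_DE.UTF-8"
LC_MESSAGES="en_US.UTF-8"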
kdump is useful if the kernel is crashing or otherwise misbehaving and a kernel core dump needs to be captured for analysis.
Use the YaST kdump module to configure your environment. When kdump is configured through YaST with ssh/scp as the target and the target system is SUSE Linux Enterprise, enable authentication in one of the following ways:
Copy the public keys to the target system:
ssh-copy-id -i ~/.ssh/id_*.pub <username>@<target system IP>
or
Change the PasswordAuthentication setting in /etc/ssh/sshd_config of the target system from:
PasswordAuthentication no
to:
PasswordAuthentication yes
After changing PasswordAuthentication in /etc/ssh/sshd_config, restart the sshd service on the target system with:
rcsshd restart
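Either way, a quick sanity check (illustrative, not part of the official procedure) is to confirm that the kdump host can log in to the target:
ssh <username>@<target system IP> true && echo "authentication works"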
Java packages have been changed to follow the JPackage Standard (http://www.jpackage.org/). For more information, see the documentation in /usr/share/doc/packages/jpackage-utils/.
To avoid the mail flood caused by cron status messages, the default value of SEND_MAIL_ON_NO_ERROR in /etc/sysconfig/cron is now set to "no" for new installations. Even with this setting at "no", cron data output will still be sent to the MAILTO address, as documented in the cron manpage.
When updating, it is recommended to set this value according to your needs.
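For reference, the relevant line in /etc/sysconfig/cron on a new installation then reads (value as documented above):
SEND_MAIL_ON_NO_ERROR="no"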
Read the READMEs on the DVDs.
Get the detailed changelog information about a particular package from the RPM (with filename <FILENAME>):
rpm --changelog -qp <FILENAME>.rpm
Check the ChangeLog file in the top level of DVD1 for a chronological log of all changes made to the updated packages.
Find more information in the docu directory of DVD1 of the SUSE Linux Enterprise Server 11 Service Pack 2 DVDs. This directory includes PDF versions of the SUSE Linux Enterprise Server 11 Installation Quick Start and Deployment Guides.
These Release Notes are identical across all architectures, and are available online at http://www.suse.com/releasenotes/.
AutoYaST documentation is available as part of the sles-manuals_en package (HTML) and as the sles-autoyast_en-pdf subpackage (PDF).
http://www.suse.com/documentation/sles11/ contains additional or updated documentation for SUSE Linux Enterprise Server 11 Service Pack 2.
Find a collection of White Papers in the SUSE Linux Enterprise Server Resource Library at https://www.suse.com/products/server/resource-library/?ref=b#WhitePapers.
Visit http://www.suse.com/products/ for the latest product news from SUSE and http://www.suse.com/download-linux/source-code.html for additional information on the source code of SUSE Linux Enterprise products.
SUSE makes no representations or warranties with respect to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to revise this publication and to make changes to its content, at any time, without the obligation to notify any person or entity of such revisions or changes.
Further, SUSE makes no representations or warranties with respect to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to make changes to any and all parts of SUSE software, at any time, without any obligation to notify any person or entity of such changes.
Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classifications to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical/biological weaponry end uses. Please refer to http://www.novell.com/info/exports/ for more information on exporting SUSE software. SUSE assumes no responsibility for your failure to obtain any necessary export approvals.
Copyright (c) 2010, 2011, 2012, 2013 SUSE LLC. All rights reserved. No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without the express written consent of the publisher.
SUSE has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at http://www.novell.com/company/legal/patents/ and one or more additional patents or pending patent applications in the U.S. and other countries.
For SUSE trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/trademarks/tmlist.html). All third-party trademarks are the property of their respective owners.