Chapter 1. The Linux Kernel

Table of Contents

1. Configuration
1.1. Generic configuration
1.1.1. Local version
1.1.2. Default hostname
1.1.3. swap
1.1.4. Stack utilization messages
1.1.5. Kernel .config
1.1.6. Initial RAM filesystem and RAM disk
1.1.7. PAE
1.1.8. ACPI
1.1.9. SysRq
1.1.10. kexec
1.1.11. kdump
1.1.12. SMP
1.1.13. Big SMP
1.1.14. Maximum number of CPUs
1.2. Control Groups
1.2.1. Freezer cgroup subsystem
1.2.2. Device controller for cgroups
1.2.3. Cpuset support
1.2.4. Simple CPU accounting cgroup subsystem
1.2.5. Resource counters
1.2.6. Enable perf_event per-cpu per-container group (cgroup) monitoring
1.2.7. Group CPU scheduler
1.2.8. Block IO controller
1.3. Virtualization
1.3.1. KVM host
1.3.2. VM guest
1.3.2.1. Paravirtualization
1.3.2.2. KVM guest support
1.3.2.3. Lguest guest support
1.3.2.4. Xen guest support
1.3.3. Virt I/O support
1.3.3.1. PCI driver for virtio devices
1.3.3.2. Virtio balloon driver
1.3.3.3. Virtio block driver
1.3.3.4. Virtio network driver
1.3.3.5. Virtio console
1.3.3.6. Virtio Random Number Generator support
1.4. Multiple devices driver support
1.4.1. RAID support
1.4.1.1. Linear (append) mode
1.4.1.2. RAID-0 (striping) mode
1.4.1.3. RAID-1 (mirroring) mode
1.4.1.4. RAID-10 (mirrored striping) mode
1.4.1.5. RAID-4/RAID-5/RAID-6 mode
1.4.1.6. Faulty test module for MD
1.4.2. Block device as cache
1.4.3. Device Mapper support
1.4.3.1. Crypt target support
1.4.3.2. Snapshot target
1.4.3.3. Thin provisioning target
1.4.3.4. Cache target
1.4.3.5. Mirror target
1.4.3.6. RAID 1/4/5/6/10 target
1.4.3.7. Zero target
1.4.3.8. Multipath target
1.4.3.9. I/O delaying target
1.4.3.10. Flakey target
1.5. Network
1.5.1. VLAN Support
1.5.2. Bridge Support
1.5.3. Universal TUN/TAP device driver support
1.5.4. Bonding Support
1.5.5. RealTek RTL-8139 C+ (KVM default NIC)
1.6. Filesystems
1.6.1. EXT4
1.6.2. ReiserFS
1.6.3. JFS
1.6.4. XFS
1.6.5. Btrfs
1.6.6. NTFS
1.6.7. Inotify
2. Check the configuration
3. Compilation
4. Installation

Linux is a clone of the operating system Unix, written from scratch by Linus Torvalds with assistance from a loosely-knit team of hackers across the Net. It aims towards POSIX and Single UNIX Specification compliance.

1. Configuration

Make sure you have no stale .o files and dependencies lying around:

make mrproper

Run the text-based kernel configuration tool (color menus, radiolists and dialogs):

make menuconfig

Configure the kernel based on arch/x86/configs/i386_defconfig and the following recommendations.
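
If you prefer to start from that defconfig rather than from the current configuration, you can load it first and then run make menuconfig as above (this assumes a 32-bit x86 build):

make i386_defconfig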

1.1. Generic configuration

1.1.1. Local version

Append an extra string to the end of your kernel version. Keep it empty.

Symbol: LOCALVERSION [=]
Type  : string
Prompt: Local version - append to kernel release
  Location:
    -> General setup

1.1.2. Default hostname

Sets the default system hostname before userspace calls sethostname(2).

Symbol: DEFAULT_HOSTNAME [=wakes]
Type  : string
Prompt: Default hostname
  Location:
    -> General setup

1.1.3. swap

Adds support for so-called swap devices or swap files, which are used to provide more virtual memory than the actual RAM present in your computer.

Symbol: SWAP [=y]
Type  : boolean
Prompt: Support for paging of anonymous memory (swap)
  Location:
    -> General setup
  Depends on: MMU [=y] && BLOCK [=y]
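
As a usage sketch (the path and size are placeholders), a swap file can be created and activated from userspace like this:

dd if=/dev/zero of=/swapfile bs=1M count=512
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile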

1.1.4. Stack utilization messages

Stack utilization instrumentation. Disable it to avoid messages like "lvm used greatest stack depth: 5956 bytes left".

Symbol: DEBUG_STACK_USAGE [=n]
Prompt: Stack utilization instrumentation
  Depends on: DEBUG_KERNEL
  Location:
    -> Kernel hacking

1.1.5. Kernel .config

This option enables the complete Linux kernel ".config" file contents to be saved in the kernel. It can be extracted from a running kernel by reading /proc/config.gz or using the script scripts/extract-ikconfig provided with the kernel sources.

Symbol: IKCONFIG [=y]
Prompt: Kernel .config support
  Location:
    -> General setup
Symbol: IKCONFIG_PROC [=y]
Prompt: Enable access to .config through /proc/config.gz
  Depends on: IKCONFIG && PROC_FS
  Location:
    -> General setup
      -> Kernel .config support (IKCONFIG [=y])
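
For example, the configuration of the running kernel can be recovered with either of the following (the image path is a placeholder):

zcat /proc/config.gz > running.config
scripts/extract-ikconfig /boot/vmlinuz > running.config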

1.1.6. Initial RAM filesystem and RAM disk

The initial RAM filesystem is a ramfs that is loaded by the boot loader (loadlin or lilo) and mounted as root before the normal boot procedure. It is typically used to load modules needed to mount the "real" root file system, etc.

Symbol: BLK_DEV_INITRD [=y]
Type  : boolean
Prompt: Initial RAM filesystem and RAM disk (initramfs/initrd) support
  Location:
    -> General setup
  Depends on: BROKEN [=n] || !FRV

1.1.7. PAE

Physical Address Extension, necessary if you have a 32-bit processor and more than 4 gigabytes of physical RAM.

Symbol: HIGHMEM64G [=y]
Type  : boolean
Prompt: 64GB
  Location:
    -> Processor type and features
      -> High Memory Support (<choice> [=y])
  Depends on: <choice> && !M486 [=n]
  Selects: X86_PAE [=y]

1.1.8. ACPI

Advanced Configuration and Power Interface is an open industry specification that provides a robust functional replacement for several legacy configuration and power management interfaces, including the Plug-and-Play BIOS specification (PnP BIOS), the MultiProcessor Specification (MPS), and the Advanced Power Management (APM) specification.

Symbol: ACPI [=y]
Type  : boolean
Prompt: ACPI (Advanced Configuration and Power Interface) Support
  Location:
    -> Power management and ACPI options
  Depends on: !IA64_HP_SIM && (IA64 || X86 [=y]) && PCI [=y]
  Selects: PNP [=n]

1.1.9. SysRq

With SysRq (Alt+PrintScreen) you will be able to flush the buffer cache to disk, reboot the system immediately or dump some status information.

Symbol: MAGIC_SYSRQ [=y]
Type  : boolean
Prompt: Magic SysRq key
  Location:
    -> Kernel hacking
  Depends on: !UML
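
Besides the keyboard combination, SysRq commands can be issued through procfs; for instance, to enable all SysRq functions and then sync mounted filesystems:

echo 1 > /proc/sys/kernel/sysrq
echo s > /proc/sysrq-trigger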

1.1.10. kexec

kexec is a system call that implements the ability to shut down your current kernel and start another one. It is like a reboot, but independent of the system firmware, and like a reboot you can start any kernel with it, not just Linux.

Symbol: KEXEC [=y]
Type  : boolean
Prompt: kexec system call
  Location:
    -> Processor type and features
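
A rough example with the kexec-tools userspace utility (kernel and initrd paths are placeholders): load the new kernel, then execute it without going through the firmware:

kexec -l /boot/vmlinuz --initrd=/boot/initrd.img \
      --command-line="$(cat /proc/cmdline)"
kexec -e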

1.1.11. kdump

Generate crash dump after being started by kexec.

Symbol: CRASH_DUMP [=y]
Type  : boolean
Prompt: kernel crash dumps
  Location:
    -> Processor type and features
  Depends on: X86_64 [=n] || X86_32 [=y] && HIGHMEM [=y]
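
A typical kdump setup, sketched with placeholder paths and options: reserve memory for the capture kernel by booting with crashkernel=128M on the kernel command line, then load the capture kernel with kexec -p so it is started automatically on a crash:

kexec -p /boot/vmlinuz --initrd=/boot/initrd.img \
      --append="root=/dev/sda1 single irqpoll"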

1.1.12. SMP

This enables support for systems with more than one CPU.

Symbol: SMP [=y]
Type  : boolean
Prompt: Symmetric multi-processing support
  Location:
    -> Processor type and features

1.1.13. Big SMP

This option is needed for systems that have more than 8 CPUs.

Symbol: X86_BIGSMP [=y]
Type  : boolean
Prompt: Support for big SMP systems with more than 8 CPUs
  Location:
    -> Processor type and features
  Depends on: X86_32 [=y] && SMP [=y]

1.1.14. Maximum number of CPUs

This allows you to specify the maximum number of CPUs which this kernel will support. The maximum supported value is 512.

Symbol: NR_CPUS [=64]
Type  : integer
Range : [2 512]
Prompt: Maximum number of CPUs
  Location:
    -> Processor type and features

1.2. Control Groups

This option adds support for grouping sets of processes together, for use with process control subsystems such as Cpusets, CFS, memory controls or device isolation.

Symbol: CGROUPS [=y]
Type  : boolean
Prompt: Control Group support
  Location:
    -> General setup
  Depends on: EVENTFD [=y]
  Selected by: SCHED_AUTOGROUP [=n]
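
With cgroups enabled, controllers are used by mounting the cgroup filesystem; a minimal sketch for the cpuset controller (the mount points are a matter of convention):

mount -t tmpfs cgroup_root /sys/fs/cgroup
mkdir /sys/fs/cgroup/cpuset
mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset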

1.2.1. Freezer cgroup subsystem

Provides a way to freeze and unfreeze all tasks in a cgroup.

Symbol: CGROUP_FREEZER [=y]
Type  : boolean
Prompt: Freezer cgroup subsystem
  Location:
    -> General setup
      -> Control Group support (CGROUPS [=y])
  Depends on: CGROUPS [=y]
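
Assuming the freezer controller is mounted on /sys/fs/cgroup/freezer, a group can be frozen and thawed by writing to freezer.state:

mkdir /sys/fs/cgroup/freezer/jobs
echo 1234 > /sys/fs/cgroup/freezer/jobs/tasks        # PID of a task to control (placeholder)
echo FROZEN > /sys/fs/cgroup/freezer/jobs/freezer.state
echo THAWED > /sys/fs/cgroup/freezer/jobs/freezer.state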

1.2.2. Device controller for cgroups

Provides a cgroup implementing whitelists for devices which a process in the cgroup can mknod or open.

Symbol: CGROUP_DEVICE [=y]
Type  : boolean
Prompt: Device controller for cgroups
  Location:
    -> General setup
      -> Control Group support (CGROUPS [=y])
  Depends on: CGROUPS [=y]
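
Assuming the devices controller is mounted on /sys/fs/cgroup/devices, a group can be denied everything and then whitelisted for individual device nodes (here /dev/null, character device 1:3):

mkdir /sys/fs/cgroup/devices/sandbox
echo a > /sys/fs/cgroup/devices/sandbox/devices.deny
echo "c 1:3 rwm" > /sys/fs/cgroup/devices/sandbox/devices.allow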

1.2.3. Cpuset support

This option will let you create and manage CPUSETs which allow dynamically partitioning a system into sets of CPUs and Memory Nodes and assigning tasks to run only within those sets. This is primarily useful on large SMP or NUMA systems.

Symbol: CPUSETS [=y]
Type  : boolean
Prompt: Cpuset support
  Location:
    -> General setup
      -> Control Group support (CGROUPS [=y])
  Depends on: CGROUPS [=y]
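
Assuming the cpuset controller is mounted on /sys/fs/cgroup/cpuset, a set pinned to CPUs 0-1 and memory node 0 could be created like this:

mkdir /sys/fs/cgroup/cpuset/set0
echo 0-1 > /sys/fs/cgroup/cpuset/set0/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/set0/cpuset.mems
echo $$ > /sys/fs/cgroup/cpuset/set0/tasks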

1.2.4. Simple CPU accounting cgroup subsystem

Provides a simple Resource Controller for monitoring the total CPU consumed by the tasks in a cgroup.

Symbol: CGROUP_CPUACCT [=y]
Type  : boolean
Prompt: Simple CPU accounting cgroup subsystem
  Location:
    -> General setup
      -> Control Group support (CGROUPS [=y])
  Depends on: CGROUPS [=y]

1.2.5. Resource counters

This option enables controller independent resource accounting infrastructure that works with cgroups.

Symbol: RESOURCE_COUNTERS [=y]
Type  : boolean
Prompt: Resource counters
  Location:
    -> General setup
      -> Control Group support (CGROUPS [=y])
  Depends on: CGROUPS [=y]

1.2.6. Enable perf_event per-cpu per-container group (cgroup) monitoring

This option extends the per-cpu mode to restrict monitoring to threads which belong to the cgroup specified and run on the designated cpu.

Symbol: CGROUP_PERF [=y]
Type  : boolean
Prompt: Enable perf_event per-cpu per-container group (cgroup) monitoring
  Location:
    -> General setup
      -> Control Group support (CGROUPS [=y])
  Depends on: PERF_EVENTS [=y] && CGROUPS [=y]

1.2.7. Group CPU scheduler

This feature lets the CPU scheduler recognize task groups and control CPU bandwidth allocation to such task groups. It uses cgroups to group tasks.

Symbol: CGROUP_SCHED [=y]
Type  : boolean
Prompt: Group CPU scheduler
  Location:
    -> General setup
      -> Control Group support (CGROUPS [=y])
  Depends on: CGROUPS [=y]
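
Assuming the cpu controller is mounted on /sys/fs/cgroup/cpu, the relative CPU bandwidth of a task group is tuned through cpu.shares (default 1024):

mkdir /sys/fs/cgroup/cpu/batch
echo 256 > /sys/fs/cgroup/cpu/batch/cpu.shares
echo 1234 > /sys/fs/cgroup/cpu/batch/tasks           # PID of a task to move (placeholder)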

1.2.8. Block IO controller

Generic block IO controller cgroup interface. This is the common cgroup interface which should be used by various IO controlling policies.

Symbol: BLK_CGROUP [=y]
Type  : boolean
Prompt: Block IO controller
  Location:
    -> General setup
      -> Control Group support (CGROUPS [=y])
  Depends on: CGROUPS [=y] && BLOCK [=y]

1.3. Virtualization

1.3.1. KVM host

This allows your Linux host to run other operating systems inside virtual machines (guests). It supports hosting fully virtualized guest machines using hardware virtualization extensions: KVM_INTEL covers Intel processors equipped with the VT (VMX) extensions, and KVM_AMD covers AMD processors equipped with the AMD-V (SVM) extensions.

Symbol: VIRTUALIZATION [=y]
Type  : boolean
Prompt: Virtualization
  Depends on: HAVE_KVM [=y] || X86 [=y]
Symbol: KVM [=m]
Type  : tristate
Prompt: Kernel-based Virtual Machine (KVM) support
  Location:
    -> Virtualization (VIRTUALIZATION [=y])
  Depends on: VIRTUALIZATION[=y] && HAVE_KVM[=y] && HIGH_RES_TIMERS [=y] && NET [=y]
Symbol: KVM_INTEL [=m]
Prompt: KVM for Intel processors support
  Depends on: VIRTUALIZATION && KVM
  Location:
    -> Virtualization (VIRTUALIZATION [=y])
      -> Kernel-based Virtual Machine (KVM) support (KVM [=m])
Symbol: KVM_AMD [=m]
Prompt: KVM for AMD processors support
  Depends on: VIRTUALIZATION && KVM
  Location:
    -> Virtualization (VIRTUALIZATION [=y])
      -> Kernel-based Virtual Machine (KVM) support (KVM [=m])
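
KVM is built as modules here; on the host, load the module matching your CPU and check that /dev/kvm appears (a userspace VMM such as QEMU then uses it):

modprobe kvm_intel        # Intel VT-x
modprobe kvm_amd          # AMD-V (SVM)
ls -l /dev/kvm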

1.3.2. VM guest

This enables options for running Linux under various hypervisors. This option enables basic hypervisor detection and platform setup.

Symbol: HYPERVISOR_GUEST [=y]
Type  : boolean
Prompt: Linux guest support
  Location:
    -> Processor type and features
  Selected by: X86_VSMP [=n] && X86_64 [=n] && PCI [=y] && X86_EXTENDED_PLATFORM [=n] && SMP [=y]
1.3.2.1. Paravirtualization

This changes the kernel so it can modify itself when it is run under a hypervisor, potentially improving performance significantly over full virtualization.

Symbol: PARAVIRT [=y]
Type  : boolean
Prompt: Enable paravirtualization code
  Location:
    -> Processor type and features
      -> Linux guest support (HYPERVISOR_GUEST [=y])
  Depends on: HYPERVISOR_GUEST [=y]
  Selected by: X86_VSMP [=n] && X86_64 [=n] && PCI [=y] && X86_EXTENDED_PLATFORM [=n] && SMP [=y]
1.3.2.2. KVM guest support

This option enables various optimizations for running under the KVM hypervisor.

Symbol: KVM_GUEST [=y]
Type  : boolean
Prompt: KVM Guest support (including kvmclock)
  Location:
    -> Processor type and features
      -> Linux guest support (HYPERVISOR_GUEST [=y])
  Depends on: HYPERVISOR_GUEST [=y] && PARAVIRT [=y]
  Selects: PARAVIRT_CLOCK [=y]
1.3.2.3. Lguest guest support

Lguest is a tiny in-kernel hypervisor. Selecting this will allow your kernel to boot under lguest.

Symbol: LGUEST_GUEST [=y]
Type  : boolean
Prompt: Lguest guest support
  Location:
    -> Processor type and features
      -> Linux guest support (HYPERVISOR_GUEST [=y])
  Depends on: HYPERVISOR_GUEST [=y] && X86_32 [=y] && PARAVIRT [=y]
  Selects: TTY [=y] && VIRTUALIZATION [=y] && VIRTIO [=y] && VIRTIO_CONSOLE [=y]
1.3.2.4. Xen guest support

This is the Linux Xen port. Enabling this will allow the kernel to boot in a paravirtualized environment under the Xen hypervisor.

Symbol: XEN [=y]
Type  : boolean
Prompt: Xen guest support
  Location:
    -> Processor type and features
      -> Linux guest support (HYPERVISOR_GUEST [=y])
        -> Enable paravirtualization code (PARAVIRT [=y])
  Depends on: HYPERVISOR_GUEST [=y] && PARAVIRT [=y] && \
              (X86_64 [=n] || X86_32 [=y] && X86_PAE [=y] && \
              !X86_VISWS [=n]) && X86_TSC [=y]
  Selects: PARAVIRT_CLOCK [=y] && XEN_HAVE_PVMMU [=n]

1.3.3. Virt I/O support

1.3.3.1. PCI driver for virtio devices

This driver provides support for virtio-based paravirtual device drivers over PCI. Most QEMU-based VMMs (like KVM or Xen) should support these devices.

Symbol: VIRTIO_PCI [=y]
Type  : tristate
Prompt: PCI driver for virtio devices
  Location:
    -> Device Drivers
      -> Virtio drivers
  Depends on: PCI [=y]
  Selects: VIRTIO [=y]
1.3.3.2. Virtio balloon driver

This driver supports increasing and decreasing the amount of memory within a KVM guest.

Symbol: VIRTIO_BALLOON [=y]
Type  : tristate
Prompt: Virtio balloon driver
  Location:
    -> Device Drivers
      -> Virtio drivers
  Depends on: VIRTIO [=y]
1.3.3.3. Virtio block driver

This is the virtual block driver for virtio. It can be used with lguest or QEMU based VMMs (like KVM or Xen).

Symbol: VIRTIO_BLK [=y]
Type  : tristate
Prompt: Virtio block driver
  Location:
    -> Device Drivers
      -> Block devices (BLK_DEV [=y])
  Depends on: BLK_DEV [=y] && VIRTIO [=y]
1.3.3.4. Virtio network driver

This is the virtual network driver for virtio. It can be used with lguest or QEMU based VMMs (like KVM or Xen).

Symbol: VIRTIO_NET [=y]
Type  : tristate                                                        
Prompt: Virtio network driver
  Location:
    -> Device Drivers
      -> Network device support (NETDEVICES [=y])
        -> Network core driver support (NET_CORE [=y])
  Depends on: NETDEVICES [=y] && NET_CORE [=y] && VIRTIO [=y]
1.3.3.5. Virtio console

Virtio console for use with lguest and other hypervisors.

Symbol: VIRTIO_CONSOLE [=y]
Type  : tristate
Prompt: Virtio console
  Location:
    -> Device Drivers
      -> Character devices
  Depends on: VIRTIO [=y] && TTY [=y]
  Selects: HVC_DRIVER [=y]
  Selected by: LGUEST_GUEST [=y] && PARAVIRT_GUEST [=y] && X86_32 [=y] && PARAVIRT [=y]
1.3.3.6. Virtio Random Number Generator support

This driver provides kernel-side support for the virtual Random Number Generator hardware.

Symbol: HW_RANDOM_VIRTIO [=y]
Type  : tristate
Prompt: VirtIO Random Number Generator support
  Location:
    -> Device Drivers
      -> Character devices
        -> Hardware Random Number Generator Core support (HW_RANDOM [=y])
  Depends on: HW_RANDOM [=y] && VIRTIO [=y]
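
On the host side, a QEMU/KVM guest built with the virtio drivers above can be given paravirtual disk and network devices; a rough invocation (image name and memory size are placeholders):

qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=guest.img,if=virtio \
    -net nic,model=virtio -net user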

1.4. Multiple devices driver support

Support multiple physical spindles through a single logical device. Required for RAID and logical volume management.

Symbol: MD [=y]
Type  : boolean
Prompt: Multiple devices driver support (RAID and LVM)
  Location:
    -> Device Drivers
  Depends on: BLOCK [=y]

1.4.1. RAID support

This driver lets you combine several hard disk partitions into one logical block device. This can be used to simply append one partition to another one or to combine several redundant hard disks into a RAID1/4/5 device so as to provide protection against hard disk failures. This is called "Software RAID" since the combining of the partitions is done by the kernel.

Symbol: BLK_DEV_MD [=y]
Type  : tristate
Prompt: RAID support
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
  Depends on: MD [=y]
  Selected by: DM_RAID [=y] && MD [=y] && BLK_DEV_DM [=y]
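
The arrays themselves are managed from userspace with mdadm; for instance (device names are placeholders), a two-disk RAID-1 array:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat
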
1.4.1.1. Linear (append) mode

This mode combines the hard disk partitions by simply appending one to the other.

Symbol: MD_LINEAR [=y]
Type  : tristate
Prompt: Linear (append) mode
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> RAID support (BLK_DEV_MD [=y])
  Depends on: MD && BLK_DEV_MD
1.4.1.2. RAID-0 (striping) mode

This mode combines the hard disk partitions into one logical device in such a fashion as to fill them up evenly, one chunk here and one chunk there. This will increase the throughput rate if the partitions reside on distinct disks.

Symbol: MD_RAID0 [=y]
Type  : tristate
Prompt: RAID-0 (striping) mode
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> RAID support (BLK_DEV_MD [=y])
  Depends on: MD && BLK_DEV_MD
1.4.1.3. RAID-1 (mirroring) mode

A RAID-1 set consists of several disk drives which are exact copies of each other. In the event of a mirror failure, the RAID driver will continue to use the operational mirrors in the set, providing an error free MD (multiple device) to the higher levels of the kernel. In a set with N drives, the available space is the capacity of a single drive, and the set protects against a failure of (N - 1) drives.

Symbol: MD_RAID1 [=y]
Type  : tristate
Prompt: RAID-1 (mirroring) mode
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> RAID support (BLK_DEV_MD [=y])
  Depends on: MD && BLK_DEV_MD
1.4.1.4. RAID-10 (mirrored striping) mode

RAID-10 provides a combination of striping (RAID-0) and mirroring (RAID-1) with easier configuration and more flexible layout. Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to be the same size (or at least, only as much as the smallest device will be used).

Symbol: MD_RAID10 [=y]
Type  : tristate
Prompt: RAID-10 (mirrored striping) mode
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> RAID support (BLK_DEV_MD [=y])
  Depends on: MD && BLK_DEV_MD
1.4.1.5. RAID-4/RAID-5/RAID-6 mode

A RAID-5 set of N drives with a capacity of C MB per drive provides the capacity of C * (N - 1) MB, and protects against a failure of a single drive. For a given sector (row) number, (N - 1) drives contain data sectors, and one drive contains the parity protection. For a RAID-4 set, the parity blocks are present on a single drive, while a RAID-5 set distributes the parity across the drives in one of the available parity distribution methods.

A RAID-6 set of N drives with a capacity of C MB per drive provides the capacity of C * (N - 2) MB, and protects against a failure of any two drives. For a given sector (row) number, (N - 2) drives contain data sectors, and two drives contain two independent redundancy syndromes. Like RAID-5, RAID-6 distributes the syndromes across the drives in one of the available parity distribution methods.

Symbol: MD_RAID456 [=y]
Type  : tristate
Prompt: RAID-4/RAID-5/RAID-6 mode
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> RAID support (BLK_DEV_MD [=y])
  Depends on: MD && BLK_DEV_MD
1.4.1.6. Faulty test module for MD

The "faulty" module allows for a block device that occasionally returns read or write errors. It is useful for testing.

Symbol: MD_FAULTY [=y]
Type  : tristate
Prompt: Faulty test module for MD
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> RAID support (BLK_DEV_MD [=y])
  Depends on: MD [=y] && BLK_DEV_MD [=y]

1.4.2. Block device as cache

Allows a block device to be used as cache for other devices; uses a btree for indexing and the layout is optimized for SSDs.

Symbol: BCACHE [=y]
Type  : tristate
Prompt: Block device as cache
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
  Depends on: MD [=y]
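
The devices are prepared from userspace with make-bcache from bcache-tools; a sketch with placeholder device names (a rotational backing disk cached by an SSD):

make-bcache -B /dev/sdb       # backing device
make-bcache -C /dev/sdc       # cache device (SSD)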

1.4.3. Device Mapper support

Device-mapper is a low level volume manager. It works by allowing people to specify mappings for ranges of logical sectors. Various mapping types are available, in addition people may write their own modules containing custom mappings if they wish.

Symbol: BLK_DEV_DM [=y]
Type  : tristate
Prompt: Device mapper support
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
  Depends on: MD [=y]
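
Mappings are normally created by LVM, but they can also be set up directly with dmsetup; for example (placeholder device name), a linear target exposing the first 409600 sectors of /dev/sdb1 as /dev/mapper/lin0:

dmsetup create lin0 --table "0 409600 linear /dev/sdb1 0"
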
1.4.3.1. Crypt target support

This device-mapper target allows you to create a device that transparently encrypts the data on it. You'll need to activate the ciphers you're going to use in the cryptoapi configuration.

Symbol: DM_CRYPT [=y]
Type  : tristate
Prompt: Crypt target support
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> Device mapper support (BLK_DEV_DM [=y])
  Selects: CRYPTO && CRYPTO_CBC
  Depends on: MD && BLK_DEV_DM
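
In practice the crypt target is driven through cryptsetup; a LUKS sketch with a placeholder partition:

cryptsetup luksFormat /dev/sdb2
cryptsetup luksOpen /dev/sdb2 cryptdata
mkfs.ext4 /dev/mapper/cryptdata
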
1.4.3.2. Snapshot target

Allow volume managers to take writable snapshots of a device.

Symbol: DM_SNAPSHOT [=y]
Type  : tristate
Prompt: Snapshot target
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> Device mapper support (BLK_DEV_DM [=y])
  Depends on: MD && BLK_DEV_DM
1.4.3.3. Thin provisioning target

Provides thin provisioning and snapshots that share a data store.

Symbol: DM_THIN_PROVISIONING [=y]
Type  : tristate
Prompt: Thin provisioning target
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> Device mapper support (BLK_DEV_DM [=y])
  Depends on: MD && BLK_DEV_DM
1.4.3.4. Cache target

dm-cache attempts to improve performance of a block device by moving frequently used data to a smaller, higher performance device. Different 'policy' plugins can be used to change the algorithms used to select which blocks are promoted, demoted, cleaned etc. It supports writeback and writethrough modes.

Symbol: DM_CACHE [=y]
Type  : tristate
Prompt: Cache target (EXPERIMENTAL)
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> Device mapper support (BLK_DEV_DM [=y])
  Depends on: MD && BLK_DEV_DM
1.4.3.5. Mirror target

Allow volume managers to mirror logical volumes, also needed for live data migration tools such as 'pvmove'.

Symbol: DM_MIRROR [=y]
Type  : tristate
Prompt: Mirror target
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> Device mapper support (BLK_DEV_DM [=y])
  Depends on: MD && BLK_DEV_DM
1.4.3.6. RAID 1/4/5/6/10 target

A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6 mapping.

Symbol: DM_RAID [=y]
Type  : tristate
Prompt: RAID 1/4/5/6/10 target
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
        -> Device mapper support (BLK_DEV_DM [=y])
  Depends on: MD && BLK_DEV_DM
1.4.3.7. Zero target

A target that discards writes, and returns all zeroes for reads. Useful in some recovery situations.

Symbol: DM_ZERO [=y]
Type  : tristate
Prompt: Zero target
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
  Depends on: MD && BLK_DEV_DM
1.4.3.8. Multipath target

Allow volume managers to support multipath hardware.

Symbol: DM_MULTIPATH [=y]
Type  : tristate
Prompt: Multipath target
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
  Depends on: MD && BLK_DEV_DM && (SCSI_DH || !SCSI_DH)
1.4.3.9. I/O delaying target

A target that delays reads and/or writes and can send them to different devices. Useful for testing.

Symbol: DM_DELAY [=y]
Type  : tristate
Prompt: I/O delaying target
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
  Depends on: MD && BLK_DEV_DM
1.4.3.10. Flakey target

A target that intermittently fails I/O for debugging purposes.

Symbol: DM_FLAKEY [=y]
Type  : tristate
Prompt: Flakey target
  Location:
    -> Device Drivers
      -> Multiple devices driver support (RAID and LVM) (MD [=y])
  Depends on: MD && BLK_DEV_DM

1.5. Network

1.5.1. VLAN Support

Symbol: VLAN_8021Q [=y]
Prompt: 802.1Q VLAN Support
  Depends on: NET
  Location:
    -> Networking support (NET [=y])
      -> Networking options
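
A tagged interface is then created with iproute2 (interface name and VLAN id are placeholders):

ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up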

1.5.2. Bridge Support

Symbol: BRIDGE [=y]
Prompt: 802.1d Ethernet Bridging
  Depends on: NET
  Location:
    -> Networking support (NET [=y])
      -> Networking options
  Selects: LLC && STP
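
A software bridge can be created and an interface attached to it with iproute2 (names are placeholders):

ip link add name br0 type bridge
ip link set eth0 master br0
ip link set br0 up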

1.5.3. Universal TUN/TAP device driver support

Symbol: TUN [=y]
Prompt: Universal TUN/TAP device driver support
  Depends on: NETDEVICES
  Location:
    -> Device Drivers
      -> Network device support (NETDEVICES [=y])
  Selects: CRC32
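
A tap interface, e.g. for attaching a QEMU/KVM guest to a bridge, can be created like this (names are placeholders):

ip tuntap add dev tap0 mode tap
ip link set tap0 up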

1.5.4. Bonding Support

Symbol: BONDING [=y]
Prompt: Bonding driver support
  Depends on: NETDEVICES && INET && (IPV6 || IPV6=n)
  Location:
    -> Device Drivers
      -> Network device support (NETDEVICES [=y])
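
A minimal active-backup bond, sketched with placeholder slave interfaces:

modprobe bonding mode=active-backup miimon=100
ip link set bond0 up
ifenslave bond0 eth0 eth1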

1.5.5. RealTek RTL-8139 C+ (KVM default NIC)

Symbol: 8139CP [=y]
Prompt: RealTek RTL-8139 C+ PCI Fast Ethernet Adapter support (EXPERIMENTAL)
  Depends on: NETDEVICES && NET_ETHERNET && NET_PCI && PCI && EXPERIMENTAL
  Location:
    -> Device Drivers
      -> Network device support (NETDEVICES [=y])
        -> Ethernet (10 or 100Mbit) (NET_ETHERNET [=y])
  Selects: CRC32 && MII

1.6. Filesystems

1.6.1. EXT4

Symbol: EXT4_FS [=y]
Prompt: The Extended 4 (ext4) filesystem
  Depends on: BLOCK
  Location:
    -> File systems
  Selects: JBD2 && CRC16

1.6.2. ReiserFS

Symbol: REISERFS_FS [=y]
Prompt: Reiserfs support
  Depends on: BLOCK
  Location:
    -> File systems
  Selects: CRC32

1.6.3. JFS

Symbol: JFS_FS [=y]
Prompt: JFS filesystem support
  Depends on: BLOCK
  Location:
    -> File systems
  Selects: NLS && CRC32

1.6.4. XFS

Symbol: XFS_FS [=y]
Prompt: XFS filesystem support
  Depends on: BLOCK
  Location:
    -> File systems
  Selects: EXPORTFS

1.6.5. Btrfs

Symbol: BTRFS_FS [=y]
Prompt: Btrfs filesystem support
  Depends on: BLOCK
  Location:
    -> File systems

1.6.6. NTFS

Symbol: NTFS_FS [=y]
Prompt: NTFS file system support
  Depends on: BLOCK
  Location:
    -> File systems
      -> DOS/FAT/NT Filesystems
  Selects: NLS

1.6.7. Inotify

Symbol: INOTIFY_USER [=y]
Prompt: Inotify support for userspace
  Depends on: BLOCK
  Location:
    -> File systems
  Selects: ANON_INODES [=y] && FSNOTIFY [=y]