News
====

* 2.0
  - In purgatory, added -fno-zero-initialized-in-bss to prevent issues with recent versions of gcc
  - Add an option to configure to disable zlib support
  - Add mismatched architecture support
  - Updated the x86 architecture help
  - Updated the x86_64 architecture help
  - Fixed bzImage support
  - Added support for finding either the highest or lowest usable window
  - Change the version number to 2.0 to reflect the major change in the code base; 1.99 was effectively the release candidate
* 1.99
  - Rearchitect so the code is maintainable
  - Add multiboot support
  - Add ia64 support
  - Add beoboot image support
  - Create generic ELF loader code
  - Created the relocated shared object purgatory to hold the code that runs between kernels
  - Added a configure script
  - Added an rpm target
  - Added kexec-on-panic support
  - Initial stab at adding documentation
  - Added loader support for ET_DYN objects
* 1.98
  - Add mysteriously dropped changes to make x86_64 work
  - Update the distclean target to remove *.orig and *~ files
* 1.97
  - Add support for cross compiling x86_64
* 1.96
  - Add x86_64 support
  - Add support for Linux-style arguments to the elf32-x86 loader
  - Disable clearing of cr4 on x86
* 1.95
  - Add kexec-zImage-ppc64.c source file
  - GameCube/PPC32 sync'ed to 1.94
  - Use syscall() to call sys_kexec_load() and reboot()
  - Add kexec-syscall.h, remove kexec-syscall.c
  - Makefiles know about ARCHes
  - Add noifdown kexec option (Albert Herranz)
* 1.94
  - Revert a bad 1.92 change (not setting optind & opterr for subsequent calls to getopt_long())
* 1.93
  - Restored "shutdown" functionality
  - More help/usage text clarification
  - Add GPLv2 license to source files (with permission from Eric Biederman)
* 1.92
  - my_kexec(): call kexec() only one time
  - Add "unload" option
  - Fix some compiler warnings about "<var> might be used uninitialized"
  - Commented out shutdown capability since it was unreachable
* 1.91
  - Fix "-t" option: strcmp() was inverted (Albert Herranz)
  - Check specified kernel image file for file type (Albert Herranz)
* 1.9
  - Change reboot function to return type long (was int)
  - Use kexec reserved syscall numbers (in Linux 2.6.6-mm3)
* 1.8
  - Fixed bug where ramdisk wasn't loaded when specified
  - Memory information is now read from /proc/iomem; information that is not needed is ignored
* 1.7
  - Update to new tentative syscall number
* 1.6
  - Redo all of the command line arguments
  - Use the 32-bit kernel entry point
  - Work around a failure to clear %cr4
* 1.5
  - Port to a new kernel interface (hopefully the final one)
  - Start working on setting up legacy hardware
  - Add --load and --exec options so the parts can be done at different times

TODO
====

- x86: handle x86 vmlinux parameter header allocation issues. There is a bug where it can get stomped, but the current code does not allow us much flexibility in what we do with it.
- Restore enough state that DOS/arbitrary BIOS calls can be run on some platforms. Currently disk-related calls are quite likely to blow up.
- x86: filling in other kernel parameters.
- Merge reboot-via-kexec functionality into /sbin/reboot.
- Improve the documentation.
- Add support for loading a boot sector.
- Autobuilding of initramfs.

Early Kdump HOWTO (early-kdump-howto.txt)

Introduction
------------

Early kdump is a mechanism to make kdump operational earlier than the normal kdump service. The kdump service starts early enough for general crash cases, but there are some cases where it has no chance to make kdump operational in the boot sequence, such as while detecting devices and starting early services. If you hit such a case, early kdump may allow you to get more information about it.

Early kdump is implemented as a dracut module. It adds a kernel (vmlinuz) and initramfs for kdump to your system's initramfs in order to load them as early as possible.
After that, if you provide "rd.earlykdump" on the kernel command line, then in the initramfs, early kdump will load those files like the normal kdump service. This is disabled by default.

The normal kdump service can check whether early kdump has already loaded the crash kernel and initramfs; it has no conflict with early kdump.

How to configure early kdump
----------------------------

We assume that if you're reading this document, you already have kexec-tools installed. You can rebuild the initramfs with earlykdump support with the steps below:

1. Start the kdump service to make sure the kdump initramfs is created:

   # systemctl start kdump

   NOTE: If a crash occurs during the boot process, early kdump captures a vmcore and reboots the system by default, so the system might go into a crash loop. You can avoid such a crash loop by adding the following settings, which power off the system after dump capturing, to kdump.conf in advance:

      final_action poweroff
      failure_action poweroff

   For failure_action, you can choose anything other than "reboot".

2. Rebuild the system initramfs with earlykdump support:

   # dracut --force --add earlykdump

   NOTE: It is recommended to back up the original system initramfs before performing this step, so that it can be put back if something goes wrong during boot-up.

3. Add rd.earlykdump to the grub kernel command line.

After making said changes, reboot your system for them to take effect. Of course, if you want to disable early kdump, you can simply remove "rd.earlykdump" from the kernel boot parameters in grub, and reboot the system as above.

Once the boot is completed, you can check the status of the early kdump support on the command prompt:

   # journalctl -b | grep early-kdump

Then you will see some useful logs, for example:

- if early kdump is successful:

   Mar 09 09:57:56 localhost dracut-cmdline[190]: early-kdump is enabled.
   Mar 09 09:57:56 localhost dracut-cmdline[190]: kexec: loaded early-kdump kernel

- if early kdump is disabled:
   Mar 09 10:02:47 localhost dracut-cmdline[189]: early-kdump is disabled.

Notes
-----

- The size of the early kdump initramfs will be large, because it includes vmlinuz and the kdump initramfs.

- Early kdump inherits the settings of normal kdump, so any change that causes a normal kdump rebuild also requires rebuilding the system initramfs to make sure the change takes effect for early kdump. Therefore, after the rebuild of the kdump initramfs is completed, a prompt message is shown to state that fact.

- If you install an updated kernel and reboot the system with it, early kdump will be disabled by default. To enable it with the new kernel, you need to take the above steps again.

Limitation
----------

- At present, early kdump doesn't support fadump.

- Early kdump loads a crash kernel and initramfs at the beginning of the process in the system's initramfs, so a crash earlier than that (e.g. during kernel initialization) cannot be captured even with early kdump.

Firmware assisted dump (fadump) HOWTO (fadump-howto.txt)

Introduction

Firmware assisted dump is a new feature in the 3.4 mainline kernel, supported only on the powerpc architecture. The goal of firmware-assisted dump is to enable the dump of a crashed system, to do so from a fully-reset system, and to minimize the total elapsed time until the system is back in production use. Complete documentation on the implementation can be found at Documentation/powerpc/firmware-assisted-dump.txt in the upstream Linux kernel tree, from version 3.4 onwards. Please note that the firmware-assisted dump feature is only available on Power6 and above systems with recent firmware versions.

Overview

Fadump

Fadump is a robust kernel crash dumping mechanism to get a reliable kernel crash dump with assistance from firmware. This approach does not use kexec; instead, firmware assists in booting the kdump kernel while preserving memory contents. Unlike kdump, the system is fully reset and loaded with a fresh copy of the kernel.
In particular, PCI and I/O devices are reinitialized and are in a clean, consistent state. This second kernel, often called a capture kernel, boots with very little memory and captures the dump image.

The first kernel registers the sections of memory with the Power firmware for dump preservation during OS initialization. These registered sections of memory are reserved by the first kernel during early boot. When a system crashes, the Power firmware fully resets the system, preserves all the system memory contents, and saves the low memory (boot memory, the larger of 5% of system RAM or 256MB) of RAM to the previously registered region. It also saves system registers and hardware PTEs.

Fadump is supported only on the ppc64 platform. The standard kernel and capture kernel are one and the same on ppc64.

If you're reading this document, you should already have kexec-tools installed. If not, you can install it via the following command:

   # yum install kexec-tools

Fadump Operational Flow:

Like kdump, fadump also exports the ELF-formatted kernel crash dump through /proc/vmcore. Hence, the existing kdump infrastructure can be used to capture a fadump vmcore. The idea is to keep the functionality transparent to the end user; from the user's perspective there is no change in the way the kdump init script works.

However, unlike kdump, fadump does not pre-load a kdump kernel and initrd into reserved memory; instead it always uses the default OS initrd during the second boot after a crash. Hence, for fadump, we rebuild the new kdump initrd and replace the default initrd with it. Before replacing the existing default initrd, we take a backup of the original for the user's reference. The dracut package has been enhanced to rebuild the default initrd with vmcore capture steps. The initrd image is rebuilt as per the configuration in the /etc/kdump.conf file.

The control flow of fadump works as follows:

01. System panics.
02. At the crash, the kernel informs the Power firmware that the kernel has crashed.
03.
    Firmware takes control and reboots the entire system, preserving only the memory (it resets all other devices).
04. The reboot follows the normal booting process (non-kexec).
05. The boot loader loads the default kernel and initrd from /boot.
06. The default initrd loads and runs /init.
07. The dracut-kdump.sh script present in the fadump-aware default initrd checks whether the '/proc/device-tree/rtas/ibm,kernel-dump' file exists before executing the steps to capture a vmcore. (This check helps to bypass the vmcore capture steps during a normal boot process.)
09. Captures the dump according to /etc/kdump.conf.
10. Is the dump capture successful? (yes: go to 12, no: go to 11)
11. Performs the failure action specified in /etc/kdump.conf. (The default failure action is reboot, if unspecified.)
12. Performs the final action specified in /etc/kdump.conf. (The default final action is reboot, if unspecified.)

How to configure fadump:

Again, we assume that if you're reading this document, you already have kexec-tools installed. If not, you can install it via the following command:

   # yum install kexec-tools

Make the kernel to be configured with FADump the default boot entry, if it isn't already:

   # grubby --set-default=/boot/vmlinuz-<kver>

Boot into the kernel to be configured for FADump.

To be able to do much of anything interesting in the way of debug analysis, you'll also need to install the kernel-debuginfo package of the same arch as your running kernel, and the crash utility:

   # yum --enablerepo=\*debuginfo install kernel-debuginfo.$(uname -m) crash

Next up, we need to modify some boot parameters to enable firmware assisted dump. With the help of grubby, it's very easy to append "fadump=on" to the end of your kernel boot parameters. To reserve the appropriate amount of memory for boot memory preservation, pass the 'crashkernel=X' kernel cmdline parameter. For the recommended value of X, see the 'FADump Memory Requirements' section.
   # grubby --args="fadump=on crashkernel=6G" --update-kernel=/boot/vmlinuz-`uname -r`

By default, FADump reserved memory will be initialized as a CMA area, to make the memory available through the CMA allocator on the production kernel. We can opt out of this, making the reserved memory unavailable to the production kernel, by booting the Linux kernel with 'fadump=nocma' instead of 'fadump=on'.

The term 'boot memory' means the size of the low memory chunk that is required for a kernel to boot successfully when booted with restricted memory. By default, the boot memory size will be the larger of 5% of system RAM or 256MB. Alternatively, the user can also specify the boot memory size through the boot parameter 'fadump_reserve_mem=', which will override the default calculated size. Use this option if the default boot memory size is not sufficient for the second kernel to boot successfully.

After making said changes, reboot your system, so that the specified memory is reserved and left untouched by the normal system. Take note that the output of 'free -m' will show X MB less memory than without this parameter, which is expected. If you see OOM (Out Of Memory) error messages while loading the capture kernel, then you should bump up the memory reservation size.

Now that you've got that reserved memory region set up, you want to turn on the kdump init script:

   # systemctl enable kdump.service

Then, start up kdump as well:

   # systemctl start kdump.service

This should turn on the firmware assisted functionality in the kernel by echo'ing 1 to /sys/kernel/fadump_registered, leaving the system ready to capture a vmcore upon crashing.

For journaling filesystems like XFS, an additional step is required to ensure that the bootloader does not pick the older initrd (without vmcore capture scripts):

* If /boot is a separate partition, run the below commands as the root user, or as a user with CAP_SYS_ADMIN rights:

   # fsfreeze -f
   # fsfreeze -u

* If /boot is not a separate partition, reboot the system.
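As an aside to the boot memory discussion earlier in this section, the default rule (the larger of 5% of system RAM or 256MB) works out as simple shell arithmetic. This is an illustrative sketch only; the function name is made up and is not part of kexec-tools:

```shell
# Hypothetical helper illustrating the default boot memory rule:
# the larger of 5% of system RAM or 256MB. Sizes are in MB.
default_boot_mem_mb() {
    ram_mb=$1
    five_pct=$(( ram_mb / 20 ))   # 5% of RAM, integer MB
    if [ "$five_pct" -gt 256 ]; then
        echo "$five_pct"
    else
        echo 256
    fi
}

default_boot_mem_mb 4096    # 4GB system: 5% is ~204MB, so the 256MB floor wins
default_boot_mem_mb 32768   # 32GB system: 5% is 1638MB
```

This also shows why 'fadump_reserve_mem=' exists: on small systems the 256MB floor applies, and only beyond roughly 5GB of RAM does the 5% term take over.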
After reboot, check that the kdump service is up and running with:

   # systemctl status kdump.service

To test whether FADump is configured properly, you can force-crash your system by echo'ing a 'c' into /proc/sysrq-trigger:

   # echo c > /proc/sysrq-trigger

You should see some panic output, followed by the system resetting and booting into a fresh copy of the kernel. When the default initrd loads and runs /init, the vmcore should be copied out to disk (by default, in /var/crash/<YYYY.MM.DD-HH:MM:SS>/vmcore), and then the system rebooted back into your normal kernel.

Once back in your normal kernel, you can use the previously installed crash utility in conjunction with the previously installed kernel-debuginfo to perform postmortem analysis:

   # crash /usr/lib/debug/lib/modules/2.6.17-1.2621.el5/vmlinux /var/crash/2006-08-23-15:34/vmcore
   crash> bt

and so on...

Saving vmcore-dmesg.txt
-----------------------

Kernel log buffers are some of the most important information available in a vmcore. Now, before saving the vmcore, the kernel log buffers are extracted from /proc/vmcore and saved into a file, vmcore-dmesg.txt. After vmcore-dmesg.txt, the vmcore is saved. The destination disk and directory for vmcore-dmesg.txt are the same as for the vmcore. Note that the kernel log buffers will not be available if the dump target is a raw device.

FADump Memory Requirements:

   System Memory           Recommended memory
   ---------------------   ----------------------
     4 GB -  16 GB      :    768 MB
    16 GB -  64 GB      :   1024 MB
    64 GB - 128 GB      :      2 GB
   128 GB -   1 TB      :      4 GB
     1 TB -   2 TB      :      6 GB
     2 TB -   4 TB      :     12 GB
     4 TB -   8 TB      :     20 GB
     8 TB -  16 TB      :     36 GB
    16 TB -  32 TB      :     64 GB
    32 TB -  64 TB      :    128 GB
    64 TB & above       :    180 GB

Things to remember:

1) The memory required to boot the capture kernel is a moving target that depends on many factors like the hardware attached to the system, the kernel and modules in use, the packages installed and the services enabled; there is no one-size-fits-all. But the above recommendations are based on system memory.
So, the above recommendations for FADump come with a few assumptions, based on available system memory, about the resources the system could have. Please take the recommendations with a pinch of salt, and remember to try capturing a dump a few times to confirm that the system is configured successfully with dump capturing support.

2) Though the memory requirements for FADump seem high, this memory is not completely set aside, but is made available for userspace applications to use, through the CMA allocator.

3) As the same initrd is used for booting the production kernel as well as the capture kernel, and the dump is captured in a restricted memory environment, a few optimizations (like not including the network dracut module, disabling multipath, and such) are applied while building the initrd. In case the production environment needs these optimizations to be avoided, the dracut_args option in the /etc/kdump.conf file can be leveraged. For example, if a user wishes for the network module to be included in the initrd, adding the below entry in the /etc/kdump.conf file and restarting the kdump service will take care of it:

   dracut_args --add "network"

4) If FADump is configured to capture the vmcore to a remote dump target using the SSH or NFS protocol, the corresponding network interface '<interface-name>' is renamed to 'kdump-<interface-name>' if it is generic (like *eth# or net#). This happens because the vmcore capture scripts in the initial RAM disk (initrd) add the 'kdump-' prefix to the network interface name to secure persistent naming. And as the capture kernel and production kernel use the same initrd in the case of FADump, the interface name is changed for the production kernel too. This is likely to impact the network configuration setup for the production kernel. So, it is recommended to use a non-generic name for a network interface, before setting up FADump to capture a vmcore to a remote dump target based on that network interface, to avoid running into network configuration issues.
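The recommendation table above can also be read off programmatically, which is handy in provisioning scripts. A sketch only: the function name is invented for illustration, inputs below the table's 4 GB lower bound simply fall through to the first row, and the real sizing should still be validated by test-capturing a dump as advised above:

```shell
# Hypothetical lookup over the FADump memory recommendation table above.
# Input: system memory in GB; output: the recommended reservation string.
fadump_recommended_mem() {
    gb=$1
    if   [ "$gb" -lt 16 ];    then echo "768 MB"
    elif [ "$gb" -lt 64 ];    then echo "1024 MB"
    elif [ "$gb" -lt 128 ];   then echo "2 GB"
    elif [ "$gb" -lt 1024 ];  then echo "4 GB"
    elif [ "$gb" -lt 2048 ];  then echo "6 GB"
    elif [ "$gb" -lt 4096 ];  then echo "12 GB"
    elif [ "$gb" -lt 8192 ];  then echo "20 GB"
    elif [ "$gb" -lt 16384 ]; then echo "36 GB"
    elif [ "$gb" -lt 32768 ]; then echo "64 GB"
    elif [ "$gb" -lt 65536 ]; then echo "128 GB"
    else                           echo "180 GB"
    fi
}

fadump_recommended_mem 256    # 256 GB falls in the 128 GB - 1 TB row: 4 GB
```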
Dump Triggering methods:

This section talks about the various ways, other than a kernel panic, in which fadump can be triggered. The following methods assume that fadump is configured on your system, with the scripts enabled as described in the section above.

1) AltSysRq C

FAdump can be triggered with the combination of the 'Alt', 'SysRq' and 'C' keyboard keys. Please refer to the following link for more details:

   https://fedoraproject.org/wiki/QA/Sysrq

In addition, on PowerPC boxes, fadump can also be triggered via the Hardware Management Console (HMC) using the 'Ctrl', 'O' and 'C' keyboard keys.

2) Kernel OOPs

If we want to generate a dump every time the kernel OOPses, we can achieve this by setting the 'Panic On OOPs' option as follows:

   # echo 1 > /proc/sys/kernel/panic_on_oops

3) PowerPC specific methods:

On IBM PowerPC machines, issuing a soft reset invokes the XMON debugger (if XMON is configured). To configure XMON, one needs to compile the kernel with the CONFIG_XMON and CONFIG_XMON_DEFAULT options, or by compiling with CONFIG_XMON and booting the kernel with the xmon=on option. The following are the ways to remotely issue a soft reset on PowerPC boxes, which will drop you to XMON. Pressing 'X' (capital letter X) followed by 'Enter' there will trigger the dump.

3.1) HMC

The Hardware Management Console (HMC), available on Power4 and Power5 machines, allows partitions to be reset remotely. This is especially useful in hang situations where the system is not accepting any keyboard input. Once you have the HMC configured, the following steps will enable you to trigger fadump via a soft reset:

On Power4, using the GUI:

   * In the right pane, right click on the partition you wish to dump.
   * Select "Operating System->Reset".
   * Select "Soft Reset".
   * Select "Yes".

Using the HMC command line:

   # reset_partition -m <machine> -p <partition> -t soft

On Power5, using the GUI:

   * In the right pane, right click on the partition you wish to dump.
   * Select "Restart Partition".
   * Select "Dump".
   * Select "OK".
Using the HMC command line:

   # chsysstate -m <managed system name> -n <lpar name> -o dumprestart -r lpar

3.2) Blade Management Console for Blade Center

To initiate a dump operation, go to the Power/Restart option under "Blade Tasks" in the Blade Management Console. Select the corresponding blade for which you want to initiate the dump and then click "Restart blade with NMI". This issues a system reset and invokes the xmon debugger.

Advanced Setups & Failure action:

Kdump and fadump exhibit similar behavior in terms of setup & failure action. For fadump advanced setup related information, see the section "Advanced Setups" in the "kexec-kdump-howto.txt" document. Refer to the "Failure action" section in the "kexec-kdump-howto.txt" document for fadump failure action related information.

Compression and filtering

Refer to the "Compression and filtering" section in the "kexec-kdump-howto.txt" document. Compression and filtering are the same for kdump & fadump.

Notes on rootfs mount:

Dracut is designed to mount the rootfs by default. If rootfs mounting fails, it will refuse to go on, so fadump currently leaves rootfs mounting to dracut. We make the assumption that a proper root= cmdline is being passed to the dracut initramfs for the time being. If you need to modify "KDUMP_COMMANDLINE=" in /etc/sysconfig/kdump, you will need to make sure that the appropriate root= options are copied from /proc/cmdline. In general, it is best to append command line options using "KDUMP_COMMANDLINE_APPEND=" instead of replacing the original command line completely.
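The root= copying mentioned in the rootfs note above can be done mechanically. A minimal sketch, assuming the command line is passed in as a string (normally the contents of /proc/cmdline); root_option is a hypothetical name, not a kexec-tools helper:

```shell
# Extract the root= option from a kernel command line string, so it can
# be appended via KDUMP_COMMANDLINE_APPEND= in /etc/sysconfig/kdump.
root_option() {
    for opt in $1; do
        case $opt in
            root=*) echo "$opt"; return ;;
        esac
    done
}

root_option 'BOOT_IMAGE=/vmlinuz-4.18 root=/dev/mapper/rhel-root ro quiet'
# prints root=/dev/mapper/rhel-root
```

On a live system you would call it as root_option "$(cat /proc/cmdline)".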
How to disable FADump:

Remove "fadump=on"/"fadump=nocma" from the kernel cmdline parameters, or replace it with the "fadump=off" kernel cmdline parameter:

   # grubby --update-kernel=/boot/vmlinuz-`uname -r` --remove-args="fadump=on"
or
   # grubby --update-kernel=/boot/vmlinuz-`uname -r` --remove-args="fadump=nocma"
OR
   # grubby --update-kernel=/boot/vmlinuz-`uname -r` --args="fadump=off"

If KDump is to be used as the dump capturing mechanism, update the crashkernel parameter (else, remove the "crashkernel=" parameter too, using grubby):

   # grubby --update-kernel=/boot/vmlinuz-$kver --args="crashkernel=auto"

Reboot the system for the settings to take effect.

Kdump-in-cluster-environment HOWTO (kdump-in-cluster-environment.txt)

Introduction

Kdump is a kexec based crash dumping mechanism for Linux. This document illustrates how to configure kdump in a cluster environment to allow the kdump crash recovery service to complete without being preempted by traditional power fencing methods.

Overview

Kexec/Kdump

Details about Kexec/Kdump are available in the Kexec-Kdump-howto file and will not be described here.

fence_kdump

fence_kdump is an I/O fencing agent to be used with the kdump crash recovery service. When the fence_kdump agent is invoked, it will listen for a message from the failed node that acknowledges that the failed node is executing the kdump crash kernel. Note that fence_kdump is not a replacement for traditional fencing methods. The fence_kdump agent can only detect that a node has entered the kdump crash recovery service. This allows the kdump crash recovery service to complete without being preempted by traditional power fencing methods.

fence_kdump_send

fence_kdump_send is a utility used to send messages that acknowledge that the node itself has entered the kdump crash recovery service. The fence_kdump_send utility is typically run in the kdump kernel after a cluster node has encountered a kernel panic.
Once the cluster node has entered the kdump crash recovery service, fence_kdump_send will periodically send messages to all cluster nodes. When the fence_kdump agent receives a valid message from the failed node, fencing is complete.

How to configure a Pacemaker cluster environment:

If we want to use kdump in a Pacemaker cluster environment, fence-agents-kdump should be installed on every node in the cluster. You can achieve this via the following command:

   # yum install -y fence-agents-kdump

Next is to add kdump_fence to the cluster. Assume that the cluster consists of three nodes named node1, node2 and node3, uses Pacemaker to perform resource management, and uses pcs as the CLI configuration tool. With pcs it is easy to add a stonith resource to the cluster. For example, add a stonith resource named mykdumpfence with the fence type fence_kdump via the following commands:

   # pcs stonith create mykdumpfence fence_kdump \
       pcmk_host_check=static-list pcmk_host_list="node1 node2 node3"
   # pcs stonith update mykdumpfence pcmk_monitor_action=metadata --force
   # pcs stonith update mykdumpfence pcmk_status_action=metadata --force
   # pcs stonith update mykdumpfence pcmk_reboot_action=off --force

Then enable stonith:

   # pcs property set stonith-enabled=true

How to configure kdump:

There are actually two ways to configure fence_kdump support:

1) Pacemaker based clusters

If you have successfully configured fence_kdump in Pacemaker, there is no need to add any special configuration in kdump, so please refer to the Kexec-Kdump-howto file for more information.
2) Generic clusters

For other types of clusters, there are two configuration options in kdump.conf which enable fence_kdump support:

   fence_kdump_nodes <node(s)>
      Contains a list of cluster node(s), separated by spaces, to send fence_kdump notifications to (this option is mandatory to enable fence_kdump)

   fence_kdump_args <arg(s)>
      Command line arguments for fence_kdump_send (it can contain all valid arguments except the hosts to send notifications to)

These options will most probably be configured by your cluster software, so please refer to your cluster documentation on how to enable fence_kdump support.

Please be aware that these two ways cannot be combined, and 2) has precedence over 1). This means that if fence_kdump is configured using the fence_kdump_nodes and fence_kdump_args options in kdump.conf, the Pacemaker configuration is not used, even if it exists.

=========================================
Kexec/Kdump HOWTO (kexec-kdump-howto.txt)
=========================================

Introduction
============

Kexec and kdump are new features in the 2.6 mainstream kernel. These features are included in Red Hat Enterprise Linux 5. The purpose of these features is to ensure faster boot-up and the creation of reliable kernel vmcores for diagnostic purposes.

Overview
========

Kexec
-----

Kexec is a fastboot mechanism which allows booting a Linux kernel from the context of an already running kernel, without going through the BIOS. The BIOS can be very time consuming, especially on big servers with lots of peripherals. This can save a lot of time for developers who end up booting a machine numerous times.

Kdump
-----

Kdump is a new kernel crash dumping mechanism and is very reliable, because the crash dump is captured from the context of a freshly booted kernel and not from the context of the crashed kernel. Kdump uses kexec to boot into a second kernel whenever the system crashes. This second kernel, often called a capture kernel, boots with very little memory and captures the dump image.
The first kernel reserves a section of memory that the second kernel uses to boot. Kexec enables booting the capture kernel without going through the BIOS, hence the contents of the first kernel's memory are preserved, which is essentially the kernel crash dump.

Kdump is supported on the i686, x86_64, ia64 and ppc64 platforms. The standard kernel and capture kernel are one and the same on i686, x86_64, ia64 and ppc64.

If you're reading this document, you should already have kexec-tools installed. If not, you can install it via the following command:

   # yum install kexec-tools

Now load a kernel with kexec:

   # kver=`uname -r`
   # kexec -l /boot/vmlinuz-$kver --initrd=/boot/initrd-$kver.img \
       --command-line="`cat /proc/cmdline`"

NOTE: The above will boot you back into the kernel you're currently running; if you want to load a different kernel, substitute it in place of `uname -r`.

Now reboot your system, taking note that it should bypass the BIOS:

   # reboot

How to configure kdump
======================

Again, we assume that if you're reading this document, you already have kexec-tools installed. If not, you can install it via the following command:

   # yum install kexec-tools

To be able to do much of anything interesting in the way of debug analysis, you'll also need to install the kernel-debuginfo package of the same arch as your running kernel, and the crash utility:

   # yum --enablerepo=\*debuginfo install kernel-debuginfo.$(uname -m) crash

Next up, we need to modify some boot parameters to reserve a chunk of memory for the capture kernel. With the help of grubby, it's very easy to append "crashkernel=128M" to the end of your kernel boot parameters. Note that the X values below are such that X = the amount of memory to reserve for the capture kernel. Based on the arch and system configuration, one might require more than 128M to be reserved for kdump. One needs to experiment and test kdump; if 128M is not sufficient, try reserving more memory.
   # grubby --args="crashkernel=128M" --update-kernel=/boot/vmlinuz-`uname -r`

Note that there is an alternative form in which to specify a crashkernel memory reservation, in the event that more control is needed over the size and placement of the reserved memory. The format is:

   crashkernel=range1:size1[,range2:size2,...][@offset]

where range<n> specifies a range of values that are matched against the amount of physical RAM present in the system, and the corresponding size<n> value specifies the amount of kexec memory to reserve. For example:

   crashkernel=512M-2G:64M,2G-:128M

This line tells kexec to reserve 64M of RAM if the system contains between 512M and 2G of physical memory. If the system contains 2G or more of physical memory, 128M should be reserved.

Besides, since kdump needs to access /proc/kallsyms during kernel loading if KASLR is enabled, check /proc/sys/kernel/kptr_restrict to make sure that the content of /proc/kallsyms is exposed correctly. We recommend setting the value of kptr_restrict to '1'; otherwise capture kernel loading could fail.

After making said changes, reboot your system, so that the X MB of memory is left untouched by the normal system, reserved for the capture kernel. Take note that the output of 'free -m' will show X MB less memory than without this parameter, which is expected. You may be able to get by with less than 128M, but testing with only 64M has proven unreliable of late. On ia64, as much as 512M may be required.

Now that you've got that reserved memory region set up, you want to turn on the kdump init script:

   # chkconfig kdump on

Then, start up kdump as well:

   # systemctl start kdump.service

This should load your kernel-kdump image via kexec, leaving the system ready to capture a vmcore upon crashing. To test this out, you can force-crash your system by echo'ing a c into /proc/sysrq-trigger:

   # echo c > /proc/sysrq-trigger

You should see some panic output, followed by the system restarting into the kdump kernel.
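The crashkernel range syntax described above lends itself to a small resolver, which can help sanity-check a reservation string before rebooting. This is an illustrative sketch only, not the kernel's actual parser: the function names are invented, only M/G suffixes are handled, and any @offset is ignored:

```shell
# Normalize an M/G-suffixed size to MB.
to_mb() {
    case $1 in
        *G) echo $(( ${1%G} * 1024 )) ;;
        *M) echo "${1%M}" ;;
        *)  echo "$1" ;;
    esac
}

# Resolve a crashkernel=range1:size1[,range2:size2,...] string against
# the amount of physical RAM (in MB); prints the reservation in MB.
crashkernel_size_mb() {
    spec=$1 ram_mb=$2
    IFS=','
    for entry in $spec; do
        range=${entry%%:*} size=${entry#*:}
        start=$(to_mb "${range%%-*}")
        end=${range#*-}                 # empty means "no upper bound"
        if [ "$ram_mb" -ge "$start" ] &&
           { [ -z "$end" ] || [ "$ram_mb" -lt "$(to_mb "$end")" ]; }; then
            to_mb "$size"
            unset IFS
            return
        fi
    done
    unset IFS
    echo 0    # no range matched: nothing reserved
}

crashkernel_size_mb '512M-2G:64M,2G-:128M' 1024    # 1G RAM: prints 64
crashkernel_size_mb '512M-2G:64M,2G-:128M' 4096    # 4G RAM: prints 128
```

Note that a system below the lowest range (here, under 512M) matches nothing, so no memory is reserved, mirroring the kernel's behavior for unmatched ranges.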
When the boot process gets to the point where it starts the kdump service, your vmcore should be copied out to disk (by default, in /var/crash/<YYYY-MM-DD-HH:MM>/vmcore), and then the system rebooted back into your normal kernel. Once back in your normal kernel, you can use the previously installed crash utility in conjunction with the previously installed kernel-debuginfo to perform postmortem analysis:

   # crash /usr/lib/debug/lib/modules/2.6.17-1.2621.el5/vmlinux /var/crash/2006-08-23-15:34/vmcore
   crash> bt

and so on...

Notes on kdump
==============

When kdump starts, the kdump kernel is loaded together with the kdump initramfs. To save memory usage and disk space, the kdump initramfs is generated strictly against the system it will run on, and contains the minimum set of kernel modules and utilities needed to boot the machine to a stage where the dump target can be mounted.

With the kdump service enabled, kdumpctl will try to detect possible system changes and rebuild the kdump initramfs if needed, but it cannot guarantee to cover every possible case. So after a hardware change, disk migration, storage setup update or any similar system level change, it's highly recommended to rebuild the initramfs manually with the following command:

   # kdumpctl rebuild

Saving vmcore-dmesg.txt
=======================

Kernel log buffers are some of the most important information available in a vmcore. Now, before saving the vmcore, the kernel log buffers are extracted from /proc/vmcore and saved into a file, vmcore-dmesg.txt. After vmcore-dmesg.txt, the vmcore is saved. The destination disk and directory for vmcore-dmesg.txt are the same as for the vmcore. Note that the kernel log buffers will not be available if the dump target is a raw device.

Dump Triggering methods
=======================

This section talks about the various ways, other than a kernel panic, in which Kdump can be triggered. The following methods assume that Kdump is configured on your system, with the scripts enabled as described in the section above.
1) AltSysRq C

Kdump can be triggered with the combination of the 'Alt', 'SysRq' and 'C' keyboard keys. Please refer to the following link for more details:

https://access.redhat.com/solutions/2023

In addition, on PowerPC boxes, kdump can also be triggered via the Hardware Management Console (HMC) using the 'Ctrl', 'O' and 'C' keyboard keys.

2) NMI_WATCHDOG

In case a machine has a hard hang, it is quite possible that it does not respond to keyboard interrupts, so the 'Alt-SysRq' keys will not help trigger a dump. In such scenarios the NMI watchdog feature can prove useful. The following link has more details on configuring the NMI watchdog option:

https://access.redhat.com/solutions/125103

Once this feature has been enabled in the kernel, any lockup will result in an OOPS message being generated, followed by kdump being triggered.

3) Kernel OOPs

If we want to generate a dump every time the kernel OOPses, we can achieve this by setting the 'panic on OOPS' option as follows:

# echo 1 > /proc/sys/kernel/panic_on_oops

This is enabled by default on RHEL5.

4) NMI (non-maskable interrupt) button

In cases where the system is in a hung state and is not accepting keyboard interrupts, using the NMI button to trigger kdump can be very useful. The NMI button is present on most of the newer x86 and x86_64 machines. Please refer to the user guides/manuals to locate the button, though in most cases it is not very well documented. In most cases it is hidden behind a small hole on the front or back panel of the machine. You could use a toothpick or some other non-conducting probe to press the button. For example, on the IBM X series 366 machine, the NMI button is located behind a small hole on the bottom center of the rear panel.
To enable this method of dump triggering using the NMI button, you will need to set the 'unknown_nmi_panic' option as follows:

# echo 1 > /proc/sys/kernel/unknown_nmi_panic

5) PowerPC specific methods:

On IBM PowerPC machines, issuing a soft reset invokes the XMON debugger (if XMON is configured). To configure XMON one needs to compile the kernel with the CONFIG_XMON and CONFIG_XMON_DEFAULT options, or to compile with CONFIG_XMON and boot the kernel with the xmon=on option.

Following are the ways to remotely issue a soft reset on PowerPC boxes, which will drop you into XMON. Pressing an 'X' (capital letter X) followed by 'Enter' there will trigger the dump.

5.1) HMC

The Hardware Management Console (HMC), available on Power4 and Power5 machines, allows partitions to be reset remotely. This is especially useful in hang situations where the system is not accepting any keyboard input. Once you have the HMC configured, the following steps will enable you to trigger kdump via a soft reset:

On Power4

Using the GUI:

* In the right pane, right click on the partition you wish to dump.
* Select "Operating System->Reset".
* Select "Soft Reset".
* Select "Yes".

Using the HMC command line:

# reset_partition -m <machine> -p <partition> -t soft

On Power5

Using the GUI:

* In the right pane, right click on the partition you wish to dump.
* Select "Restart Partition".
* Select "Dump".
* Select "OK".

Using the HMC command line:

# chsysstate -m <managed system name> -n <lpar name> -o dumprestart -r lpar

5.2) Blade Management Console for Blade Center

To initiate a dump operation, go to the Power/Restart option under "Blade Tasks" in the Blade Management Console. Select the corresponding blade for which you want to initiate the dump and then click "Restart blade with NMI". This issues a system reset and invokes the xmon debugger.
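The sysctl-based trigger methods above can be made persistent across reboots with a drop-in sysctl file instead of echoing into /proc/sys after every boot. A sketch, with a hypothetical file name; note that kernel.sysrq is an assumption added here, since the Alt-SysRq method in 1) requires SysRq to be enabled:

```
# /etc/sysctl.d/98-kdump-triggers.conf  (hypothetical file name)
# Persistent equivalents of the /proc/sys echo commands shown above.

kernel.sysrq = 1               # allow SysRq, so Alt-SysRq-C can trigger a dump
kernel.panic_on_oops = 1       # turn every kernel OOPS into a panic (then kdump)
kernel.unknown_nmi_panic = 1   # panic (and dump) when the NMI button is pressed
```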
Dump targets
============

In addition to being able to capture a vmcore to your system's local file system, kdump can be configured to capture a vmcore to a number of other locations, including a raw disk partition, a dedicated file system, an NFS mounted file system, or a remote system via ssh/scp. Additional options exist for specifying the relative path under which the dump is captured, what to do if the capture fails, and for compressing and filtering the dump (so as to produce smaller, more manageable vmcore files; see "Advanced Setups" for more detail on these options).

In theory, dumping to a location other than the local file system should be safer than kdump's default setup, as it's possible the default setup will try dumping to a file system that has become corrupted. The raw disk partition and dedicated file system options allow you to still dump to the local system, but without having to remount your possibly corrupted file system(s), thereby decreasing the chance that a vmcore won't be captured. Dumping to an NFS server or a remote system via ssh/scp also has this advantage, as well as allowing for the centralization of vmcore files, should you have several systems from which you'd like to obtain vmcore files. Of course, note that these configurations could present problems if your network is unreliable.

Kdump targets and advanced setups are configured via modifications to /etc/kdump.conf, which, out of the box, is fairly well documented itself. Any alteration to /etc/kdump.conf should be followed by a restart of the kdump service, so the changes can be incorporated into the kdump initrd. Restarting the kdump service is as simple as '/sbin/systemctl restart kdump.service'.

There are two ways to configure the dump target: configuring it implicitly using only "path", or configuring the dump target explicitly. The interpretation of "path" also differs between the two styles.
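The two configuration styles can be contrasted with a pair of minimal kdump.conf fragments; the device and path values below are illustrative only:

```
# /etc/kdump.conf -- the two styles, shown side by side (pick one)

# Style 1: only "path" is configured. Kdump auto-detects the device
# backing /var/crash and uses it as the dump target.
path /var/crash

# Style 2: the dump target is configured explicitly. "path" then becomes
# a path relative to that target, so the vmcore lands under /var/crash on
# the ext4 filesystem on /dev/vg/lv_kdump (illustrative device name).
#ext4 /dev/vg/lv_kdump
#path /var/crash
```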
Config dump target only using "path"
------------------------------------

You can change the dump target by setting "path" to a mount point where the dump target is mounted. When there is no explicitly configured dump target, "path" in kdump.conf represents the file system path in which the vmcore will be saved. Kdump will automatically detect the underlying device of "path" and use that as the dump target. In fact, upon dump, kdump creates a directory $hostip-$date within "path" and saves the vmcore there, so practically the dump is saved in $path/$hostip-$date/.

Kdump will only check the current mount status for a mount entry corresponding to "path", so please ensure the dump target is mounted on "path" before the kdump service starts.

NOTES:
- It's strongly recommended to put a mount entry for "path" in /etc/fstab and have it auto-mounted on boot. This makes sure the dump target is reachable from the machine and kdump's configuration is stable.

EXAMPLES:
- path /var/crash/
  This is the default configuration. Assuming there is no disk mounted on /var/ or on /var/crash, the dump will be saved on the disk backing rootfs, in the directory /var/crash.

- path /var/crash/ (a separate disk mounted on /var)
  Say a disk /dev/sdb is mounted on /var. In this case the dump target becomes /dev/sdb, "path" is adjusted to "/crash", and the dump is saved in the "/crash" directory on /dev/sdb (i.e. /var/crash as seen from the running system).

- path /var/crash/ (NFS mounted on /var)
  Say foo.com:/export/tmp is mounted on /var. In this case the dump target is the NFS server, "path" is adjusted to "/crash", and the dump is saved in the foo.com:/export/tmp/crash/ directory.

Config dump target explicitly
-----------------------------

You can set the dump target explicitly in kdump.conf, and "path" will then be the relative path within the specified dump target. For example, if the dump target is "ext4 /dev/sda", then the dump will be saved in the "path" directory on /dev/sda. The same is the case for an nfs dump.
If the user specified "nfs foo.com:/export/tmp/" as the dump target, then the dump will effectively be saved in the "foo.com:/export/tmp/var/crash/" directory. If the dump target is "raw", then "path" is ignored.

If it's a filesystem target, kdump will need to know the right mount options. Kdump will check the current mount status, and then /etc/fstab, for mount options corresponding to the specified dump target, and use them. If special mount options are required for the dump target, they can be set by putting an entry in fstab. If there is no related mount entry, the mount options are set to "defaults".

NOTES:
- It's recommended to put an entry for the dump target in /etc/fstab and have it auto-mounted on boot. This makes sure the dump target is reachable from the machine and kdump won't fail.
- Kdump ignores some mount options, including "noauto" and "ro". This makes it possible to keep the dump target unmounted or read-only when not in use.

EXAMPLES:
- ext4 /dev/sda (mounted)
  path /var/crash/
  In this case the dump target is set to /dev/sda, "path" is the absolute path "/var/crash" on /dev/sda, and the vmcore will be saved in the "sda:/var/crash" directory.

- nfs foo.com:/export/tmp (mounted)
  path /var/crash/
  In this case the dump target is the NFS server, "path" is the absolute path "/var/crash", and the vmcore will be saved in the "foo.com:/export/tmp/var/crash/" directory.

- nfs foo.com:/export/tmp (not mounted)
  path /var/crash/
  Same as the above case; kdump will use "defaults" as the mount options for the dump target.

- nfs foo.com:/export/tmp (not mounted, an entry with options "noauto,nolock" exists in /etc/fstab)
  path /var/crash/
  In this case the dump target is the NFS server, the vmcore will be saved in the "foo.com:/export/tmp/var/crash/" directory, and kdump will inherit the "nolock" option.

Dump target and mkdumprd
------------------------

mkdumprd is the tool used to create the kdump initramfs, and it may change the mount status of the dump target under some conditions. Usually the dump target should be used only for kdump.
If you worry about someone using the filesystem for something other than dumping vmcores, you can mount it read-only or make it a noauto mount. mkdumprd will mount/remount it read-write to create the dump directory and will move it back to its original state afterwards.

Supported dump target types and requirements
--------------------------------------------

1) Raw partition

Raw partition dumping requires that a disk partition in the system, at least as large as the amount of memory in the system, be left unformatted. Assuming /dev/vg/lv_kdump is left unformatted, kdump.conf can be configured with 'raw /dev/vg/lv_kdump', and the vmcore file will be copied via dd directly onto the partition /dev/vg/lv_kdump. Restart the kdump service via '/sbin/systemctl restart kdump.service' to commit this change to your kdump initrd. The dump target should be a persistent device name, such as an lvm or device mapper canonical name.

2) Dedicated file system

Similar to raw partition dumping, you can format a partition with the file system of your choice. Again, it should be at least as large as the amount of memory in the system. Assuming /dev/vg/lv_kdump has been formatted ext4, specify 'ext4 /dev/vg/lv_kdump' in kdump.conf, and a vmcore file will be copied onto the file system after it has been mounted. Dumping to a dedicated partition has the advantage that you can dump multiple vmcores to the file system, space permitting, without overwriting previous ones, as would be the case in a raw partition setup. Restart the kdump service via '/sbin/systemctl restart kdump.service' to commit this change to your kdump initrd.

Note that for local file systems, ext4 and ext2 are supported as dumpable targets. Kdump will not prevent you from specifying other filesystems, and they will most likely work, but their operation cannot be guaranteed.
For instance, specifying a vfat or msdos filesystem will result in a successful load of the kdump service, but during crash recovery the dump will fail if the system has more than 2GB of memory (since vfat and msdos filesystems do not support files larger than 2GB). Be careful with your filesystem selection when using this target. It is recommended to use persistent device names or UUID/LABEL for file system dumps. One example of a persistent device name is /dev/vg/<devname>.

3) NFS mount

Dumping over NFS requires an NFS server configured to export a file system with full read/write access for the root user. All operations done within the kdump initial ramdisk are done as root, and to write out a vmcore file, we obviously must be able to write to the NFS mount. Configuring an NFS server is outside the scope of this document, but either the no_root_squash or anonuid options on the NFS server side are likely of interest, to permit the kdump initrd operations to write to the NFS mount as root.

Assuming you're exporting /dump on the machine nfs-server.example.com, once the mount is properly configured, specify it in kdump.conf via 'nfs nfs-server.example.com:/dump'. The server portion can be specified either by host name or by IP address. Following a system crash, the kdump initrd will mount the NFS mount and copy out the vmcore to your NFS server. Restart the kdump service via '/sbin/systemctl restart kdump.service' to commit this change to your kdump initrd.

4) Special mount via "dracut_args"

You can utilize "dracut_args" to pass "--mount" to kdump; see the dracut manpage for details on the format of "--mount". If any "--mount" is specified via "dracut_args", kdump will build it as the mount target without doing any validation (mounting or checking things like mount options, fs size, save path, etc.), so you must test it yourself to ensure correctness. You cannot use other targets in /etc/kdump.conf if you use "--mount" in "dracut_args".
You also cannot specify multiple "--mount" targets via "dracut_args". One use case of "--mount" in "dracut_args" is when you do not want to mount the dump target before kdump service startup, for example, to reduce the burden on a shared nfs server, as in the example below:

dracut_args --mount "192.168.1.1:/share /mnt/test nfs4 defaults"

NOTE:
- <mountpoint> must be specified as an absolute path.

5) Remote system via ssh/scp

Dumping over ssh/scp requires setting up passwordless ssh keys for every machine you wish to have dump via this method. First, configure kdump.conf for ssh/scp dumping, adding a config line of 'ssh user@server', where 'user' can be any user on the target system you choose, and 'server' is the host name or IP address of the target system. Using a dedicated, restricted user account on the target system is recommended, as there will be keyless ssh access to this account.

Once kdump.conf is appropriately configured, issue the command 'kdumpctl propagate' to automatically set up the ssh host keys and transmit the necessary bits to the target server. You'll have to type in 'yes' to accept the host key for your target server if this is the first time you've connected to it, and then input the target system user's password to send over the necessary ssh key file. Restart the kdump service via '/sbin/systemctl restart kdump.service' to commit this change to your kdump initrd.

Advanced Setups
===============

About /etc/sysconfig/kdump
--------------------------

Currently, there are a few options in /etc/sysconfig/kdump which are used to control the behavior of the kdump kernel. All of these options have default values and usually do not need to be changed, but sometimes we may modify them to better control the behavior of the kdump kernel, e.g. for debugging.

-KDUMP_BOOTDIR
Usually the kdump kernel is the same as the first kernel, so kdump will try to find the kdump kernel under /boot according to /proc/cmdline.
E.g. we execute the command below and get this output:

cat /proc/cmdline
BOOT_IMAGE=/xxx/vmlinuz-3.yyy.zzz root=xxxx .....

Then the kdump kernel will be /boot/xxx/vmlinuz-3.yyy.zzz. However, this option is provided for the case where the kdump kernel is put in a different directory.

-KDUMP_IMG
This represents the image type used for kdump. The default value is "vmlinuz".

-KDUMP_IMG_EXT
This represents the image's extension. Relocatable kernels don't have one, so it is a null string by default.

-KEXEC_ARGS
Any additional kexec arguments required. For example: KEXEC_ARGS="--elf32-core-headers". In most situations this should be left empty. But if we want additional kexec loading debugging information, we can add the '-d' option here.

-KDUMP_KERNELVER
This is a kernel version string for the kdump kernel. If the version is not specified, the init script will try to find a kdump kernel with the same version number as the running kernel.

-KDUMP_COMMANDLINE
The value of 'KDUMP_COMMANDLINE' will be passed to the kdump kernel as command line parameters; this will likely match the contents of the grub kernel line. In general, if a command line is not specified (that is, it is a null string such as KDUMP_COMMANDLINE=""), the default will be taken automatically from /proc/cmdline.

-KDUMP_COMMANDLINE_REMOVE
This option allows us to remove arguments from the current kdump command line. If we don't specify any parameters for KDUMP_COMMANDLINE, it will inherit all values from /proc/cmdline, which is not always what we want: some default kernel parameters can affect kdump, and could even cause the kdump kernel boot to fail. In addition, this option is also helpful for debugging the kdump kernel, since we can use it to change the kdump kernel command line. For more on kernel parameters, please refer to the kernel documentation.
-KDUMP_COMMANDLINE_APPEND
This option allows us to append arguments to the current kdump command line, after it has been processed by KDUMP_COMMANDLINE_REMOVE. For the kdump kernel, some specific features need to be disabled, such as mce, cgroup, numa, hest_disable, etc. These may waste memory or are simply not needed by the kdump kernel; furthermore, they may affect kdump kernel boot. Just like the option above, it can be used to disable or enable kernel features so that we can exclude errors in the kdump kernel, which is very useful for debugging.

-KDUMP_STDLOGLVL | KDUMP_SYSLOGLVL | KDUMP_KMSGLOGLVL
These variables are used to control the kdump log level in the first kernel. In the second kernel, kdump will use the rd.kdumploglvl option in the above KDUMP_COMMANDLINE_APPEND to set the log level.

Logging levels: no logging(0), error(1), warn(2), info(3), debug(4)

Kdump Post-Capture Executable
-----------------------------

It is possible to specify a custom script or binary you wish to run following an attempt to capture a vmcore. The executable is passed the exit code of the capture process, which can be used to trigger different actions from within your post-capture executable.

If the /etc/kdump/post.d directory exists, all files in the directory are collectively sorted and executed in lexical order, before the binary or script specified by the kdump_post parameter is executed. In these scripts, references to storage or network devices should adhere to the section 'Supported dump target types and requirements'.

Kdump Pre-Capture Executable
----------------------------

It is possible to specify a custom script or binary you wish to run before capturing a vmcore. The exit status of this binary is interpreted as follows:

0 - continue with the dump process as usual
non 0 - run the final action (reboot/poweroff/halt)

If the /etc/kdump/pre.d directory exists, all files in the directory are collectively sorted and executed in lexical order, after the binary or script specified by the kdump_pre parameter is executed.
Even if a binary or script in the /etc/kdump/pre.d directory returns a non-zero exit status, processing continues. In these scripts, references to storage or network devices should adhere to the section 'Supported dump target types and requirements'.

Extra Binaries
--------------

If you have specific binaries or scripts you want to have made available within your kdump initrd, you can specify them by their full path and they will be included in your kdump initrd, along with all dependent libraries. This may be particularly useful for those running post-capture scripts that rely on other binaries.

Extra Modules
-------------

By default, only the bare minimum of kernel modules will be included in your kdump initrd. Should you wish to capture your vmcore files to a non-boot-path storage device, such as an iscsi target disk or a clustered file system, you may need to manually specify additional kernel modules to load into your kdump initrd.

Failure action
--------------

The failure action specifies what to do when dumping to the configured dump target fails. By default, the failure action is "reboot", that is, the system reboots if the attempt to save the dump to the dump target fails. Other failure actions are available though:

- dump_to_rootfs
  This option tries to mount root and save the dump on the root filesystem, in the path specified by "path". It generally makes sense when the dump target is not the root filesystem. For example, if the dump is being saved over the network using "ssh", one can specify the failure action "dump_to_rootfs" to try saving the dump to the root filesystem if the dump over the network fails.

- shell
  Drop into a shell session inside the initramfs.

- halt
  Halt the system after failure.

- poweroff
  Power off the system after failure.

Compression and filtering
-------------------------

The 'core_collector' parameter in kdump.conf allows you to specify a custom dump capture method.
The most common alternate method is makedumpfile, a dump filtering and compression utility provided with kexec-tools. On some architectures it can drastically reduce the size of your vmcore files, which becomes very useful on systems with large amounts of memory. A typical setup is 'core_collector makedumpfile -F -l --message-level 7 -d 31', but check the output of '/sbin/makedumpfile --help' for a list of all available options (-i and -g don't need to be specified; they're automatically taken care of). Note that use of makedumpfile requires that the kernel-debuginfo package corresponding to your running kernel be installed.

The core collector command format depends on the dump target type. Typically for filesystem (local/remote) targets, core_collector should accept two arguments: the first is the source file and the second is the target file. For example:

- ex1.
  core_collector "cp --sparse=always"
  The above will effectively be translated to:
  cp --sparse=always /proc/vmcore <dest-path>/vmcore

- ex2.
  core_collector "makedumpfile -l --message-level 7 -d 31"
  The above will effectively be translated to:
  makedumpfile -l --message-level 7 -d 31 /proc/vmcore <dest-path>/vmcore

For dump targets like raw and ssh, in general the core collector should expect one argument (the source file) and should output the processed core on standard output (there is one exception, "scp", discussed later). This standard output will be saved to the destination using appropriate commands.

Raw dump core_collector examples:

- ex3.
  core_collector "cat"
  The above will effectively be translated to:
  cat /proc/vmcore | dd of=<target-device>

- ex4.
  core_collector "makedumpfile -F -l --message-level 7 -d 31"
  The above will effectively be translated to:
  makedumpfile -F -l --message-level 7 -d 31 /proc/vmcore | dd of=<target-device>

ssh dump core_collector examples:

- ex5.
  core_collector "cat"
  The above will effectively be translated to:
  cat /proc/vmcore | ssh <options> <remote-location> "dd of=path/vmcore"

- ex6.
  core_collector "makedumpfile -F -l --message-level 7 -d 31"
  The above will effectively be translated to:
  makedumpfile -F -l --message-level 7 -d 31 /proc/vmcore | ssh <options> <remote-location> "dd of=path/vmcore"

There is one exception to the standard output rule for ssh dumps, and that is scp. As scp can handle ssh destinations for file transfers, one can specify "scp" as the core collector for ssh targets (no output on stdout).

- ex7.
  core_collector "scp"
  The above will effectively be translated to:
  scp /proc/vmcore <user@host>:path/vmcore

About default core collector
----------------------------

The default core_collector for ssh/raw dumps is "makedumpfile -F -l --message-level 7 -d 31". The default core_collector for other targets is "makedumpfile -l --message-level 7 -d 31". Even if the core_collector option is commented out in kdump.conf, makedumpfile is the default core collector and kdump uses it internally. If one does not want makedumpfile as the default core_collector, one needs to specify an alternative using the core_collector option.

Note: If "makedumpfile -F" is used, you will get a flattened-format vmcore.flat, and you will need to use "makedumpfile -R" to rearrange the dump data from standard input into a normal dumpfile (readable with analysis tools). For example:

# makedumpfile -R vmcore < vmcore.flat

Caveats
=======

Console frame-buffers and X are not properly supported. If you typically run with something along the lines of "vga=791" on your kernel command line, or have X running, console video will be garbled when a kernel is booted via kexec. Note that the kdump kernel should still be able to create a dump, and when the system reboots, video should be restored to normal.

Notes
=====

Notes on resetting video:
-------------------------

Video is a notoriously difficult issue with kexec. Video cards contain ROM code that controls their initial configuration and setup. This code is nominally accessed and executed from the BIOS, and is otherwise not safely executable.
Since the purpose of kexec is to reboot the system without re-executing the BIOS, it is rather difficult, if not impossible, to reset video cards with kexec. The result is that if a system crashes while running in a graphical mode (i.e. running X), the screen may appear to become 'frozen' while the dump capture is taking place. A serial console will of course reveal that the system is operating and capturing a vmcore image, but a casual observer will see the system as hung until the dump completes and a true reboot is executed.

There are two possibilities to work around this issue. One is adding --reset-vga to the kexec command line options in /etc/sysconfig/kdump. This tells kdump to write some reasonable default values to the video card register file, in the hope of returning it to a text mode such that boot messages are visible on the screen. It does not work with all video cards, however. Secondly, it may be worth trying to add vga15fb.ko to the extra_modules list in /etc/kdump.conf. This will attempt to use the video card in framebuffer mode, which can blank the screen prior to the start of a dump capture.

Notes on rootfs mount
---------------------

Dracut is designed to mount rootfs by default, and if rootfs mounting fails it will refuse to go on, so kdump currently leaves rootfs mounting to dracut. We make the assumption that a proper root= cmdline is being passed to the dracut initramfs for the time being. If you need to modify "KDUMP_COMMANDLINE=" in /etc/sysconfig/kdump, you will need to make sure that appropriate root= options are copied from /proc/cmdline. In general it is best to append command line options using "KDUMP_COMMANDLINE_APPEND=" instead of replacing the original command line completely.

Notes on watchdog module handling
---------------------------------

If a watchdog is active in the first kernel, then we must have its module loaded in the crash kernel, so that the watchdog is either deactivated or kept kicked in the second kernel.
Otherwise, we might face a watchdog reboot while the vmcore is being saved. When the dracut watchdog module is enabled, it installs the kernel watchdog module of the active watchdog device in the initrd. kexec-tools always adds "-a watchdog" to the dracut_args if there is at least one active watchdog and the user has not specifically added "-o watchdog" in the dracut_args of kdump.conf. If a watchdog module (such as hp_wdt) has not been written against the watchdog-core framework, then this option will have no effect and the module will not be added. Please note that only the systemd watchdog daemon is supported as the watchdog-kicking application.

Notes for disk images
---------------------

The kdump initramfs is a critical component for capturing the crash dump, but it is strictly generated for the machine it will run on and is not generic. If you install a new machine from a previous disk image (e.g. VMs created from a disk image or snapshot), kdump can easily be broken by hardware changes or disk ID changes. So it's strongly recommended not to include the kdump initramfs in the disk image in the first place; this helps save space, and kdumpctl will build the initramfs automatically if it's missing. If you have already installed a machine from a disk image which has a kdump initramfs embedded, you should rebuild the initramfs manually with the "kdumpctl rebuild" command, or else kdump may not work as expected.

Notes on encrypted dump target
------------------------------

Currently, kdump does not work well with encrypted dump targets. First, the user has to give the password manually in the capture kernel, so a working interactive terminal is required in the capture kernel. Another major issue is that an OOM problem can occur with certain encryption setups. For example, the default setup for LUKS2 uses a memory-hard key derivation function to mitigate brute force attacks, so it's impossible to reduce the memory usage needed for mounting the encrypted target.
In such cases, you have to either reserve enough crash kernel memory accordingly, or update your encryption setup. It's recommended to use a non-encrypted target (e.g. a remote target) instead.

Notes on device dump
--------------------

Device dump allows drivers to append dump data to the vmcore, so you can collect driver-specific debug info. Drivers can append data without any limit, and the data is stored in memory, so this may bring significant memory pressure. Therefore device dump is disabled by default, by passing the "novmcoredd" command line option to the kdump capture kernel. If you want to collect debug data with device dump, you need to modify the "KDUMP_COMMANDLINE_APPEND=" value in /etc/sysconfig/kdump and remove the "novmcoredd" option. You may also need to increase the "crashkernel=" value accordingly to avoid OOM issues.

Besides, the kdump initramfs won't automatically include the device drivers which support device dump; only the device drivers required for the dump target setup will be included. To ensure the device dump data is included in the vmcore, you need to force-include the related device drivers using the "extra_modules" option in /etc/kdump.conf.

Parallel Dumping Operation
==========================

Kexec allows kdump to use multiple cpus, so parallelism can accelerate dumping substantially, especially during compression and filtering. For example:

1. "makedumpfile -c --num-threads [THREAD_NUM] /proc/vmcore dumpfile"
2. "makedumpfile -c /proc/vmcore dumpfile"

1 has better performance than 2 if THREAD_NUM is larger than two and the number of usable cpus is larger than THREAD_NUM.

Notes on how to use multiple cpus in a capture kernel on x86 systems: make sure that you are using a kernel that supports the disable_cpu_apicid kernel option as a capture kernel, which is needed to avoid an x86-specific hardware issue (*). The disable_cpu_apicid kernel option is automatically appended by the kdumpctl script and is ignored if the kernel doesn't support it.
You need to specify how many cpus are to be used in the capture kernel via the nr_cpus kernel option in /etc/sysconfig/kdump; nr_cpus is 1 by default. You should use a necessary and sufficient number of cpus in the capture kernel. Warning: don't use too many cpus in the capture kernel, or it may panic due to running out of memory.

(*) Without the disable_cpu_apicid kernel option, the capture kernel may hang, reset or power off at boot, depending on your system and the runtime situation at the time of the crash.

Debugging Tips
==============

- One can drop into a shell before/after saving the vmcore with the help of the kdump_pre/kdump_post hooks. Use the following in one of the pre/post scripts to drop into a shell:

#!/bin/bash
_ctty=/dev/ttyS0
setsid /bin/sh -i -l 0<>$_ctty 1<>$_ctty 2<>$_ctty

One might have to change the terminal depending on what one is using.

- Serial console logging for virtual machines

I generally use "virsh console <domain-name>" to get to the serial console. I noticed that after the dump is saved the system reboots, and by the time the grub menu shows up some of the previously logged messages are no longer there. That means any important debugging info at the end would be lost. One can log the serial console as follows to make sure messages are not lost:

virsh ttyconsole <domain-name>
ln -s <name-of-tty> /dev/modem
minicom -C /tmp/console-logs

Now minicom should be logging the serial console in the file console-logs.

- Using the logger to output kdump log messages

You can configure the kdump log level for the first kernel in /etc/sysconfig/kdump. For example:

KDUMP_STDLOGLVL=3
KDUMP_SYSLOGLVL=0
KDUMP_KMSGLOGLVL=0

The above configuration indicates that kdump messages will be printed to the console: KDUMP_STDLOGLVL is set to 3 (info), while KDUMP_SYSLOGLVL and KDUMP_KMSGLOGLVL are set to 0 (no logging). These are also the current default log levels in the first kernel.
In the second kernel, you can add the 'rd.kdumploglvl=X' option to
KDUMP_COMMANDLINE_APPEND in /etc/sysconfig/kdump to set the log level
for the second kernel as well. 'X' is the logging level; the default
log level in the second kernel is 3 (info). For example:

  # cat /etc/sysconfig/kdump | grep rd.kdumploglvl
  KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 acpi_no_memhotplug transparent_hugepage=never nokaslr hest_disable novmcoredd rd.kdumploglvl=3"

Logging levels: no logging(0), error(1), warn(2), info(3), debug(4)

  The ERROR level designates error events that might still allow the
  application to continue running.
  The WARN level designates potentially harmful situations.
  The INFO level designates informational messages that highlight the
  progress of the application at a coarse-grained level.
  The DEBUG level designates fine-grained informational events that are
  most useful for debugging an application.

Note: setting a log level to 0 disables logging at the corresponding
destination, so it produces no output.

At present, the logger works in both the first kernel (kdump service
debugging) and the second kernel. In the first kernel, you can find the
historical logs with the journalctl command and check kdump service
debugging information. In addition, the 'kexec -d' debugging messages
are also saved to /var/log/kdump.log in the first kernel. For example:

  [root@ibm-z-109 ~]# ls -al /var/log/kdump.log
  -rw-r--r--. 1 root root 63238 Oct 28 06:40 /var/log/kdump.log

If you want to get debugging information while building the kdump
initramfs, you can enable the '--debug' option for dracut_args in
/etc/kdump.conf and then rebuild the kdump initramfs as below:

  # systemctl restart kdump.service

That will rebuild the kdump initramfs and generate logs to journald;
you can find the dracut logs with the journalctl command.
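The dracut_args change can be sketched as follows. The kdump.conf
contents are illustrative, and the edit is made on a temporary copy
rather than the real /etc/kdump.conf:

```shell
tmpdir=$(mktemp -d)

# Sample /etc/kdump.conf (contents illustrative)
cat > "$tmpdir/kdump.conf" <<'EOF'
path /var/crash
core_collector makedumpfile -l --message-level 7 -d 31
EOF

# Pass --debug to dracut when the kdump initramfs is rebuilt
echo 'dracut_args --debug' >> "$tmpdir/kdump.conf"

grep dracut_args "$tmpdir/kdump.conf"
# On a real system, rebuild the initramfs afterwards:
#   systemctl restart kdump.service
```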
In the second kernel, kdump will automatically put kexec-dmesg.log in
the same directory as the vmcore; the log file includes debugging
messages such as dmesg and journald logs. For example:

  [root@ibm-z-109 ~]# ls -al /var/crash/127.0.0.1-2020-10-28-02\:01\:23/
  drwxr-xr-x. 2 root root       67 Oct 28 02:02 .
  drwxr-xr-x. 6 root root      154 Oct 28 02:01 ..
  -rw-r--r--. 1 root root    21164 Oct 28 02:01 kexec-dmesg.log
  -rw-------. 1 root root 74238698 Oct 28 02:01 vmcore
  -rw-r--r--. 1 root root    17532 Oct 28 02:01 vmcore-dmesg.txt

If you want to get more debugging information in the second kernel, you
can add the 'rd.debug' option to KDUMP_COMMANDLINE_APPEND in
/etc/sysconfig/kdump, and then reload so the changes take effect. You
can also add the 'rd.memdebug=X' option to KDUMP_COMMANDLINE_APPEND in
/etc/sysconfig/kdump to output additional information about kernel
module memory consumption during loading. For more details, please
refer to /etc/sysconfig/kdump, or the man pages of dracut.cmdline and
kdump.conf.

live-image-kdump-howto.txt
==========================

Kdump now works on live images with some manual configuration. Here is
the step by step guide.

1. Enable crashkernel reservation

   Since there isn't any config file that can be used to configure
   kernel parameters for live images before booting them, we have to
   append the 'crashkernel' argument in the boot menu every time we
   boot a live image.

2. Change the dump target in /etc/kdump.conf

   When the kdump kernel boots in a live environment, the default
   target /var/crash is in RAM, so you need to change the dump target
   to an external disk or a network dump target. Besides, make sure
   that "default dump_to_rootfs" is not specified.

3. Start the kdump service

   $ kdumpctl start

4. Trigger a kdump test

   $ echo 1 > /proc/sys/kernel/sysrq
   $ echo c > /proc/sysrq-trigger

supported-kdump-targets.txt
===========================

Supported Kdump Targets
=======================

This document tries to list all supported kdump targets, as well as
unsupported and unknown/tech-preview targets, to help users decide
whether a dump solution is available.

Dump Target support status
==========================

This section tries to come up with some kind of guidelines in terms of
what dump targets are supported/not supported. Whatever is listed here
is not binding in any manner. It is just a sharing of the current
understanding, and if something is not right, this section needs to be
edited.

There are three lists below. The first contains supported targets.
These are generic configurations which should work, and most have
likely worked in testing. The second list is known unsupported targets:
targets we know either don't work or we don't support. The third list
is unknown/tech-preview: we either don't yet know the status of kdump
on these targets, or they are under tech-preview.

Note, these lists are not set in stone and can be changed at any point
in time. They might also not be complete; we will add/remove items as
we get more testing information. Also, there are many corner cases
which can't possibly be listed. For example, in general we might
support software iSCSI, but there might be some configurations of it
which don't work. So if a target is listed in the supported section, it
does not mean it works in all possible configurations. It just means
that it should work in common configurations, but there can be issues
with particular configurations which are not supported. As we come to
know of particular issues, we will keep updating the lists accordingly.
Supported Dump targets
----------------------

storage:
    LVM volume
    Thin provisioning volume
    FC disks (qla2xxx, lpfc, bnx2fc, bfa)
    software initiator based iSCSI
    software RAID (mdraid)
    hardware RAID (cciss, hpsa, megaraid_sas, mpt2sas, aacraid)
    SCSI/SATA disks
    iSCSI HBA (all offload)
    hardware FCoE (qla2xxx, lpfc)
    software FCoE (bnx2fc) (Extra configuration required, please read
        the "Note on FCoE" section below)

network:
    Hardware using kernel modules: (tg3, igb, ixgbe, sfc, e1000e, bna,
        cnic, netxen_nic, qlge, bnx2x, bnx, qlcnic, be2net, enic,
        virtio-net, ixgbevf, igbvf)
    protocol: ipv4
    bonding
    vlan
    bridge
    team
    vlan tagged bonding
    bridge over bond/team/vlan

hypervisor:
    kvm
    xen (Supported in select configurations only)

filesystem:
    ext[234]
    xfs
    nfs

firmware:
    BIOS
    UEFI

hypervisor:
    VMWare ESXi 4.1 and 5.1
    Hyper-V 2012 R2 (RHEL Gen1 UP Guest only)

Unsupported Dump targets
------------------------

storage:
    BIOS RAID
    Software iSCSI with iBFT (bnx2i, cxgb3i, cxgb4i)
    Software iSCSI with hybrid (be2iscsi)
    FCoE
    legacy IDE
    glusterfs
    gfs2/clvm/halvm

network:
    hardware using kernel modules: (sfc SRIOV, cxgb4vf, pch_gbe)
    protocol: ipv6
    wireless
    Infiniband (IB)
    vlan over bridge/team

filesystem:
    btrfs

Unknown/tech-preview
--------------------

storage:
    PCI Express based SSDs

hypervisor:
    Hyper-V 2008
    Hyper-V 2012

Note on FCoE
============

If you are trying to dump to a software FCoE target, you may encounter
an OOM issue, because some software FCoE setups require more memory to
work. In such a case, you may need to increase the kdump reserved
memory size in the "crashkernel=" kernel parameter.

By default, RHEL systems have "crashkernel=auto" in the kernel boot
arguments. The auto-reserved memory size is designed to balance the
coverage of use cases against an acceptable memory overhead, so not
every use case fits; software FCoE is one such case.

For hardware FCoE, kdump should work naturally, as firmware does the
initialization job. The capture kernel and kdump tools will run just
fine.
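Increasing the "crashkernel=" reservation can be sketched as below. The
sample grub line and the 512M size are illustrative, and the edit is
made on a temporary copy rather than the real boot configuration:

```shell
tmpdir=$(mktemp -d)

# Sample kernel command line as found in /etc/default/grub (illustrative)
cat > "$tmpdir/grub" <<'EOF'
GRUB_CMDLINE_LINUX="rhgb quiet crashkernel=auto"
EOF

# Replace the automatic reservation with an explicit, larger one
sed -i 's/crashkernel=auto/crashkernel=512M/' "$tmpdir/grub"

grep crashkernel "$tmpdir/grub"
```

On a real RHEL system one would typically apply this with
grubby --update-kernel=ALL --args="crashkernel=512M" and then reboot
for the new reservation to take effect.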
Useful Links
============

[1] RHEL6: Enabling kdump for full-virt (HVM) Xen DomU
    (https://access.redhat.com/knowledge/solutions/92943)