InfoScale 7.4.1U6 component patch for RHEL8 (2024)

Update ID: UPD466958

Version: 7.4.1.3500

Platform: Linux

Release date: 2024-08-30

Abstract

InfoScale 7.4.1U6 component patch on RHEL8 platform

Description

 * * * READ ME * * *
 * * * InfoScale 7.4.1 * * *
 * * * Patch 3500 * * *
 Patch Date: 2024-08-29

This document provides the following information:

 * PATCH NAME
 * OPERATING SYSTEMS SUPPORTED BY THE PATCH
 * PACKAGES AFFECTED BY THE PATCH
 * BASE PRODUCT VERSIONS FOR THE PATCH
 * SUMMARY OF INCIDENTS FIXED BY THE PATCH
 * DETAILS OF INCIDENTS FIXED BY THE PATCH
 * INSTALLATION PRE-REQUISITES
 * INSTALLING THE PATCH
 * REMOVING THE PATCH

PATCH NAME
----------
InfoScale 7.4.1 Patch 3500

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64

PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSaslapm
VRTScavf
VRTSsfmh
VRTSvxvm

BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
 * InfoScale Availability 7.4.1
 * InfoScale Enterprise 7.4.1
 * InfoScale Foundation 7.4.1
 * InfoScale Storage 7.4.1

SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-7.4.1.3700
* 4017284 (4011691) High CPU consumption on the VVR secondary nodes because of high pending IO load.
* 4117916 (4055159) vxdisk list showing incorrect value of LUN_SIZE for nvme disks.
* 4174872 (4067191) IS8.0_SUSE15_CVR: After rebooting the slave node, the master node got a panic.
* 4118319 (4028439) Updating mediatype tags through disk online event.
* 4118605 (4085145) System with NVME devices can crash due to memory corruption.
* 4174010 (4154121) Add a new tunable use_hw_replicatedev to enable Volume Manager to import the hardware replicated disk group.
* 4174051 (4068090) System panic occurs because of block device inode ref count leaks.
* 4174064 (4142772) Error mask NM_ERR_DCM_ACTIVE on rlink may not be cleared, resulting in the rlink being unable to get into DCM again.
* 4174066 (4100775) vxconfigd was hung as VxDMP doesn't support chained BIO on rhel7.
* 4174074 (4093067) System panic occurs because of NULL pointer in block device structure.
* 4174080 (3995308) vxtask status hang due to incorrect values getting copied into task status information.
* 4174081 (4089626) Create XFS on VxDMP devices hangs as VxDMP doesn't support chained BIO.
* 4174082 (4117568) vradmind dumps core due to invalid memory access.
* 4174083 (4098965) Crash at memset function due to invalid memory access.
* 4174513 (4014894) Disk attach is done one by one for each disk, creating transactions for each disk.
* 4174540 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.
* 4174543 (4108913) Vradmind dumps core because of memory corruption.
* 4174547 (4105565) In CVR environment, system panic due to NULL pointer when VVR was doing recovery.
* 4174555 (4128351) System hang observed when switching log owner.
* 4174722 (4090772) vxconfigd/vx commands hung if fdisk opened the secondary volume and the secondary logowner panic'd.
* 4174724 (4122061) Hang observed after resync operation; vxconfigd was waiting for slaves' response.
* 4174729 (4114601) Panic: in dmp_process_errbp() for disk pull scenario.
* 4174759 (4133793) vxsnap restore failed with DCO IO errors during the operation when run in loop for multiple VxVM volumes.
* 4174778 (4072862) Stop cluster hang because RVGLogowner and CVMClus resources fail to offline.
* 4174781 (4044529) DMP is unable to display PWWN details for some LUNs by "vxdmpadm getportids".
* 4174783 (4065490) VxVM udev rules consume more CPU and appear in "top" output when the system has thousands of storage devices attached.
* 4174784 (4100716) No output from 'vxdisk list' if excluding any disks after applying patch VRTSvxvm-7.4.1.3409.
* 4174858 (4091076) SRL gets into pass-thru mode because of head error.
* 4174860 (4090943) VVR Primary RLink cannot connect as secondary reports SRL log is full.
* 4174868 (4040808) df command hung in clustered environment.
* 4174876 (4081740) vxdg flush command slow due to too many LUNs needlessly accessing /proc/partitions.
* 4174884 (4069134) "vxassist maxsize alloc:array:<enclosure_name>" command may fail.
* 4174885 (4009151) Auto-import of diskgroup on system reboot fails with error 'Disk for diskgroup not found'.
* 4174992 (4118809) System panic at dmp_process_errbp.
* 4174993 (3868140) VVR primary site node might panic if the rlink disconnects while some data is getting replicated to secondary.
* 4175007 (4141806) Write_pattern got hung on primary master.
* 4175098 (4159403) Add clearclone option automatically when importing the hardware replicated disk group.
* 4175103 (4160883) clone_flag was set on srdf-r1 disks after reboot.
* 4175104 (4167359) EMC DeviceGroup missing SRDF SYMDEV leads to DG corruption.
* 4175105 (4168665) use_hw_replicatedev logic unable to import CVMVolDg resource unless vxdg -c is specified after EMC SRDF devices are closed and rescanned on CVM Master.
* 4175292 (4024140) In VVR environments, in case of disabled volumes, the DCM read operation does not complete, resulting in application IO hang.
* 4175293 (4132799) No detailed error messages while joining CVM fails.
* 4175294 (4115078) vxconfigd hang was observed when rebooting all nodes of the primary site.
* 4175295 (4087628) CVM goes into faulted state when the slave node of primary is rebooted.
* 4175297 (4152014) The excluded dmpnodes are visible after system reboot when SELinux is disabled.
* 4175298 (4162349) vxstat not showing data under MIN and MAX headers when using -S option.
* 4175348 (4011582) Display minimum and maximum read/write time it takes for the I/O under the VxVM layer using the vxstat utility.
* 4176466 (4111978) Replication failed to start due to vxnetd threads not running on secondary site.
* 4176467 (4136974) IPv6: With multiple RVGs, restarting vxnetd results in rlink disconnect.
* 4177461 (4017036) After enabling DMP (Dynamic Multipathing) Native support, enable /boot to be mounted on DMP device when Linux is booting with systemd.

Patch ID: VRTSaslapm-7.4.1.3700
* 4115221 (4011780) Add support for DELL EMC PowerStore plus PP.

Patch ID: VRTSvxvm-7.4.1.3500
* 4055697 (4066785) Create new option usereplicatedev=only to import the replicated LUN only.
* 4109079 (4101128) VxVM rpm support on RHEL 8.7 kernel.

Patch ID: VRTSaslapm-7.4.1.3500
* 4065503 (4065495) Add support for DELL EMC PowerStore.
* 4068406 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
* 4076640 (4076320) AVID, reclaim_cmd_nv, extattr_nv, old_udid_nv are not generated for HPE 3PAR/Primera/Alletra 9000 ALUA array.
* 4095952 (4093396) Fail to recognize more than one EMC PowerStore array.
* 4110663 (4107932) ASLAPM rpm support on RHEL 8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64.

Patch ID: VRTSvxvm-7.4.1.3400
* 4062578 (4062576) hastop -local never finishes on RHEL8.4 and RHEL8.5 servers with latest minor kernels due to hang in vxdg deport command.

Patch ID: VRTSvxvm-7.4.1.3300
* 3984175 (3917636) Filesystems from /etc/fstab file are not mounted automatically on boot through systemd on RHEL7 and SLES12.
* 4011097 (4010794) When storage activity was going on, Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster.
* 4039527 (4018086) The system hangs when the RVG in DCM resync with SmartMove is set to ON.
* 4045494 (4021939) The "vradmin syncvol" command fails due to recent changes related to binding sockets without specifying IP addresses.
* 4051815 (4031597) vradmind generates a core dump in __strncpy_sse2_unaligned.
* 4051887 (3956607) A core dump occurs when you run the vxdisk reclaim command.
* 4051889 (4019182) In case of a VxDMP configuration, an InfoScale server panics when applying a patch.
* 4051896 (4010458) In a Veritas Volume Replicator (VVR) environment, the rlink might inconsistently disconnect due to unexpected transactions.
* 4055653 (4049082) I/O read error is displayed when a remote FSS node is rebooting.
* 4055660 (4046007) The private disk region gets corrupted if the cluster name is changed in FSS environment.
* 4055668 (4045871) vxconfigd crashed at ddl_get_disk_given_path.
* 4055697 (4047793) Unable to import diskgroup even when replicated disks are in SPLIT mode.
* 4055772 (4043337) Logging fixes for VVR.
* 4055895 (4038865) The system panics due to deadlock between inode_hash_lock and DMP shared lock.
* 4055899 (3993242) vxsnap prepare command when run on vset sometimes fails.
* 4055905 (4052191) Unexpected scripts or commands are run due to an incorrect comments format in the vxvm-configure script.
* 4055925 (4031064) Master switch operation is hung in VVR secondary environment.
* 4055938 (3999073) The file system corrupts when the cfsmount group goes into offline state.
* 4056107 (4036181) Volumes that are under a RVG (Replicated Volume Group) report an IO error.
* 4056124 (4008664) System panic when signaling the vxlogger daemon that has ended.
* 4056144 (3906534) After Dynamic Multi-Pathing (DMP) Native support is enabled, /boot should be mounted on the DMP device.
* 4056146 (3983832) VxVM commands hang in CVR environment.
* 4056832 (4057526) Adding check for init while accessing /var/lock/subsys/ path in vxnm-vxnetd.sh script.

Patch ID: VRTSvxvm-7.4.1.3200
* 3984156 (3852146) A shared disk group (DG) fails to be imported when "-c" and "-o noreonline" are specified together.
* 3984175 (3917636) Filesystems from /etc/fstab file are not mounted automatically on boot through systemd on RHEL7 and SLES12.
* 4041285 (4044583) A system goes into the maintenance mode when DMP is enabled to manage native devices.
* 4042039 (4040897) Add support for HPE MSA 2060 arrays in the current ASL.
* 4050892 (3991668) In a Veritas Volume Replicator (VVR) configuration where secondary logging is enabled, data inconsistency is reported after the "No IBC message arrived" error is encountered.
* 4051457 (3958062) After a boot LUN is migrated, enabling and disabling dmp_native_support fails.
* 4051815 (4031597) vradmind generates a core dump in __strncpy_sse2_unaligned.
* 4051887 (3956607) A core dump occurs when you run the vxdisk reclaim command.
* 4051889 (4019182) In case of a VxDMP configuration, an InfoScale server panics when applying a patch.
* 4051896 (4010458) In a Veritas Volume Replicator (VVR) environment, the rlink might inconsistently disconnect due to unexpected transactions.
* 4051968 (4023390) Vxconfigd keeps dumping core due to an invalid private region offset on a disk.
* 4051985 (4031587) Filesystems are not mounted automatically on boot through systemd.
* 4053231 (4053230) VxVM support for RHEL 8.5.

Patch ID: VRTSaslapm-7.4.1.3200
* 4053234 (4053233) ASL-APM support for RHEL 8.5.

Patch ID: VRTSvxvm-7.4.1.3100
* 4017284 (4011691) High CPU consumption on the VVR secondary nodes because of high pending IO load.
* 4039240 (4027261) World writable permission not required for /var/VRTSvxvm/in.vxrsyncd.stderr and /var/adm/vx/vxdmpd.log.
* 4039242 (4008075) Observed with ASL changes for NVMe; this issue was observed in a reboot scenario. For every reboot the machine was hitting a panic, and this was happening in a loop.
* 4039244 (4010612) This issue is observed for NVMe and ssd, where every disk has a separate enclosure like nvme0, nvme1, and so on; every nvme/ssd disk name would be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on.
* 4039249 (3984240) AIX builds were failing on AIX7.2.
* 4039525 (4012763) IO hang may happen in VVR (Veritas Volume Replicator) configuration when SRL overflows for one rlink while another rlink is in AUTOSYNC mode.
* 4039526 (4034616) vol_seclog_limit_ioload tunable needs to be enabled on Linux only.
* 4040842 (4009353) Post enabling dmp native support, the machine is going into maintenance mode.
* 4044174 (4044072) I/Os fail for NVMe disks with 4K block size on the RHEL 8.4 kernel.
* 4045494 (4021939) The "vradmin syncvol" command fails due to recent changes related to binding sockets without specifying IP addresses.
* 4045502 (4045501) The VRTSvxvm and the VRTSaslapm packages fail to install on Centos 8.4 systems.

Patch ID: VRTSaslapm-7.4.1.3100
* 4039241 (4010667) NVMe devices are not detected by Veritas Volume Manager (VxVM) on RHEL 8.
* 4039527 (4018086) System hang was observed when RVG was in DCM resync with SmartMove as ON.

Patch ID: VRTSvxvm-7.4.1.2900
* 4013643 (4010207) System panicked due to hard-lockup due to a spinlock not released properly during the vxstat collection.
* 4023762 (4020046) DRL log plex gets detached unexpectedly.
* 4031342 (4031452) vxesd core dump in esd_write_fc().
* 4033162 (3968279) Vxconfigd dumping core for NVME disk setup.
* 4033163 (3959716) System may panic with sync replication with VVR configuration, when the RVG is in DCM mode.
* 4033172 (3994368) vxconfigd daemon abort causes I/O write error.
* 4033173 (4021301) Data corruption issue observed in VxVM on RHEL8.
* 4033216 (3993050) vxdctl dumpmsg command gets stuck on large node cluster.
* 4033515 (3984266) DCM flag on the RVG volume may get deactivated after a master switch, which may cause excessive RVG recovery after subsequent node reboots.
* 4035313 (4037915) VxVM 7.4.1 support for RHEL 8.4 compilation errors.
* 4036426 (4036423) Race condition while reading config file in docker volume plugin caused the issue in Flex Appliance.
* 4037331 (4037914) BUG: unable to handle kernel NULL pointer dereference.
* 4037810 (3977101) Hitting core in write_sol_part().

Patch ID: VRTSaslapm-7.4.1.2900
* 4017906 (4017905) Modifying current ASL to support VSPEx series array.
* 4022943 (4017656) Add support for XP8 arrays in the current ASL.
* 4037946 (4037958) ASLAPM support for RHEL 8.4 on IS 7.4.1.

Patch ID: VRTSvxvm-7.4.1.2800
* 3984155 (3976678) vxvm-recover: cat: write error: Broken pipe error encountered in syslog.
* 4016283 (3973202) A VVR primary node may panic due to accessing already freed memory.
* 4016291 (4002066) Panic and hang seen in reclaim.
* 4016768 (3989161) The system panic occurs when dealing with getting log requests from vxloggerd.
* 4017194 (4012681) If the vradmind process terminates due to some reason, it is not properly restarted by the RVG agent of VCS.
* 4017502 (4020166) VxVM support on RHEL8 Update3.
* 4019781 (4020260) Failed to activate/set tunable dmp native support for Centos 8.

Patch ID: VRTSvxvm-7.4.1.2700
* 3984163 (3978216) 'Device mismatch warning' seen on boot when DMP native support is enabled with LVM snapshot of root disk present.
* 4010517 (3998475) Unmapped PHYS read I/O split across stripes gives incorrect data leading to data corruption.
* 4010996 (4010040) Configuring VRTSvxvm package creates a world writable file: /etc/vx/.vxvvrstatd.lock.
* 4011027 (4009107) CA chain certificate verification fails in SSL context.
* 4011097 (4010794) Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster while there were storage activities going on.
* 4011105 (3972433) IO hang might be seen while issuing heavy IO load on volumes having cache objects.

Patch ID: VRTSvxvm-7.4.1.2200
* 3992902 (3975667) Softlock in vol_ioship_sender kernel thread.
* 3997906 (3987937) VxVM command hang may happen when snapshot volume is configured.
* 4000388 (4000387) VxVM support on RHEL 8.2.
* 4001399 (3995946) CVM Slave unable to join cluster - VxVM vxconfigd ERROR V-5-1-11092 cleanup_client: (Memory allocation failure) 12.
* 4001736 (4000130) System panic when DMP co-exists with EMC PP on rhel8/sles12sp4.
* 4001745 (3992053) Data corruption may happen with layered volumes due to some data not re-synced while attaching a plex.
* 4001746 (3999520) vxconfigd may hang waiting for dmp_reconfig_write_lock when the DMP iostat tunable is disabled.
* 4001748 (3991580) Deadlock may happen if IO is performed on both source and snapshot volumes.
* 4001750 (3976392) Memory corruption might happen in VxVM (Veritas Volume Manager) while processing a plex detach request.
* 4001752 (3969487) Data corruption observed with layered volumes when a mirror of the volume is detached and attached back.
* 4001755 (3980684) Kernel panic in voldrl_hfind_an_instant while accessing agenode.
* 4001757 (3969387) VxVM (Veritas Volume Manager) caused system panic when handling a received request response in FSS environment.

Patch ID: VRTSvxvm-7.4.1.1600
* 3984139 (3965962) No option to disable auto-recovery when a slave node joins the CVM cluster.
* 3984731 (3984730) VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted.
* 3988238 (3988578) Encrypted volume creation fails on RHEL 8.
* 3988843 (3989796) RHEL 8.1 support for VxVM.

Patch ID: VRTSsfmh-vom-HF0741600
* 4176930 (4176927) sfmh for IS 7.4.1.

Patch ID: VRTScavf-7.4.1.3700
* 4056567 (4054462) In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.
* 4172870 (4074274) DR test and failover activity might not succeed for hardware-replicated disk groups, and EMC SRDF hardware-replicated disk groups are failing with a "PR operation failed" message.
* 4172873 (4079285) CVMVolDg resource takes many minutes to online with CPS fencing.
* 4172875 (4088479) The EMC SRDF managed diskgroup import failed with the below error. This failure is specific to EMC storage only on AIX with Fencing.

Patch ID: VRTScavf-7.4.1.3400
* 4056567 (4054462) In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.

DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-7.4.1.3700

* 4017284 (Tracking ID: 4011691)
SYMPTOM: Observed high CPU consumption on the VVR secondary nodes because of high pending IO load.
DESCRIPTION: High replication related IO load on the VVR secondary and the requirement of maintaining write order fidelity with limited memory pools created contention.
This resulted in multiple VxVM kernel threads contending for shared resources, thereby increasing the CPU consumption.
RESOLUTION: Limited the way in which VVR consumes its resources so that a high pending IO load does not result in high CPU consumption.

* 4117916 (Tracking ID: 4055159)
SYMPTOM: vxdisk list showing incorrect value of LUN_SIZE for nvme disks.
DESCRIPTION: vxdisk list shows an incorrect value of LUN_SIZE for nvme disks.
RESOLUTION: Code changes have been done to show the correct LUN_SIZE for nvme devices.

* 4174872 (Tracking ID: 4067191)
SYMPTOM: In CVR environment, after rebooting the Slave node, the Master node may panic in volrv_free_mu.
DESCRIPTION: As part of a CVM Master switch, an rvg_recovery is triggered. In this step a race condition can occur between the VVR objects, due to which the object value is not updated properly and can cause a panic.
RESOLUTION: Code changes are done to handle the race condition between VVR objects.

* 4118319 (Tracking ID: 4028439)
SYMPTOM: Not able to create cached volume due to SSD tag missing.
DESCRIPTION: The disk mediatype flag was not propagated previously; it is now updated during disk online.
RESOLUTION: Code changes have been done to make mediatype tags visible during disk online.

* 4118605 (Tracking ID: 4085145)
SYMPTOM: System with NVME devices can crash due to memory corruption.
DESCRIPTION: As part of changes done to detect NVME devices through IOCTL, an extra buflen was sent to the nvme ioctl through the VRTSaslapm component. This led to memory corruption and in some cases can cause the system to crash.
RESOLUTION: Appropriate code changes have been done in the VRTSaslapm to resolve the memory corruption.

* 4174010 (Tracking ID: 4154121)
SYMPTOM: When the replicated disks are in SPLIT mode, importing their disk group on the target node failed with "Device is a hardware mirror".
DESCRIPTION: When the replicated disks are in SPLIT mode, which are readable and writable, importing their disk group on the target node failed with "Device is a hardware mirror". The third party doesn't expose a disk attribute to show when it's in SPLIT mode. With this new enhancement, the replicated disk group can be imported when use_hw_replicatedev is enabled.
RESOLUTION: The code is enhanced to import the replicated disk group on the target node when use_hw_replicatedev is enabled.

* 4174051 (Tracking ID: 4068090)
SYMPTOM: System panicked in the following stack:
#7 page_fault at ffffffffbce010fe [exception RIP: vx_bio_associate_blkg+56]
#8 vx_dio_physio at ffffffffc0f913a3 [vxfs]
#9 vx_dio_rdwri at ffffffffc0e21a0a [vxfs]
#10 vx_dio_read at ffffffffc0f6acf6 [vxfs]
#11 vx_read_common_noinline at ffffffffc0f6c07e [vxfs]
#12 vx_read1 at ffffffffc0f6c96b [vxfs]
#13 vx_vop_read at ffffffffc0f4cce2 [vxfs]
#14 vx_naio_do_work at ffffffffc0f240bb [vxfs]
#15 vx_naio_worker at ffffffffc0f249c3 [vxfs]
DESCRIPTION: To get a VxVM volume's block_device from gendisk, VxVM calls bdget_disk(), which increases the device inode ref count. The ref count should be decreased by a bdput() call, which is missed in our code; hence inode count leaks occur, which may cause a panic in vxfs when issuing IO on a VxVM volume.
RESOLUTION: The code changes have been done to fix the problem.

* 4174064 (Tracking ID: 4142772)
SYMPTOM: In case SRL overflow frequently happens, SRL reaches 99% filled but the rlink is unable to get into DCM mode.
DESCRIPTION: When starting DCM mode, we need to check if the error mask NM_ERR_DCM_ACTIVE has been set to prevent duplicated triggers. This flag should have been reset after DCM mode was activated by reconnecting the rlink.
As there's a race condition, the rlink reconnect may be completed before DCM is activated, hence the flag isn't able to be cleared.
RESOLUTION: The code changes have been made to fix the issue.

* 4174066 (Tracking ID: 4100775)
SYMPTOM: vxconfigd kept waiting for IO drain when removing dmpnodes. It was hung with the below stack:
[] dmpsync_wait+0xa7/0xf0 [vxdmp]
[] dmp_destroy_mp_node+0x98/0x120 [vxdmp]
[] dmp_decode_destroy_dmpnode+0xd3/0x100 [vxdmp]
[] dmp_decipher_instructions+0x2d7/0x390 [vxdmp]
[] dmp_process_instruction_buffer+0x1be/0x1e0 [vxdmp]
[] dmp_reconfigure_db+0x5b/0xe0 [vxdmp]
[] gendmpioctl+0x76c/0x950 [vxdmp]
[] dmpioctl+0x39/0x80 [vxdmp]
[] dmp_ioctl+0x3a/0x70 [vxdmp]
[] blkdev_ioctl+0x28a/0xa20
[] block_ioctl+0x41/0x50
[] do_vfs_ioctl+0x3a0/0x5b0
[] SyS_ioctl+0xa1/0xc0
DESCRIPTION: XFS utilizes the chained BIO feature to send BIOs to VxDMP. As chained BIO isn't supported by VxDMP, VxDMP kept waiting for a completed BIO.
RESOLUTION: Code changes have been made to support chained BIO on rhel7.

* 4174074 (Tracking ID: 4093067)
SYMPTOM: System panicked in the following stack:
#9 [] page_fault at [exception RIP: bdevname+26]
#10 [] get_dip_from_device [vxdmp]
#11 [] dmp_node_to_dip at [vxdmp]
#12 [] dmp_check_nonscsi at [vxdmp]
#13 [] dmp_probe_required at [vxdmp]
#14 [] dmp_check_disabled_policy at [vxdmp]
#15 [] dmp_initiate_restore at [vxdmp]
#16 [] dmp_daemons_loop at [vxdmp]
DESCRIPTION: After getting the block_device from the OS, DMP didn't do a NULL pointer check against block_device->bd_part. This NULL pointer further caused a system panic when bdevname() was called.
RESOLUTION: The code changes have been done to fix the problem.

* 4174080 (Tracking ID: 3995308)
SYMPTOM: vxtask status hang due to incorrect values getting copied into task status information.
DESCRIPTION: When doing an atomic-copy admin task, VxVM copies the entire request structure passed as the response of the task status into a local copy. This creates some issues of incorrect copy/overwrite of a pointer.
RESOLUTION: Code changes have been made to fix the problem.

* 4174081 (Tracking ID: 4089626)
SYMPTOM: On RHEL8.5, IO hang occurs when creating XFS on VxDMP devices or writing a file on XFS mounted from VxDMP devices.
DESCRIPTION: XFS utilizes the chained BIO feature to send BIOs to VxDMP. As chained BIO isn't supported by VxDMP, the BIOs may get stuck in the SCSI disk driver.
RESOLUTION: Code changes have been made to support chained BIO.

* 4174082 (Tracking ID: 4117568)
SYMPTOM: Vradmind dumps core with the following stack:
#1 std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string (this=0x7ffdc380d810, __str=<error reading variable: Cannot access memory at address 0x3736656436303563>)
#2 0x000000000040e02b in ClientMgr::closeStatsSession
#3 0x000000000040d0d7 in ClientMgr::client_ipm_close
#4 0x000000000058328e in IpmHandle::~IpmHandle
#5 0x000000000057c509 in IpmHandle::events
#6 0x0000000000409f5d in main
DESCRIPTION: After terminating vrstat, the StatSession in vradmind was closed and the corresponding Client object was deleted.
When closing the IPM object of vrstat, vradmind tried to access the removed Client object, hence the core dump.
RESOLUTION: Code changes have been made to fix the issue.

* 4174083 (Tracking ID: 4098965)
SYMPTOM: Vxconfigd dumping core when scanning IBM XIV LUNs, with the following stack:
#0 0x00007fe93c8aba54 in __memset_sse2 () from /lib64/libc.so.6
#1 0x000000000061d4d2 in dmp_getenclr_ioctl ()
#2 0x00000000005c54c7 in dmp_getarraylist ()
#3 0x00000000005ba4f2 in update_attr_list ()
#4 0x00000000005bc35c in da_identify ()
#5 0x000000000053a8c9 in find_devices_in_system ()
#6 0x000000000053aab5 in mode_set ()
#7 0x0000000000476fb2 in ?? ()
#8 0x00000000004788d0 in main ()
DESCRIPTION: This could cause 2 issues if there is more than 1 disk array connected:
1. If the incorrect memory address exceeds the range of valid virtual memory, it will trigger a "Segmentation fault" and crash vxconfigd.
2. If the incorrect memory address does not exceed the range of valid virtual memory, it will cause a memory corruption issue but may not trigger a vxconfigd crash.
RESOLUTION: Code changes have been made to correct the problem.

* 4174513 (Tracking ID: 4014894)
SYMPTOM: Disk attach taking a long time with reboot/hastop in FSS environment.
DESCRIPTION: The current code in vxattachd calls the 'vxdg -k add disk' command for each disk separately, and this is serialised. This means a number of transactions are initiated to add disks, which can impact application IO multiple times due to IO quiesce/drain activity.
RESOLUTION: Code changes to add all disks in a single command, thus generating fewer transactions and reducing execution time.

* 4174540 (Tracking ID: 4111254)
SYMPTOM: vradmind dumps core with the following stack:
#3 0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4 0x000000000045922c in RDS::getHandle ()
#5 0x000000000056ec04 in StatsSession::addHost ()
#6 0x000000000045d9ef in RDS::addRVG ()
#7 0x000000000046ef3d in RDS::createDummyRVG ()
#8 0x000000000044aed7 in PriRunningState::update ()
#9 0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()
DESCRIPTION: vradmind was trying to access a NULL pointer (Remote Host Name) in a rlink object, as the Remote Host attribute of the rlink hadn't been set.
RESOLUTION: The issue has been fixed by making code changes.

* 4174543 (Tracking ID: 4108913)
SYMPTOM: Vradmind dumps core with the following stacks:
#3 0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4 0x00000000005d7a90 in VList::concat () at VList.C:1017
#5 0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6 0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7 0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8 0x00000000004093e9 in process_message () at srvmd.C:418
#9 0x000000000040a66d in main () at srvmd.C:733

#0 0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1 0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2 0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3 0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4 0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5 0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6 0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7 0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8 0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9 0x000000000040a71a in main () at srvmd.C:743

#0 0x000000000040b826 in DList::head () at ../include/DList.h:82
#1 0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2 0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3 0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4 0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5 0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6 0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7 0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6
DESCRIPTION: There is a race condition in vradmind that may cause memory corruption and unpredictable results. Vradmind periodically forks a child thread to collect VVR statistic data and send it to the remote site. Meanwhile, the main thread may also be sending data using the same handler object; thus member variables in the handler object are accessed in parallel from multiple threads and may become corrupted.
RESOLUTION: The code changes have been made to fix the issue.

* 4174547 (Tracking ID: 4105565)
SYMPTOM: In Cluster Volume Replication (CVR) environment, system panic with the below stack when Veritas Volume Replicator (VVR) was doing recovery:
[] do_page_fault
[] page_fault
[exception RIP: volvvr_rvgrecover_nextstage+747]
[] volvvr_rvgrecover_done [vxio]
[] voliod_iohandle [vxio]
[] voliod_loop at [vxio]
DESCRIPTION: There might be a race condition which caused VVR to fail to trigger a DCM flush sio. VVR failed to do a sanity check against this sio, hence the system panic.
RESOLUTION: Code changes have been made to do a sanity check of the DCM flush sio.

* 4174555 (Tracking ID: 4128351)
SYMPTOM: System hang observed when switching log owner.
DESCRIPTION: VVR mdship SIOs might be throttled due to reaching the max allocation count, etc. These SIOs hold the IO count. When a log owner change kicked in and quiesced the RVG, the VVR log owner change SIO waited for the IO count to drop to zero to proceed further. VVR mdship requests from the log client were returned with EAGAIN as the RVG was quiesced. The throttled mdship SIOs need to be driven by upcoming mdship requests; hence the deadlock, which caused the system hang.
RESOLUTION: Code changes have been made to flush the mdship queue before the VVR log owner change SIO waits for IO drain.

* 4174722 (Tracking ID: 4090772)
SYMPTOM: vxconfigd/vx commands hang on the secondary site in a CVR environment.
DESCRIPTION: Due to a window with unmatched SRL positions, any application (e.g. fdisk) trying to open the secondary RVG volume will acquire a lock and wait for SRL positions to match. During this, any VxVM transaction that kicks in will also have to wait for the same lock. Further, the logowner node panic'd, which triggered the logownership change protocol, which hung as the earlier transaction was stuck. As the logowner change protocol could not complete, in the absence of a valid logowner the SRL positions could not match, which caused a deadlock. That led to the vxconfigd and vx command hang.
RESOLUTION: Added changes to allow read operations on the volume even if SRL positions are unmatched. We are still blocking write IOs and just allowing the open() call for read-only operations, and hence there will not be any data consistency or integrity issues.

* 4174724 (Tracking ID: 4122061)
SYMPTOM: Hang observed after resync operation; vxconfigd was waiting for slaves' response.
DESCRIPTION: The VVR logowner was in a transaction and returned VOLKMSG_EAGAIN to CVM_MSG_GET_METADATA, which is expected. Once the client received VOLKMSG_EAGAIN, it would sleep 10 jiffies and retry the kmsg.
In a busy cluster, it might happen that the retried kmsgs plus the new kmsgs build up and hit the kmsg flowcontrol before the VVR logowner transaction completes. Once the client refused any kmsgs due to the flowcontrol, the transaction on the VVR logowner might get stuck because it required a kmsg response from all the slave nodes.
RESOLUTION: Code changes have been made to increase the kmsg flowcontrol and to not let the kmsg receiver fall asleep but handle the kmsg in a restart function.

* 4174729 (Tracking ID: 4114601)
SYMPTOM: System gets panicked and rebooted.
DESCRIPTION: RCA: Start IO on a volume device and pull out its disk from the machine; the below panic is hit on RHEL8.
dmp_process_errbp
dmp_process_errbuf.cold.2+0x328/0x429 [vxdmp]
dmpioctl+0x35/0x60 [vxdmp]
dmp_flush_errbuf+0x97/0xc0 [vxio]
voldmp_errbuf_sio_start+0x4a/0xc0 [vxio]
voliod_iohandle+0x43/0x390 [vxio]
voliod_loop+0xc2/0x330 [vxio]
? voliod_iohandle+0x390/0x390 [vxio]
kthread+0x10a/0x120
? set_kthread_struct+0x50/0x50
As the disk is pulled out from the machine, VxIO hits an IO error and routes that IO to the DMP layer via a kernel-kernel IOCTL for error analysis. The following is the code path for the IO routing:
voldmp_errbuf_sio_start()-->dmp_flush_errbuf()--->dmpioctl()--->dmp_process_errbuf()
dmp_process_errbuf() retrieves the device number of the underlying path (os-device) and tries to get the bdev (i.e. block_device) pointer from the path-device number. As the path/os-device is removed by the disk pull, Linux returns a fake bdev for the path-device number. For this fake bdev there is no gendisk associated with it (bdev->bd_disk is NULL). We are setting this NULL bdev->bd_disk on the IO buffer routed from vxio, which leads to a panic in dmp_process_errbp.
RESOLUTION: If bdev->bd_disk is found NULL, then set the DMP_CONN_FAILURE error on the IO buffer and return DKE_ENXIO to the vxio driver.

* 4174759 (Tracking ID: 4133793)
SYMPTOM: DCO experiences IO errors while doing a vxsnap restore on VxVM volumes.
DESCRIPTION: The dirty flag was getting set in the context of an SIO with the flag VOLSIO_AUXFLAG_NO_FWKLOG set. This led to transaction errors while running the vxsnap restore command in a loop for VxVM volumes, causing a transaction abort. As a result, VxVM tries to clean up by removing the newly added BMs. Then VxVM tries to access the deleted BMs; however, it is not able to, since they were deleted previously. This ultimately leads to the DCO IO error.
RESOLUTION: Skip first-write klogging in the context of an IO with the flag VOLSIO_AUXFLAG_NO_FWKLOG set.

* 4174778 (Tracking ID: 4072862)
SYMPTOM: In case RVGLogowner resources get onlined on slave nodes, stopping the whole cluster may fail and RVGLogowner resources go into offline_propagate state.
DESCRIPTION: While stopping the whole cluster, a race may happen between CVM reconfiguration and the RVGLogowner change SIO.
RESOLUTION: Code changes have been made to fix these races.

* 4174781 (Tracking ID: 4044529)
SYMPTOM: DMP is unable to display PWWN details for some LUNs by "vxdmpadm getportids".
DESCRIPTION: The udev rules file (/usr/lib/udev/rules.d/63-fc-wwpn-id.rules) from newer RHEL OS will generate an additional hardware path for a FC device, hence there will be 2 hardware paths for the same device. However, the vxpath_links script only considers a single hardware path for a FC device.
In the case of 2 hardware paths, vxpath_links may not treat it as a FC device, thus failing to populate PWWN related information.
RESOLUTION: Code changes have been done to make vxpath_links correctly detect a FC device even when there are multiple hardware paths.

* 4174783 (Tracking ID: 4065490)
SYMPTOM: systemd-udev threads consume more CPU during system bootup or device discovery.
DESCRIPTION: During disk discovery, when new storage devices are discovered, VxVM udev rules are invoked for creating the hardware path symbolic link and setting the SELinux security context on Veritas device files. For creating the hardware path symbolic link to each storage device, the "find" command is used internally, which is a CPU intensive operation. If too many storage devices are attached to the system, then usage of the "find" command causes high CPU consumption. Also, for setting the appropriate SELinux security context on VxVM device files, restorecon is done irrespective of whether SELinux is enabled or disabled.
RESOLUTION: Usage of the "find" command is replaced with the "udevadm" command. The SELinux security context on VxVM device files is set only when SELinux is enabled on the system.

* 4174784 (Tracking ID: 4100716)
SYMPTOM: After applying patch VRTSvxvm-7.4.1.3409, no output from 'vxdisk list' if excluding any disks.
DESCRIPTION: Before installing patch VRTSvxvm-7.4.1.3409, excluding disks works fine. After applying patch VRTSvxvm-7.4.1.3409, there is no output from 'vxdisk list' if excluding any disks.
RESOLUTION: As part of the rpm upgrade, the symlinks under "/dev/vx/.dmp" get regenerated using the "/dev/vx/.dmp/vxpath_links ALL" option. Due to the fix introduced in 7.4.1.3405, "/dev/vx/.dmp/vxpath_links ALL" was not generating the symlinks under "/dev/vx/.dmp" correctly.

* 4174858 (Tracking ID: 4091076)
SYMPTOM: SRL gets into pass-thru mode when it's about to overflow.
DESCRIPTION: Primary initiated a log search for the requested update sent from the secondary. The search aborted with a head error as a check condition wasn't set correctly.
RESOLUTION: Fixed the check condition to resolve the issue.

* 4174860 (Tracking ID: 4090943)
SYMPTOM: On Primary, the RLink is continuously getting connected/disconnected with the below message seen in the secondary syslog: VxVM VVR vxio V-5-3-0 Disconnecting replica <rlink_name> since log is full on secondary.
DESCRIPTION: When the RVG logowner node panics, RVG recovery happens in 3 phases. At the end of the 2nd phase of recovery, the in-memory and on-disk SRL positions remain incorrect, and during this time if there is a logowner change then the Rlink won't get connected.
RESOLUTION: Handled in-memory and on-disk SRL positions correctly.

* 4174868 (Tracking ID: 4040808)
SYMPTOM: df command hung in clustered environment.
DESCRIPTION: The df command hung in a clustered environment because DRL updates are not getting completed, causing application IOs to hang.
RESOLUTION: Fix is added to complete incore DRL updates and drive the corresponding application IOs.

* 4174876 (Tracking ID: 4081740)
SYMPTOM: vxdg flush command slow due to too many LUNs needlessly accessing /proc/partitions.
DESCRIPTION: Linux BLOCK_EXT_MAJOR (block major 259) is used as the extended devt for block devices. When the partition number of one device is more than 15, the partition device gets assigned under major 259 to solve the sd limitations (16 minors per device), by which more partitions are allowed for one sd device.
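For illustration, the partition devices assigned under the extended block major (259) described above can be listed directly from /proc/partitions; the awk filter below is a generic Linux command, not a VxVM one, and only assumes the standard "major minor #blocks name" layout of that file:
# awk '$1 == 259 {print $4}' /proc/partitions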
During "vxdg flush", for each LUN in the disk group, vxconfigd reads the file /proc/partitions line by line through fgets() to find all the partition devices with major number 259, which would cause vxconfigd to respond sluggishly if there is a large number of LUNs in the disk group.
RESOLUTION: Code has been changed to remove the needless access of /proc/partitions for the LUNs not using extended devt.

* 4174884 (Tracking ID: 4069134)
SYMPTOM: "vxassist maxsize alloc:array:<enclosure_name>" command may fail with the below error:
VxVM vxassist ERROR V-5-1-18606 No disks match specification for Class: array, Instance: <enclosure_name>
DESCRIPTION: If the enclosure name is greater than 16 chars then the "vxassist maxsize alloc:array" command can fail. This is because if the enclosure name is more than 16 chars then it gets truncated while copying from VxDMP to VxVM. This further causes the above vxassist command to fail.
RESOLUTION: Code changes are done to avoid the truncation of the enclosure name while copying from VxDMP to VxVM.

* 4174885 (Tracking ID: 4009151)
SYMPTOM: Auto-import of diskgroup on system reboot fails with error: "Disk for diskgroup not found".
DESCRIPTION: When a diskgroup is auto-imported, VxVM (Veritas Volume Manager) tries to find the disk with the latest configuration copy. During this, the DG import process searches through all the disks. The procedure also tries to find out if the DG contains clone disks or standard disks. While doing this calculation, the DG import process incorrectly determines that the current DG contains cloned disks instead of standard disks because of a stale value being there for the previous DG selected. Since VxVM incorrectly decides to import cloned disks instead of standard disks, the import fails with the "Disk for diskgroup not found" error.
RESOLUTION: Code has been modified to accurately determine whether the DG contains standard or cloned disks and accordingly use those disks for the DG import.

* 4174992 (Tracking ID: 4118809)
SYMPTOM: System panic at dmp_process_errbp with the following call stack:
machine_kexec
__crash_kexec
crash_kexec
oops_end
no_context
__bad_area_nosemaphore
do_page_fault
page_fault
[exception RIP: dmp_process_errbp+203]
dmp_daemons_loop
kthread
ret_from_fork
DESCRIPTION: When a LUN is detached, VxDMP may invoke its error handler to process the error buffer. During that period the OS SCSI device node could have been removed, which means VxDMP cannot find the corresponding path node, and this introduces a pointer reference panic.
RESOLUTION: Code changes have been made to avoid the panic.

* 4174993 (Tracking ID: 3868140)
SYMPTOM: VVR primary site node might panic if the rlink disconnects while some data is getting replicated to the secondary, with the below stack:
dump_stack()
panic()
vol_rv_service_message_start()
update_curr()
put_prev_entity()
voliod_iohandle()
voliod_loop()
voliod_iohandle()
DESCRIPTION: If the rlink disconnects, VVR will clear some handles to the in-progress updates in memory, but if some IOs are still getting acknowledged from secondary to primary, then accessing updates for these IOs might result in a panic at the primary node.
RESOLUTION: Code fix is implemented to correctly access the primary node updates in order to avoid the panic.

* 4175007 (Tracking ID: 4141806)
SYMPTOM: TC hung on primary node.
DESCRIPTION:
1. Secondary sent an rlink pause checkpoint request to the primary after loading ted spec actions.
2. Primary received the pause checkpoint message from the secondary and didn't process the request because of the tedspec action.
3. Later, on the secondary, after some timespan, an rlink disconnect event occurred due to an ack timeout for the above pause checkpoint message.
4. The above event sent an rlink disconnect request to the primary, which in turn set rp_disconnected to true.
5. This caused the primary to continue processing the pause checkpoint message via the below code: vol_rv_service_checkpoint->vol_rv_start_request_processing.
6. Depending on the rlink phase, vol_rv_start_request_processing returns EAGAIN or EBUSY by setting the interrupt VOLRP_INTFLAG_PROCESS_REQUEST, which in turn caused the RUTHREAD to process the interrupt and set VOL_RIFLAG_REQUEST_PENDING on the rlink via vol_ru_process_interrupts.
7. Later, the pause checkpoint sio tried to send the acknowledgment for the VOLRV_MSG_CHECKPOINT message to the secondary by setting msgsio_errno to NM_ERR_BUSY without resetting the VOL_RIFLAG_REQUEST_PENDING flag.
8. After this, a write pattern was issued on the primary, and after some time the SRL reached a state where LOG OVERFLOW protection was triggered. This caused the incoming application IOs to throttle continuously till the SRL drains by some amount.
9. After this, the ruthread, which does the job of reading the data and sending updates to the secondary, did no further work as it was continuously deferred because VOL_RIFLAG_REQUEST_PENDING was set, which in turn didn't drain the SRL.
10. This caused incoming IO to continuously throttle, which caused the hang.
RESOLUTION: Reset the VOL_RIFLAG_REQUEST_PENDING flag when we try to send the ack to the secondary in EBUSY scenarios, i.e. when we are not able to process the request currently.

* 4175098 (Tracking ID: 4159403)
SYMPTOM: When the replicated disks are in SPLIT mode and use_hw_replicatedev is on, disks are marked as cloned disks after the hardware replicated disk group gets imported.
DESCRIPTION: Add the clearclone option automatically when importing the hardware replicated disk group to clear the cloned flag on the disks.
RESOLUTION: The code is enhanced to import the replicated disk group with the clearclone option.

* 4175103 (Tracking ID: 4160883)
SYMPTOM: clone_flag was set on srdf-r1 disks after reboot.
DESCRIPTION: The clean clone got reset in the AUTOIMPORT case, which led to the clone_flag getting set on the disk in the end.
RESOLUTION: Code change has been made to correct the behavior of setting the clone_flag on a disk.

* 4175104 (Tracking ID: 4167359)
SYMPTOM: EMC DeviceGroup missing SRDF SYMDEV. After doing a disk group import, the import failed with "Disk write failure" and corrupts disk headers.
DESCRIPTION: SRDF will not make all disks read-writable (RW) on the remote side during an SRDF failover. When an SRDF SYMDEV is missing, the missing disk in pairs on the remote side remains in a write-disabled (WD) state. This leads to write errors, which can further cause disk header corruption.
RESOLUTION: Code change has been made to fail the disk group import if any disks in this group are detected as WD.

* 4175105 (Tracking ID: 4168665)
SYMPTOM: use_hw_replicatedev logic is unable to import the CVMVolDg resource unless vxdg -c is specified, after EMC SRDF devices are closed and rescanned on the CVM Master.
DESCRIPTION: use_hw_replicatedev logic is unable to import the CVMVolDg resource unless vxdg -c is specified, after EMC SRDF devices are closed and rescanned on the CVM Master.
RESOLUTION: Reset "ret" before making another attempt of the dg import.

* 4175292 (Tracking ID: 4024140)
SYMPTOM: In VVR environments, in case of disabled volumes, the DCM read operation does not complete, resulting in application IO hang.
DESCRIPTION: If all volumes in the RVG have been disabled, then the read on the DCM does not complete.
This results in an IO hang and blocks other operations such as transactions and diskgroup delete.
RESOLUTION: If all the volumes in the RVG are found disabled, then fail the DCM read.

* 4175293 (Tracking ID: 4132799)
SYMPTOM: If GLM is not loaded, starting CVM fails with the following errors:
# vxclustadm -m gab startnode
VxVM vxclustadm INFO V-5-2-9687 vxclustadm: Fencing driver is in disabled mode -
VxVM vxclustadm ERROR V-5-1-9743 errno 3
DESCRIPTION: Only the error number, and not the error message, is printed when joining CVM fails.
RESOLUTION: The code changes have been made to fix the issue.

* 4175294 (Tracking ID: 4115078)
SYMPTOM: A vxconfigd hang was observed when rebooting all nodes of the primary site.
DESCRIPTION: The VVR logowner node wasn't configured on the Master. VVR recovery was triggered by the node leaving; in case a data volume was in recovery, the VVR logowner would send an ilock request to the Master node. The Master granted the ilock request and sent a response to the VVR logowner. But due to a bug, an ilock requesting node id mismatch was detected by the VVR logowner. The VVR logowner thought the ilock grant failed, and the mdship IO went into a permanent hang. vxconfigd was stuck and kept waiting for IO drain.
RESOLUTION: Code changes have been made to correct the ilock requesting node id in the ilock request in such a case.

* 4175295 (Tracking ID: 4087628)
SYMPTOM: When DCM is in replication mode with volumes mounted having large regions for DCM to sync, and a slave node reboot is triggered, this might cause CVM to go into faulted state.
DESCRIPTION: During Resiliency tests, the following sequence of operations was performed:
1. On an AWS FSS-CVR setup, replication is started across the sites for 2 RVGs.
2. The logowner service groups for both the RVGs are online on a Slave node.
3. Rebooted another Slave node where the logowner is not online.
4. After the Slave node comes back from reboot, it is unable to join the CVM Cluster.
5. vx commands are also hung/stuck on the CVM Master and the logowner Slave node.
RESOLUTION: In the RU SIO, before requesting vxfs_free_region(), drop the IO count and hold it again after. Because the transaction has been locked (vol_ktrans_locked = 1) right before calling vxfs_free_region(), we don't need the iocount to hold the rvg from being removed.

* 4175297 (Tracking ID: 4152014)
SYMPTOM: The excluded dmpnodes are visible after system reboot when SELinux is disabled.
DESCRIPTION: During system reboot, the disks' hardware soft links failed to be created before DMP exclusion, hence DMP failed to recognize the excluded dmpnodes.
RESOLUTION: Code changes have been made to reduce the latency in creation of hardware soft links and remove tmpfs /dev/vx on an SELinux-disabled platform.

* 4175298 (Tracking ID: 4162349)
SYMPTOM: When using vxstat with the -S option, the values in two columns (MIN(ms) and MAX(ms)) are not printed.
DESCRIPTION: When using vxstat with the -S option, the values in the MIN(ms) and MAX(ms) columns are not printed:
vxstat -g <dg_name> -i 5 -S -u m
                     OPERATIONS           BYTES            AVG TIME(ms)    MIN(ms)      MAX(ms)
TYP NAME           READ    WRITE      READ       WRITE     READ  WRITE   READ WRITE   READ WRITE
gf01sxdb320p Mon Apr 22 14:07:55 2024
vol admvol        23977    96830   523.707m   425.3496m    1.43   2.12
vol appvol         7056    30556  254.3959m   146.1489m    0.85   2.11
RESOLUTION: In our code we were not printing the values for the last two columns.
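For reference, the -S invocation shown in the symptom above is repeated here with a placeholder disk group name; with this fix applied, the MIN(ms) and MAX(ms) columns are expected to be populated as well:
# vxstat -g <dg_name> -i 5 -S -u m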
Code changes have been done to fix this issue.

* 4175348 (Tracking ID: 4011582)
SYMPTOM: In VxVM, the minimum and maximum read/write time for the IO workload is not captured using the vxstat utility.
DESCRIPTION: Currently the vxstat utility displays only the average read/write time it takes for the IO workload to complete under the VxVM layer.
RESOLUTION: Changes are done to the existing vxstat utility to capture and display the minimum and maximum read/write time.

* 4176466 (Tracking ID: 4111978)
SYMPTOM: Replication failed to start due to vxnetd threads not running on the secondary site.
DESCRIPTION: Vxnetd was waiting to start the "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition on some resource between those two threads, vxnetd was stuck in a dead loop till the max retry was reached.
RESOLUTION: Code changes have been made to add lock protection to avoid the race condition.

* 4176467 (Tracking ID: 4136974)
SYMPTOM: With multiple RVGs created and the replication IP being an IPv6 address, restarting vxnetd results in vxnetd listen sockets binding to IPv4, and hence replication gets disconnected and remains disconnected.
DESCRIPTION: With multiple RVGs created and the replication IP being an IPv6 address, restarting vxnetd results in vxnetd listen sockets binding to IPv4, and hence replication gets disconnected and remains disconnected.
RESOLUTION: Retry the test bind on IPv6 for 3 minutes with a 5-second delay between the retries.

* 4177461 (Tracking ID: 4017036)
SYMPTOM: After enabling DMP (Dynamic Multipathing) Native support, enable /boot to be mounted on the DMP device when Linux is booting with systemd.
DESCRIPTION: Currently /boot is mounted on top of the OS (Operating System) device. When DMP Native support is enabled, only VGs (Volume Groups) are migrated from the OS device to the DMP device. This is the reason /boot is not migrated to the DMP device. With this, if the OS device path is not available then the system becomes unbootable since /boot is not available. Thus it becomes necessary to mount /boot on the DMP device to provide multipathing and resiliency. The current fix can only work on configurations with a single boot partition.
RESOLUTION: Code changes have been done to migrate /boot on top of the DMP device when DMP Native support is enabled and when Linux is booting with systemd.

Patch ID: VRTSaslapm-7.4.1.3700

* 4115221 (Tracking ID: 4011780)
SYMPTOM: This is a new array and we need to add support for EMC PowerStore plus PP.
DESCRIPTION: EMC PowerStore is a new array and the current ASL doesn't support it, so it will not be claimed with the current ASL. This array support has now been added in the current ASL.
RESOLUTION: Code changes to support EMC PowerStore plus PP have been done.

Patch ID: VRTSvxvm-7.4.1.3500

* 4055697 (Tracking ID: 4066785)
SYMPTOM: When the replicated disks are in SPLIT mode, importing their disk group failed with "Device is a hardware mirror".
DESCRIPTION: When the replicated disks are in SPLIT mode, which are readable and writable, importing their disk group failed with "Device is a hardware mirror". The third party doesn't expose a disk attribute to show when it's in SPLIT mode. With this new enhancement, the replicated disk group can be imported with the option `-o usereplicatedev=only`.
RESOLUTION: The code is enhanced to import the replicated disk group with the option `-o usereplicatedev=only`.

* 4109079 (Tracking ID: 4101128)
SYMPTOM: Old VxVM rpm fails to load on RHEL8.7.
DESCRIPTION: RHEL8.7 is a new OS release and has multiple kernel changes which were making VxVM incompatible with OS kernel version 4.18.0-425.3.1.
RESOLUTION: Required code changes have been done.
The VxVM module is compiled with the RHEL 8.7 kernel.

Patch ID: VRTSaslapm-7.4.1.3500

* 4065503 (Tracking ID: 4065495)
SYMPTOM: This is a new array and we need to add support for EMC PowerStore.
DESCRIPTION: EMC PowerStore is a new array and the current ASL doesn't support it, so it will not be claimed with the current ASL. This array support has now been added in the current ASL.
RESOLUTION: Code changes to support EMC PowerStore have been done.

* 4068406 (Tracking ID: 4068404)
SYMPTOM: We need to add support to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.
DESCRIPTION: The current ASL doesn't support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. This ALUA array support has now been added in the current ASL.
RESOLUTION: Code changes to support the HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

* 4076640 (Tracking ID: 4076320)
SYMPTOM: Not able to get ARRAY_VOLUME_ID, old_udid:
# vxdisk -p list 3pardata1_3 |grep -i ARRAY_VOLUME_ID
# vxdisk -p list 3pardata1_3 |grep -i old_udid
DESCRIPTION: AVID, reclaim_cmd_nv, extattr_nv, old_udid_nv are not generated for the HPE 3PAR/Primera/Alletra 9000 ALUA array.
RESOLUTION: Code changes have been added to generate AVID, reclaim_cmd_nv, extattr_nv, old_udid_nv for the HPE 3PAR/Primera/Alletra 9000 ALUA array.

* 4095952 (Tracking ID: 4093396)
SYMPTOM: All the PowerStore arrays show the same SN, so the customer failed to distinguish them, because the enclosure SN was hardcoded. It should be read from storage.
DESCRIPTION: Code changes have been made to update the enclosure SN correctly.
RESOLUTION: Code changes to support EMC PowerStore have been done.

* 4110663 (Tracking ID: 4107932)
SYMPTOM: Support for ASLAPM on RHEL8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64.
DESCRIPTION: RedHat made some critical changes in the latest kernel which were causing a soft-lockup issue in kernel modules during installation.
RESOLUTION: As suggested by RedHat (https://access.redhat.com/solutions/6985596), the modules are compiled with the RHEL 8.7 minor kernel.

Patch ID: VRTSvxvm-7.4.1.3400

* 4062578 (Tracking ID: 4062576)
SYMPTOM: When hastop -local is used to stop the cluster, the dg deport command hangs.
The below stack trace is observed in system logs:
#0 [ffffa53683bf7b30] __schedule at ffffffffa834a38d
#1 [ffffa53683bf7bc0] schedule at ffffffffa834a868
#2 [ffffa53683bf7bd0] blk_mq_freeze_queue_wait at ffffffffa7e4d4e6
#3 [ffffa53683bf7c18] blk_cleanup_queue at ffffffffa7e433b8
#4 [ffffa53683bf7c30] vxvm_put_gendisk at ffffffffc3450c6b [vxio]
#5 [ffffa53683bf7c50] volsys_unset_device at ffffffffc3450e9d [vxio]
#6 [ffffa53683bf7c60] vol_rmgroup_devices at ffffffffc3491a6b [vxio]
#7 [ffffa53683bf7c98] voldg_delete at ffffffffc34932fc [vxio]
#8 [ffffa53683bf7cd8] vol_delete_group at ffffffffc3494d0d [vxio]
#9 [ffffa53683bf7d18] volconfig_ioctl at ffffffffc3555b8e [vxio]
#10 [ffffa53683bf7d90] volsioctl_real at ffffffffc355fc8a [vxio]
#11 [ffffa53683bf7e60] vols_ioctl at ffffffffc124542d [vxspec]
#12 [ffffa53683bf7e78] vols_unlocked_ioctl at ffffffffc124547d [vxspec]
#13 [ffffa53683bf7e80] do_vfs_ioctl at ffffffffa7d2deb4
#14 [ffffa53683bf7ef8] ksys_ioctl at ffffffffa7d2e4f0
#15 [ffffa53683bf7f30] __x64_sys_ioctl at ffffffffa7d2e536
DESCRIPTION: This issue is seen due to an update on the kernel side with respect to handling of the request queue. Existing VxVM code sets the request handling routine (make_request_fn) to vxvm_gen_strategy, and this functionality is getting impacted.
RESOLUTION: Code changes are added to handle the request queues using blk_mq_init_allocated_queue.

Patch ID: VRTSvxvm-7.4.1.3300

* 3984175 (Tracking ID: 3917636)
SYMPTOM: Filesystems from the /etc/fstab file are not mounted automatically on boot through systemd on RHEL7 and SLES12.
DESCRIPTION: During bootup, when systemd tries to mount using the devices mentioned in the /etc/fstab file, the device cannot be accessed, leading to the failure of the mount operation. As the device is discovered through the udev infrastructure, the udev rules for the device should be applied when the volumes are created so that the device gets registered with systemd. In case the udev rules are executed even before the device in the "/dev/vx/dsk" directory is created, the device will not be registered with systemd, leading to the failure of the mount operation.
RESOLUTION: To register the device, create all the volumes and run "udevadm trigger" to execute all the udev rules.

* 4011097 (Tracking ID: 4010794)
SYMPTOM: Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster with the below stack when storage activities were going on:
dmp_start_cvm_local_failover+0x118()
dmp_start_failback+0x398()
dmp_restore_node+0x2e4()
dmp_revive_paths+0x74()
gen_update_status+0x55c()
dmp_update_status+0x14()
gendmpopen+0x4a0()
DESCRIPTION: The system panic occurred due to an invalid current primary path of the dmpnode when disks were attached/detached in a cluster.
When DMP accessed the current primary path without doing a sanity check, the system panicked due to an invalid pointer.
RESOLUTION: Code changes have been made to avoid accessing any invalid pointer.

* 4039527 (Tracking ID: 4018086)
SYMPTOM: vxiod with ID 128 was stuck with the below stack:
#2 [] vx_svar_sleep_unlock at [vxfs]
#3 [] vx_event_wait at [vxfs]
#4 [] vx_async_waitmsg at [vxfs]
#5 [] vx_msg_send at [vxfs]
#6 [] vx_send_getemapmsg at [vxfs]
#7 [] vx_cfs_getemap at [vxfs]
#8 [] vx_get_freeexts_ioctl at [vxfs]
#9 [] vxportalunlockedkioctl at [vxportal]
#10 [] vxportalkioctl at [vxportal]
#11 [] vxfs_free_region at [vxio]
#12 [] vol_ru_start_replica at [vxio]
#13 [] vol_ru_start at [vxio]
#14 [] voliod_iohandle at [vxio]
#15 [] voliod_loop at [vxio]
DESCRIPTION: With the SmartMove feature set to ON, the vxiod with ID 128 starts the replication where the RVG is in DCM mode. Thus, the vxiod awaits the filesystem's response on whether the given region is used by the filesystem or not. The filesystem will trigger MDSHIP IO on the logowner. Due to a bug in the code, MDSHIP IO always gets queued in the vxiod with ID 128. Hence a deadlock situation occurs.
RESOLUTION: Code changes have been made to avoid handling the MDSHIP IO in a vxiod whose ID is bigger than 127.

* 4045494 (Tracking ID: 4021939)
SYMPTOM: The "vradmin syncvol" command fails and the following message is logged: "VxVM VVR vxrsync ERROR V-5-52-10206 no server host systems specified".
DESCRIPTION: VVR sockets now bind without specifying IP addresses. This recent change causes issues when such interfaces are used to identify whether the associated remote host is the same as the localhost. For example, in case of the "vradmin syncvol" command, VVR incorrectly assumes that the local host has been provided as the remote host, logs the error message and exits.
RESOLUTION: Updated the vradmin utility to correctly identify the remote hosts that are passed to the "vradmin syncvol" command.

* 4051815 (Tracking ID: 4031597)
SYMPTOM: vradmind generates a core dump in __strncpy_sse2_unaligned.
DESCRIPTION: The following core dump is generated:
(gdb) bt
Thread 1 (Thread 0x7fcd140b2780 (LWP 90066)):
#0 0x00007fcd12b1d1a5 in __strncpy_sse2_unaligned () from /lib64/libc.so.6
#1 0x000000000059102e in IpmServer::accept (this=0xf21168, new_handlesp=0x0) at Ipm.C:3406
#2 0x0000000000589121 in IpmHandle::events (handlesp=0xf12088, new_eventspp=0x7ffc8e80a4e0, serversp=0xf120c8, new_handlespp=0x0, ms=100) at Ipm.C:613
#3 0x000000000058940b in IpmHandle::events (handlesp=0xfc8ab8, vlistsp=0xfc8938, ms=100) at Ipm.C:645
#4 0x000000000040ae2a in main (argc=1, argv=0x7ffc8e80e8e8) at srvmd.C:722
RESOLUTION: vradmind is updated to properly handle getpeername(), which addresses this issue.

* 4051887 (Tracking ID: 3956607)
SYMPTOM: When removing a VxVM disk using the vxdg rmdisk operation, the following error occurs while requesting a disk reclaim:
VxVM vxdg ERROR V-5-1-0 Disk <device_name> is used by one or more subdisks which are pending to be reclaimed.
Use "vxdisk reclaim <device_name>" to reclaim space used by these subdisks, and retry "vxdg rmdisk" command.
Note: The reclamation operation is irreversible.
However, a core dump occurs when vxdisk reclaim is executed.
DESCRIPTION: This issue occurs due to a memory allocation failure in the disk-reclaim code, which fails to be detected and causes an invalid address to be referenced.
Consequently, a core dump occurs.RESOLUTION:The disk-reclaim code is updated to handle memory allocation failures properly.* 4051889 (Tracking ID: 4019182)SYMPTOM:In case of a VxDMP configuration, an InfoScale server panics when applying a patch. The following stack trace is generated:unix:panicsys+0x40()unix:vpanic_common+0x78()unix:panic+0x1c()unix:mutex_enter() - frame recycledvxdmp(unloaded text):0x108b987c(jmpl?)()vxdmp(unloaded text):0x108ab380(jmpl?)(0)genunix:callout_list_expire+0x5c()genunix:callout_expire+0x34()genunix:callout_execute+0x10()genunix:taskq_thread+0x42c()unix:thread_start+4()DESCRIPTION:Some VxDMP functions create callouts. The VxDMP module may already be unloaded when a callout expires, which may cause the server to panic. VxDMP should cancel any previous timeout function calls before it unloads itself.RESOLUTION:VxDMP is updated to cancel any previous timeout function calls before unloading itself.* 4051896 (Tracking ID: 4010458)SYMPTOM:In a VVR environment, the rlink might inconsistently disconnect due to unexpected transactions, and the following message might get logged:"VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed"DESCRIPTION:In a VVR environment, a transaction is triggered when a change in the VxVM or the VVR objects needs to be persisted on disk. In some scenarios, a few unnecessary transactions get triggered in a loop, which causes multiple rlink disconnects, and the aforementioned message gets logged frequently. One such unexpected transaction occurs when the open/close command is issued for a volume as part of SmartIO caching. The vradmind daemon also issues some open/close commands on volumes as part of the I/O statistics collection, which triggers unnecessary transactions. Additionally, some unexpected transactions occur due to incorrect references to some temporary flags on the volumes.RESOLUTION:VVR is updated to first check whether SmartIO caching is configured on a system. If it is not configured, VVR disables SmartIO caching on the associated volumes. VVR is also updated to avoid the unexpected transactions that may occur due to incorrect references on certain temporary flags on the volumes.* 4055653 (Tracking ID: 4049082)SYMPTOM:An I/O read error is displayed when a remote FSS node is rebooting.DESCRIPTION:When a remote FSS node is rebooting, I/O read requests to a mirror volume that are scheduled on the remote disk from the FSS node should be redirected to the remaining plex. However, current VxVM does not handle this correctly. The retried I/O requests could still be sent to the offline remote disk, which causes the final I/O read failure.RESOLUTION:Code changes have been done to schedule the retried read request on the remaining plex.* 4055660 (Tracking ID: 4046007)SYMPTOM:In an FSS environment, if the cluster name is changed, then the private disk region gets corrupted.DESCRIPTION:Under some conditions, when vxconfigd tries to update the TOC (table of contents) blocks of the disk private region, the allocation maps cannot be initialized in the memory.
This could make allocation maps incorrect and lead to corruption of the private region on the disk.RESOLUTION:Code changes have been done to avoid corruption of the private disk region.* 4055668 (Tracking ID: 4045871)SYMPTOM:vxconfigd crashed at ddl_get_disk_given_path with following stacks:ddl_get_disk_given_pathddl_reconfigure_allddl_find_devices_in_systemfind_devices_in_systemmode_setsetup_modestartupmain_startDESCRIPTION:Under some situations, duplicate paths can be added in one dmpnode in vxconfigd. If the duplicate paths are removed, then an empty path entry can be generated for that dmpnode. Thus, later when vxconfigd accesses the empty path entry, it crashes due to a NULL pointer reference.RESOLUTION:Code changes have been done to avoid adding the duplicate paths.* 4055697 (Tracking ID: 4047793)SYMPTOM:When replicated disks are in SPLIT mode, importing their diskgroup failed with "Device is a hardware mirror".DESCRIPTION:When replicated disks are in SPLIT mode, which are r/w, importing their diskgroup failed with "Device is a hardware mirror". The third-party array doesn't expose a disk attribute to show when it is in SPLIT mode. Now DMP refers to its REPLICATED status to judge whether diskgroup import is allowed or not. `-o usereplicatedev=on/off` is enhanced to achieve this.RESOLUTION:The code is enhanced to allow diskgroup import when replicated disks are in SPLIT mode.* 4055772 (Tracking ID: 4043337)SYMPTOM:The rp_rv.log file uses space for logging.DESCRIPTION:The rp_rv log files need to be removed, and the logger file should have 16 MB rotational log files.RESOLUTION:The code changes are implemented to disable logging for the rp_rv.log files.* 4055895 (Tracking ID: 4038865)SYMPTOM:In IRQ stack, the system panics at VxDMP module with the following calltrace:native_queued_spin_lock_slowpathqueued_spin_lock_slowpath_raw_spin_lock_irqsave7dmp_get_shared_lockgendmpiodonedmpiodonebio_endioblk_update_requestscsi_end_requestscsi_io_completionscsi_finish_commandscsi_softirq_doneblk_done_softirq__do_softirqcall_softirqdo_softirqirq_exitdo_IRQ <IRQ stack>DESCRIPTION:A deadlock issue occurred between inode_hash_lock and the DMP shared lock: one process holding inode_hash_lock acquired the DMP shared lock in IRQ context, while other processes holding the DMP shared lock acquired the inode_hash_lock.RESOLUTION:Code changes are done to avoid the deadlock issue.* 4055899 (Tracking ID: 3993242)SYMPTOM:vxsnap prepare on a vset might throw the error: "VxVM vxsnap ERROR V-5-1-19171 Cannot perform prepare operation on cloud volume"DESCRIPTION:Some wrong volume-record entries were being fetched for the VSET, due to which the required validations were failing and triggering the issue.RESOLUTION:Code changes have been done to resolve the issue.* 4055905 (Tracking ID: 4052191)SYMPTOM:Any scripts or command files in the / directory may run unexpectedly when the system starts, and vxvm volumes will not be available until those scripts or commands are complete.DESCRIPTION:If this issue occurs, /var/svc/log/system-vxvm-vxvm-configure:default.log indicates that a script or a command located in the / directory has been executed.For example,ABC Script ran!!/lib/svc/method/vxvm-configure[241] abc.sh not found/lib/svc/method/vxvm-configure[242] abc.sh not found/lib/svc/method/vxvm-configure[243] abc.sh not found/lib/svc/method/vxvm-configure[244] app/ cannot executeIn this example, abc.sh is located in the / directory and just echoes "ABC script ran !!".
vxvm-configure launched abc.sh.RESOLUTION:The incorrect comments format in the SunOS_5.11.vxvm-configure.sh script is corrected.* 4055925 (Tracking ID: 4031064)SYMPTOM:During master switch with replication in progress, cluster wide hang is seen on VVR secondary.DESCRIPTION:With application running on primary, and replication setup between VVR primary & secondary, when master switch operation is attempted on secondary, it gets hung permanently.RESOLUTION:Appropriate code changes are done to handle scenario of master switch operation and replication data on secondary.* 4055938 (Tracking ID: 3999073)SYMPTOM:Data corruption occurred when the fast mirror resync (FMR) was enabled and the failed plex of striped-mirror layout was attached.DESCRIPTION:To determine and recover the regions of volumes using contents of detach, a plex attach operation with FMR tracking has been enabled.For the given volume region, the DCO region size being higher than the stripe-unit of volume, the code logic in plex attached code path was incorrectly skipping the bits in detach maps. Thus, some of the regions (offset-len) of volume did not sync with the attached plex leading to inconsistent mirror contents.RESOLUTION:To resolve the data corruption issue, the code has been modified to consider all the bits for given region (offset-len) in plex attached code.* 4056107 (Tracking ID: 4036181)SYMPTOM:IO error has been reported when RVG is not in enabled state after boot-up.DESCRIPTION:When RVG is not enabled/active, the volumes under a RVG will report an IO error.Messages logged:systemd[1]: Starting File System Check on /dev/vx/dsk/vvrdg/vvrdata1...systemd-fsck[4977]: UX:vxfs fsck.vxfs: ERROR: V-3-20113: Cannot open : No such device or address systemd-fsck[4977]: fsck failed with error code 31.systemd-fsck: UX:vxfs fsck.vxfs: ERROR: V-3-20005: read of super-block on /dev/vx/dsk/vvrdg/vvrdata1 failed: Input/output errorRESOLUTION:Issue got fixed by enabling the RVG using vxrvg command if the RVG is in disabled/recover state.* 4056124 (Tracking ID: 4008664)SYMPTOM:System panic occurs with the following stack:void genunix:psignal+4()void vxio:vol_logger_signal_gen+0x40()int vxio:vollog_logentry+0x84()void vxio:vollog_logger+0xcc()int vxio:voldco_update_rbufq_chunk+0x200()int vxio:voldco_chunk_updatesio_start+0x364()void vxio:voliod_iohandle+0x30()void vxio:voliod_loop+0x26c((void *)0)unix:thread_start+4()DESCRIPTION:Vxio keeps vxloggerd proc_t that is used to send a signal to vxloggerd. In case vxloggerd has been ended for some reason, the signal may be sent to an unexpected process, which may cause panic.RESOLUTION:Code changes have been made to correct the problem.* 4056144 (Tracking ID: 3906534)SYMPTOM:After Dynamic Multi-Pathing (DMP) Native support is enabled, /boot should to be mounted on the DMP device(Specific to Linux).DESCRIPTION:Typically, /boot is mounted on top of an Operating System (OS) device. When DMP Native support is enabled, only the volume groups (VGs) are migrated from the OS device to the DMP device, but /boot is not migrated. Parallely, if the OS device path is not available, the system becomes unbootable, because /boot is not available. Thus, it is necessary to mount /boot on the DMP device to provide multipathing and resiliency(Specific to Linux).RESOLUTION:The module is updated to migrate /boot on top of a DMP device when DMP Native support is enabled. Note: This fix is available for RHEL 6 only. 
For other Linux platforms, /boot will still not be mounted on the DMP device (specific to Linux).* 4056146 (Tracking ID: 3983832)SYMPTOM:When the disk groups are deleted, multiple VxVM commands hang on the CVR secondary site.DESCRIPTION:VxVM commands hang when a deadlock is encountered during the kmsg broadcast while deleting the disk group and during the IBC unfreeze operation.RESOLUTION:Changes are done in the VxVM code to check the transactions and avoid the deadlock.* 4056832 (Tracking ID: 4057526)SYMPTOM:Whenever vxnm-vxnetd is loaded, it reports "Cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory" in /var/log/messages.DESCRIPTION:A new systemd update removed the support for the "/var/lock/subsys/" directory. Thus, whenever vxnm-vxnetd is loaded on systems supporting systemd, it reports "cannot touch '/var/lock/subsys/vxnm-vxnetd': No such file or directory".RESOLUTION:Added a check in vxnm-vxnetd.sh to validate whether the /var/lock/subsys/ directory is supported.Patch ID: VRTSvxvm-7.4.1.3200* 3984156 (Tracking ID: 3852146)SYMPTOM:In a CVM cluster, when a shared DG is imported by specifying both the "-c" and the "-o noreonline" options, you may encounter the following error: VxVM vxdg ERROR V-5-1-10978 Disk group <disk_group_name>: import failed: Disk for disk group not found.DESCRIPTION:The "-c" option updates the disk ID and the DG ID on the private region of the disks in the DG that is being imported. Such updated information is not yet seen by the slave because the disks have not been brought online again, as the "noreonline" option was specified. As a result, the slave cannot identify the disk(s) based on the updated information sent from the master, which caused the import to fail with the error: Disk for disk group not found.RESOLUTION:VxVM is updated so that a shared DG import completes successfully even when the "-c" and the "-o noreonline" options are specified together.* 3984175 (Tracking ID: 3917636)SYMPTOM:Filesystems from the /etc/fstab file are not mounted automatically on boot through systemd on RHEL7 and SLES12.DESCRIPTION:During bootup, when systemd tries to mount using the devices mentioned in the /etc/fstab file, the device is not accessible, leading to the failure of the mount operation. As the device discovery happens through the udev infrastructure, the udev rules for those devices need to be run when volumes are created so that the devices get registered with systemd. In case the udev rules are executed even before the devices in the "/dev/vx/dsk" directory are created, the devices will not be registered with systemd, leading to the failure of the mount operation.RESOLUTION:Run "udevadm trigger" to execute all the udev rules once all volumes are created so that the devices are registered.* 4041285 (Tracking ID: 4044583)SYMPTOM:A system goes into the maintenance mode when DMP is enabled to manage native devices.DESCRIPTION:The "vxdmpadm settune dmp_native_support=on" command is used to enable DMP to manage native devices. After you change the value of the dmp_native_support tunable, you need to reboot the system for the changes to take effect. However, the system goes into the maintenance mode after it reboots. The issue occurs due to the copying of the local liblicmgr72.so file instead of the original one while creating the vx_initrd image.RESOLUTION:Code changes have been made to copy the correct liblicmgr72.so file. The system successfully reboots without going into maintenance mode.
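As a hedged illustration of the tunable discussed above (not taken from the patch itself), enabling DMP native support and verifying it typically looks like the following; confirm the output against the product documentation for your release:
# Display the current value of the DMP native support tunable.
vxdmpadm gettune dmp_native_support
# Enable DMP native support; a reboot is required for the change to take effect.
vxdmpadm settune dmp_native_support=on
# After the reboot, confirm the tunable value again.
vxdmpadm gettune dmp_native_support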
* 4042039 (Tracking ID: 4040897)SYMPTOM:Support needs to be added for claiming the new HPE MSA 2060 arrays.DESCRIPTION:HPE MSA 2060 is a new array, and the current ASL doesn't support it, so it will not be claimed. Support for this array has now been added to the current ASL.RESOLUTION:Code changes to support the HPE MSA 2060 array have been done.* 4050892 (Tracking ID: 3991668)SYMPTOM:In a VVR configuration with secondary logging enabled, data inconsistency is reported after the "No IBC message arrived" error is encountered.DESCRIPTION:It might happen that the VVR secondary node handles updates with larger sequence IDs before the In-Band Control (IBC) update arrives. In this case, VVR drops the IBC update. Due to the updates with larger sequence IDs than the one for the IBC update, data writes cannot be started, and they get queued. Data loss may occur after the VVR secondary receives an atomic commit and frees the queue. If this situation occurs, the "vradmin verifydata" command reports data inconsistency.RESOLUTION:VVR is modified to trigger updates as they are received in order to start data volume writes.* 4051457 (Tracking ID: 3958062)SYMPTOM:After a boot LUN is migrated, disabling dmp_native_support fails with the following errors.VxVM vxdmpadm ERROR V-5-1-15883 check_bosboot open failed /dev/r errno 2VxVM vxdmpadm ERROR V-5-1-15253 bosboot would not succeed, please run manually to find the cause of failureVxVM vxdmpadm ERROR V-5-1-15251 bosboot check failedVxVM vxdmpadm INFO V-5-1-18418 restoring protofile+ final_ret=18+ f_exit 18VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groupsVxVM vxdmpadm ERROR V-5-1-15686 The following VG(s) could not be migrated as could not disable DMP support for LVM bootability - rootvgDESCRIPTION:After performing a boot LUN migration, while enabling or disabling DMP native support, VxVM performs the 'bosboot' verification with the old boot disk name instead of the name of the migrated disk. This issue occurs on AIX, where the OS command returns the old boot disk name.RESOLUTION:VxVM is updated to use the correct OS command to get the boot disk name after migration.* 4051815 (Tracking ID: 4031597)SYMPTOM:vradmind generates a core dump in __strncpy_sse2_unaligned.DESCRIPTION:The following core dump is generated:(gdb)btThread 1 (Thread 0x7fcd140b2780 (LWP 90066)):#0 0x00007fcd12b1d1a5 in __strncpy_sse2_unaligned () from /lib64/libc.so.6#1 0x000000000059102e in IpmServer::accept (this=0xf21168, new_handlesp=0x0) at Ipm.C:3406#2 0x0000000000589121 in IpmHandle::events (handlesp=0xf12088, new_eventspp=0x7ffc8e80a4e0, serversp=0xf120c8, new_handlespp=0x0, ms=100) at Ipm.C:613#3 0x000000000058940b in IpmHandle::events (handlesp=0xfc8ab8, vlistsp=0xfc8938, ms=100) at Ipm.C:645#4 0x000000000040ae2a in main (argc=1, argv=0x7ffc8e80e8e8) at srvmd.C:722RESOLUTION:vradmind is updated to properly handle getpeername(), which addresses this issue.* 4051887 (Tracking ID: 3956607)SYMPTOM:When removing a VxVM disk using the vxdg-rmdisk operation, the following error occurs while requesting a disk reclaim:VxVM vxdg ERROR V-5-1-0 Disk <device_name> is used by one or more subdisks which are pending to be reclaimed.Use "vxdisk reclaim <device_name>" to reclaim space used by these subdisks, and retry "vxdg rmdisk" command.Note: The reclamation operation is irreversible.
However, a core dump occurs when vxdisk-reclaim is executed.DESCRIPTION:This issue occurs due to a memory allocation failure in the disk-reclaim code, which fails to be detected and causes an invalid address to be referenced. Consequently, a core dump occurs.RESOLUTION:The disk-reclaim code is updated to handle memory allocation failures properly.* 4051889 (Tracking ID: 4019182)SYMPTOM:In case of a VxDMP configuration, an InfoScale server panics when applying a patch. The following stack trace is generated:unix:panicsys+0x40()unix:vpanic_common+0x78()unix:panic+0x1c()unix:mutex_enter() - frame recycledvxdmp(unloaded text):0x108b987c(jmpl?)()vxdmp(unloaded text):0x108ab380(jmpl?)(0)genunix:callout_list_expire+0x5c()genunix:callout_expire+0x34()genunix:callout_execute+0x10()genunix:taskq_thread+0x42c()unix:thread_start+4()DESCRIPTION:Some VxDMP functions create callouts. The VxDMP module may already be unloaded when a callout expires, which may cause the server to panic. VxDMP should cancel any previous timeout function calls before it unloads itself.RESOLUTION:VxDMP is updated to cancel any previous timeout function calls before unloading itself.* 4051896 (Tracking ID: 4010458)SYMPTOM:In a VVR environment, the rlink might inconsistently disconnect due to unexpected transactions, and the following message might get logged:"VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed"DESCRIPTION:In a VVR environment, a transaction is triggered when a change in the VxVM or the VVR objects needs to be persisted on disk. In some scenarios, a few unnecessary transactions get triggered in a loop, which causes multiple rlink disconnects, and the aforementioned message gets logged frequently. One such unexpected transaction occurs when the open/close command is issued for a volume as part of SmartIO caching. The vradmind daemon also issues some open/close commands on volumes as part of the I/O statistics collection, which triggers unnecessary transactions. Additionally, some unexpected transactions occur due to incorrect references to some temporary flags on the volumes.RESOLUTION:VVR is updated to first check whether SmartIO caching is configured on a system. If it is not configured, VVR disables SmartIO caching on the associated volumes. VVR is also updated to avoid the unexpected transactions that may occur due to incorrect references on certain temporary flags on the volumes.* 4051968 (Tracking ID: 4023390)SYMPTOM:Vxconfigd crashes as a disk contains an invalid privoffset (160), which is smaller than the minimum required offset (VTOC 265, GPT 208).DESCRIPTION:There may be disk label corruption or stale information residing in the disk header, which causes an unexpected label to be written.RESOLUTION:Add an assert when updating the CDS label to ensure that a valid privoffset is written to the disk header.* 4051985 (Tracking ID: 4031587)SYMPTOM:Filesystems are not mounted automatically on boot through systemd.DESCRIPTION:When the systemd service tries to start all the FS in /etc/fstab, the Veritas Volume Manager (VxVM) volumes are not started since vxconfigd is still not up. The VxVM volumes are started a little bit later in the boot process. Since the volumes are not available, the FS are not mounted automatically at boot.RESOLUTION:Registered the VxVM volumes with the udev daemon of Linux so that the FS would be mounted when the VxVM volumes are started and discovered by udev.
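As an informal illustration of the udev-based resolutions above (not part of the patch), the udev rules can be re-run manually after the volumes are created so that the devices under /dev/vx/dsk get registered with systemd; the volume path below is a hypothetical example:
# Re-run the udev rules for block devices so newly created VxVM volume nodes are registered with systemd.
udevadm trigger --subsystem-match=block
# Wait for the udev event queue to be processed before retrying the mount.
udevadm settle
# Inspect the udev database record for a VxVM volume device (hypothetical path).
udevadm info /dev/vx/dsk/testdg/vol01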
* 4053231 (Tracking ID: 4053230)SYMPTOM:RHEL 8.5 support is to be provided with IS 7.4.1 and 7.4.2DESCRIPTION:RHEL 8.5 ZDS support is being provided with IS 7.4.1 and 7.4.2RESOLUTION:VxVM packages are available with RHEL 8.5 compatibilityPatch ID: VRTSaslapm-7.4.1.3200* 4053234 (Tracking ID: 4053233)SYMPTOM:RHEL 8.5 support is to be provided with IS 7.4.1 and 7.4.2DESCRIPTION:RHEL 8.5 ZDS support is being provided with IS 7.4.1 and 7.4.2RESOLUTION:ASL-APM packages are available with RHEL 8.5 compatibilityPatch ID: VRTSvxvm-7.4.1.3100* 4017284 (Tracking ID: 4011691)SYMPTOM:Observed high CPU consumption on the VVR secondary nodes because of high pending IO load.DESCRIPTION:High replication-related IO load on the VVR secondary and the requirement of maintaining write order fidelity with limited memory pools created contention. This resulted in multiple VxVM kernel threads contending for shared resources, thereby increasing the CPU consumption.RESOLUTION:Limited the way in which VVR consumes its resources so that a high pending IO load would not result in high CPU consumption.* 4039240 (Tracking ID: 4027261)SYMPTOM:These log files have permissions rw-rw-rw, which are being flagged during customers' security scans.DESCRIPTION:There have been multiple concerns about world-writeable permissions on /var/VRTSvxvm/in.vxrsyncd.stderr and /var/adm/vx/vxdmpd.log. These log files have permissions rw-rw-rw, which are being flagged by customers' security scans.RESOLUTION:These are just log files with no sensitive information to leak, so they are not much of a security threat. However, they do not require world-write permissions and can be restricted to the root user. Hence, the permissions of these files have now been changed.* 4039242 (Tracking ID: 4008075)SYMPTOM:This issue is observed with the ASL changes for NVMe, in a reboot scenario. The machine was hitting a panic on every reboot, in a loop.DESCRIPTION:The panic was hit for such split BIOs. The root cause is that RHEL8 introduced a new field named __bi_remaining, which maintains the count of chained BIOs, and for every endio __bi_remaining gets atomically decreased in the bio_endio() function. While decreasing __bi_remaining, the OS checks that __bi_remaining 'should not be <= 0'; in our case __bi_remaining was always 0, and the OS BUG_ON was hit.RESOLUTION:>>> For scsi devices maxsize is 4194304,[ 26.919333] DMP_BIO_SIZE(orig_bio) : 16384, maxsize: 4194304[ 26.920063] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 4194304>>>and for NVMe devices maxsize is 131072[ 153.297387] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 131072[ 153.298057] DMP_BIO_SIZE(orig_bio) : 262144, maxsize: 131072* 4039244 (Tracking ID: 4010612)SYMPTOM:$ vxddladm set namingscheme=ebn lowercase=no - This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure like nvme0, nvme1, and so on. This means every NVMe/SSD disk name would be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on.DESCRIPTION:$ vxddladm set namingscheme=ebn lowercase=no - This issue is observed for NVMe and SSD devices, where every disk has a separate enclosure like nvme0, nvme1, and so on, which means every NVMe/SSD disk name would be hostprefix_enclosurname0_disk0, hostprefix_enclosurname1_disk0, and so on, e.g.smicro125_nvme0_0 <--- disk1smicro125_nvme1_0 <--- disk2With lowercase=no, the current code suppresses the suffix digit of the enclosure name, and hence multiple disks get the same name; udid_mismatch is shown because the UDID in the private region does not match the one in DDL. The DDL database shows wrong information because multiple disks get the same name.smicro125_nvme_0 <--- disk1 <<<<<<<----- here the suffix digit of the nvme enclosure is suppressedsmicro125_nvme_0 <--- disk2RESOLUTION:Append the suffix integer while making the da_name.
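The enclosure-based naming command quoted in the entry above can be exercised as in the following illustrative sketch; the verification commands (vxddladm get namingscheme, vxdisk list) are assumed from standard VxVM usage rather than taken from this patch:
# Switch to enclosure-based naming while preserving case (command quoted in the entry above).
vxddladm set namingscheme=ebn lowercase=no
# Show the naming scheme currently in effect (assumed verification step).
vxddladm get namingscheme
# List disks to confirm that each NVMe/SSD enclosure keeps its numeric suffix in the disk names.
vxdisk list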
* 4039249 (Tracking ID: 3984240)SYMPTOM:AIX builds were failing on AIX7.2 BE.DESCRIPTION:VxVM builds were failing on AIX7.2 BE.RESOLUTION:Made build environment and packaging changes so as to support VxVM builds on AIX7.2 BE.* 4039525 (Tracking ID: 4012763)SYMPTOM:An IO hang may happen in a VVR (Veritas Volume Replicator) configuration when the SRL overflows for one rlink while another rlink is in AUTOSYNC mode.DESCRIPTION:In VVR, if the SRL overflow happens for rlink (R1) and some other rlink (R2) is undergoing AUTOSYNC, then AUTOSYNC is aborted for R2, R2 gets detached, and DCM mode is activated on the R1 rlink. However, due to a race condition in the code handling the AUTOSYNC abort and the DCM activation in parallel, the DCM could not be activated properly, and the IO which caused the DCM activation gets queued incorrectly; this results in an IO hang.RESOLUTION:The code has been modified to fix the race issue in handling the AUTOSYNC abort and the DCM activation at the same time.* 4039526 (Tracking ID: 4034616)SYMPTOM:The vol_seclog_limit_ioload tunable needs to be enabled on Linux only.DESCRIPTION:The vol_seclog_limit_ioload tunable needs to be enabled on Linux only.RESOLUTION:The code changes are implemented to disable the tunable 'vol_seclog_limit_ioload' on non-Linux platforms.* 4040842 (Tracking ID: 4009353)SYMPTOM:After the command vxdmpadm settune dmp_native_support=on, the machine goes into maintenance mode. The issue is reproduced on a physical setup with a root LVM disk.DESCRIPTION:If there is a '-' in the native VG name, then the script picks up an inaccurate VG name.RESOLUTION:Code changes have been made to fix the issue.* 4044174 (Tracking ID: 4044072)SYMPTOM:I/Os fail for NVMe disks with 4K block size on the RHEL 8.4 kernel.DESCRIPTION:This issue occurs only in the case of disks of the 4K block size. I/Os complete successfully when disks of the 512 block size are used. If disks of the 4K block size are used, the following error messages are logged:[ 51.228908] VxVM vxdmp V-5-0-0 [Error] i/o error occurred (errno=0x206) on dmpnode 201/0x10[ 51.230070] blk_update_request: operation not supported error, dev nvme1n1, sector 240 op 0x0:(READ) flags 0x800 phys_seg 1 prio class 0[ 51.240861] blk_update_request: operation not supported error, dev nvme0n1, sector 0 op 0x0:(READ) flags 0x800 phys_seg 1 prio class 0RESOLUTION:After making the necessary code changes, no error messages are seen in dmesg, and the logical block size is set to 4096 (same as the physical block size).* 4045494 (Tracking ID: 4021939)SYMPTOM:The "vradmin syncvol" command fails and the following message is logged: "VxVM VVR vxrsync ERROR V-5-52-10206 no server host systems specified".DESCRIPTION:VVR sockets now bind without specifying IP addresses. This recent change causes issues when such interfaces are used to identify whether the associated remote host is the same as the localhost.
For example, in case of the "vradmin syncvol" command, VVR incorrectly assumes that the local host has been provided as the remote host, logs the error message, and exits.RESOLUTION:Code changes have been made to correctly identify the remote hosts in the "vradmin syncvol" command.* 4045502 (Tracking ID: 4045501)SYMPTOM:The following errors occur during the installation of the VRTSvxvm and the VRTSaslapm packages on CentOS 8.4 systems:~Verifying packages...Preparing packages...This release of VxVM is for Red Hat Enterprise Linux 8 and CentOS Linux 8.Please install the appropriate OS and then restart this installation of VxVM.error: %prein(VRTSvxvm-7.4.1.2500-RHEL8.x86_64) scriptlet failed, exit status 1error: VRTSvxvm-7.4.1.2500-RHEL8.x86_64: install failedcat: 9: No such file or directory~DESCRIPTION:The product installer reads the /etc/centos-release file to identify the Linux distribution. This issue occurs because the file has changed for CentOS 8.4.RESOLUTION:Code changes have been made to correctly identify the Linux distribution.Patch ID: VRTSaslapm-7.4.1.3100* 4039241 (Tracking ID: 4010667)SYMPTOM:NVMe devices are not detected by Veritas Volume Manager (VxVM) on RHEL 8.DESCRIPTION:VxVM uses the SCSI inquiry interface to detect storage devices. From RHEL8 onwards, the SCSI inquiry interface is not available for NVMe devices. Due to this, VxVM fails to detect the NVMe devices.RESOLUTION:Code changes have been done to use the NVMe IOCTL interface to detect the NVMe devices.* 4039527 (Tracking ID: 4018086)SYMPTOM:vxiod with ID as 128 was stuck with below stack: #2 [] vx_svar_sleep_unlock at [vxfs] #3 [] vx_event_wait at [vxfs] #4 [] vx_async_waitmsg at [vxfs] #5 [] vx_msg_send at [vxfs] #6 [] vx_send_getemapmsg at [vxfs] #7 [] vx_cfs_getemap at [vxfs] #8 [] vx_get_freeexts_ioctl at [vxfs] #9 [] vxportalunlockedkioctl at [vxportal] #10 [] vxportalkioctl at [vxportal] #11 [] vxfs_free_region at [vxio] #12 [] vol_ru_start_replica at [vxio] #13 [] vol_ru_start at [vxio] #14 [] voliod_iohandle at [vxio] #15 [] voliod_loop at [vxio]DESCRIPTION:With the SmartMove feature ON, it can happen that the vxiod with ID 128 starts replication while the RVG is in DCM mode; this vxiod then waits for the filesystem's response on whether a given region is used by the filesystem or not. The filesystem will trigger MDSHIP IO on the logowner. Due to a bug in the code, MDSHIP IO always gets queued in the vxiod with ID 128. Hence, a deadlock situation occurs.RESOLUTION:Code changes have been made to avoid handling MDSHIP IO in a vxiod whose ID is bigger than 127.Patch ID: VRTSvxvm-7.4.1.2900* 4013643 (Tracking ID: 4010207)SYMPTOM:System panic occurred with the below stack:native_queued_spin_lock_slowpath()queued_spin_lock_slowpath()_raw_spin_lock_irqsave()volget_rwspinlock()volkiodone()volfpdiskiodone()voldiskiodone_intr()voldmp_iodone()bio_endio()gendmpiodone()dmpiodone()bio_endio()blk_update_request()scsi_end_request()scsi_io_completion()scsi_finish_command()scsi_softirq_done()blk_done_softirq()__do_softirq()call_softirq()DESCRIPTION:As part of the IO statistics collection, the vxstat thread acquires a spinlock and tries to copy data to the user space. During the data copy, if some page fault happens, then the thread would relinquish the CPU and provide the same to some other thread. If the thread which gets scheduled on the CPU requests the same spinlock which the vxstat thread had acquired, then this results in a hard lockup situation.RESOLUTION:Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.
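For context, the statistics-collection path described in the entry above is typically driven by a command such as the following; this is an assumed example of standard vxstat usage, with a hypothetical disk group name:
# Collect volume I/O statistics for a hypothetical disk group 'testdg',
# sampling every 5 seconds for 10 iterations (assumed standard vxstat options).
vxstat -g testdg -i 5 -c 10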
* 4023762 (Tracking ID: 4020046)SYMPTOM:The following IO errors reported on VxVM sub-disks result in the DRL log getting detached without any SCSI errors detected.VxVM vxio V-5-0-1276 error on Subdisk [xxxx] while writing volume [yyyy][log] offset 0 length [zzzz]VxVM vxio V-5-0-145 DRL volume yyyy[log] is detachedDESCRIPTION:DRL plexes are detached because an atomic write flag (BIT_ATOMIC) was set on the BIO unexpectedly. The BIT_ATOMIC flag gets set on the bio only if the VOLSIO_BASEFLAG_ATOMIC_WRITE flag is set on the SUBDISK SIO and its parent MVWRITE SIO's sio_base_flags. When generating the MVWRITE SIO, its sio_base_flags were copied from a gio structure; because the gio structure memory isn't initialized, it may contain garbage values, hence the issue.RESOLUTION:Code changes have been made to fix the issue.* 4031342 (Tracking ID: 4031452)SYMPTOM:The add node operation is failing with the error "Error found while invoking '' in the new node, and rollback done in both nodes".DESCRIPTION:The stack showed a valid address for the pointer ptmap2, but a core was still generated, which suggested that it might be a double-free case. The issue lies in the freeing of a pointer.RESOLUTION:Added handling for such a case by doing a NULL assignment to pointers wherever they are freed.* 4033162 (Tracking ID: 3968279)SYMPTOM:Vxconfigd dumps core with SEGFAULT/SIGABRT on boot for an NVMe setup.DESCRIPTION:For an NVMe setup, vxconfigd dumps core while doing device discovery, as the data structure is accessed by multiple threads and can hit a race condition. For a sector size other than 512, a partition size mismatch is seen because the comparison is done with the partition size from devintf_getpart(), which is in the sector size of the disk. This can lead to a call to NVMe device discovery.RESOLUTION:Added a mutex lock while accessing the data structure so as to prevent the core.
Made calculations in terms of sector size of the disk to prevent the partition size mismatch.* 4033163 (Tracking ID: 3959716)SYMPTOM:System may panic with sync replication with VVR configuration, when VVR RVG is in DCM mode, with following panic stack:volsync_wait [vxio]voliod_iohandle [vxio]volted_getpinfo [vxio]voliod_loop [vxio]voliod_kiohandle [vxio]kthreadDESCRIPTION:With sync replication, if ACK for data message is delayed from the secondary site, the primary site might incorrectly free the message from the waiting queue at primary site.Due to incorrect handling of the message, a system panic may happen.RESOLUTION:Required code changes are done to resolve the panic issue.* 4033172 (Tracking ID: 3994368)SYMPTOM:During node 0 shutting down, vxconfigd daemon abort on node 1, and I/O write error happened on node 1DESCRIPTION:Examining the vxconfigd core we found that it entered into endless sigio processing which resulted in stack overflow and hence vxconfigd core dumped.After that vxconfigd restarted and ended up in dg disable scenario.RESOLUTION:We have done the appropriate code changes to handle the scenario of stack overflow.* 4033173 (Tracking ID: 4021301)SYMPTOM:Data corruption issue happened with the big size IO processed by Linux kernel IO split on RHEL8.DESCRIPTION:On RHEL8 or as of Linux kernel 3.13, it introduces some changes in Linux kernel block layer, new item of the bio iterator structure is used to represent the start offset of bio or bio vectors after the IO processed by Linux kernel IO split functions. Also, in recent version of vxfs, it can generate bio with larger size than the size limitation defined within Linux kernel block layer and VxVM, which lead the IO from vxfs could be split by Linux kernel. For such split IOs, VxVM does not take the new item of the bio iterator into account while process them, which caused the data is written to wrong position of volume/disk. Hence, data corruption.RESOLUTION:Code changes have been made to bypass the Linux kernel IO split functions, which seems redundant for VxVM IO processing.* 4033216 (Tracking ID: 3993050)SYMPTOM:vxdctl dumpmsg command gets stuck on large node cluster during reconfigurationDESCRIPTION:vxdctl dumpmsg command gets stuck on large node cluster during reconfiguration with following stack. This causes /var/adm/vx/voldctlmsg.log fileto get filled with old repeated messages in GBs consuming most of /var space.# pstack 210460voldctl_get_msgdump ()do_voldmsg ()main ()RESOLUTION:Code changes have been done to dump correct required messages to file* 4033515 (Tracking ID: 3984266)SYMPTOM:DCM flag in on the RVG (Replicated Volume Group) volume may get deactivated after a master switch in CVR (Clustered Volume Replicator) which may cause excessive RVG recovery after subsequent node reboots.DESCRIPTION:After master switch, the DCM flag needs to be updated on the new CVM master node. Due to a transaction initiated in parallel with master switch, the DCM flag was getting lost. 
This was causing excessive RVG recovery during next node reboots as the DCM write position was NOT updated for a long time.RESOLUTION:The code is fixed to handle the race in updating the DCM flag during a master switch.* 4035313 (Tracking ID: 4037915)SYMPTOM:Getting compilation errors due to RHEL's source code changesDESCRIPTION:While compiling the RHEL 8.4 kernel (4.18.0-304) the build compilation fails due to certain RH source changes.RESOLUTION:Following changes have been fixed to work with VxVM 7.4.1__bdevname - depreciatedSolution: Have a struct block_device and use bdevnameblkg_tryget_closest - placed under EXPORT_SYMBOL_GPLSolution: Locally defined the function where compilation error was hitsync_core - implicit declarationThe implementation of function sync_core() has been moved to header file sync_core.h, so including this header file fixes the error* 4036426 (Tracking ID: 4036423)SYMPTOM:Race condition while reading config file in docker volume plugin caused the issue in Flex Appliance.DESCRIPTION:If 2 simultaneous requests come for say MountVolume, then both of them update the global variables and it leads to wrong parameter valuesin some cases.RESOLUTION:Fix is to read this file only once during startup in init() function. If the user wants to change default values in the config file,then he will have to restart the vxinfoscale-docker service.* 4037331 (Tracking ID: 4037914)SYMPTOM:Crash while running VxVM cert.DESCRIPTION:While running the VM cert, there is a panic reported and theRESOLUTION:Setting bio and submitting to IOD layer in our own vxvm_gen_strategy() function* 4037810 (Tracking ID: 3977101)SYMPTOM:While testing on VM cert a core dump is produced, no functionality breaks were observedDESCRIPTION:A regression caused by read_sol_label using same return varible (ret) more than once. Added code to get sector size and used same return variable, the function was returning presence of label even if it does not existRESOLUTION:Code repositioned in vxpart.c to assign only in presence of label to return valuePatch ID: VRTSaslapm-7.4.1.2900* 4017906 (Tracking ID: 4017905)SYMPTOM:VSPEx is new array that we need to support. The current ASL is not able to claim it.DESCRIPTION:VSPEx is new array and current ASL is not able to claim it. So, we need to modify our code to support this array.RESOLUTION:Modified the asl code to support claim for VSPEx array.* 4022943 (Tracking ID: 4017656)SYMPTOM:This is new array and we need to add support for claiming XP8 arrays.DESCRIPTION:XP8 is new array and current ASL doesn't support it. So it will not be claimed with current ASL. This array support has been now added in the current ASL.RESOLUTION:Code changes to support XP8 array have been done.* 4037946 (Tracking ID: 4037958)SYMPTOM:ASLAPM package not compiling with RHEL 8.4DESCRIPTION:With the ongoing compilation issues for RHEL 8.4, the ASLAPM package was not compiling as the build wasn't proceedingRESOLUTION:Code fixes have been made for the compilation issues and now the ASLAPM package is built.Patch ID: VRTSvxvm-7.4.1.2800* 3984155 (Tracking ID: 3976678)SYMPTOM:vxvm-recover: cat: write error: Broken pipe error encountered in syslog multiple times.DESCRIPTION:Due to a bug in vxconfigbackup script which is started by vxvm-recover "cat : write error: Broken pipe" is encountered in syslog and it is reported under vxvm-recover. In vxconfigbackup code multiple subshells are created in a function call and the first subshell is for cat command. 
When a particular if condition is satistfied, return is called exiting the later subshells even when there is data to be read in the created cat subshell, which results in broken pipe error.RESOLUTION:Changes are done in VxVM code to handle the broken pipe error.* 4016283 (Tracking ID: 3973202)SYMPTOM:A VVR primary node may panic with below stack due to accessing the freed memory:nmcom_throttle_send()nmcom_sender()kthread ()kernel_thread()DESCRIPTION:After sending the data to VVR (Veritas Volume Replicator) secondary site, the code was accessing some variables for which the memory was already released due to the data ACK getting processed quite early. This was a rare race condition which may happen due to accessing the freed memory.RESOLUTION:Code changes have been made to avoid the incorrect memory access.* 4016291 (Tracking ID: 4002066)SYMPTOM:System panic with below stack when do reclaim:__wake_up_common_lock+0x7c/0xc0sbitmap_queue_wake_all+0x43/0x60blk_mq_tag_wakeup_all+0x15/0x30blk_mq_wake_waiters+0x3d/0x50blk_set_queue_dying+0x22/0x40blk_cleanup_queue+0x21/0xd0vxvm_put_gendisk+0x3b/0x120 [vxio]volsys_unset_device+0x1d/0x30 [vxio]vol_reset_devices+0x12b/0x180 [vxio]vol_reset_kernel+0x16c/0x220 [vxio]volconfig_ioctl+0x866/0xdf0 [vxio]DESCRIPTION:With recent kernel, it is expected that kernel will return the pre-allocated sense buffer. These sense buffer pointers are supposed to be unchanged across multiple uses of a request. They are pre-allocated and expected to be unchanged until such a time as the request memory is to be freed. DMP overwrote the original sense buffer, hence the issue.RESOLUTION:Code changes have been made to avoid tampering the pre-allocated sense buffer.* 4016768 (Tracking ID: 3989161)SYMPTOM:The system panic occurs because of hard lockup with the following stack:#13 [ffff9467ff603860] native_queued_spin_lock_slowpath at ffffffffb431803e#14 [ffff9467ff603868] queued_spin_lock_slowpath at ffffffffb497a024#15 [ffff9467ff603878] _raw_spin_lock_irqsave at ffffffffb4988757#16 [ffff9467ff603890] vollog_logger at ffffffffc105f7fa [vxio]#17 [ffff9467ff603918] vol_rv_update_childdone at ffffffffc11ab0b1 [vxio]#18 [ffff9467ff6039f8] volsiodone at ffffffffc104462c [vxio]#19 [ffff9467ff603a88] vol_subdisksio_done at ffffffffc1048eef [vxio]#20 [ffff9467ff603ac8] volkcontext_process at ffffffffc1003152 [vxio]#21 [ffff9467ff603b10] voldiskiodone at ffffffffc0fd741d [vxio]#22 [ffff9467ff603c40] voldiskiodone_intr at ffffffffc0fda92b [vxio]#23 [ffff9467ff603c80] voldmp_iodone at ffffffffc0f801d0 [vxio]#24 [ffff9467ff603c90] bio_endio at ffffffffb448cbec#25 [ffff9467ff603cc0] gendmpiodone at ffffffffc0e4f5ca [vxdmp]... ...#50 [ffff9497e99efa60] do_page_fault at ffffffffb498d975#51 [ffff9497e99efa90] page_fault at ffffffffb4989778#52 [ffff9497e99efb40] conv_copyout at ffffffffc10005da [vxio]#53 [ffff9497e99efbc8] conv_copyout at ffffffffc100044e [vxio]#54 [ffff9497e99efc50] volioctl_copyout at ffffffffc1032db3 [vxio]#55 [ffff9497e99efc80] vol_get_logger_data at ffffffffc105e4ce [vxio]#56 [ffff9497e99efcf8] voliot_ioctl at ffffffffc105e66b [vxio]#57 [ffff9497e99efd78] volsioctl_real at ffffffffc10aee82 [vxio]#58 [ffff9497e99efe50] vols_ioctl at ffffffffc0646452 [vxspec]#59 [ffff9497e99efe70] vols_unlocked_ioctl at ffffffffc06464c1 [vxspec]#60 [ffff9497e99efe80] do_vfs_ioctl at ffffffffb4462870#61 [ffff9497e99eff00] sys_ioctl at ffffffffb4462b21DESCRIPTION:Vxio kernel sends a signal to vxloggerd to flush the log as it is almost full. 
Vxloggerd calls into the vxio kernel to copy the log buffer out. As vxio copies the log data from kernel to user space while holding a spinlock, if a page fault occurs during the copy-out, a hard lockup and panic occur.RESOLUTION:Code changes have been made to fix the problem.* 4017194 (Tracking ID: 4012681)SYMPTOM:If the vradmind process terminates due to some reason, it is not properly restarted by the RVG agent of VCS.DESCRIPTION:The RVG (Replicated Volume Group) agent of VCS (Veritas Cluster Server) restarts the vradmind process if it gets killed or terminated due to some reason; this was not working properly on systemd-enabled platforms like RHEL-7. On the systemd-enabled platforms, after the vradmind process dies, the vras-vradmind service used to stay in the active/running state; due to this, even after the RVG agent issued a command to start the vras-vradmind service, the vradmind process was not getting started.RESOLUTION:The code is modified to fix the parameters for the vras-vradmind service, so that the service status will change to failed/faulted if the vradmind process gets killed. The service can be manually started later, or the RVG agent of VCS can start the service, which will start the vradmind process as well.* 4017502 (Tracking ID: 4020166)SYMPTOM:Build issue because of "struct request": error: struct request has no member named next_rq. Linux has deprecated the member next_rq.DESCRIPTION:The issue was observed due to changes in the OS structure.RESOLUTION:Code changes are done in the required files.* 4019781 (Tracking ID: 4020260)SYMPTOM:While enabling the DMP native support tunable dmp_native_support on CentOS 8, the below mentioned error was observed:[root@dl360g9-4-vm2 ~]# vxdmpadm settune dmp_native_support=onVxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groupsVxVM vxdmpadm ERROR V-5-1-15686 The following vgs could not be migrated as error in bootloader configuration file cl[root@dl360g9-4-vm2 ~]#DESCRIPTION:The issue was observed due to missing code check-ins for CentOS 8 in the required files.RESOLUTION:Changes are done in the required files for DMP native support on CentOS 8.Patch ID: VRTSvxvm-7.4.1.2700* 3984163 (Tracking ID: 3978216)SYMPTOM:A 'Device mismatch warning' is seen on boot when DMP native support is enabled with an LVM snapshot of the root disk present.DESCRIPTION:When we enable the DMP (Dynamic Multipathing) Native Support feature on a system having an LVM snapshot of the root disk present, "Device mismatch" warning messages are seen on every reboot in the boot.log file. The messages appear because LVM is trying to access the LV using the information present in the lvm.cache file, which is stale. Because of accessing the stale file, the warning messages are seen on reboot.RESOLUTION:The fix is to remove the LVM cache file during system shutdown as part of VxVM shutdown.* 4010517 (Tracking ID: 3998475)SYMPTOM:Data corruption is observed and service groups went into partial state.DESCRIPTION:In VxVM, the fsck log replay initiated a read of 64 blocks that was getting split across 2 stripes of the stripe-mirror volume. So, we had 2 read I/Os of 48 blocks (first split I/O) and 16 blocks (second split I/O). Since the volume was in RWBK mode, this read I/O was stabilized. Upon completion of the read I/O at the subvolume level, this I/O was unstabilized and the contents of the stable I/O (stablekio) were copied to the original I/O (origkio).
It was observed that the data was always correct till the subvolume level but at the top level plex and volume level, it was incorrect (printed checksum in vxtrace output for this).The reason for this was during unstabilization, we do volkio_to_kio_copy() which copies the contents from stable kio to orig kio (since it is a read).As the orig kio was an unmapped PHYS I/O, in Solaris 11.4, the contents will be copied out using bp_copyout() from volkiomem_kunmap(). The volkiomem_seek() and volkiomem_next_segment() allocates pagesize (8K) kernel buffer (zero'ed out) where the contents will be copied to.When the first split I/O completes unstabilization before the second split I/O, this issue was not seen. However, if the second split I/O completed before the first splt I/O then this issue was seen. Here, in the last iteration of the volkio_to_kio_copy(), the data copied was less than the allocated region size. We allocate 8K region size whereas the data copied from stablekio was less than 8K. Later, during kunmap(), we do a bp_copyout() of alloocated size i.e. 8K. This caused copyout of extra regions that were zero'ed out. Hence the data corruption.RESOLUTION:Now we do a bp_copyout() of the right length i.e. of the copied size instead of the allocated region size.* 4010996 (Tracking ID: 4010040)SYMPTOM:Configuring VRTSvxvm package creates a world writable file: "/etc/vx/.vxvvrstatd.lock".DESCRIPTION:VVR statistics daemon (vxvvrstad) creates this file on startup. The umask for this daemon was not set correctly resulting in creation of the world writable file.RESOLUTION:VVR daemon is updated to to set the umask properly.* 4011027 (Tracking ID: 4009107)SYMPTOM:CA chain certificate verification fails in VVR when the number of intermediate certificates is greater than the depth. So, we get error in SSL initialization.DESCRIPTION:CA chain certificate verification fails in VVR when the number of intermediate certificates is greater than the depth. SSL_CTX_set_verify_depth() API decides the depth of certificates (in /etc/vx/vvr/cacert file) to be verified, which is limited to count 1 in code. Thus intermediate CA certificate present first in /etc/vx/vvr/cacert (depth 1 CA/issuer certificate for server certificate) could be obtained and verified during connection, but root CA certificate (depth 2 higher CA certificate) could not be verified while connecting and hence the error.RESOLUTION:Removed the call of SSL_CTX_set_verify_depth() API so as to handle the depth automatically.* 4011097 (Tracking ID: 4010794)SYMPTOM:Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster with below stack while there were storage activities going on.dmp_start_cvm_local_failover+0x118()dmp_start_failback+0x398()dmp_restore_node+0x2e4()dmp_revive_paths+0x74()gen_update_status+0x55c()dmp_update_status+0x14()gendmpopen+0x4a0()DESCRIPTION:It could happen dmpnode's current primary path became invalid when disks were attached/detached in a cluster. DMP accessed the current primary path without doing sanity check. Hence system panic due to an invalid pointer.RESOLUTION:Code changes have been made to avoid accessing a invalid pointer.* 4011105 (Tracking ID: 3972433)SYMPTOM:IO hang might be seen while issuing heavy IO load on volumes having cache objects.DESCRIPTION:While issuing heavy IO on volumes having cache objects, the IO on cache volumes may stall due to locking(region lock) involved for overlapping IO requests on the same cache object. 
When appropriate locks are granted to IOs, all the IOs were getting processed in serial fashion through single VxVM IO daemon thread. This serial processing was causing slowness, resulting in a IO hang like situation and application timeouts.RESOLUTION:The code changes are done to properly perform multi-processing of the cache volume IOs.Patch ID: VRTSvxvm-7.4.1.2200* 3992902 (Tracking ID: 3975667)SYMPTOM:NMI watchdog: BUG: soft lockupDESCRIPTION:When flow control on ioshipping channel is set there is window in code where vol_ioship_sender thread can go in tight loop.This causes softlockupRESOLUTION:Relinquish CPU to schedule other process. vol_ioship_sender() thread will restart after some delay.* 3997906 (Tracking ID: 3987937)SYMPTOM:VxVM command hang happens when heavy IO load performed on VxVM volume with snapshot, IO memory pool full is also observed.DESCRIPTION:It's a deadlock situation occurring with heavy IOs on volume with snapshots. When a multistep SIO A acquired ilock and it's child MV write SIO is waiting for memory pool which is full, another multistep SIO B has acquired memory and waiting for the ilock held by multistep SIO A.RESOLUTION:Code changes have been made to fix the issue.* 4000388 (Tracking ID: 4000387)SYMPTOM:Existing VxVM module fails to load on Rhel 8.2DESCRIPTION:RHEL 8.2 is a new release and had few KABI changes on which VxVM compilation breaks .RESOLUTION:Compiled VxVM code against 8.2 kernel and made changes to make it compatible.* 4001399 (Tracking ID: 3995946)SYMPTOM:CVM Slave unable to join cluster with below error:VxVM vxconfigd ERROR V-5-1-11092 cleanup_client: (Memory allocation failure) 12VxVM vxconfigd ERRORV-5-1-11467 kernel_fail_join(): Reconfiguration interrupted: Reason is retry to add a node failed (13, 0)DESCRIPTION:vol_vvr_tcp_keepalive and vol_vvr_tcp_timeout are introduced in 7.4.1 U1 for Linux only. For other platforms like Solaris and AIX, it isn't supported. Due a bug in code, those two tunables were exposed,and cvm couldn't get those two tunables info from master node. Hence the issue.RESOLUTION:Code change has been made to hide vol_vvr_tcp_keepalive and vol_vvr_tcp_timeout for other platforms like Solaris and AIX.* 4001736 (Tracking ID: 4000130)SYMPTOM:System panic when DMP co-exists with EMC PP on rhel8/sles12sp4 with below stacks:#6 [] do_page_fault #7 [] page_fault [exception RIP: dmp_kernel_scsi_ioctl+888]#8 [] dmp_kernel_scsi_ioctl at [vxdmp]#9 [] dmp_dev_ioctl at [vxdmp]#10 [] do_passthru_ioctl at [vxdmp]#11 [] dmp_tur_temp_pgr at [vxdmp]#12 [] dmp_pgr_set_temp_key at [vxdmp]#13 [] dmpioctl at [vxdmp]#14 [] dmp_ioctl at [vxdmp]#15 [] blkdev_ioctl #16 [] block_ioctl #17 [] do_vfs_ioctl#18 [] ksys_ioctl Or #8 [ffff9c3404c9fb40] page_fault #9 [ffff9c3404c9fbf0] dmp_kernel_scsi_ioctl at [vxdmp]#10 [ffff9c3404c9fc30] dmp_scsi_ioctl at [vxdmp]#11 [ffff9c3404c9fcb8] dmp_send_scsireq at [vxdmp]#12 [ffff9c3404c9fcd0] dmp_do_scsi_gen at [vxdmp]#13 [ffff9c3404c9fcf0] dmp_pr_send_cmd at [vxdmp]#14 [ffff9c3404c9fd80] dmp_pr_do_read at [vxdmp]#15 [ffff9c3404c9fdf0] dmp_pgr_read at [vxdmp]#16 [ffff9c3404c9fe20] dmpioctl at [vxdmp]#17 [ffff9c3404c9fe30] dmp_ioctl at [vxdmp]DESCRIPTION:Upwards 4.10.17, there is no such guarantee from the block layer or other drivers to ensure that the cmd pointer at least points to __cmd, when initialize a SCSI request. 
DMP directly accesses the cmd pointer after getting the SCSI request from the underlying layer without a sanity check, hence the issue.RESOLUTION:Code changes have been made to do a sanity check when initializing a SCSI request.* 4001745 (Tracking ID: 3992053)SYMPTOM:Data corruption may happen with layered volumes due to some data not being re-synced while attaching a plex. This is due to inconsistent data across the plexes after attaching a plex in layered volumes.DESCRIPTION:When a plex is detached in a layered volume, the regions which are dirty/modified are tracked in the DCO (Data change object) map. When the plex is attached back, the data corresponding to these dirty regions is re-synced to the plex being attached. There was a defect in the code due to which some particular regions were NOT re-synced when a plex is attached. This issue happens only when the offset of the sub-volume is NOT aligned with the region size of the DCO (Data change object) volume.RESOLUTION:The code defect is fixed to correctly copy the data for dirty regions when the sub-volume offset is NOT aligned with the DCO region size.* 4001746 (Tracking ID: 3999520)SYMPTOM:VxVM commands may hang with the below stack when a user tries to start or stop the DMP IO statistics collection when the DMP iostat tunable (dmp_iostats_state) was disabled earlier.schedule()rwsem_down_failed_common()rwsem_down_write_failed()call_rwsem_down_write_failed()dmp_reconfig_write_lock()dmp_update_reclaim_attr()gendmpioctl()dmpioctl()DESCRIPTION:When the DMP iostat tunable (dmp_iostats_state) is disabled and a user tries to start (vxdmpadm iostat start) or stop (vxdmpadm iostat stop) the DMP iostat collection, then the thread which collects the IO statistics was exiting without releasing a lock. Due to this, further VxVM commands were getting hung while waiting for the lock.RESOLUTION:The code is changed to correctly release the lock when the tunable 'dmp_iostats_state' is disabled.
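The commands referenced in the entry above can be combined as in the following hedged sketch; the 'enabled' value for dmp_iostats_state is an assumption and should be confirmed against the vxdmpadm documentation:
# Check and, if needed, enable the DMP iostat tunable before collecting statistics.
vxdmpadm gettune dmp_iostats_state
vxdmpadm settune dmp_iostats_state=enabled
# Start and later stop DMP I/O statistics collection (commands quoted in the entry above).
vxdmpadm iostat start
vxdmpadm iostat stop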
* 4001748 (Tracking ID: 3991580)SYMPTOM:IO and VxVM command hangs may happen if IO is performed on both the source and snapshot volumes.DESCRIPTION:It is a deadlock situation occurring with heavy IOs on both the source volume and the snapshot volume. SIO (a), USER_WRITE, on the snap volume, held ILOCK (a), waiting for memory (full). SIO (b), PUSHED_WRITE, on the snap volume, waiting for ILOCK (a). SIO (c), parent of SIO (b), USER_WRITE, on the source volume, held ILOCK (c) and memory, waiting for SIO (b) to be done.RESOLUTION:Use a separate pool for IO writes on the snapshot volume to resolve the issue.* 4001750 (Tracking ID: 3976392)SYMPTOM:Memory corruption might happen in VxVM (Veritas Volume Manager) while processing a plex detach request.DESCRIPTION:During the processing of a plex detach request, the VxVM volume is operated in a serial manner. During serialization, it might happen that the current thread has queued the I/O and is still accessing it. In the meantime, the same I/O is picked up by one of the VxVM threads for processing. The processing of the I/O is completed and the I/O is deleted after that. The current thread is still accessing the same memory which was already deleted, which might lead to memory corruption.RESOLUTION:The fix is to not use the same I/O in the current thread once the I/O is queued as part of serialization, with the processing done before queuing the I/O.* 4001752 (Tracking ID: 3969487)SYMPTOM:Data corruption observed with layered volumes after resynchronisation when a mirror of the volume is detached and attached back.DESCRIPTION:In case of a layered volume, if the IO fails at the underlying subvolume layer, then before doing the mirror detach the top volume in the layered volume has to be serialized (IO's run in a serial fashion). When the volume is serialized, IO's on the volume are directly tracked in the detach map of the DCO (Data Change Object). During this time period, if some new IO's occur on the volume, then those IO's would not be tracked as part of the detach map inside the DCO, since detach map tracking is not yet enabled by the failed IO's. The new IO's which are not being tracked in the detach map would be missed when the plex resynchronisation happens later, which leads to corruption.RESOLUTION:The fix is to delay the unserialization of the volume till the point the failed IO's actually detach the plex and enable detach map tracking. This makes sure new IO's are tracked as part of the detach map of the DCO.* 4001755 (Tracking ID: 3980684)SYMPTOM:Kernel panic in voldrl_hfind_an_instant while accessing agenode with stack[exception RIP: voldrl_hfind_an_instant+49]#11 voldrl_find_mark_agenodes#12 voldrl_log_internal_30#13 voldrl_log_30#14 volmv_log_drlfmr#15 vol_mv_write_start#16 volkcontext_process#17 volkiostart#18 vol_linux_kio_start#19 vxiostrategy...DESCRIPTION:Agenode corruption is hit in case of the use of the per-file sequential hint. The Agenode's linked list is corrupted as a pointer was not set to NULL when reusing the agenode.RESOLUTION:Changes are done in the VxVM code to avoid Agenode list corruption.* 4001757 (Tracking ID: 3969387)SYMPTOM:In an FSS (Flexible Storage Sharing) environment, the system might panic with the below stack:vol_get_ioscb [vxio]vol_ecplex_rhandle_resp [vxio]vol_ioship_rrecv [vxio]gab_lrrecv [gab]vx_ioship_llt_rrecv [llt]vx_ioship_process_frag_packets [llt]vx_ioship_process_data [llt]vx_ioship_recv_data [llt]DESCRIPTION:In a certain scenario, it may happen that the request got purged and the response came after that. Then the system might panic due to accessing the freed resource.RESOLUTION:Code changes have been made to fix the issue.Patch ID: VRTSvxvm-7.4.1.1600* 3984139 (Tracking ID: 3965962)SYMPTOM:No option to disable auto-recovery when a slave node joins the CVM cluster.DESCRIPTION:In a CVM environment, when the slave node joins the CVM cluster, it is possible that the plexes may not be in sync. In such a scenario, auto-recovery is triggered for the plexes. If a node is stopped using the hastop -all command when the auto-recovery is in progress, the vxrecover operation may hang. An option to disable auto-recovery is not available.RESOLUTION:The VxVM module is updated to allow administrators to disable auto-recovery when a slave node joins a CVM cluster. A new tunable, auto_recover, is introduced. By default, the tunable is set to 'on' to trigger the auto-recovery. Set its value to 'off' to disable auto-recovery. Use the vxtune command to set the tunable.
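A minimal sketch of using the new tunable, assuming the usual 'vxtune <tunable> [value]' syntax; the exact invocation may differ by release, so verify it against the vxtune man page:
# Display the current value of the auto_recover tunable (assumed syntax).
vxtune auto_recover
# Disable automatic plex recovery when a slave node joins the CVM cluster.
vxtune auto_recover off
# Re-enable auto-recovery once maintenance is complete.
vxtune auto_recover on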
* 3984731 (Tracking ID: 3984730)

SYMPTOM:
VxVM logs warning messages when the VxDMP module is stopped or removed for the first time after the system is rebooted.

DESCRIPTION:
VxVM logs these warnings because the QUEUE_FLAG_REGISTERED and QUEUE_FLAG_INIT_DONE queue flags are not cleared while registering the dmpnode. The following stack is reported after stopping/removing VxDMP for the first time after every reboot:
kernel: WARNING: CPU: 28 PID: 33910 at block/blk-core.c:619 blk_cleanup_queue+0x1a3/0x1b0
kernel: CPU: 28 PID: 33910 Comm: modprobe Kdump: loaded Tainted: P OE ------------ 3.10.0-957.21.3.el7.x86_64 #1
kernel: Hardware name: HPE ProLiant DL380 Gen10/ProLiant DL380 Gen10, BIOS U30 10/02/2018
kernel: Call Trace:
kernel: [<ffffffff9dd63107>] dump_stack+0x19/0x1b
kernel: [<ffffffff9d697768>] __warn+0xd8/0x100
kernel: [<ffffffff9d6978ad>] warn_slowpath_null+0x1d/0x20
kernel: [<ffffffff9d944b03>] blk_cleanup_queue+0x1a3/0x1b0
kernel: [<ffffffffc0cd1f3f>] dmp_unregister_disk+0x9f/0xd0 [vxdmp]
kernel: [<ffffffffc0cd7a08>] dmp_remove_mp_node+0x188/0x1e0 [vxdmp]
kernel: [<ffffffffc0cd7b45>] dmp_destroy_global_db+0xe5/0x2c0 [vxdmp]
kernel: [<ffffffffc0cde6cd>] dmp_unload+0x1d/0x30 [vxdmp]
kernel: [<ffffffffc0d0743a>] cleanup_module+0x5a/0xd0 [vxdmp]
kernel: [<ffffffff9d71692e>] SyS_delete_module+0x19e/0x310
kernel: [<ffffffff9dd75ddb>] system_call_fastpath+0x22/0x27
kernel: --[ end trace fd834bc7817252be ]--

RESOLUTION:
The queue flags are modified to handle this situation and to not log such warning messages.

* 3988238 (Tracking ID: 3988578)

SYMPTOM:
Encrypted volume creation fails on RHEL 8.

DESCRIPTION:
On the RHEL 8 platform, python3 gets installed by default. However, the Python script that is used to create encrypted volumes and to communicate with the Key Management Service (KMS) is not compatible with python3. Additionally, an 'unsupported protocol' error is reported for the SSL protocol SSLv23 that is used in the PyKMIP library to communicate with the KMS.

RESOLUTION:
The Python script is made compatible with both python2 and python3. A new option, ssl_version, is made available in the /etc/vx/enc-kms-kmip.conf file to represent the SSL version to be used by the KMIP client. The 'unsupported protocol' error is addressed by using the protocol version PROTOCOL_TLSv1. The following is an example of the sample configuration file:
[client]
host = kms-enterprise.example.com
port = 5696
keyfile = /etc/vx/client-key.pem
certfile = /etc/vx/client-crt.pem
cacerts = /etc/vx/cacert.pem
ssl_version = PROTOCOL_TLSv1

* 3988843 (Tracking ID: 3989796)

SYMPTOM:
The existing package fails to load on a RHEL 8.1 setup.

DESCRIPTION:
RHEL 8.1 is a new release, and hence the VxVM module is compiled with this new kernel, along with a few other changes.

RESOLUTION:
Changes have been made to make VxVM compatible with RHEL 8.1.

Patch ID: VRTSsfmh-vom-HF0741600

* 4176930 (Tracking ID: 4176927)

SYMPTOM:
NA

DESCRIPTION:
NA

RESOLUTION:
NA

Patch ID: VRTScavf-7.4.1.3700

* 4056567 (Tracking ID: 4054462)

SYMPTOM:
In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the CVMVolDg agent does not rescan all the required device paths in the case of a multi-pathing configuration. The vxdg import operation fails because the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
This hotfix addresses the issue by providing two new resource-level attributes for the CVMVolDg agent; the sketch after this entry shows one way to apply them to an existing resource.
- The ScanDisks attribute specifies whether to perform a selective device scan for all the disk paths that are associated with a VxVM disk group. When ScanDisks is set to 1, the agent performs a selective device scan. Before attempting to import a hardware clone or a hardware replicated device, the VxVM and DMP attributes of a disk are refreshed. ScanDisks is set to 0 by default, which indicates that a selective device scan is not performed. However, even when ScanDisks is set to 0, if the disk group fails during the first import attempt, the agent checks the error string. If the string contains the text HARDWARE_MIRROR, the agent performs a selective device scan to increase the chances of a successful import.
- The DGOptions attribute specifies options to be used with the vxdg import command that is executed by the agent to bring the CVMVolDg resource online.
Sample resource configuration for hardware replicated shared disk groups:
CVMVolDg tc_dg (
    CVMDiskGroup = datadg
    CVMVolume = { vol01 }
    CVMActivation = sw
    CVMDeportOnOffline = 1
    ClearClone = 1
    ScanDisks = 1
    DGOptions = "-t -o usereplicatedev=on"
    )
NOTE: The new "-o usereplicatedev=on" vxdg option is provided with VxVM hot-fixes from 7.4.1.x onwards.
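As a sketch of how the new attributes could be applied to an already configured resource with the standard VCS commands (the resource name tc_dg is taken from the sample configuration above; the values are illustrative only, and the exact quoting accepted by hares may differ on your system):

# haconf -makerw
# hares -modify tc_dg ScanDisks 1
# hares -modify tc_dg DGOptions "-t -o usereplicatedev=on"
# haconf -dump -makero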
* 4172870 (Tracking ID: 4074274)

SYMPTOM:
DR test and failover activity might not succeed for hardware-replicated disk groups, and EMC SRDF hardware-replicated disk groups fail with a "PR operation failed" message.

DESCRIPTION:
In the case of hardware-replicated disks such as EMC SRDF, failover of disk groups might not succeed automatically and manual intervention might be needed. After failover, disks at the new primary site have the 'udid_mismatch' flag, which needs to be updated manually for a successful failover. In addition, the SCSI-3 error message needs to be changed to "PR operation failed".

RESOLUTION:
For DMP environments, the VxVM and DMP extended attributes need to be refreshed by using 'vxdisk scandisks' prior to the import. VxVM also provides a new vxdg import option, '-o usereplicatedev=only', with DMP. This option selects only the hardware-replicated disks during the LUN selection process (see the sketch after this entry).
On pre-8.0.x VxVM, the import fails with "SCSI-3 PR operation failed" as shown below, and the VRTScavf (CVM) 7.4.2.2201 agent was enhanced on AIX to handle these EMC SRDF failures.
Sample syntax:
# /usr/sbin/vxdg -s -o groupreserve=VCS -o clearreserve -cC -t import AIXSRDF
VxVM vxdg ERROR V-5-1-19179 Disk group AIXSRDF: import failed: SCSI-3 PR operation failed
New 8.0.x VxVM error message format:
2023/09/27 12:44:02 VCS INFO V-16-20007-1001 CVMVolDg:<RESOURCE-NAME>:online:VxVM vxdg ERROR V-5-1-19179 Disk group <DISKGROUP-NAME>: import failed: PR operation failed
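A rough sketch of the manual flow described in the resolution above, assuming a hardware-replicated disk group named srdfdg (an example name only; confirm the option spelling against the vxdg(1M) manual page shipped with this patch level):

# vxdisk scandisks
# vxdg -t -o usereplicatedev=only import srdfdg

The scandisks step refreshes the VxVM and DMP extended attributes before the import performs LUN selection.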
* 4172873 (Tracking ID: 4079285)

SYMPTOM:
The CVMVolDg resource takes many minutes to come online with CPS fencing.

DESCRIPTION:
When fencing is configured as CP server based rather than disk based SCSI-3 PR, disk groups are still imported with SCSI-3 reservations. This causes SCSI-3 PR errors during the import, and the import takes a long time due to retries.

RESOLUTION:
Code changes have been made to import the disk group without SCSI-3 reservations when SCSI-3 PR is disabled.

* 4172875 (Tracking ID: 4088479)

SYMPTOM:
An EMC SRDF managed disk group import fails with the error shown below. This failure is specific to EMC storage on AIX with fencing.

DESCRIPTION:
The EMC SRDF managed disk group import fails with the below error. This failure is specific to EMC storage on AIX with fencing.
# /usr/sbin/vxdg -o groupreserve=VCS -o clearreserve -c -tC import srdfdg
VxVM vxdg ERROR V-5-1-19179 Disk group srdfdg: import failed: SCSI-3 PR operation failed

RESOLUTION:
06/16 14:31:49: VxVM vxconfigd DEBUG V-5-1-7765 /dev/vx/rdmp/emc1_0c93: pgr_register: setting pgrkey: AVCS
06/16 14:31:49: VxVM vxconfigd DEBUG V-5-1-5762 prdev_open(/dev/vx/rdmp/emc1_0c93): open failure: 47  // #define EWRPROTECT 47 /* Write-protected media */
06/16 14:31:49: VxVM vxconfigd ERROR V-5-1-18444 vold_pgr_register: /dev/vx/rdmp/emc1_0c93: register failed: errno:47
Make sure the disk supports SCSI-3 PR. AIX differentiates between read-write and read-only opens. When the underlying device state has changed, the device open failed because of the pending open count (dmp_cache_open feature).

Patch ID: VRTScavf-7.4.1.3400

* 4056567 (Tracking ID: 4054462)

SYMPTOM:
In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the CVMVolDg agent does not rescan all the required device paths in the case of a multi-pathing configuration. The vxdg import operation fails because the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
This hotfix addresses the issue by providing two new resource-level attributes for the CVMVolDg agent.
- The ScanDisks attribute specifies whether to perform a selective device scan for all the disk paths that are associated with a VxVM disk group. When ScanDisks is set to 1, the agent performs a selective device scan. Before attempting to import a hardware clone or a hardware replicated device, the VxVM and DMP attributes of a disk are refreshed. ScanDisks is set to 0 by default, which indicates that a selective device scan is not performed. However, even when ScanDisks is set to 0, if the disk group fails during the first import attempt, the agent checks the error string. If the string contains the text HARDWARE_MIRROR, the agent performs a selective device scan to increase the chances of a successful import.
- The DGOptions attribute specifies options to be used with the vxdg import command that is executed by the agent to bring the CVMVolDg resource online.
Sample resource configuration for hardware replicated shared disk groups:
CVMVolDg tc_dg (
    CVMDiskGroup = datadg
    CVMVolume = { vol01 }
    CVMActivation = sw
    CVMDeportOnOffline = 1
    ClearClone = 1
    ScanDisks = 1
    DGOptions = "-t -o usereplicatedev=on"
    )

INSTALLATION PRE-REQUISITE: 7.4.1GA + 7.4.1.3100 + 7.4.1.3200 + 7.4.1.3300

Supported Platforms: RHEL8.6

INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
------------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.
To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8_x86_64-Patch-7.4.1.3500.tar.gz to /tmp
2. Untar infoscale-rhel8_x86_64-Patch-7.4.1.3500.tar.gz to /tmp/patch
   # mkdir /tmp/patch
   # cd /tmp/patch
   # gunzip /tmp/infoscale-rhel8_x86_64-Patch-7.4.1.3500.tar.gz
   # tar xf /tmp/infoscale-rhel8_x86_64-Patch-7.4.1.3500.tar
3. Install the patch (note again that the installation of this P-Patch will cause downtime):
   # pwd
   /tmp/patch
   # ./installVRTSinfoscale741P3500 [<host1> <host2>...]

You can also install this patch together with the (7.4.1GA + 7.4.1.3100 + 7.4.1.3200 + 7.4.1.3300) base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script with the -patch_path option, where each -patch_path should point to a patch directory (a filled-in example with hypothetical paths follows):
   # ./installer -patch_path [<path to 7.4.1.3100>] -patch_path [<path to 7.4.1.3200>] -patch_path [<path to 7.4.1.3300>] -patch_path [<path to this patch>] [<host1> <host2>...]
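For illustration only, a filled-in Install Bundles invocation might look like the following; the media directory, patch paths, and host names are hypothetical placeholders:

# cd /install/InfoScale_7.4.1
# ./installer -patch_path /tmp/patches/7.4.1.3100 -patch_path /tmp/patches/7.4.1.3200 -patch_path /tmp/patches/7.4.1.3300 -patch_path /tmp/patches/7.4.1.3500 node1 node2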
Install the patch manually:
---------------------------
Manual installation is not recommended.

REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.

SPECIAL INSTRUCTIONS
--------------------
NONE

OTHERS
------
NONE

Applies to the following product releases

InfoScale Availability 7.4.1

Release date: 2019-02-01

End of standard support: 2024-07-31

Sustaining support starts: 2026-07-31

End of support life: To be determined

InfoScale Storage 7.4.1

Release date: 2019-02-01

End of standard support: 2024-07-31

Sustaining support starts: 2026-07-31

End of support life: To be determined

InfoScale Foundation 7.4.1

Release date: 2019-02-01

End of standard support: 2024-07-31

Sustaining support starts: 2026-07-31

End of support life: To be determined

InfoScale Enterprise 7.4.1

Release date: 2019-02-01

End of standard support: 2024-07-31

Sustaining support starts: 2026-07-31

End of support life: To be determined
