Copyright © 2010-2014 Dot Hill Systems Corp.
March 2014
This package delivers firmware for AssuredSAN 3000 series array controllers and includes enhanced features or fixes to issues found during use and additional qualification testing.
Beginning with TS230 and continuing with this release, firmware for all AssuredSAN 3000 series controllers was merged into a common firmware stream that is used for all host protocols.
The latest, approved companion versions of drive enclosure firmware are included in this firmware package. When firmware on 3000 series controller enclosures is updated using this firmware package, firmware on cascaded drive enclosures is also updated. For a list of supported companion drive enclosure firmware, see “Additional devices.”
The following table shows operating system support by AssuredSAN 3000 series controller model (host-interface protocol):

| Operating system | FC | FC/iSCSI | SAS | 10GbE iSCSI | 1GbE iSCSI |
|---|---|---|---|---|---|
| Apple Mac OS | √ | √ | | | |
| Microsoft Windows Server 2003 | √ | √ | √ | √ | √ |
| Microsoft Windows Server 2008 | √ | √ | √ | √ | √ |
| Microsoft Windows Server 2012 | √ | √ | √ | √ | √ |
| Red Hat Enterprise Linux 5.x | √ | √ | √ | √ | √ |
| Red Hat Enterprise Linux 6.x | √ | √ | √ | √ | √ |
| Solaris 10 | √ | √ | | | |
| Solaris 11 | √ | √ | | | |
| SuSE Linux Enterprise Server 10 | √ | √ | √ | √ | √ |
| SuSE Linux Enterprise Server 11 | √ | √ | √ | √ | √ |
| VMware 4.x | √ | √ | √ | √ | √ |
| VMware 5.x | √ | √ | √ | √ | √ |
On the FC/iSCSI controller, Apple Mac OS and Solaris are supported for FC connections only.
AssuredSAN 3000 series array controller enclosures support the cascading of drive enclosures. The following table lists supported drive enclosure models and firmware versions.
| AssuredSAN 3000 series array controller firmware | Cascaded drive enclosure model | Minimum qualified drive enclosure firmware |
|---|---|---|
| TS251S | Dot Hill 3120 and 3130 Drive Enclosures | S200B28 |
| TS251S | Dot Hill 2122 Drive Enclosure | E110B17 |
| TS251S | Dot Hill 2130 Drive Enclosure | O320B13 |
After updating array controller firmware or after connecting new drive enclosures to an existing controller enclosure, verify the firmware compatibility of all devices. If needed, obtain and install the supported controller or drive enclosure firmware. Firmware is available for download from the Customer Resource Center at http://crc.dothill.com.
Previous versions of the AssuredSAN 3000 Series firmware required an AssuredSAN 3000 series software plug-in in order to work with the VMware vStorage API for Array Integration (VAAI). This plug-in enabled the offloading of key ESX operations to 3000 Series storage systems.
Beginning with the TS251R004 firmware release, the AssuredSAN 3000 Series VAAI Plug-in is no longer supported. The AssuredSAN 3000 Series controller firmware now provides T10-compliant command support in ESX environments.
To properly upgrade your ESX environment, perform the following actions:
1. Determine whether your ESXi/ESX 5.x host has the AssuredSAN 3000 Series VAAI Plug-in installed.
2. Disable the AssuredSAN 3000 Series VAAI Plug-in.
3. Remove the VAAI vSphere Installation Bundle (VIB).
4. Remove all claim rules associated with the AssuredSAN 3000 Series VAAI Plug-in.
NOTE: Failure to correctly remove the AssuredSAN 3000 Series VAAI Plug-in and associated claim rules will result in degraded performance and possible loss of access to datastores.
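The removal actions above can be sketched as an ESXi shell session. The VIB name (hw_vaaip_dothill) and claim-rule ID (65430) are placeholders, not values from this document; confirm the actual identifiers with the list commands on your host. The sketch only prints each command (a dry run) so the sequence can be reviewed before anything is executed.

```shell
#!/bin/sh
# Dry-run sketch of removing the AssuredSAN 3000 Series VAAI Plug-in
# from an ESXi/ESX 5.x host. The VIB name and claim-rule ID below are
# hypothetical; verify them with the list commands first.
VIB_NAME="hw_vaaip_dothill"   # hypothetical VIB name
RULE_ID=65430                 # hypothetical claim-rule ID

run() { echo "$@"; }          # print commands instead of executing them

# 1. Determine whether the plug-in is installed
run esxcli software vib list

# 2./3. Remove the VAAI vSphere Installation Bundle (host reboot required)
run esxcli software vib remove -n "$VIB_NAME"

# 4. List, then remove, the associated VAAI claim rules
run esxcli storage core claimrule list --claimrule-class=VAAI
run esxcli storage core claimrule remove --rule "$RULE_ID" --claimrule-class=VAAI
```

On a real host, run the two list commands first; the plug-in's actual VIB name and rule numbers appear in their output.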
The following features were added or enhanced in TS251R004:
Added functionality to ensure proper branding after a firmware downgrade to TS240/TS230.
Enhanced the hardware failure event alerting.
Allowed background drive scrubs from controller B.
Upgraded Linux kernel to 2.6.38.8.
Enhanced the user creation policy to prevent use of restricted user names.
Fixed vdisk reconstruction event message to correlate with the correct status.
Improved "Health" reporting for sub-components of the controller.
Improved the Trust process.
Improved the CLI restart sc both command to address arrays with a single controller.
Added support for T10-compliance for VAAI.
Improved logging to prevent reset after upgrade.
Changed the criticality of event 172 from a warning to an error.
Changed the event level from warning to critical when a vdisk becomes critical and no spares are configured.
Improved event notification for replication failures caused by a bad block on the source volume.
Upgraded OpenSSH to 6.3.
Improved password security.
Enhanced help for AssuredSAN 3000 Series replication to AssuredSAN 3004 Series storage systems.
Enhanced remote command security.
Enhanced the quarantine function.
Enhanced extended copy command for T10 compliance.
Added warning message about default volume mapping in online help in RAIDar and the CLI. Also, default mapping requires confirmation in RAIDar.
Improved failed drive reporting during medium error.
Improved text for compact flash failure event 204.
The following feature was added in TS250P002:
RAIDar localization in Simplified Chinese, Chinese Traditional, Dutch, French, German, Italian, Korean, Spanish, and Japanese.
The following features were added or enhanced in TS250R023:
Added support for Microsoft Windows Server 2012.
Added the ability to change the link speed (100Mbps, 10Mbps) and duplex (half, full) mode of the controller management port.
After a reconstruction completes, an event is now logged that includes the number of unrecoverable errors, if any.
When event 173, "An error was reported by a disk drive," is reported due to a vdisk being dequarantined, the message now includes the state of the vdisk and indicates whether the dequarantine was automatic or was requested manually.
In the CLI, the trust command now indicates disks that are out-of-sync, the age of the disk to indicate how far out of sync it is, and disks that were used for reconstruction but where reconstruction did not complete. The user is given the choice to include the out-of-sync or partially reconstructed disks, to exclude them, or to abort. If no out-of-sync or partially reconstructed disks are present, the command continues without a prompt.
The Clear Disk Metadata function of RAIDar now prompts to confirm the operation, allowing the user a chance to reconsider the possibly destructive results of the operation.
Included instructions for obtaining historical performance data in the FTP prompt.
Fibre Channel soft loop IDs are now retained during controller failover and failback.
The following features were fixed in TS251R004:
After detaching the secondary volume and deleting the replication set, accessing the secondary volume failed.
Logs produced 0 KB store.log files.
ATS command improperly aborted.
LSI logic chip returned bad link.
Disk drive error counters were not cleared for slot when a disk drive was replaced.
Disk drive did not become ready following a disk firmware update.
Retrieving logs via FTP caused a crash.
MC rebooted due to timeout.
Pulling the Management Ethernet cable reset the link settings.
Changed the failure alert to show the correct power supply when you receive a 314 error.
Scheduler did not honor the "Time Constraint" and "Date Constraint" of the schedule policy.
Reverse sequential I/Os crashed the controllers.
Controller B did not perform event notification when the Ethernet cable of Controller A was unplugged.
Cache sync failed.
Slow insertion of a controller caused incorrect modification to vdisk metadata.
Default iSCSI host IP settings did not persist after SC reboot or power cycle, after being set via MC CLI command restore defaults factory.
Drive scrub error stats were incorrect.
Security issues have been addressed.
Modified the permissions interfaces for the admin user.
Incorrect shutdown status of both controllers was shown.
RAIDar showed incorrect message when restarting partner MC.
Incorrect table of contents displayed for Japanese help.
RAID 10 vdisk automatically de-quarantined with a stale disk.
The iSCSI Host Interfaces accepted duplicate IP addresses.
A host with the OpenVMS profile accepted LUN 0 when mapping a volume to a host.
Unable to get logs without restarting MC.
Replication not re-established following move of replication array.
Controllers crashed on login.
Controller crashed due to iSCSI connection loss.
Controller crashed during failback while replication in progress.
Rebuild targets did not go to leftover state when a drive was pulled while reconstruction and de-quarantine were in progress.
Array health status was not propagated correctly in RAID 10 and RAID 50 vdisks.
Replications were suspended.
Array obtained a duplicate IP address when the array was configured for DHCP.
Unable to delete remote systems.
Continued to use old node world-wide name after controller replacement.
Schedules did not run or ran before the modified start time when an expired schedule was modified.
FRU list did not show fault on PSU.
MC locking issues occurred.
SNMP Get requests failed because the SNMP agent died.
The vdisk name appeared as "Unknown name" when a volume was unmapped.
Single controller showed P2000 add-on enclosure as degraded.
MC firmware on Controller A did not upgrade if firmware on Controller B was upgraded.
Errors occurred in SSL connection.
Users were disconnected in RAIDar and CLI when deleting their own accounts.
Disk information was intermittently not returned when the partner controller was physically removed.
Controller crashed when using Remote Snap and the master volume was offline.
The following features were fixed in TS250P002:
Fixed the problem with aborting Atomic Test-and-Set (ATS) commands.
Fixed the hanging controller issue that is seen during several iterations of the VMware Site Recovery Manager (SRM) failover and failback process.
Fixed the Atomic Test-and-Set logic that set an incorrect error-state condition and subsequently crashed the array controllers.
Partner Firmware Update (PFU) now functions properly when the new Management Controller (MC) version matches the old MC version.
The following features were fixed in TS250R023:
Alerts indicating that a disk channel has gone into a degraded state have been modified so that the normal reduction of channel speed due to intermittent noise will no longer generate a Warning alert.
Preliminary reporting of a supercap overcharge failing condition was removed.
When the array received a DHCP address from a DHCP server, it responded with a unique hostname, which displayed in the DHCP server. This unique hostname overwrote the hostname supplied by the DHCP server in option 012.
Hyper-V servers could not be migrated to AssuredSAN 3000 series storage systems. Support for Hyper-V Live Migration has been added.
Controllers were unresponsive to CLI requests for up to 30 minutes after a firmware update.
For SMI-S, inconsistent data was provided for the CIM_AuthorizedPrivilege instance.
SMI-S queries might fail if Small Footprint CIM Broker daemons are restarted after a memory threshold limit is reached. After enabling either SMI-S protocol, the SMI-S service on the array will now consistently start.
The system failed to identify and report incorrect SAS cabling to external drive enclosures.
A replication set failed to add the secondary volume, even though the volume was created.
SNMP stopped working.
Vdisk scrub after a hard reboot did not start at the expected time.
Event 58, "An error was reported by a disk drive," was reported as an ERROR instead of INFORMATIONAL.
CPLD code failed to update.
Could not access the management controller, because the controller did not connect to its assigned static IP address.
There was a discrepancy in the state of the drives when Available, Global Spare, and vdisk drive spin-down was enabled.
Two drives were specified as global spares for a RAID6 LUN, but only one of the drives was actually assigned.
Fixed a rare condition where the controllers were responding with the incorrect brand information to SMI-S requests.
Both controllers crashed when installing firmware.
In dual-controller configurations, a newly-installed controller did not inherit the properties of the other controller.
After the maximum number of volumes and snapshots have been created, new volumes could not be created, even after deleting snapshots or volumes to reduce the number below the maximum.
Persistent Reservations were not updated with a new key, and attempts to preempt them failed.
Added fixes to correct NMIs and Failure to Flush to Non-Volatile Memory errors.
Fixed an issue with the controller reporting incompatible host ports on the 10Gb iSCSI controller.
Corrected an issue where the controller would halt due to simultaneous physical reconstruction events occurring.
Fixed issues where, under rare conditions, the controllers halted during failover and failback operations.
Firmware update failed.
When a volume was deleted and the controller restarted, stale information about the volume was retained in the cache.
When a controller was replaced, host nicknames were cleared.
A controller crashed with a page fault after creating a remote replication.
A controller halted due to an extreme number of medium errors on a drive.
Could not obtain logs via FTP.
The controller crashed when a Fibre Channel cable was pulled.
The controller crashed during a RAID 6 reconstruction.
Both controllers crashed when a read operation returned a non-medium error on a reconstruction target.
A controller crashed when vdisk ownership was changed and a volume copy operation was in progress.
Drives that were re-seated returned a status of VDISK instead of LEFTOVR.
Event 481 indicated that the compact flash (CF) needed to be replaced, but the CF is not a replaceable component. When event 481 is received, you need to replace the controller module.
If a snapshot operation was initiated on a volume and the volume was deleted while the operation was in progress, messages were displayed that the operation completed successfully, when it really did not.
The controller stopped responding to SMI-S and CLI queries.
A RAID 6 volume did not rebuild.
Management IP addresses were reset from DHCP to 10.0.0.2.
Vdisk expansion metadata became invalid, causing the expansion to halt.
The Management Controller hung and when it was restarted by the Storage Controller, several events were logged during the process.
When controllers were interchanged within an enclosure, the vdisks were quarantined.
After a SATA drive was removed from a drive enclosure, RAIDar and CLI still reported the drive as being installed.
Unable to perform explicit LUN mapping with the no access option.
When modifying a recurring scheduled task, the Next Run time advanced past the next intended run time.
The management controller hung.
CLI-Specific fixes:
restore defaults command: inadvertently permitted the “manage” user to access restricted commands.
show frus and show expander-status commands: accepted a ? in the command, and then returned incorrect output.
clear cache command: when in single-controller mode, the cache was not cleared.
show cache-parameters command: reported an Operation Mode of Unknown.
show redundancy command: the display did not show that the controller was in single-controller mode.
show replication-volumes command: did not show correct volume information when both volumes are set as primary.
show volumes command: occasionally displayed an empty list.
show host-parameters command: did not display any output; it only displayed the word successful.
convert master-to-std command: the system took up to 30 seconds to respond.
RAIDar-Specific fixes:
When using Windows Internet Explorer, a JavaScript error occurred during a restart or shutdown of the controller.
RAIDar did not respond after restarting both management controllers.
Mapping information for controller B was occasionally incorrect.
An event message implied that the wrong controller needed to be restarted.
In the tabular view of the front of the enclosure, sorting by the Size column did not sort properly.
After installing disk firmware, a Communication with the system has been lost message was displayed.
When configuring an iSCSI host interface, the Link Speed field was not updated in some languages.
After deleting a snapshot on the Destination system, the snapshot was still included in the display.
The Configure Network Interfaces page did not update after changing values.
When operating in Single Controller Mode, RAIDar reported that the health of the non-present controller and of attached drive enclosure I/O modules was not available or was degraded.
When hovering over a button or tab, the button or tab disappeared.
When a vdisk was deleted from one controller, system status showed a warning, and the other controller showed a degraded health status.
History of replication jobs remained in RAIDar, even though older snapshots were deleted successfully. They cleared only after the management controller was restarted.
The local link status of controller host ports might be reported incorrectly.
AssuredSAN 3000 series systems contain an embedded SMI-S provider for use by SMI-S client applications. The embedded provider is designed to support 3000 series configurations with up to 24 hard drives and up to 250 mapping paths. A mapping path is defined as a 3000 series volume presented through a 3000 series target port to a Host initiator.
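As an illustration of the 250-path limit, the mapping-path count for a uniform configuration can be estimated by multiplying volumes, target ports, and host initiators. The counts below are hypothetical examples, not values from this document.

```shell
#!/bin/sh
# Rough mapping-path count for a uniform configuration: each of the
# VOLUMES is presented through each of the TARGET_PORTS to each of the
# INITIATORS. All three counts are hypothetical examples.
VOLUMES=10
TARGET_PORTS=4
INITIATORS=6
PATHS=$((VOLUMES * TARGET_PORTS * INITIATORS))
echo "mapping paths: $PATHS (embedded SMI-S provider limit: 250)"
```

Here 10 x 4 x 6 = 240 paths stays under the limit, while one more volume (11 x 4 x 6 = 264) would exceed it.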
In environments using replication, all AssuredSAN 3000 series controllers must have the same firmware version installed. Running different firmware versions among AssuredSAN 3000 series controllers might prevent replications from occurring.
To replicate between an AssuredSAN 3000 series system and an AssuredSAN 4004 series system or AssuredSAN Ultra series system, the secondary volume must be exactly the same size as the primary volume. To ensure that the size is exactly the same when creating the secondary volume manually, use the CLI replicate volume command as described in the AssuredSAN 3000 Series CLI Reference Guide and the AssuredSAN 4004 Series CLI Reference Guide.
When changing a replication set (for example, adding or removing a replication volume, or deleting the replication set), do so from the source system; when aborting, suspending, or resuming a replication, do so from the destination system.
When changing the primary volume of a replication set, do so from the destination system first, then perform the change on the source system.
When using Windows Dynamic Disk (software RAID) on top of hardware RAID, there are some cautions to be considered. For more information, see the section “Real World: Dynamic versus Basic Disks” in the topic at http://technet.microsoft.com/en-us/library/dd163558.aspx.
Failover and failback times are affected by the number of system volumes; the more volumes there are on the system, the more time is required for failover and failback to complete.
For AssuredSAN 3920 and 3930 hybrid FC/iSCSI controllers, mapping a volume via both iSCSI and FC to the same server is not a supported configuration. Many operating systems' multipath solutions do not correctly handle multiple protocols. Do not map a LUN in this manner.
Do not cycle power or restart devices during a firmware update. If the update is interrupted or there is a power failure, the module could become inoperative. If this occurs, contact technical support. The module may need to be returned to the factory for reprogramming.
Before upgrading firmware, ensure that the system is stable and is not being reconfigured or changed in any way. If changes are in progress, monitor them and wait until they are completed before proceeding with the upgrade.
In dual-module enclosures, both controllers or both I/O modules must have the same firmware version installed. Running different firmware versions on installed modules may cause unexpected results.
Create a full backup of system data. (Strongly recommended.)
Schedule an appropriate time to install the firmware:
For dual domain systems, because the online firmware upgrade is performed while host I/Os are being processed, I/O load can impact the upgrade process. Select a period of low I/O activity to ensure the upgrade completes as quickly as possible and avoid disruptions to hosts and applications due to timeouts.
Allocate sufficient time for the update:
It takes approximately 45 minutes for the firmware to load and for the automatic restart to complete on the first controller module. When dual modules are installed, the full process time is approximately 90 minutes. If cascaded drive enclosures are also being updated, total process time may be as long as 180 minutes.
Set the Partner Firmware Update option so that, in dual-controller systems, both controllers are updated. When the Partner Firmware Update option is enabled, after the installation process completes and restarts the first controller, the system automatically installs the firmware and restarts the second controller. If Partner Firmware Update is disabled, after updating software on one controller, you must manually update the second controller.
Monitor the system display to determine update status and see when the update is complete.
Verify system status in the system's management utility and confirm that the new firmware version is listed as installed.
Review system event logs.
Updating array controller firmware may result in new event messages that are not described in earlier versions of documentation. For comprehensive event message documentation, see the most current version of the AssuredSAN 3000 Series Event Descriptions Reference Guide.
Ensure that both Ethernet connections are accessible before downgrading the firmware.
When using a Binary firmware package, you must manually disable the Partner Firmware Update (PFU) and then downgrade the firmware on each controller separately (one after the other).
Reverting from TS250 to firmware prior to TS230 is not supported.
Using RAIDar to install TS250 firmware is supported only when upgrading from TS230 or later firmware. When upgrading from all other firmware versions, install the TS250 firmware using FTP.
Obtain the firmware package in .zip file format from the Customer Resource Center website at http://crc.dothill.com. Save it to a temporary directory, and extract the contents.
Locate the extracted firmware file. The firmware filename is in the following format: TSxxxRyyy-zz.bin
In single-domain environments, stop all I/O to vdisks in the enclosure before starting the firmware update.
Log in to RAIDar and, in the Configuration View panel, right-click the system and then select Tools > Update Firmware.
Wait for the installation to complete. During installation, each updated module automatically restarts.
In the RAIDar display, verify that the expected firmware version is installed on each module.
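The firmware file name format mentioned in the steps above can be checked with a small shell sketch before uploading. The sample name below is hypothetical.

```shell
#!/bin/sh
# Check that a firmware file name matches the TSxxxRyyy-zz.bin format
# described above. The sample name is hypothetical.
FW_FILE="TS251R004-01.bin"

case "$FW_FILE" in
  TS[0-9][0-9][0-9]R[0-9][0-9][0-9]-[0-9][0-9].bin)
    echo "name matches TSxxxRyyy-zz.bin: $FW_FILE" ;;
  *)
    echo "unexpected file name: $FW_FILE" ;;
esac
```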
Obtain the firmware package in .zip file format from the Customer Resource Center website at http://crc.dothill.com. Save it to a temporary directory, and extract the contents.
Locate the extracted firmware file. The firmware file name is in the following format: TSxxxRyyy-zz.bin
Using RAIDar, prepare to use FTP:
In single-domain environments, stop I/O to vdisks in the enclosure before starting the firmware update.
Open a command prompt (Windows) or a terminal window (UNIX), and navigate to the directory containing the firmware file to load.
Wait for the installation to complete. During installation, each updated module automatically restarts.
If needed, repeat these steps to load the firmware on additional modules.
Verify that the expected firmware version is installed on each module.
Using RAIDar, right-click the system in the Configuration View panel, and then select Tools > Update Firmware.
In the CLI, execute the show version or the show enclosures command.
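The FTP session in the steps above might look like the following. The controller address and file name are placeholders, and `put <file> flash` follows the usual AssuredSAN FTP-update convention; verify the exact sequence against your Setup Guide. The sketch only prints the session (a dry run).

```shell
#!/bin/sh
# Dry-run sketch of the FTP firmware load. The management IP and file
# name are placeholders; 'flash' is the destination used by the array's
# FTP update interface.
CTRL_IP="10.0.0.2"            # controller management IP (placeholder)
FW_FILE="TS251R004-01.bin"    # extracted firmware file (placeholder)
PUT_LINE="put $FW_FILE flash"

cat <<EOF
ftp $CTRL_IP
  (log in as a user with the FTP interface enabled)
  binary
  $PUT_LINE
  quit
EOF
```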
If you experience issues during the installation process, do the following:
When viewing system version information in the RAIDar System Overview panel, if an hour has elapsed and the components do not show that they were updated to the new firmware version, refresh the browser. If version information is still incorrect, proceed to the next troubleshooting step.
If version information does not show that the new firmware has been installed, even after refreshing the browser, restart all system controllers. For example, in the CLI, enter the restart mc both command. After the controllers have restarted, one of three things happens:
Updated system version information is displayed and the new firmware version shows that it was installed.
The Partner Firmware Update process automatically begins and installs the firmware on the second controller. When complete, the versions should be correct.
System version information is still incorrect. If system version information is still incorrect, proceed to the next troubleshooting step.
Verify that all system controllers are operating properly. For example, in the CLI, enter the show disks command and read the display to confirm that the displayed information is correct.
If the show disks command fails to display the disks correctly, communications within the controller have failed. To reestablish communication, cycle power on the system and repeat the show disks command. (Do not restart the controllers; cycle power on the controller enclosure.)
If the show disks command from all controllers is successful, perform the firmware update process again.
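The troubleshooting sequence above can be sketched with the array CLI commands the text names. Because these must be issued in an array CLI session, the sketch prints each command rather than executing it.

```shell
#!/bin/sh
# Dry-run sketch of the troubleshooting steps above. run() prints each
# array CLI command instead of executing it.
run() { echo "$@"; }

run restart mc both   # restart both management controllers
run show disks        # confirm communications within the controller
# If 'show disks' fails, cycle power on the controller enclosure
# (do not merely restart the controllers), repeat 'show disks',
# and then perform the firmware update again.
```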
The following is a cumulative list of known issues and workarounds:
Issue: The AssuredSAN 3000 System Setup Guide describes the Cache LED on the rear panel in a way that might be confusing or misleading. Workaround: Clarification of Cache Status LED details: If the LED is blinking evenly, a cache flush is in progress. When a controller module loses power and the write cache contains data that has not been written to disk, the super-capacitor pack provides backup power to flush (copy) data from the write cache to the CompactFlash memory. When the cache flush is complete, the LED is no longer blinking.
Issue: Replications stop after a few iterations. Workaround: Restart the management controller.
Issue: Newly installed drives reported errors. Workaround: Replace the drive.
Issue: Historical data for a drive is cleared when a controller crashes. Workaround: None.
Issue: A warning about the coin battery was not displayed in the RAIDar events log. Workaround: Reset the controller date and time to be current and restart the management controller.
Issue: RAIDar reports an error of "input too long" when trying to map a volume that is part of a replication image. Workaround: Shorten the length of the Snapshot Name. Selecting defaults in RAIDar adds 4 characters to an image name if the replication occurs when the set is created. If replications are scheduled when the set is created, 4 characters are added as a prefix, and 6 characters are added for the unique snapshot name. In either case 5 characters are added for the exported snapshot name.
Issue: When updating drive firmware, a message is returned stating that the disk is unsupported. Workaround: None. The message is incorrect.
Issue: When creating a volume in RAIDar, if the user changes the units from GB to MB but does not change the volume size, the volume will be created in GB not MB. Workaround: Validate the volume size after creating a new volume. If the volume was created with the wrong units, delete and re-create the volume.
Issue: The controller may take longer than expected to respond to SSH and Telnet requests. Workaround: Restart the management controller.
Issue: The SMI-S modify volume name operation shows up as not supported in Windows Server 2012 Server Manager. Workaround: Modify the volume name using RAIDar or the CLI.
Issue: An error message may be displayed when restarting the controller, even when the controller restarted successfully. Workaround: None.
Issue: In the RAIDar Configuration > Remove User page, the User Name field is not enabled, even though the asterisk indicates it is required. Workaround: Select the user from the list using the radio buttons. The User Name field will be automatically filled in.
Issue: RAIDar may incorrectly report the local link status of controller host ports. Workaround: Use the CLI verify links command to verify the local link status.
Issue: During a firmware upgrade of the controller, events are generated indicating a mismatch between the installed versions and the versions in the firmware bundle (Event 363, Severity Error). This is normal operation, because the checks are done during the management controller boot process. After the firmware upgrade is complete on both controllers, verify that the versions are correct; a new informational event then indicates that the firmware versions match those in the firmware bundle (Event 363, Severity Informational). Workaround: None.
Issue: Historical disk and vdisk performance data is not persistent across controller power events. Workaround: None.
Issue: Vdisk Data Transferred and Data Throughput numbers appear to be much higher in the output of the CLI historical command show vdisk-statistics [vdisk] historical than in the output of the live command show vdisk-statistics. This is caused by the way that the historical and live values are calculated: because I/Os from the RAID engine are included, numbers for the historical data appear higher than the numbers for the live data.
Issue: Saving historical disk performance statistics from RAIDar fails with an invalid time-range parameter. Workaround: Change the start date/time of the time range. Make sure the start date/time is after the last reset and, for new systems, is after the system install time.
Issue: In environments using replication, if the controllers have different firmware versions installed, replications may be suspended. Workaround: Ensure that all controllers in replicating systems have the same firmware version installed. When firmware on the controllers is the same version, suspended replications automatically resume.
Issue: When a previously used drive is inserted in the enclosure, it may retain information about vdisks, volumes and volume mappings from its previous use. However, the LUN numbers of these volume mappings may conflict with LUN numbers currently in use in volume mappings on the system. If this occurs, the system resolves those conflicts by removing the mappings. Workaround: Remap the volumes as desired.
Issue: In RAIDar, while trying to modify a vdisk name, a slash (/) character is replaced by a space. Workaround: None
Issue: In the CLI, the create volume-set command using the same basename parameter for more than 999 volumes generates an error. Workaround: Do not exceed 999 when assigning the volume identifier number.
Issue: In the CLI, the show sensor-status command does not show warning levels or indicate fan status. Workaround: None
Issue: When a vdisk becomes critical, the array may generate multiple events. Workaround: None
Issue: In event messages, power supplies are referred to by inconsistent terminology. Power supply 1 is sometimes reported as “left” and sometimes as “1”. Likewise, power supply 2 is sometimes reported as “right” and sometimes as “2”. Workaround: None
Issue: A serial number was not generated for SMART drive event 55. Workaround: Identify the drive using the enclosure and slot number.
Issue: When using the CLI show master-volumes command, a volume that has been converted to a standard volume is still included in the display. Workaround: Log out and then log back in to the CLI.
Issue: In RAIDar, global spares have a status of Up even if they are spun down using the drive spin down feature. Workaround: None
Issue: In the CLI, the set prompt command allows you to enter more than 16 characters. Workaround: None
Issue: In RAIDar, when logging in using an unsupported browser, the returned display does not show the correct list of which browsers are supported. Workaround: Use only the following supported browsers:
Issue: On the Configuration > Advanced Settings > System Utilities page, changing the Vdisk Scrub and Managed Logs settings at the same time may result in an error. Workaround: Make these changes one at a time.
Issue: Manually creating a replication-prepared (secondary) volume and associating it with a primary volume originally created with pre-TS230 firmware can fail. Automatically created secondary volumes do not have this problem. Workaround:
Issue: When using both the primary and secondary paths on both ports of the QLogic iSCSI HBAs, failover does not work correctly on cable pulls. Workaround: When setting up the QLogic iSCSI HBAs, set up only the primary path for both ports.
Issue: When creating a volume set with the volumes mapped to LUNs, if there is a LUN conflict, the array stops mapping volumes to LUNs, but creates the volumes as requested. Workaround: Ensure that there are no LUN conflicts before creating the volume set with mapping, or map the remaining volumes to LUNs after resolving the conflict.
Issue: For Fibre Channel systems connected directly to the server, the QLogic 8 Gb FC driver version 9.1.9.25 on Microsoft Windows Server 2008 R2 x64 does not see LUNs when the array is set up for point-to-point. Workaround: Upgrade to the latest driver version available or change the array host ports to loop mode.
Issue: For SAS systems, failover is slow when more than 128 LUNs are accessed from a Red Hat Enterprise Linux 4.x or SuSE Linux Enterprise Server 10 SP3 client. Workaround: Map fewer than 128 LUNs to SAS clients.
Issue: In RAIDar, some pages and some error messages in the Japanese version display English text. Workaround: None.
Issue: The array incorrectly accepts a DNS name for the address of the NTP server in the Storage Management Utility. The array does not use DNS, and translates the name into an invalid “255.255.255.255” IP address. Workaround: Instead of a network name, enter the NTP server IP address.
Issue: In the Command Line Interface, the array incorrectly accepts a DNS name for the address of the SNMP, SMTP, and NTP servers. The array does not use DNS, and cannot connect to the servers correctly. Workaround: Instead of network names, enter the IP addresses for the servers.
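Because the array only accepts numeric addresses, one approach is to resolve the server's hostname on a Linux management workstation first and then enter the resulting address on the array. The following is a minimal sketch, assuming a hypothetical server name ntp.example.com (substitute your own server's name):

```shell
# Resolve a hostname to its first IPv4 address with getent, then enter
# that address in the array's SNMP/SMTP/NTP settings.
# "ntp.example.com" is a placeholder hostname, not a real server.
resolve_ipv4() {
  getent ahostsv4 "$1" | awk 'NR==1 {print $1}'
}

ip=$(resolve_ipv4 ntp.example.com)
echo "Enter this address on the array: $ip"
```

Re-run the lookup and update the array settings whenever the server's address changes, since the array will not track DNS updates on its own.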
Issue: In Windows configurations, the IQN shown by the NC551 card during POST may not match the IQN seen in the array controller. This occurs when the NC551 was set up in a boot-from-storage configuration. After an operating system is installed, the POST message shows the IQN that is supplied by the iSCSI Software Initiator, but the NC551 BIOS continues to use the IQN set up to boot the OS. Workaround: Using the NC551 BIOS Utility, remove the boot settings and then log back in to the array with the new IQN. If the volume used for mapping was explicitly mapped to the host, recreate the mapping for the new IQN.
Issue: When accessing more than 128 LUNs using a QLogic iSCSI HBA in boot-from-storage configurations, the system may hang when a reset is issued on the array. Workaround: Access 128 or fewer LUNs via the QLogic iSCSI HBA when using the card in boot-from-storage configurations.
Issue: RHEL 4.8 may not discover all multipath devices and partitions during boot or reboot. Workaround: This issue is addressed by applying the updated device-mapper-multipath package described in Red Hat Bug Fix Advisory RHBA-2009:1524-1, available at http://rhn.redhat.com/errata/RHBA-2009-1524.html.
Issue: Under rare circumstances, some events from one controller are not seen on the other controller. Workaround: Review the events from both controllers.
Issue: During a firmware upgrade, the firmware bundle version may show incorrectly. Workaround: Wait until the firmware upgrade process is complete before checking the firmware bundle version.
Issue: JavaScript issues are seen when using Microsoft Internet Explorer in multi-byte language locales, resulting in truncated messaging and hung pop-up windows. These issues will be resolved in a future firmware release. Workaround: This is a display problem only. When a pop-up window remains on screen with no update for a prolonged period, close and then re-open the browser. The Internet Explorer English locale and the Firefox browser do not exhibit these issues.
Issue: In SLES 11 environments, when using the iSCSI initiator tools included in SLES 11, the host occasionally does not correctly log into the iSCSI array on reboot, even when set to automatically connect. Workaround: Restart the iSCSI service on the SLES 11 host. This can be done by entering the following command: /etc/init.d/open-iscsi restart
Issue: SLES 11 may require several minutes (approximately 15) to create all multipath devices during boot. This typically involves a system with a large number of LUNs and multiple LUN paths. Workaround: None. Wait for the system to complete LUN and path discovery.
Issue: SLES 11 SP1 may not create all devices during boot. This typically involves a system with a large number of LUNs, multiple LUN paths, and the SLES 11 SP1 open-iscsi utilities. Workaround: Do one of the following:
Issue: In rare conditions, the array controller may report that a supported 10GbE SFP+, 10GbE Copper Cable, or 10GbE Direct Attach Cable is unsupported. This condition is most likely to occur when an SFP+, Copper Cable, or Direct Attach Cable is hot plugged into the controller while the controller is running. When this occurs, the following Warning message is recorded in the event logs: “An unsupported cable or SFP was inserted.” At the same time, the host port does not show a status of “Down.” Workaround: Do the following:
Issue: RHEL 5 Update 5 does not shut down properly when using the iSCSI initiator utilities shipped in RHEL 5 Update 5 to access the array. Workaround: See issue 583218 on the Red Hat Bugzilla bug-tracking system (https://bugzilla.redhat.com/show_bug.cgi?id=583218) for the current status of the issue and possible workarounds.
Issue: When using explicit LUN mapping, using long IQN names for the iSCSI Initiator can cause the array to map the LUN incorrectly. A predefined area is used to store explicit LUN mapping information per LUN and, with longer IQN names, this area can be exhausted. This issue is not dependent on the number of paths to the LUN. Workaround: Shorten the IQN name on the nodes. The following formula is used to calculate the maximum IQN name length based on the number of hosts being explicitly mapped to a LUN on the array:
The following table provides the calculated values based on the number of hosts being explicitly mapped to a LUN on the array:
Issue: USB CLI becomes unusable after a Management Controller reboot in Windows environments. Workaround:
Issue: There is no indication that a LUN has failed over to the other controller. Workaround: Using RAIDar, open system events and scan for failover events. When using the CLI, use the show events command.
Issue: A replication is initiated, but only a snapshot on the primary volume occurs, or the replication is queued. Workaround: Ensure that all systems involved have valid replication licenses installed and that all volumes and vdisks involved in the replication have started, are attached, and are in good health, including vdisks that contain the snap pools for the volumes involved. A replication normally queues when a previous replication involving the same volumes is active.
Issue: A replication set was deleted, but is later shown with the primary volume status of “Offline” and the status-reason is record-missing. Workaround: This generally occurs when the secondary volume is detached and its vdisk stopped when the replication set was deleted, and then the vdisk of the secondary volume restarted. To correct this issue, reattach the secondary volume, set it as the primary volume, and delete the replication set.
Issue: An error message indicating “Controller Busy” occurs while creating a replication set. Workaround: Creating a replication set immediately following another replication set creation may result in “Controller Busy.” This is expected behavior. Wait and try the operation again at a later time.
Issue: In RAIDar, the Vdisk > Provisioning > Create Multiple Snapshots task allows a secondary volume to be selected, but fails the operation. Workaround: User-initiated snapshots are not allowed on secondary volumes. Do not select a secondary volume.
Issue: A scheduled replication is missing or replications are queued, but do not complete. Workaround: A best practice is to schedule no more than four volumes to start replicating at the same time and for those replications to recur no less than 30 minutes apart. If you schedule more replications to start at the same time or schedule replications to start more frequently, some scheduled replications may not have time to complete.
Issue: Unable to perform a local replication (a replication where the external view volume and the destination volume reside on the same system) with a single vdisk. Workaround: Create a second vdisk for the destination volume.
Issue: Deleting the replication set from the destination system fails. Workaround: Delete the replication set from the source system (the system where the external view volume resides).
Issue: A replication set is missing the primary volume and the replication set cannot be deleted. Workaround: Set the primary volume to the remaining volume in the set. You should then be able to delete the replication set.
Issue: On rare occasions, deleting a vdisk when volumes are in the process of rolling back may cause communications issues between the management controller and the storage controller. Workaround: Cycle power on the array to resolve the issue. To avoid this situation, allow the rollbacks to complete or delete the volumes before deleting the vdisk.
Issue: Scheduled tasks are not occurring, and there is no indication of a problem in the schedules or the tasks. Workaround: Restart both management controllers (MCs) of the array(s) involved in the tasks.
Issue: Cannot schedule volume copy operations, or scheduled volume copy operations for snapshots and standard volumes do not occur. Workaround: Perform the volume copy manually. Scheduled volume copies of master volumes should complete successfully if the schedule permits completion of the volume copy before the next occurrence.
Issue: Debug logs are incomplete. Workaround: Determine whether the logs are complete by unzipping the log file retrieved from the array and examining the end of the store_YYYY_MM_DD__HH_MM_SS.logs file for the two lines: End of Data and ]]></LOG_CONTENT></RESPONSE>. If the file contains these two lines at the end, it is complete and you can forward it to your service support organization for analysis. If it does not, it is incomplete and may not be useful. In this case, repeat the log collection process after a 5-minute delay. If the second collection contains the two lines at the end of the file, send it to your service support organization for analysis along with the first set of logs. If it does not, reboot the system and try once more to collect the logs. Be sure to send all collected logs to your service support organization with a brief note explaining the actions you took and the results.
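The completeness check described in the workaround can be scripted; the following is a minimal sketch, assuming the store log file has already been extracted from the zip (the function name is ours; the two marker strings are the lines quoted above):

```shell
# Report whether an extracted store_*.logs file ends with the two
# completeness markers: "End of Data" and "]]></LOG_CONTENT></RESPONSE>".
check_log_complete() {
  # $1 = path to the extracted store_YYYY_MM_DD__HH_MM_SS.logs file
  last2=$(tail -n 2 "$1")
  case "$last2" in
    *"End of Data"*"]]></LOG_CONTENT></RESPONSE>"*) echo complete ;;
    *) echo incomplete ;;
  esac
}
```

For example, `check_log_complete store_2014_03_01__12_00_00.logs` prints `complete` only when both markers close out the file, which is the condition the workaround asks you to verify before forwarding the logs.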
Issue: In a dual-controller system, logging in to one of the controllers fails, but logging in to the other controller succeeds. Workaround: Log in to the accessible controller and restart the inaccessible management controller using the CLI restart mc command or the RAIDar Tools > Shut Down or Restart Controller page.
Issue: IOPS and bytes-per-second values may be lower or higher than expected for the workload. Workaround: This is a reporting issue and not a performance issue. The correct values can be calculated by using the change in the Number of Reads and Number of Writes over time to determine IOPS, and the change in Data Read and Data Written over time to determine bytes per second.
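The calculation in the workaround uses two samples of the cumulative counters taken a known interval apart; a minimal sketch with illustrative numbers (not taken from a real array):

```shell
# Two samples of the cumulative counters, taken dt seconds apart.
# All values below are illustrative placeholders.
reads1=1000;  writes1=500;  read_bytes1=1048576;  written_bytes1=524288
reads2=1600;  writes2=900;  read_bytes2=3145728;  written_bytes2=1572864
dt=10   # seconds between the two samples

# IOPS = change in (Number of Reads + Number of Writes) / elapsed seconds
iops=$(( ( (reads2 - reads1) + (writes2 - writes1) ) / dt ))

# Bytes per second = change in (Data Read + Data Written) / elapsed seconds
bps=$(( ( (read_bytes2 - read_bytes1) + (written_bytes2 - written_bytes1) ) / dt ))

echo "IOPS=$iops bytes/s=$bps"
```

With the sample values above this yields 100 IOPS and 314572 bytes/s; substitute counter values read from the array's statistics output.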
Issue: The array controller may interpret a switch login as an HBA login and erroneously present the switch port as a discovered host. This does not affect storage functionality. Workaround: Either identify the erroneous host and do not attempt to use it, or disable Device Scan on switch ports connected to the array controller and restart the array controller.
Issue: During a firmware upgrade, FTP aborts from a Windows client after the upgrade starts. Workaround: This is a client-side FTP application issue. If this issue persists, try updating from RAIDar, use another client, or use another FTP application.
Issue: Upgrading firmware failed with the error, “Unwritable cache data present.” Workaround: The controller is not in a state that can reliably perform an upgrade without losing data currently in cache. Resolve the unwritable cache issue and retry the upgrade.
Issue: While updating the array firmware using RAIDar, if the array must reboot the management controller, the web page may not automatically log the user out completely, resulting in a blank page. Workaround: Refresh the browser window; if the login page is not displayed, close the browser and restart it to access the login page and complete the firmware update.
Issue: While performing a firmware update using RAIDar to multiple arrays, the window showing the status of the upgrade may appear as a blank window. Workaround: Updating multiple arrays at the same time can cause this issue. Perform one firmware update from one client at a time. Updating one array at a time from a client allows the window to refresh more accurately.
| Firmware version | AssuredSAN 3000 series model | Release date |
|---|---|---|
| TS251R004 | All | March 2014 |
| TS250P002 | All | August 2013 |
| TS250R023 | All | April 2013 |
| TS240P004 | All | November 2012 |
| TS240P003 | All | July 2012 |
| TS240P001 | All | June 2012 |
| TS240R037 | All | May 2012 |
| TS230P008 | All | November 2011 |
| TS230P006 | All | August 2011 |
| TS230R044 | All | July 2011 (updated notes to announce support for all AssuredSAN 3000 series System controllers) |
| | iSCSI | June 2011 |
| TS201P007 | FC and hybrid FC/iSCSI | February 2011 |
| TS220R004 | iSCSI | November 2010 |
| TS210R016 | iSCSI | September 2010 |
| TS200R021 | SAS | June 2010 |
Rev. A
Part number: 83-00004779-16-01