Total found: 357. Displayed: 190.
30-08-2018 publication date

Writing new data of a first block size to a RAID array that stores both parity and data in a second block size

Number: DE102012103655B4

A method implemented in a device, the method comprising: receiving, by a redundant array of independent disks (RAID) controller (102), new data to be written, wherein the new data is specified in blocks of a first block size; reading, by the RAID controller (102), old data (606) and old parity (608) corresponding to the old data, stored in blocks of a second block size that is larger than the first block size; computing, by the RAID controller (102), a new parity (610) based on the new data, the old data, and the old parity; and writing, by the RAID controller (102), the new data and the new parity aligned to the blocks of the second block size, wherein portions of the old data that are not overwritten by the RAID controller (102) are also written to the blocks of the second block size, wherein the RAID controller (102) controls disks configured as RAID-5 ...
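The parity update in this claim follows the standard RAID-5 read-modify-write identity: new parity = old parity XOR old data XOR new data, with the write aligned to the larger block. A minimal Python sketch of that arithmetic is shown below; the 512/4096-byte sizes and helper names are illustrative assumptions, not details taken from the patent.

```python
# Illustrative sketch of a RAID-5 read-modify-write for small-block updates
# inside larger physical blocks. Block sizes and helper names are assumptions.

LOGICAL = 512      # first (smaller) block size
PHYSICAL = 4096    # second (larger) block size

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

def update_parity(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """new_parity = old_parity XOR old_data XOR new_data (RAID-5 identity)."""
    return xor_bytes(xor_bytes(old_parity, old_data), new_data)

def write_small_block(phys_data: bytearray, phys_parity: bytearray,
                      offset: int, new_logical: bytes) -> None:
    """Overwrite one logical block inside a larger physical block, recomputing
    parity; the untouched parts of the old data are written back as well."""
    assert len(new_logical) == LOGICAL and offset % LOGICAL == 0
    old_logical = bytes(phys_data[offset:offset + LOGICAL])
    old_parity_slice = bytes(phys_parity[offset:offset + LOGICAL])
    phys_parity[offset:offset + LOGICAL] = update_parity(
        old_logical, old_parity_slice, new_logical)
    phys_data[offset:offset + LOGICAL] = new_logical

# Example: two data blocks and their parity, then a 512-byte update.
data0 = bytearray(b"\x11" * PHYSICAL)
data1 = bytearray(b"\x22" * PHYSICAL)
parity = bytearray(xor_bytes(data0, data1))
write_small_block(data0, parity, offset=1024, new_logical=b"\xAB" * LOGICAL)
assert bytes(parity) == xor_bytes(data0, data1)   # parity still consistent
```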

02-05-2013 publication date

Management of partial data segments in dual cache systems

Number: DE102012219098A1

Various exemplary embodiments of methods, systems, and computer program products are provided for moving partial data segments within a computing storage environment having, by a processor, lower and higher levels of cache. In one such embodiment, by way of example only, an entire data segment containing one of the partial data segments is promoted to both the lower and the higher cache level. Requested data of the entire data segment is split and positioned at a most recently used (MRU) portion of a demotion queue of the higher cache level. Unrequested data of the entire data segment is split and positioned at a least recently used (LRU) portion of the demotion queue of the higher cache level. The unrequested data is pinned until a write operation ...

13-09-2018 publication date

Indication of a destructive write via a notification from a disk drive that emulates blocks of a first block size within blocks of a second block size

Number: DE112012002641B4

A method for emulating a disk drive having a smaller first block size by a disk drive having a larger second block size, wherein the disk drive, via the emulation, stores a plurality of emulated blocks of the first block size in each block of the second block size, comprising the steps of: receiving, by a disk drive, a request to write at least one block of a first block size; reading a selected block of the second block size into which the at least one block of the first block size is to be written via the emulation; if a read error occurs when reading the selected block of the second block size, performing the following steps by the disk drive: performing a destructive write on selected emulated blocks of the first block size that caused the read error to be generated, by erasing these blocks of the first block size and marking them as no longer valid; tracking ...

17-04-2014 publication date

MERGING AN OUT OF SYNCHRONIZATION INDICATOR AND A CHANGE RECORDING INDICATOR IN RESPONSE TO A FAILURE IN CONSISTENCY GROUP FORMATION

Number: US20140108349A1

A first data structure stores indications of storage locations that need to be copied for forming a consistency group. A second data structure stores indications of new host writes subsequent to starting a point in time copy operation to form the consistency group. Read access is secured to a metadata storage area and a determination is made as to whether the second data structure indicates that there are any new host writes. In response to determining that the second data structure indicates that there are new host writes, write access is secured to the metadata storage area, the first data structure is updated with contents of the second data structure to determine which additional storage locations need to be copied for formation of a next consistency group, and the second data structure is updated to indicate that the second data structure is in an initialized state.
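The merge step described above can be pictured as a bitwise OR of the change-recording structure into the out-of-sync structure, after which the change-recording structure is reset. A small sketch under those assumptions (plain Python integers stand in for the two data structures; this is not the patented implementation):

```python
# Sketch: merging a change-recording (CR) bitmap into an out-of-sync (OOS)
# bitmap after a failed consistency-group formation. Ints used as bitmaps.

class ConsistencyGroupState:
    def __init__(self, num_tracks: int):
        self.num_tracks = num_tracks
        self.oos = 0   # first data structure: locations still to be copied
        self.cr = 0    # second data structure: host writes since the copy started

    def record_host_write(self, track: int) -> None:
        self.cr |= 1 << track

    def mark_copied(self, track: int) -> None:
        self.oos &= ~(1 << track)

    def merge_after_failed_formation(self) -> None:
        """Fold new host writes into the OOS bitmap so they are copied in the
        next consistency group, then return CR to its initialized state."""
        if self.cr:                     # "are there any new host writes?"
            self.oos |= self.cr
            self.cr = 0

state = ConsistencyGroupState(num_tracks=8)
state.oos = 0b00001111                  # tracks 0-3 still need copying
state.record_host_write(6)              # host wrote track 6 after the copy began
state.merge_after_failed_formation()
assert state.oos == 0b01001111 and state.cr == 0
```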

28-01-2014 publication date

Systems and methods for managing cache destage scan times

Number: US0008639888B2

A system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time.

11-10-2012 publication date

SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE

Number: US20120260044A1

A system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes writing data to the plurality of storage tracks and incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes scanning each of the storage tracks in each of multiple scan cycles, decrementing each multi-bit counter each scan cycle, and destaging each storage track including a zero count.
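Read as pseudocode, the scheme is: bump a small saturating counter on every write to a track, decrement every counter on each scan cycle, and destage a track once its counter reaches zero. A hedged sketch follows; the 2-bit counter width and the destage callback are assumptions made for illustration.

```python
# Sketch of per-track multi-bit write counters driving destage decisions.

COUNTER_MAX = 3  # a 2-bit saturating counter (assumed width)

class WriteCache:
    def __init__(self):
        self.counters: dict[str, int] = {}   # track id -> counter value

    def write(self, track: str) -> None:
        """Increment the track's counter (saturating) on every host write."""
        self.counters[track] = min(COUNTER_MAX, self.counters.get(track, 0) + 1)

    def scan_cycle(self, destage) -> None:
        """One scan pass: decrement every counter; destage tracks that hit zero."""
        for track in list(self.counters):
            self.counters[track] -= 1
            if self.counters[track] <= 0:
                destage(track)
                del self.counters[track]

cache = WriteCache()
cache.write("track-A"); cache.write("track-A")   # recently busy track
cache.write("track-B")                            # written only once
destaged = []
cache.scan_cycle(destaged.append)                 # B reaches zero and is destaged
assert destaged == ["track-B"] and cache.counters["track-A"] == 1
```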

19-02-2015 publication date

EFFICIENT TASK SCHEDULING USING A LOCKING MECHANISM

Number: US20150052529A1

For efficient task scheduling using a locking mechanism, a new task is allowed to spin on the locking mechanism if a number of tasks spinning on the locking mechanism is less than a predetermined threshold for parallel operations requiring locks between the multiple threads.
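In other words, a new task is allowed to busy-wait on the lock only while the number of current spinners is below a cap; beyond that it backs off. A simplified threading sketch under those assumptions (the threshold value and back-off behaviour are illustrative, not taken from the patent):

```python
# Sketch: cap the number of tasks allowed to spin on a lock; tasks beyond the
# threshold back off instead of spinning.

import threading
import time

class ThresholdSpinLock:
    def __init__(self, max_spinners: int = 2):
        self._held = threading.Lock()
        self._meta = threading.Lock()       # protects the spinner count
        self._spinners = 0
        self.max_spinners = max_spinners

    def acquire(self) -> None:
        while True:
            with self._meta:
                can_spin = self._spinners < self.max_spinners
                if can_spin:
                    self._spinners += 1
            if can_spin:
                try:
                    while not self._held.acquire(blocking=False):
                        time.sleep(0)        # spin, yielding the CPU politely
                    return
                finally:
                    with self._meta:
                        self._spinners -= 1   # no longer spinning
            else:
                time.sleep(0.001)            # too many spinners: back off, retry

    def release(self) -> None:
        self._held.release()

lock = ThresholdSpinLock(max_spinners=2)
lock.acquire()
lock.release()
```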

27-03-2018 publication date

Assigning device adaptors to use to copy source extents to target extents in a copy relationship

Number: US9928004B2

Provided are a computer program product, system, and method for assigning device adaptors to use to copy source extents in source ranks to target extents in target ranks in a copy relation. A determination is made of an order of the target ranks in the copy relation. Target ranks in the copy relation are selected according to the determined order. For each selected target rank, indication is made in a device adaptor assignment data structure of a source device adaptor and target device adaptor of the device adaptors to use to copy the source rank to the selected target rank indicated in the copy relation, wherein indication is made for the selected target ranks according to the determined order. The source ranks are copied to the selected target ranks using the source and target device adaptors indicated in the device adaptor assignment data structure.

24-07-2018 publication date

Raid 10 reads optimized for solid state drives

Number: US0010031808B2

A mechanism is provided in a data processing system. The mechanism determines a maximum queue depth of a queue for each solid state drive in a plurality of solid state drives. A given data block is mirrored between a group of solid state drives within the plurality of solid state drives. The mechanism tracks outstanding input/output operations in a queue for each of the plurality of solid state drives. For a given read operation to read the given data block, the mechanism identifies a solid state drive within the group of solid state drives based on a number of empty slots in the queue of each solid state drive within the group of solid state drives.
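The read-routing rule amounts to: among the mirrored drives holding the block, send the read to the drive whose queue currently has the most empty slots. A short sketch, with queue depths and drive names chosen only for the example:

```python
# Sketch: choosing which mirrored SSD services a read, based on free queue slots.

from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    max_queue_depth: int
    outstanding: int = 0        # I/Os currently queued on this drive

    @property
    def empty_slots(self) -> int:
        return self.max_queue_depth - self.outstanding

def pick_drive_for_read(mirror_group: list[Drive]) -> Drive:
    """Route the read to the mirrored drive with the most empty queue slots."""
    return max(mirror_group, key=lambda d: d.empty_slots)

mirrors = [Drive("ssd0", max_queue_depth=32, outstanding=30),
           Drive("ssd1", max_queue_depth=32, outstanding=4)]
target = pick_drive_for_read(mirrors)
target.outstanding += 1          # track the new outstanding read
assert target.name == "ssd1"
```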

23-08-2016 publication date

Adjustment of the number of task control blocks allocated for discard scans

Number: US0009424196B2

A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.

10-10-2017 publication date

Asynchronous cleanup after a peer-to-peer remote copy (PPRC) terminate relationship operation

Number: US0009785553B2

For asynchronous cleanup after a peer-to-peer remote copy (PPRC) terminate relationship operation in a computing storage environment by a processor device, asynchronously cleaning up a plurality of PPRC modified sectors bitmaps using a PPRC terminate-relationship cleanup operation by throttling a number of tasks performing the PPRC terminate-relationship cleanup operation, and terminating the PPRC relationship by calling a cache to perform a terminate cleanup bind segment scan operation on a plurality of bind segments.

19-07-2016 publication date

Performing staging or destaging based on the number of waiting discard scans

Number: US0009396114B2

A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed. The controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache, in response to determining that more than the threshold number of discard scans are waiting to be performed.

03-03-2015 publication date

Dynamically adjusted threshold for population of secondary cache

Number: US0008972661B2

The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
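The admission policy reduces to comparing two hit metrics and nudging a threshold up or down before filtering candidates. A compact sketch, assuming simple integer hit counts stand in for the reference metrics:

```python
# Sketch of threshold-based admission to a secondary cache. Hit counts for the
# most recently inserted and most recently evicted data are assumed inputs.

def adjust_threshold(threshold: int,
                     hits_recently_inserted: int,
                     hits_recently_evicted: int,
                     lo: int = 0, hi: int = 100) -> int:
    """Lower the bar if freshly inserted data is paying off more than what was
    evicted; raise it if evicted data was hotter than what replaced it."""
    if hits_recently_inserted > hits_recently_evicted:
        threshold = max(lo, threshold - 1)
    elif hits_recently_inserted < hits_recently_evicted:
        threshold = min(hi, threshold + 1)
    return threshold

def admit(candidate_heat: int, threshold: int) -> bool:
    """Admit candidates whose heat metric meets or exceeds the threshold."""
    return candidate_heat >= threshold

threshold = 10
threshold = adjust_threshold(threshold, hits_recently_inserted=50,
                             hits_recently_evicted=20)   # inserted data is hot
assert threshold == 9 and admit(candidate_heat=9, threshold=threshold)
```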

30-04-2015 publication date

ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS

Number: US20150121007A1

A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.

01-03-1992 publication date

NONSYNCHRONOUS DASD CONTROL

Number: CA0002046708A1

17-02-2010 publication date

Apparatus, system, and method for removing cache data

Number: CN0100590610C

An apparatus, system, and method are disclosed for flushing cache data in a cache system. The apparatus includes a zero module and a flush module. The zero module executes an internal processor instruction to zero out a zero memory segment of a nonvolatile memory and a processor cache in response to a loss of primary power to the processor cache. The flush module flushes modified data from an address in the processor cache to a flush memory segment of the nonvolatile memory before the zero module puts a zero in the address. Advantageously, the zero memory segment is reserved within the memory and used to zero out the processor cache, effectively flushing the existing data from the processor cache to a flush memory segment of the memory.

19-06-2008 publication date

SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR AUTOMATICALLY TRIGGERING OPERATIONS ON A QUEUE PAIR

Number: US2008147822A1

Systems, methods and computer program products for automatically triggering operations on a queue pair (QP). Methods include receiving a command at a remote direct memory access (RDMA) capable adapter. A trigger event element associated with the command is determined. The trigger event element is posted on a triggered QP. A triggeror element on a triggeror QP is posted, where the triggeror QP includes a reference to the triggered QP. A notification that the triggeror element has completed is received. The trigger event element is automatically initiated in response to receiving the notification.

08-12-2015 publication date

Adaptive record caching for solid state disks

Number: US0009207867B2

A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.

15-09-2016 publication date

GROUPING TRACKS FOR DESTAGING

Number: US20160267019A1

Various embodiments for grouping tracks for destaging by a processor device in a computing environment are provided. Tracks are selected for destaging from a least recently used (LRU) list and the selected tracks are moved to a destaging wait list. One of the selected tracks is selected from the destaging wait list and the selected tracks are grouped for destaging. A first track and a last track are located from the group of selected tracks of the destaging wait list. The destaging is commenced from the first track in the group of selected tracks. A track is added to the group of selected tracks if the track is one of modified and located in a cache; otherwise, processing moves to a next one of the selected tracks in the group of selected tracks.

02-01-1985 publication date

METHOD AND APPARATUS FOR HASHING CACHE ADDRESSES IN A CACHED DISK STORAGE SYSTEM

Number: CA1180463A

A cache is accessed based upon addresses to a backing store having a larger address space than the cache. The backing store consists of a plurality of devices exhibiting delay access boundaries. The cache accessing is based upon a hashing method and system derived from the arrangement of the backing store, in an ordered manner, for accommodating the delay access boundaries and enabling rapid adjustment of the hash parameters in accordance with changes in backing store capability and other hardware changes.

08-12-2015 publication date

Indication of a destructive write via a notification from a disk drive that emulates blocks of a first block size within blocks of a second block size

Number: US0009207883B2

A disk drive receives a request to write at least one block of a first block size, wherein the disk drive is configured to store blocks of a second block size that is larger in size than the first block size. The disk drive stores a plurality of emulated blocks of the first block size in each block of the second block size. The disk drive generates a read error, in response to reading a selected block of the second block size in which the at least one block of the first block size is to be written via an emulation. The disk drive performs a destructive write of selected emulated blocks of the first block size that caused the read error to be generated. The disk drive writes the at least one block of the first block size in the selected block of the second block size.

26-05-2015 publication date

Automatically preventing large block writes from starving small block writes in a storage device

Number: US0009043572B2

A mechanism is provided in a storage device for performing a write operation. The mechanism configures a write buffer memory with a plurality of write buffer portions. Each write buffer portion is dedicated to a predetermined block size category within a plurality of block size categories. For each write operation from an initiator, the mechanism determines a block size category of the write operation. The mechanism performs each write operation by writing to a write buffer portion within the plurality of write buffer portions corresponding to the block size category of the write operation.
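The mechanism can be pictured as a write buffer carved into per-category regions, so a burst of large writes cannot consume the space that small writes rely on. A sketch with category boundaries and buffer sizes chosen purely for illustration:

```python
# Sketch: a write buffer partitioned by block-size category so large writes
# cannot starve small ones. Category limits and sizes are illustrative.

CATEGORIES = [            # (name, upper bound on write size in bytes)
    ("small",   4 * 1024),
    ("medium", 64 * 1024),
    ("large",  float("inf")),
]

class PartitionedWriteBuffer:
    def __init__(self, capacity_per_category: int):
        self.free = {name: capacity_per_category for name, _ in CATEGORIES}

    @staticmethod
    def category_of(size: int) -> str:
        for name, bound in CATEGORIES:
            if size <= bound:
                return name
        raise ValueError("unreachable")

    def write(self, size: int) -> bool:
        """Reserve space only from the category's own portion of the buffer."""
        cat = self.category_of(size)
        if self.free[cat] >= size:
            self.free[cat] -= size
            return True
        return False          # that category is full; other categories unaffected

buf = PartitionedWriteBuffer(capacity_per_category=128 * 1024)
assert buf.write(256 * 1024) is False     # large write exceeds its own portion
assert buf.write(2 * 1024) is True        # small writes still have room
```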

21-06-2016 publication date

Indication of a destructive write via a notification from a disk drive that emulates blocks of a first block size within blocks of a second block size

Number: US0009372633B2

A disk drive receives a request to write at least one block of a first block size, wherein the disk drive is configured to store blocks of a second block size that is larger in size than the first block size. The disk drive stores a plurality of emulated blocks of the first block size in each block of the second block size. The disk drive generates a read error, in response to reading a selected block of the second block size in which the at least one block of the first block size is to be written via an emulation. The disk drive performs a destructive write of selected emulated blocks of the first block size that caused the read error to be generated. The disk drive writes the at least one block of the first block size in the selected block of the second block size.

04-10-2016 publication date

Automatically preventing large block writes from starving small block writes in a storage device

Number: US0009459808B2

A mechanism is provided in a storage device for performing a write operation. The mechanism configures a write buffer memory with a plurality of write buffer portions. Each write buffer portion is dedicated to a predetermined block size category within a plurality of block size categories. For each write operation from an initiator, the mechanism determines a block size category of the write operation. The mechanism performs each write operation by writing to a write buffer portion within the plurality of write buffer portions corresponding to the block size category of the write operation.

18-10-2016 publication date

Use of flash cache to improve tiered migration performance

Number: US0009471252B2

For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target, and subsequent to the reading and the writing of the remaining data, destaging updates corresponding to the at least one data segment from either the higher and lower speed caches to the target.

25-10-2016 publication date

Assigning device adaptors to use to copy source extents to target extents in a copy relationship

Number: US0009477418B2

Provided are a computer program product, system, and method for assigning device adaptors to use to copy source extents in source ranks to target extents in target ranks in a copy relation. A determination is made of an order of the target ranks in the copy relation. Target ranks in the copy relation are selected according to the determined order. For each selected target rank, indication is made in a device adaptor assignment data structure of a source device adaptor and target device adaptor of the device adaptors to use to copy the source rank to the selected target rank indicated in the copy relation, wherein indication is made for the selected target ranks according to the determined order. The source ranks are copied to the selected target ranks using the source and target device adaptors indicated in the device adaptor assignment data structure.

12-05-2009 publication date

Management method for spare disk drives in a raid system

Number: US0007533292B2

A RAID system employs a storage controller, a primary storage array having a plurality of primary storage units, and a spare storage pool having one or more spare storage units. A method of operating the storage controller in managing the primary storage array and the spare storage pool involves a testing by the storage controller of at least one repair service threshold representative of one or more operational conditions indicative of a necessity to repair at least one of the primary storage array and the spare storage unit, and a selective initiation by the storage controller of a repair service action for repairing one of the primary storage array and the spare storage unit based on the testing of the at least one repair service threshold.

03-10-2006 publication date

Efficient accumulation of performance statistics in a multi-port network

Number: US0007117118B2

Computer networks are provided with a resource efficient ability to generate link performance statistics. Two counters accumulate the number of I/O operations processed by a link and the time required by the link to complete each I/O operation. The average link utilization per I/O operation may then be calculated. The number of operations per second for a link may be computed by dividing the output from the first counter by a predetermined period of time and the average number of operations using the link may be computed by dividing the output from the second counter by the predetermined period of time. An optional third counter may be employed to accumulate the number of bytes transferred by a link during each I/O operation and used to compute the average size of an I/O operation. The generated statistics are useful for such activities as problem resolution, load balancing and capacity planning.
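The statistics are plain counter arithmetic: one counter of completed I/Os, one accumulator of per-I/O service time, and an optional byte counter; dividing by the sampling interval (or by the operation count) yields rates and averages. A sketch with assumed units:

```python
# Sketch of per-link performance counters. Units (seconds, bytes) and the
# sampling interval are assumptions made for the example.

class LinkStats:
    def __init__(self):
        self.ops = 0            # counter 1: number of I/O operations completed
        self.busy_time = 0.0    # counter 2: summed per-I/O completion time (s)
        self.bytes = 0          # optional counter 3: bytes transferred

    def record(self, duration_s: float, nbytes: int) -> None:
        self.ops += 1
        self.busy_time += duration_s
        self.bytes += nbytes

    def report(self, interval_s: float) -> dict:
        return {
            "ops_per_second": self.ops / interval_s,
            "avg_link_utilization": self.busy_time / interval_s,  # busy fraction
            "avg_io_size_bytes": self.bytes / self.ops if self.ops else 0.0,
        }

link = LinkStats()
link.record(duration_s=0.002, nbytes=8192)
link.record(duration_s=0.004, nbytes=4096)
print(link.report(interval_s=1.0))
# approximately: ops_per_second 2.0, avg_link_utilization 0.006, avg_io_size_bytes 6144.0
```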

10-05-2016 publication date

Performing asynchronous discard scans with staging and destaging operations

Number: US0009336150B2

A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations with respect to the area of the cache.

07-06-2016 publication date

Grouping tracks for destaging

Number: US0009361241B2

Tracks are selected for destaging from a least recently used (LRU) list and the selected tracks are moved to a destaging wait list. The selected tracks are grouped and destaged from the destaging wait list.

27-11-2014 publication date

MINIMIZING DESTAGING CONFLICTS

Number: US20140351532A1

Destage grouping of tracks is restricted to a bottom portion of a least recently used (LRU) list without grouping the tracks at a most recently used end of the LRU list to avoid the destaging conflicts. The destage grouping of tracks is destaged from the bottom portion of the LRU list.

18-03-2014 publication date

Caching data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device

Number: US0008677062B2

Provided are a computer program product, system, and method for caching data in a storage system having multiple caches. A sequential access storage device includes a sequential access storage medium and a non-volatile storage device integrated in the sequential access storage device; received modified tracks are cached in the non-volatile storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A spatial index indicates the modified tracks in the non-volatile storage device in an ordering based on their physical location in the sequential access storage medium. The modified tracks are destaged from the non-volatile storage device by comparing a current position of a write head to physical locations of the modified tracks on the sequential access storage medium indicated in the spatial index to select a modified track to destage from the non-volatile storage device to the storage device.

24-07-2014 publication date

USE OF FLASH CACHE TO IMPROVE TIERED MIGRATION PERFORMANCE

Number: US20140208032A1

For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, and reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target.

24-02-2015 publication date

Populating a first stride of tracks from a first cache to write to a second stride in a second cache

Number: US0008966178B2

Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted.

10-08-2017 publication date

THRESHOLDING TASK CONTROL BLOCKS FOR STAGING AND DESTAGING

Number: US20170228324A1

For thresholding task control blocks (TCBs) for staging and destaging, a first tier of TCBs are reserved for guaranteeing a minimum number of TCBs for staging and destaging for storage ranks. An additional number of requested TCBs are apportioned from a second tier of TCBs to each of the storage ranks based on a scaling factor that is calculated at predefined time intervals. The scaling factor is multiplied by a total number of a plurality of requests from each of the storage ranks for the TCBs from the second tier of TCBs for determining a maximum number of the TCBs to be allocated to each of the storage ranks.
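The arithmetic is: every rank keeps a guaranteed tier-1 allocation, and tier-2 TCBs are capped per rank at the scaling factor multiplied by that rank's outstanding requests. A sketch with assumed pool sizes and request counts:

```python
# Sketch of two-tier task-control-block (TCB) apportionment across storage
# ranks. Pool sizes and request counts are illustrative numbers.

def apportion_tcbs(requests_per_rank: dict[str, int],
                   tier1_per_rank: int,
                   tier2_pool: int) -> dict[str, int]:
    """Each rank gets a guaranteed tier-1 minimum; tier-2 TCBs are capped at
    scaling_factor * (that rank's request count), recomputed from the pool."""
    total_requests = sum(requests_per_rank.values())
    scaling_factor = tier2_pool / total_requests if total_requests else 0.0
    return {
        rank: tier1_per_rank + int(scaling_factor * nreq)
        for rank, nreq in requests_per_rank.items()
    }

limits = apportion_tcbs(
    requests_per_rank={"rank0": 300, "rank1": 100, "rank2": 100},
    tier1_per_rank=10,          # guaranteed minimum per rank
    tier2_pool=200,             # shared second-tier pool
)
print(limits)   # {'rank0': 130, 'rank1': 50, 'rank2': 50}
```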

02-08-2012 publication date

ADAPTIVE PRESTAGING IN A STORAGE CONTROLLER

Number: US20120198148A1

In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Still further, a cache prestaging operation in accordance with further aspects may decrease one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from prestaged tracks being demoted before they are used. Conversely, a cache prestaging operation in accordance with another aspect may increase one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from waiting for a stage to complete. In yet another aspect, the prestage trigger may not be limited by the prestage amount. Instead, the pre-stage trigger may be permitted to expand as conditions warrant it by prestaging additional tracks and thereby effectively increasing the potential range for the prestage trigger. Other features and aspects may be realized, depending upon the particular application.

1. A method, comprising: reading units of data of a sequential stream of read data stored in at least one of a first storage drive having a drive speed and a second storage drive having a drive speed; setting a prestage trigger to initiate prestaging of units of read data; in response to reaching the trigger as the units of the sequential stream of read data are read, prestaging an amount of units of data of the sequential stream of read data from a storage drive into a cache memory in anticipation of a read request; and modifying at ...

26-04-2016 publication date

Systems and methods for background destaging storage tracks

Number: US0009323694B2

Storage tracks from at least one server are destaged from the write cache rank when it is determined that the at least one server is idle with respect to a first set of ranks, and storage tracks are refrained from being destaged from each rank when it is determined that the at least one server is not idle with respect to a second set of ranks such that storage tracks in the first set of ranks may be destaged while storage tracks in the second set of ranks are not being destaged.

18-07-2013 publication date

Demoting partial storage tracks from a first cache to a second cache

Number: DE102013200032A1

A determination is made of a storage track to be demoted from the first cache to the second cache, wherein the storage track in the first cache corresponds to a storage track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache contains a stale version of the storage track that was demoted from the first cache, a determination is made as to whether the stale version of the storage track contains track sectors that are not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track that are not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second ...

02-06-2016 publication date

USE OF FLASH CACHE TO IMPROVE TIERED MIGRATION PERFORMANCE

Number: US20160154605A1

For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target, and subsequent to the reading and the writing of the remaining data, destaging updates corresponding to the at least one data segment from either the higher and lower speed caches to the target.

28-04-2015 publication date

Demoting partial tracks from a first cache to a second cache

Number: US0009021201B2

A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache.
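The merge can be thought of per sector: take the sector from the freshly demoted copy when present, and fall back to the stale copy's sector otherwise. A sketch using dictionaries keyed by sector number (purely illustrative structures, not the patented format):

```python
# Sketch: building the new version of a track from a partially demoted track
# plus a stale copy already in the second cache.

def merge_track(demoted: dict[int, bytes], stale: dict[int, bytes]) -> dict[int, bytes]:
    """Sectors present in the demoted track win; sectors only present in the
    stale version fill the gaps; the result is the new version of the track."""
    new_version = dict(stale)      # start from the stale copy's sectors
    new_version.update(demoted)    # overwrite with the freshly demoted sectors
    return new_version

demoted_track = {0: b"new-sector-0", 3: b"new-sector-3"}        # partial track
stale_track   = {0: b"old-sector-0", 1: b"old-sector-1", 2: b"old-sector-2"}
new_track = merge_track(demoted_track, stale_track)
assert new_track == {0: b"new-sector-0", 1: b"old-sector-1",
                     2: b"old-sector-2", 3: b"new-sector-3"}
```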

05-02-2015 publication date

THRESHOLDING TASK CONTROL BLOCKS FOR STAGING AND DESTAGING

Number: US2015040135A1

For thresholding task control blocks (TCBs) for staging and destaging, a first tier of TCBs are reserved for guaranteeing a minimum number of TCBs for staging and destaging for storage ranks. An additional number of requested TCBs are apportioned from a second tier of TCBs to each of the storage ranks based on a scaling factor that is calculated at predefined time intervals.

21-03-2017 publication date

Asynchronous cleanup after a peer-to-peer remote copy (PPRC) terminate relationship operation

Number: US0009600277B2

For asynchronous cleanup after a peer-to-peer remote copy (PPRC) terminate relationship operation in a computing storage environment by a processor device, asynchronously cleaning up a plurality of PPRC modified sectors bitmaps using a PPRC terminate-relationship cleanup operation by throttling a number of tasks performing the PPRC terminate-relationship cleanup operation while releasing a plurality of bind segments until completion of the PPRC terminate-relationship cleanup operation.

07-12-2010 publication date

Create virtual track buffers in NVS using customer segments to maintain newly written data across a power loss

Number: US0007849254B2

A method for storing customer data at a non-volatile storage (NVS) at a storage server. A track buffer is maintained for identifying first and second sets of segments that are allocated in the NVS. A flag in the track buffer identifies which of the first and second sets of segments to use for storing customer data for which a write request has been made. The customer data is stored in the NVS in successive commit processes. Following a power loss in the storage server, the NVS uses the track buffer information to identify which of the first and second sets of segments was involved in the current commit process to allow the current commit process to be completed.

26-05-2020 publication date

Replicating tracks from a first storage site to a second and third storage sites

Number: US0010664177B2

Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage.

15-12-2015 publication date

Adaptive record caching for solid state disks

Number: US0009213488B2

A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.

19-05-2015 publication date

Tiered caching and migration in differing granularities

Number: US0009037791B2

For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage, clumped hot ones of the groups of data segments are migrated to use the SSD portion while using the lower-speed cache for a remaining portion of the clumped hot ones, and sparsely hot ones of the groups of data segments are migrated to use the lower-speed cache while using a lower one of the tiered levels of storage for a remaining portion of the sparsely hot ones.

14-01-2014 publication date

Prefetching data tracks and parity data to use for destaging updated tracks

Number: US0008631190B2

Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks.

24-07-2014 publication date

USE OF DIFFERING GRANULARITY HEAT MAPS FOR CACHING AND MIGRATION

Number: US20140207995A1

For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to utilize a Solid State Drive (SSD) portion of the tiered levels of storage, while sparsely hot ones of the groups of data segments are migrated to utilize the lower-speed cache.

03-01-2017 publication date

Optimizing peer-to-peer remote copy (PPRC) transfers for partial write operations using a modified sectors bitmap

Number: US0009535610B2

For optimizing peer-to-peer remote copy (PPRC) transfers for partial write operations in a computing storage environment by a processor device by maintaining a PPRC modified sectors bitmap in bind segments upon demoting a track out of a cache for transferring a partial track after demoting the track, wherein a hash table is used for locating the PPRC modified sectors bitmap.

19-07-2007 publication date

Selecting a path comprising ports on primary and secondary clusters to use to transmit data at a primary volume to a secondary volume

Number: US2007168581A1

Provided are a method, system and program for selecting a path comprising ports on primary and secondary clusters to use to transmit data at a primary volume to a secondary volume. A request is received to copy data from a primary storage location to a secondary storage location. A determination is made from a plurality of primary clusters of an owner primary cluster for the primary storage location, wherein the primary clusters are configured to access the primary storage location. A determination is made as to whether there is at least one port on the owner primary cluster providing an available path to the secondary storage location. One port on the owner primary cluster is selected to use to copy the data to the secondary storage location in response to determining that there is at least one port on the owner primary cluster available to transmit to the secondary storage location.

22-10-2013 publication date

Intelligent write caching for sequential tracks

Number: US0008566518B2

Write caching for sequential tracks is performed by a processor device in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit. If a first track is determined to be sequential, and an earlier track is also determined to be sequential, a temporal bit associated with the earlier track is cleared to allow for destage of data of the earlier track. If a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, a stride associated with the one of the plurality of additional tracks is selected for a destage operation. If the NVS exceeds a predetermined storage threshold, a predetermined one of the plurality of strides is selected for the destage operation.

21-06-2016 publication date

RAID 10 reads optimized for solid state drives

Number: US0009372642B2

A mechanism is provided in a data processing system. The mechanism determines a maximum queue depth of a queue for each solid state drive in a plurality of solid state drives. A given data block is mirrored between a group of solid state drives within the plurality of solid state drives. The mechanism tracks outstanding input/output operations in a queue for each of the plurality of solid state drives. For a given read operation to read the given data block, the mechanism identifies a solid state drive within the group of solid state drives based on a number of empty slots in the queue of each solid state drive within the group of solid state drives.

13-04-2006 publication date

Apparatus and method to manage a data cache

Number: US2006080510A1

A method is disclosed to manage a data cache. The method provides a data cache comprising a plurality of tracks, where each track comprises one or more segments. The method further maintains a first LRU list comprising one or more first tracks having a low reuse potential, maintains a second LRU list comprising one or more second tracks having a high reuse potential, and sets a target size for the first LRU list. The method then accesses a track, and determines if that accessed track comprises a first track. If the method determines that the accessed track comprises a first track, then the method increases the target size for said first LRU list. Alternatively, if the method determines that the accessed track comprises a second track, then the method decreases the target size for said first LRU list. The method demotes tracks from the first LRU list if its size exceeds the target size; otherwise, the method evicts tracks from the second LRU list.
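This is an adaptive split of the cache between two LRU lists, with the target size of the low-reuse list nudged toward whichever list is currently seeing hits. A simplified sketch in that spirit; capacities, the adjustment step, and the insertion policy are assumptions for the example:

```python
# Sketch: two LRU lists (low-reuse vs. high-reuse tracks) with an adaptive
# target size for the first list. Capacities and step size are illustrative.

from collections import OrderedDict

class DualLRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.target_low = capacity // 2          # target size of the low-reuse list
        self.low = OrderedDict()                 # first LRU list: low reuse potential
        self.high = OrderedDict()                # second LRU list: high reuse potential

    def access(self, track: str) -> None:
        if track in self.low:                    # hit on a low-reuse track:
            self.target_low = min(self.capacity, self.target_low + 1)   # grow target
            self.low.move_to_end(track)
        elif track in self.high:                 # hit on a high-reuse track:
            self.target_low = max(0, self.target_low - 1)               # shrink target
            self.high.move_to_end(track)
        else:                                    # miss: insert as low-reuse for now
            self.low[track] = True
        self._evict_if_needed()

    def _evict_if_needed(self) -> None:
        while len(self.low) + len(self.high) > self.capacity:
            # Demote from the first list if it exceeds its target size,
            # otherwise evict from the second list.
            victim_list = self.low if len(self.low) > self.target_low else self.high
            victim_list.popitem(last=False)      # drop the least recently used track

cache = DualLRUCache(capacity=4)
for t in ["a", "b", "c", "d", "a", "e"]:
    cache.access(t)
assert len(cache.low) + len(cache.high) == 4
```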

24-01-2013 publication date

EFFICIENT TRACK DESTAGE IN SECONDARY STORAGE

Number: US20130024628A1

Exemplary method, system, and computer program product embodiments for efficient track destage in secondary storage in a more effective manner, are provided. In one embodiment, by way of example only, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. Additional system and computer program product embodiments are disclosed and provide related advantages.

1. A method for efficient track destage in secondary storage in a computing storage environment by a processor device, comprising: for a plurality of temporal bits employed with a plurality of sequential bits for controlling the timing for destaging the track in a primary storage, transferring the plurality of temporal bits and the plurality of sequential bits from the primary storage to the secondary storage, and allowing the plurality of temporal bits to age on the secondary storage.

2. The method of claim 1, further including, in conjunction with the transferring, performing on the primary storage at least one of: querying a cache for at least one of a determination of whether the track is sequential and, if the track is modified, the plurality of temporal bits, and saving the at least one of the determination of whether the track is sequential and, if the track is modified, the plurality of temporal bits in a cache directory control block (CDB) before the track is transferred.

3. The method of claim 1, further including, in conjunction with the transferring, performing on the secondary storage at least one of: receiving at least one of the plurality of temporal bits, the plurality of sequential bits, and the CDB, creating a cache track, and writing data to the cache track.

4. The method of claim 3, further including performing at least one of: ...

29-11-2012 publication date

USING AN ATTRIBUTE OF A WRITE REQUEST TO DETERMINE WHERE TO CACHE DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE

Number: US20120303863A1

Provided are a computer program product, system, and method for using an attribute of a write request to determine where to cache data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device. Received modified tracks are cached in the non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks. A write request having modified tracks is received. A determination is made as to whether an attribute of the received write request satisfies a condition. The received modified tracks for the write request are cached in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition. A destage request is added to a request queue for the received write request having the determined attribute not satisfying the condition.

1. A computer program product for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations, the operations comprising: caching received modified tracks in a non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks; receiving a write request having modified tracks; determining whether an attribute of the received write request satisfies a condition; caching the received modified tracks for the write request in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition; adding a destage request to a request queue for the received write request having the determined attribute not satisfying the condition; and writing the received modified tracks for the ...

18-08-2011 publication date

Integrating A Flash Cache Into Large Storage Systems

Number: US20110202708A1

An I/O enclosure module is provided with one or more I/O enclosures having a plurality of slots for receiving electronic devices. A host adapter is connected to a first slot of the I/O enclosure module and is configured to connect a host to the I/O enclosure. A device adapter is connected to a second slot of the I/O enclosure module and is configured to connect a storage device to the I/O enclosure module. A flash cache is connected to a third slot of the I/O enclosure module and includes a flash-based memory configured to cache data associated with data requests handled through the I/O enclosure module. A primary processor complex manages data requests handled through the I/O enclosure module by communicating with the host adapter, device adapter, and flash cache to manage the data requests.

20-03-2014 publication date

PREFERENTIAL CPU UTILIZATION FOR TASKS

Number: US20140082631A1

A set of like tasks to be performed is organized into a first group. Upon a determined imbalance between dispatch queue depths greater than a predetermined threshold, the set of like tasks is reassigned to an additional group.

29-11-2012 publication date

MANAGING TRACK DISCARD REQUESTS TO INCLUDE IN DISCARD TRACK MESSAGES

Number: US20120303899A1

Provided are a computer program product, system, and method for managing track discard requests to include in discard track messages. A backup copy of a track in a cache is maintained in the cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. In response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent indicating the tracks indicated in the queued predetermined number of track discard requests to the cache backup device instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message. In response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode.
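The queueing behaviour can be sketched as: accumulate discard requests and, while in multi-track mode, emit one multi-track message per full batch; after enough idle periods, switch to single-track messages and drain the queue. A simplified sketch with assumed thresholds and message format:

```python
# Sketch of batching track-discard requests into multi-track messages.
# Batch size, idle limit, and the message format are assumptions.

from collections import deque

class DiscardBatcher:
    def __init__(self, batch_size: int = 4, idle_limit: int = 3):
        self.queue = deque()
        self.batch_size = batch_size
        self.idle_limit = idle_limit
        self.idle_periods = 0
        self.multi_track_mode = True

    def add(self, track: int, send) -> None:
        """Queue a discard request; flush a multi-track message once the
        predetermined number of requests has accumulated."""
        self.queue.append(track)
        self.idle_periods = 0
        if self.multi_track_mode and len(self.queue) >= self.batch_size:
            send(("discard_multiple", list(self.queue)))
            self.queue.clear()
        elif not self.multi_track_mode:
            send(("discard_single", self.queue.popleft()))

    def tick_idle(self, send) -> None:
        """Called each period with no new requests; after enough idle periods,
        switch to single-track mode and drain anything still queued."""
        self.idle_periods += 1
        if self.idle_periods >= self.idle_limit:
            self.multi_track_mode = False
            while self.queue:
                send(("discard_single", self.queue.popleft()))

messages = []
batcher = DiscardBatcher(batch_size=3)
for track in (10, 11, 12, 13):
    batcher.add(track, messages.append)
for _ in range(3):
    batcher.tick_idle(messages.append)
assert messages == [("discard_multiple", [10, 11, 12]), ("discard_single", 13)]
```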

24-01-2013 publication date

PREFETCHING TRACKS USING MULTIPLE CACHES

Number: US20130024624A1

Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request.

1. A computer program product for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations, the operations comprising: receiving a prefetch request from the system indicating prefetch tracks in the sequential access storage medium; processing the prefetch request to read the prefetch tracks from the sequential access storage medium; caching the accessed prefetch tracks in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium; receiving a read request for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower ...

14-06-2012 publication date

SYSTEMS AND METHODS FOR MANAGING CACHE DESTAGE SCAN TIMES

Number: US20120151151A1

Systems and methods for managing destage scan times in a cache are provided. One system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. Physical computer storage mediums including a computer program product for performing the above method are also provided.

13-03-2014 publication date

REPLICATING TRACKS FROM A FIRST STORAGE SITE TO A SECOND AND THIRD STORAGE SITES

Number: US20140075110A1

Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage.

1. A computer program product for copying data from a first storage to a second storage and a third storage, wherein a cache is used to cache data for the second storage, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to cause operations, the operations comprising: providing a point-in-time copy relationship to copy tracks as of a point-in-time in the first storage to a second storage; providing a mirror copy relationship to copy data in the second storage to the third storage; determining a track in the first storage to transfer to the second storage as part of the point-in-time copy relationship; determining a stride of tracks including the target track; staging the stride of tracks including the target track from the first storage to the cache according to the point-in-time copy relationship; destaging the staged stride from the cache to the second storage; transferring the stride in the cache to the third storage as part of the mirror copy relationship; and demoting the stride of tracks in the cache in response to destaging the stride of ...

Подробнее
10-10-2017 дата публикации

Efficient free-space management of multi-target peer-to-peer remote copy (PPRC) modified sectors bitmap in bind segments

Номер: US0009785349B2

For efficient free-space management of the multi-target peer-to-peer remote copy (PPRC) modified sectors bitmaps in bind segments, a list of bind segments having free slots is maintained for each storage volume. Each one of the bind segments includes a bitmap of the free slots. The bind segments are used to store a plurality of PPRC modified sectors bitmaps as needed, where all of the bind segments have a header and a plurality of free slots to store the plurality of PPRC modified sectors bitmaps.
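A rough sketch of the free-slot bookkeeping described above, under the assumption that a bind segment's free-slot bitmap can be modelled as a Python integer bit field; the 64-slot segment size is an arbitrary placeholder.

```python
# Illustrative free-slot bookkeeping for bind segments; the slot count and the
# bitmap representation (a Python integer used as a bit field) are assumptions.
SLOTS_PER_SEGMENT = 64

class BindSegment:
    def __init__(self):
        self.free_bitmap = (1 << SLOTS_PER_SEGMENT) - 1   # all slots free
        self.slots = [None] * SLOTS_PER_SEGMENT           # PPRC modified-sectors bitmaps

    def allocate(self, sectors_bitmap):
        if not self.free_bitmap:
            return None
        slot = (self.free_bitmap & -self.free_bitmap).bit_length() - 1  # lowest free slot
        self.free_bitmap &= ~(1 << slot)
        self.slots[slot] = sectors_bitmap
        return slot

    def release(self, slot):
        self.slots[slot] = None
        self.free_bitmap |= 1 << slot

def allocate_slot(segments_with_free_slots, sectors_bitmap):
    # Keep one list per volume of segments that still have free slots.
    for seg in segments_with_free_slots:
        slot = seg.allocate(sectors_bitmap)
        if slot is not None:
            if not seg.free_bitmap:
                segments_with_free_slots.remove(seg)   # segment is now full
            return seg, slot
    seg = BindSegment()                                # grow the list on demand
    segments_with_free_slots.append(seg)
    return seg, seg.allocate(sectors_bitmap)

if __name__ == "__main__":
    volume_segments = []
    seg, slot = allocate_slot(volume_segments, 0b1010)
    print(slot, bin(seg.free_bitmap & 0b11))   # slot 0 taken, slot 1 still free
```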

Подробнее
21-02-2019 дата публикации

Performing asynchronous discard scans with staging and destaging operations

Номер: DE102013209318B4

A controller receives a request to perform staging or destaging operations with respect to a region of a cache. It is determined whether one or more discard scans are being performed or are queued for the region of the cache. In response to determining that one or more discard scans are being performed or are queued for the region of the cache, the controller avoids servicing the request to perform the staging or destaging operations, or a read hit, with respect to the region of the cache.

Подробнее
20-03-2014 дата публикации

EFFICIENT CACHE VOLUME SIT SCANS

Номер: US20140082283A1

A processor, operable in a computing storage environment, allocates portions of a Scatter Index Table (SIT) disproportionately between a larger portion dedicated for meta data tracks, and a smaller portion dedicated for user data tracks, and processes a storage operation through the disproportionately allocated portions of the SIT using an allocated number of Task Control Blocks (TCB). 1. A method for cache management by a processor device in a computing storage environment, the method comprising: allocating portions of a Scatter Index Table (SIT) disproportionately between a larger portion dedicated for meta data tracks, and a smaller portion dedicated for user data tracks; and processing a storage operation through the disproportionately allocated portions of the SIT using an allocated number of Task Control Blocks (TCB). 2. The method of claim 1, further including allocating the number of TCBs. 3. The method of claim 1, further including, pursuant to processing the storage operation through the disproportionately allocated portions of the SIT, performing a cache scan operation, wherein the TCBs are scan TCBs. 4. The method of claim 1, further including determining relative sizes of the disproportionately allocated portions of the SIT. 5. The method of claim 1, further including acquiring a data segment to be processed according to the storage operation. 6. The method of claim 5, further including, subsequent to acquiring the data segment to be processed, determining, by one of the TCBs, if the data segment to be processed is a user data segment or a meta data segment. 7. The method of claim 6, further including: if the data segment is a user data segment, moving a pointer to point to a next user track area of the user data track portion of the SIT, and processing the SIT for the user data track, and if the data segment is a meta data segment, moving the pointer to point to a next meta data track area of ...

Подробнее
12-04-2012 дата публикации

MULTIPLE INCREMENTAL VIRTUAL COPIES

Номер: US20120089795A1

Provided are techniques for, in response to establishing each incremental virtual copy from a source to a target, creating a target change recording structure for the target. While performing destage to a source data block at the source, it is determined that there is at least one incremental virtual copy target for this source data block. For each incremental virtual copy relationship where the source data block is newer than the incremental virtual copy relationship and an indicator is set in a target inheritance structure on the target for a corresponding target data block, the source data block is copied to each corresponding target data block, and an indicator is set in each target change recording structure on each target for the target data block corresponding to the source data block being destaged. 1. A method, comprising: using a computer including a processor, in response to establishing each incremental virtual copy from a source to a target, creating a target change recording structure for the target; while performing destage to a source data block at the source, determining that there is at least one incremental virtual copy target for this source data block; and for each incremental virtual copy relationship where the source data block is newer than the incremental virtual copy relationship and an indicator is set in a target inheritance structure on the target for a corresponding target data block, copying the source data block to each corresponding target data block, and setting an indicator in each target change recording structure on each target for the target data block corresponding to the source data block being destaged. 2. The method of claim 1, further comprising: merging the target change recording structure with the target inheritance structure to indicate source data blocks that are to be copied to corresponding target data blocks; performing a destage scan synchronous to the incremental virtual copy on the source; and ...

Подробнее
30-06-2015 дата публикации

Systems and methods for destaging storage tracks from cache

Номер: US0009069683B2

A system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes decrementing each multi-bit counter each scan cycle, and destaging each storage track including a zero count.
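A small sketch of the counter scheme described above; the 2-bit cap, the increment size, and the destage callback are placeholders rather than values taken from the patent.

```python
# Illustrative per-track multi-bit counter: writes increment, each scan cycle
# decrements, and a track is destaged when its counter reaches zero.
COUNTER_MAX = 3   # a 2-bit counter (placeholder)

class Cache:
    def __init__(self, n_tracks):
        self.counters = [0] * n_tracks
        self.dirty = [False] * n_tracks

    def write(self, track, increment=1):
        # Each host write bumps the track's counter by a predetermined amount.
        self.dirty[track] = True
        self.counters[track] = min(COUNTER_MAX, self.counters[track] + increment)

    def scan_cycle(self, destage_fn):
        # One scan cycle: decrement every counter; destage tracks that reach zero.
        for track in range(len(self.counters)):
            if not self.dirty[track]:
                continue
            if self.counters[track] > 0:
                self.counters[track] -= 1
            if self.counters[track] == 0:
                destage_fn(track)
                self.dirty[track] = False

if __name__ == "__main__":
    cache = Cache(4)
    cache.write(1)                    # written once
    cache.write(2); cache.write(2)    # written twice, survives one extra cycle
    for cycle in range(3):
        destaged = []
        cache.scan_cycle(destaged.append)
        print("cycle", cycle, "destaged", destaged)
```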

Подробнее
26-05-2015 дата публикации

Adjustment of the number of task control blocks allocated for discard scans

Номер: US0009043550B2

A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.
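One plausible way to express the allocation decision described above; the total TCB budget, the per-scan minimum, and the "budget minus what is already allocated" rule are assumptions for illustration only.

```python
# Illustrative TCB budgeting for discard scans; TOTAL_TCBS and MIN_PER_SCAN are
# made-up numbers, and the split rule is only one plausible policy.
TOTAL_TCBS = 40
MIN_PER_SCAN = 2

def tcbs_for_new_scan(tcbs_in_progress_scans):
    """Decide how many TCBs a new discard scan may use, given the TCBs
    already allocated to scans that are still in progress."""
    already_allocated = sum(tcbs_in_progress_scans)
    remaining = TOTAL_TCBS - already_allocated
    return max(MIN_PER_SCAN, remaining)

if __name__ == "__main__":
    print(tcbs_for_new_scan([]))          # no scans running: use the full budget
    print(tcbs_for_new_scan([16, 16]))    # most of the budget is taken: 8 left
    print(tcbs_for_new_scan([22, 20]))    # budget exhausted: fall back to the minimum
```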

Подробнее
24-06-2014 дата публикации

Destaging of write ahead data set tracks

Номер: US0008762646B2

Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z-2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N-X).

Подробнее
03-03-2016 дата публикации

Efficient task scheduling using a lock mechanism

Номер: DE112014002754T5

For efficient task scheduling using a lock mechanism, a new task is allowed to spin on the lock mechanism if the number of tasks already spinning on the lock mechanism is smaller than a predefined threshold for parallel operations that require locks between the multiple threads.
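A sketch of the spin-or-queue decision described above, assuming a made-up SPIN_THRESHOLD and a simple waiter queue; the real lock mechanism and task dispatcher are not shown.

```python
import threading
from collections import deque

# Illustrative spin-or-queue decision; SPIN_THRESHOLD and the queue handling are
# assumptions, not the patented scheduler.
SPIN_THRESHOLD = 4

class ThresholdLock:
    def __init__(self):
        self._lock = threading.Lock()
        self._meta = threading.Lock()
        self.spinners = 0
        self.waiters = deque()

    def acquire(self, task_id):
        with self._meta:
            can_spin = self.spinners < SPIN_THRESHOLD
            if can_spin:
                self.spinners += 1
            else:
                self.waiters.append(task_id)    # too many spinners: queue the task
        if can_spin:
            self._lock.acquire()                # spin (block) on the lock itself
            with self._meta:
                self.spinners -= 1
            return True
        return False                            # caller re-dispatches the task later

    def release(self):
        self._lock.release()

if __name__ == "__main__":
    lock = ThresholdLock()
    assert lock.acquire("t0") is True     # no contention: task spins and gets the lock
    lock.release()
    lock.spinners = SPIN_THRESHOLD        # pretend the spin limit is already reached
    print(lock.acquire("t1"), list(lock.waiters))   # False ['t1'] -> task was queued
```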

Подробнее
24-10-2017 дата публикации

Managing caching of extents of tracks in a first cache, second cache and storage

Номер: US0009798676B2

Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.

Подробнее
23-04-2014 дата публикации

Indication of a destructive write via a notification from a disk drive that emulates blocks of a first block size within blocks of a second block size

Номер: CN103748568A

A disk drive receives a request to write at least one block of a first block size, wherein the disk drive is configured to store blocks of a second block size that is larger in size than the first block size, and wherein the disk drive stores via emulation a plurality of emulated blocks of the first block size in each block of the second block size. The disk drive generates a read error, in response to reading a selected block of the second block size in which the at least one block of the first block size is to be written via the emulation. The disk drive performs a destructive write of selected emulated blocks of the first block size that caused the read error to be generated. The disk drive writes the at least one block of the first block size in the selected block of the second block size. The disk drive sends a notification to indicate the performing of the destructive write.

Подробнее
05-05-2015 дата публикации

Demoting partial tracks from a first cache to a second cache

Номер: US0009026732B2

A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache.
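A sketch of the sector merge described above, assuming a track can be modelled as a dictionary from sector number to data; sectors present in the demoted partial track override the stale copy, and missing sectors are carried forward from it.

```python
# Illustrative sector merge for demoting a partial track; tracks are modelled as
# dicts mapping sector number -> bytes, which is an assumption of this sketch.
def demote_partial_track(track_id, first_cache, second_cache):
    demoted = first_cache.pop(track_id)              # partial track leaving the first cache
    stale = second_cache.get(track_id)

    if stale is None:
        second_cache[track_id] = demoted             # nothing to merge with
        return

    # Stale sectors not present in the demoted track are carried forward;
    # sectors present in the demoted track override the stale copy.
    merged = dict(stale)
    merged.update(demoted)
    second_cache[track_id] = merged                  # new version of the track

if __name__ == "__main__":
    first_cache = {7: {0: b"new0", 3: b"new3"}}                  # partial: sectors 0 and 3
    second_cache = {7: {0: b"old0", 1: b"old1", 5: b"old5"}}     # stale copy in second cache
    demote_partial_track(7, first_cache, second_cache)
    print(sorted(second_cache[7]))        # [0, 1, 3, 5]
    print(second_cache[7][0])             # b'new0' (demoted data wins)
```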

Подробнее
03-06-2014 дата публикации

Using an attribute of a write request to determine where to cache data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device

Номер: US0008745325B2

Provided are a computer program product, system, and method for using an attribute of a write request to determine where to cache data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device. Received modified tracks are cached in the non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks. A write request having modified tracks is received. A determination is made as to whether an attribute of the received write request satisfies a condition. The received modified tracks for the write request are cached in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition. A destage request is added to a request queue for the received write request having the determined attribute not satisfying the condition.

Подробнее
24-06-2014 дата публикации

Prefetching tracks using multiple caches

Номер: US0008762650B2

Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request.
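A sketch of the priority handling described above, assuming a heap-based request queue where read requests sort ahead of prefetch requests and a dictionary stands in for the integrated non-volatile cache; the priority values and helper names are invented for the sketch.

```python
import heapq
import itertools

# Illustrative low-priority prefetch handling; the priority values and the
# in-memory "nv_cache" dict stand in for the real device structures.
READ, PREFETCH = 0, 1         # lower number = served first
_counter = itertools.count()  # FIFO tie-break within a priority class

def enqueue(queue, kind, tracks):
    heapq.heappush(queue, (kind, next(_counter), tracks))

def service_one(queue, medium, nv_cache):
    kind, _, tracks = heapq.heappop(queue)
    data = {t: medium[t] for t in tracks}        # access the sequential medium
    if kind == PREFETCH:
        nv_cache.update(data)                    # cache prefetched tracks in NVS
        return None
    return data                                  # read request returns data

def read(queue, nv_cache, tracks):
    if all(t in nv_cache for t in tracks):       # prefetched earlier: no medium access
        return {t: nv_cache[t] for t in tracks}
    enqueue(queue, READ, tracks)                 # otherwise queue ahead of prefetches
    return None

if __name__ == "__main__":
    medium = {t: f"track-{t}" for t in range(10)}
    queue, nv_cache = [], {}
    enqueue(queue, PREFETCH, [4, 5, 6])
    enqueue(queue, READ, [1])
    print(service_one(queue, medium, nv_cache))   # {1: 'track-1'}: read served first
    service_one(queue, medium, nv_cache)          # prefetch fills the NV cache
    print(read(queue, nv_cache, [4, 5, 6]))       # answered from the NV cache
```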

Подробнее
26-12-2017 дата публикации

NVS thresholding for efficient data management

Номер: US0009852059B2

For data management by a processor device in a computing storage environment, a threshold for an amount of Non Volatile Storage (NVS) space to be consumed by any particular logically contiguous storage space in the computing storage environment is established based on at least one of a Redundant Array of Independent Disks (RAID) type, a number of point-in-time copy source data segments in the logically contiguous storage space, and a storage classification.
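A sketch of how such a threshold could be derived from the three inputs named above; every percentage and scaling factor below is invented for illustration.

```python
# Illustrative NVS threshold calculation; all numbers are invented placeholders,
# chosen only to show how RAID type, point-in-time copy sources, and storage
# classification could be combined into a single limit.
BASE_THRESHOLD = {           # fraction of NVS a single rank may consume, by RAID type
    "RAID5": 0.25,
    "RAID6": 0.20,
    "RAID10": 0.33,
}

def nvs_threshold(raid_type, pit_copy_sources, storage_class):
    limit = BASE_THRESHOLD.get(raid_type, 0.25)
    if pit_copy_sources > 0:
        # Point-in-time copy sources add copy-on-write traffic; tighten the limit.
        limit *= 0.8 if pit_copy_sources < 10 else 0.6
    if storage_class == "nearline":
        limit *= 0.5          # slow backing storage drains NVS slowly
    return limit

def may_accept_write(nvs_used, nvs_total, raid_type, pit_copy_sources, storage_class):
    return nvs_used / nvs_total < nvs_threshold(raid_type, pit_copy_sources, storage_class)

if __name__ == "__main__":
    print(nvs_threshold("RAID5", 0, "enterprise"))    # 0.25
    print(nvs_threshold("RAID6", 12, "nearline"))     # 0.06
    print(may_accept_write(200, 1000, "RAID5", 0, "enterprise"))  # True
```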

Подробнее
17-04-2014 дата публикации

MERGING AN OUT OF SYNCHRONIZATION INDICATOR AND A CHANGE RECORDING INDICATOR IN RESPONSE TO A FAILURE IN CONSISTENCY GROUP FORMATION

Номер: US20140108753A1

A first data structure stores indications of storage locations that need to be copied for forming a consistency group. A second data structure stores indications of new host writes subsequent to starting a point in time copy operation to form the consistency group. Read access is secured to a metadata storage area and a determination is made as to whether the second data structure indicates that there are any new host writes. In response to determining that the second data structure indicates that there are new host writes, write access is secured to the metadata storage area, the first data structure is updated with contents of the second data structure to determine which additional storage locations need to be copied for formation of a next consistency group, and the second data structure is updated to indicate that the second data structure is in an initialized state.
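A sketch of the merge step described above, assuming the out-of-sync and change-recording structures are bitmaps modelled as Python integers and that a single lock stands in for securing read and write access to the metadata area.

```python
import threading

# Illustrative merge of the change-recording (CR) structure into the
# out-of-sync (OOS) structure; Python ints are used as bitmaps (an assumption),
# and the lock stands in for securing access to the metadata storage area.
metadata_lock = threading.RLock()

def merge_on_failed_consistency_group(state):
    with metadata_lock:                       # read access: is there anything to merge?
        if state["cr"] == 0:
            return
    with metadata_lock:                       # write access: perform the merge
        state["oos"] |= state["cr"]           # these locations must be copied for the
        state["cr"] = 0                       # next consistency group; CR re-initialized

if __name__ == "__main__":
    state = {"oos": 0b0011, "cr": 0b0100}     # one new host write recorded during the copy
    merge_on_failed_consistency_group(state)
    print(bin(state["oos"]), state["cr"])     # 0b111 0
```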

Подробнее
10-10-2017 дата публикации

Integrating a flash cache into large storage systems

Номер: US0009785561B2

An I/O enclosure module is provided with one or more I/O enclosures having a plurality of slots for receiving electronic devices. A host adapter is connected to a first slot of the I/O enclosure module and is configured to connect a host to the I/O enclosure. A device adapter is connected to a second slot of the I/O enclosure module and is configured to connect a storage device to the I/O enclosure module. A flash cache is connected to a third slot of the I/O enclosure module and includes a flash-based memory configured to cache data associated with data requests handled through the I/O enclosure module. A primary processor complex manages data requests handled through the I/O enclosure module by communicating with the host adapter, device adapter, and flash cache to manage the data requests.

Подробнее
18-12-2018 дата публикации

Preferential CPU utilization for tasks

Номер: US0010157082B2

A set of like tasks to be performed is organized into a first group. A last used processing group assigned to the set of like tasks is stored. The set of like tasks is reassigned to an additional group having a minimal queue length upon a determination that the difference between the queue lengths of the additional processing group and the stored processing group is greater than a predetermined threshold.
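A sketch of the reassignment rule described above; the threshold value and the queue-length dictionary are assumptions, and a real scheduler would of course also dispatch the tasks.

```python
# Illustrative group reassignment; THRESHOLD and the queue-length model are
# assumptions used only to show the decision rule.
THRESHOLD = 10

def choose_group(task_set, queue_lengths, last_used):
    """Return the processing group for `task_set`, preferring its last-used group."""
    shortest = min(queue_lengths, key=queue_lengths.get)
    stored = last_used.get(task_set, shortest)
    # Stay on the stored group unless another group is shorter by more than THRESHOLD.
    if queue_lengths[stored] - queue_lengths[shortest] > THRESHOLD:
        stored = shortest
    last_used[task_set] = stored
    return stored

if __name__ == "__main__":
    last_used = {"destage-tasks": "group-A"}
    print(choose_group("destage-tasks", {"group-A": 14, "group-B": 9}, last_used))  # group-A
    print(choose_group("destage-tasks", {"group-A": 30, "group-B": 9}, last_used))  # group-B
```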

Подробнее
20-05-2008 дата публикации

Efficient maintenance of memory list

Номер: US0007376806B2

Data management systems, such as used in disk control units, employ memory entry lists to help keep track of user data. Improved performance of entry list maintenance is provided by the present invention. Much of the protocol employed to conduct such maintenance is preferably performed by hardware-based logic, thereby freeing other system resources to execute other processes. New entries to the memory list are only allowed at predetermined addresses and entries are updated by writing a predetermined data pattern to a previously allocated address. Optionally, improved error detection, such as a longitudinal redundancy check, may also be performed in an efficient manner during entry list maintenance to assure the integrity of the list.

Подробнее
14-06-2016 дата публикации

Merging an out of synchronization indicator and a change recording indicator in response to a failure in consistency group formation

Номер: US0009367598B2

A first data structure stores indications of storage locations that need to be copied for forming a consistency group. A second data structure stores indications of new host writes subsequent to starting a point in time copy operation to form the consistency group. Read access is secured to a metadata storage area and a determination is made as to whether the second data structure indicates that there are any new host writes. In response to determining that the second data structure indicates that there are new host writes, write access is secured to the metadata storage area, the first data structure is updated with contents of the second data structure to determine which additional storage locations need to be copied for formation of a next consistency group, and the second data structure is updated to indicate that the second data structure is in an initialized state.

Подробнее
16-09-2014 дата публикации

Periodic destages from inside and outside diameters of disks to improve read response time via traversal of a spatial ordering of tracks

Номер: US0008838905B2

A storage controller that includes a cache, receives a command from a host, wherein a set of criteria corresponding to read response times for executing the command have to be satisfied. A destage application that destages tracks based at least on recency of usage and spatial location of the tracks is executed, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks. Tracks are destaged from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command are satisfied.

Подробнее
09-09-2014 дата публикации

Demoting tracks from a first cache to a second cache by using an occupancy of valid tracks in strides in the second cache to consolidate strides in the second cache

Номер: US0008832377B2

Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides.
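A sketch of the occupancy bookkeeping described above, with strides modelled as fixed-size lists; the "consolidate the two sparsest strides into the fullest one" policy is an illustrative choice, not necessarily the patented selection rule.

```python
# Illustrative stride occupancy bookkeeping; strides are fixed-size lists and the
# consolidation policy below is an assumption of this sketch.
STRIDE_SLOTS = 8

def occupancy(stride):
    return sum(1 for slot in stride if slot is not None)

def demote_tracks(second_cache, tracks):
    # Place the tracks demoted from the first cache into an empty stride.
    for stride in second_cache:
        if occupancy(stride) == 0:
            stride[:len(tracks)] = tracks
            return
    raise RuntimeError("no empty stride available")

def consolidate(second_cache):
    # Pick a target stride and two sparse source strides by occupancy count,
    # then move the sources' valid tracks into the target.
    non_empty = sorted((s for s in second_cache if occupancy(s) > 0), key=occupancy)
    if len(non_empty) < 3:
        return
    *_, target = non_empty
    sources = non_empty[:2]
    for src in sources:
        for i, track in enumerate(src):
            if track is not None:
                free = target.index(None)
                target[free] = track
                src[i] = None

if __name__ == "__main__":
    cache = [[None] * STRIDE_SLOTS for _ in range(4)]
    demote_tracks(cache, ["t1", "t2", "t3"])
    cache[1][0] = "t9"; cache[2][0] = "t8"; cache[2][1] = "t7"
    consolidate(cache)
    print([occupancy(s) for s in cache])   # [6, 0, 0, 0]
```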

Подробнее
25-08-2005 дата публикации

Restricting the execution of copy services commands

Номер: US2005188251A1

A system and method for controlling peer-to-peer remote copy (PPRC) operations initiated from one or more host devices that desire to store data contents written to a first storage system to a second storage system over a communications link. The system enables receipt and generation of copy services commands from host devices and the determination of whether a received command pertains to a copy service over an established PPRC relationship for that particular customer, to enable that customer to perform storage operations affecting data written to a first storage server having source volumes and stored in a remote second storage system having target volumes. The copy services command affecting data contents of source volumes and/or remote target volumes will be enabled if it is determined that said PPRC relationship is already established for that customer, and prevented if the received copy services command affects any volume not already in a copy services relationship.

Подробнее
09-02-2006 дата публикации

Write unmodified data to controller read cache

Номер: US2006031639A1

Disclosed are a method and apparatus, in a data storage environment with multiple devices sharing data, for writing data to one such device in a manner that indicates that the data need not be destaged to a lower tier of the storage hierarchy. As a specific example, a host computer system may issue a write command to a controller that signals the controller that it is not necessary to destage the data from the controller cache because the data has not been modified by the host. In a preferred embodiment, the controller's cache is an extension of the host's cache, rather than a duplication. To achieve this, the controller needs to know: 1) what data, being requested by the host, is being cached by the host, and should not be cached by the controller, and 2) what data has been cast out of the host's cache, and should now be cached by the controller.

Подробнее
03-06-2014 дата публикации

Cache management of tracks in a first cache and a second cache for a storage

Номер: US0008745332B2

Provided a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped and the grouped modified tracks are destaged from the second cache to the storage device.

Подробнее
29-07-2014 дата публикации

Cache management of tracks in a first cache and a second cache for a storage

Номер: US8793436B2

Provided a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped and the grouped modified tracks are destaged from the second cache to the storage device.

Подробнее
10-09-2020 дата публикации

Adaptive caching of records for solid-state disks

Номер: DE112012002452B4

A method (800), comprising: receiving (800) a request at a storage controller (102), wherein the request corresponds to an access to a track (312) in a cache (112); determining (804) whether the track corresponds to data stored on a solid-state disk (108); and performing a staging of a record (338) from the solid-state disk into the cache in response to determining that the track corresponds to data stored on the solid-state disk, wherein each track comprises a plurality of records, and wherein the staging (504) of the record (338) from the solid-state disk into the cache is performed in response to determining (810, 506) that a long-term access ratio for alternating records is less than a first predefined value, wherein the staging of the record is a default staging operation for the solid-state disk; a staging (508) of a portion (346) of the track in response ...

Подробнее
01-04-2014 дата публикации

Management of partial data segments in dual cache systems

Номер: US0008688913B2

For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes.
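A sketch of the queue placement and pinning described above, assuming a deque whose left end is the LRU side; the pinned set stands in for whatever mechanism the real cache uses to hold unrequested data until the lower-level write completes.

```python
from collections import deque

# Illustrative MRU/LRU placement and pinning; the deque demotion queue and the
# "pinned" set are assumptions standing in for the real cache structures.
class HigherLevelCache:
    def __init__(self):
        self.demote_queue = deque()   # left end = LRU, right end = MRU
        self.pinned = set()

    def promote_whole_segment(self, segment_id, requested, unrequested):
        # Requested data goes to the MRU end, unrequested data to the LRU end.
        self.demote_queue.append((segment_id, "requested", requested))
        self.demote_queue.appendleft((segment_id, "unrequested", unrequested))
        # Unrequested data stays pinned until the lower-level write completes.
        self.pinned.add(segment_id)

    def lower_level_write_complete(self, segment_id):
        self.pinned.discard(segment_id)

    def demote_one(self):
        for entry in list(self.demote_queue):           # scan from the LRU end
            segment_id, part, _ = entry
            if part == "unrequested" and segment_id in self.pinned:
                continue                                # pinned: skip for now
            self.demote_queue.remove(entry)
            return entry
        return None

if __name__ == "__main__":
    cache = HigherLevelCache()
    cache.promote_whole_segment("seg7", requested=b"hot", unrequested=b"cold")
    print(cache.demote_one())          # unrequested part is pinned, requested part is demotable
    cache.lower_level_write_complete("seg7")
    print(cache.demote_one())          # now the unrequested part can be demoted
```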

Подробнее
22-09-2015 дата публикации

Adaptive prestaging in a storage controller

Номер: US0009141525B2

In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount, may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Still further, a cache prestaging operation in accordance with further aspects may decrease one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from prestaged tracks being demoted before they are used. Conversely, a cache prestaging operation in accordance with another aspect may increase one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances ...
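A sketch of how the prestage trigger and prestage amount could be adjusted in the two directions described above; the drive-speed scaling and the step sizes are invented for illustration.

```python
# Illustrative adjustment of the prestage trigger and prestage amount; the
# drive-speed scaling and the +/- step sizes are made up for this sketch.
class PrestageControl:
    def __init__(self, drive_speed_mbps):
        self.drive_speed = drive_speed_mbps
        self.trigger = 8      # remaining prestaged tracks at which the next prestage starts
        self.amount = 24      # tracks to prestage each time

    def _clamp(self):
        scale = self.drive_speed / 100.0            # faster drives tolerate larger values
        self.trigger = max(2, min(int(16 * scale), self.trigger))
        self.amount = max(8, min(int(64 * scale), self.amount))

    def on_prestaged_track_demoted_unused(self):
        # Prestaged data was thrown away before use: prestage less aggressively.
        self.trigger -= 1
        self.amount -= 4
        self._clamp()

    def on_sequential_cache_miss(self):
        # The host outran the prestage stream: prestage more aggressively.
        self.trigger += 1
        self.amount += 4
        self._clamp()

if __name__ == "__main__":
    slow = PrestageControl(drive_speed_mbps=50)
    slow.on_prestaged_track_demoted_unused()
    print(slow.trigger, slow.amount)     # values shrink, capped by the slow drive speed
    fast = PrestageControl(drive_speed_mbps=200)
    fast.on_sequential_cache_miss()
    print(fast.trigger, fast.amount)     # values may grow on the faster drive
```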

Подробнее
24-06-2014 дата публикации

Destaging of write ahead data set tracks

Номер: US0008762645B2

Exemplary computer systems and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z-2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N-X).

Подробнее
15-04-2014 дата публикации

Managing unmodified tracks maintained in both a first cache and a second cache

Номер: US0008700854B2

Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.

Подробнее
02-06-2021 дата публикации

Demoting partial tracks from a first cache to a second cache

Номер: DE102013200032B4

A computer program product for managing data in a cache system (4) that comprises a first cache (14), a second cache (18), and a storage system (10), the computer program product comprising a non-transitory computer-readable storage medium having computer-readable program code embodied therein that is executed to perform functions, the functions comprising: determining a track (280) to demote from the first cache to the second cache, wherein the track comprises a partial track containing data for fewer than all of the sectors of the track, wherein the track in the first cache corresponds to a track in the storage system and consists of a plurality of sectors; determining (282) whether the second cache contains a stale version of the track being demoted from the first cache; in response to determining (282 ...

Подробнее
13-03-2014 дата публикации

Adaptive caching of records for solid-state disks

Номер: DE112012002452T5
Принадлежит: IBM, INTERNATIONAL BUSINESS MACHINES CORP.

A storage controller receives a request corresponding to an access to a track in a cache. A determination is made as to whether the track corresponds to data stored on a solid-state disk. In response to the track corresponding to data stored on the solid-state disk, a record is staged from the solid-state disk into the cache, wherein each track comprises a plurality of records.

Подробнее
14-07-2015 дата публикации

Source-target relations mapping

Номер: US0009081511B2

A data preservation function is provided which, in one embodiment, includes mapping in a plurality of maps for a target storage device, map extent ranges of each map, to corresponding target extent ranges of storage locations on the target storage device. Usage of a particular map extent range by a relationship between a source extent range of storage locations on a source storage device containing data to be preserved in the source extent range, and the target extent range mapped to the map particular extent range, may be indicated by the map. In another aspect, in response to receipt of a data preservation command, a data preservation operation is performed including determining whether a map indicates availability of a map extent range mapped to the identified target extent range. Upon determining that a particular map indicates availability of a map extent range mapped to the identified target extent range, a relationship between the identified source extent range and the identified ...

Подробнее
28-10-1994 дата публикации

SYSTEM AND METHOD FOR ADJUSTING ACCESS TO DIRECT ACCESS STORAGE DEVICE

Номер: JP0006301627A

PURPOSE: To provide a data processing system which adjusts access to a plurality of direct access storage devices from a plurality of host computers. CONSTITUTION: This data processing system is provided with a storage controller 12 connected to one host computer through one or more channels for data communication, and the controller 12 is connected to direct access storage devices 22, 24, 26, 28, 30, and 32 used as auxiliary storage mechanisms. Upon receiving a request from one of the host computers 14, 16, 18, and 20, the controller 12 selectively sets up a communication link for data transfer between the host computer and a direct access storage device. The channels are temporarily deprived of the right to acquire control of these devices during periods of contention for access to the storage devices or the controller 12. The transmission of a device end signal and a controller end signal is also performed by selecting a channel. By specifying a duplicating state, the failure of one ...

Подробнее
04-10-2012 дата публикации

MANAGING METADATA FOR DATA IN A COPY RELATIONSHIP

Номер: US20120254547A1

Provided are a computer program product, system, and method for managing metadata for data in a copy relationship copied from a source storage to a target storage. Information is maintained on a copy relationship of source data in the source storage and target data in the target storage. The source data is copied from the source storage to the cache to copy to target data in the target storage indicated in the copy relationship. Target metadata is generated for the target data comprising the source data copied to the cache. An access request to requested target data comprising the target data in the cache is processed and access is provided to the requested target data in the cache. A determination is made as to whether the requested target data in the cache has been destaged to the target storage. The target metadata for the requested target data in the target storage is discarded in response to determining that the requested target data in the cache has not been destaged to the target storage. 1. A computer program product for managing data being copied from a source storage to a target storage , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to communicate with the source storage , the target storage and a cache and to perform operations , the operations comprising:maintaining information on a copy relationship of source data in the source storage and target data in the target storage;copying source data from the source storage to the cache to copy to target data in the target storage indicated in the copy relationship;generating target metadata for the target data comprising the source data copied to the cache;processing an access request to requested target data comprising the target data in the cache;providing access to the requested target data in the cache;determining whether the requested target data in the cache has been destaged to the target storage; ...

Подробнее
06-05-2014 дата публикации

Storage in tiered environment for colder data segments

Номер: US0008719529B2

Exemplary system and computer program embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap.

Подробнее
03-11-2015 дата публикации

Promotion of partial data segments in flash cache

Номер: US0009176884B2

For efficient track destage in secondary storage in a more effective manner, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, a preference of movement to lower speed cache level is implemented based on at least one of an amount of holes and a data heat metric. If a first bit has at least one of a lower amount of holes and a hotter data heat metric, it is moved to the lower speed cache level ahead of a second bit that has at least one of a higher amount of holes and a cooler data heat. If the first bit has a hotter data heat and greater than a predetermined number of holes, the first bit is discarded.

Подробнее
14-06-2012 дата публикации

SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE

Номер: US20120151140A1

Systems and methods for destaging storage tracks from cache are provided. One system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes writing data to the plurality of storage tracks and incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes scanning each of the storage tracks in each of multiple scan cycles, decrementing each multi-bit counter each scan cycle, and destaging each storage track including a zero count. Also provided are physical computer storage mediums including a computer program product for performing the above method. 1. A system for destaging storage tracks from cache, comprising: a cache configured to store data in a plurality of storage tracks, each storage track including a multi-bit counter; and a processor coupled to the cache, wherein the processor is configured to execute a first thread and a second thread, wherein the first thread is configured to: write data to the plurality of storage tracks, and increment the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track, and wherein the second thread is configured to: scan each of the plurality of storage tracks in each of a plurality of scan cycles, decrement each multi-bit counter each scan cycle, and destage each storage track that includes a zero count. 2. The system of claim 1, wherein the first thread is further configured to: determine if a first storage track currently being written to is sequential with respect to an immediately previous storage track written to; locate a second storage track positioned a predetermined number of storage tracks prior to the first storage track; ...

Подробнее
14-06-2012 дата публикации

SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS

Номер: US20120151147A1

Systems and methods for managing destage conflicts in cache are provided. One system includes a cache partitioned into multiple ranks configured to store multiple storage tracks and a processor coupled to the cache. The processor is configured to perform the following method. One method includes allocating an amount of storage space in the cache to each rank and monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank. The method further includes destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank. Also provided are physical computer storage mediums including code that, when executed by a processor, cause the processor to perform the above method. 1. A system for managing destage conflicts in cache, comprising: a cache partitioned into a plurality of ranks configured to store a plurality of storage tracks; and a processor coupled to the cache, wherein the processor is configured to: allocate an amount of storage space in the cache to each rank, monitor a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank, and destage storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank. 2. The system of claim 1, wherein the processor is further configured to cease destaging storage tracks from each rank that includes the predetermined minimum amount of storage space. 3. The system of claim 3, wherein, when allocating the amount of storage space in the cache, the processor is configured to allocate no more than twenty five percent (25%) ...

Подробнее
14-06-2012 дата публикации

Systems and methods for background destaging storage tracks

Номер: US20120151148A1
Принадлежит: International Business Machines Corp

Systems and methods for background destaging storage tracks from cache when one or more hosts are idle are provided. One system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. Also provided are physical computer storage mediums including a computer program product for performing the above method.

Подробнее
02-08-2012 дата публикации

ASSIGNING DEVICE ADAPTORS AND BACKGROUND TASKS TO USE TO COPY SOURCE EXTENTS TO TARGET EXTENTS IN A COPY RELATIONSHIP

Номер: US20120198150A1

Provided are a computer program product, system, and method for assigning device adaptors and background tasks to use to copy source extents to target extents in a copy relationship. A relation is provided of a plurality of source extents in source ranks to copy to a plurality of target extents in target ranks in the storage system. One target rank in the relation is used to determine an order in which the target ranks in the relation are selected to register for copying. For each selected target rank in the relation selected according to the determined order, an iteration of a registration operation is performed to register the selected target rank and a selected source rank copied to the selected target rank in the relation. The registration operation comprises indicating in a device adaptor assignment data structure a source device adaptor and target device adaptor to use to copy the selected rank to the selected target rank and adding an entry to a priority queue for the relation for the selected target rank. The selected source rank is copied to the selected target rank using as the source and target device adaptors indicated in the device adaptor assignment data structure for the selected target rank in response to processing the entry in the priority queue added to the priority queue for the selected target rank. 1. A computer program product for copying data in a storage system comprised of ranks of extents of data by performing operations , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to communicate with device adaptors and to perform operations , the operations comprising:providing a relation of a plurality of source extents in source ranks to copy to a plurality of target extents in target ranks in the storage system;using one target rank in the relation to determine an order in which the target ranks in the relation are selected to register for copying; ...

Подробнее
09-08-2012 дата публикации

COMPRESSION ON THIN PROVISIONED VOLUMES USING EXTENT BASED MAPPING

Номер: US20120203983A1

A set of logical extents, each having compressed logical tracks of data, is mapped to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent. Pursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, the write operation is mapped to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent. 1. A method for facilitating data compression using a processor in communication with a memory device , comprising:mapping a set of logical extents, each of the set of logical extents having compressed logical tracks of data, to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent; andpursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, mapping the at least one subsequent write operation and destage operation to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent.2. The method of claim 1 , further including maintaining the mapping of the set of logical extents and the at least one subsequent write operation and destage operation in a metadata directory.3. The method of claim 2 , wherein maintaining the mapping of the set of logical extents and the at least one subsequent write operation and destage operation includes storing track location metadata describing the head physical extent in the head physical extent.4. The method of claim 3 , further including claim 3 , ...

Подробнее
30-08-2012 дата публикации

MULTIPLE INCREMENTAL VIRTUAL COPIES

Номер: US20120221823A1

Provided are techniques for, in response to establishing each incremental virtual copy from a source to a target, creating a target change recording structure for the target. While performing destage to a source data block at the source, it is determined that there is at least one incremental virtual copy target for this source data block. For each incremental virtual copy relationship where the source data block is newer than the incremental virtual copy relationship and an indicator is set in a target inheritance structure on the target for a corresponding target data block, the source data block is copied to each corresponding target data block, and an indicator is set in each target change recording structure on each target for the target data block corresponding to the source data block being destaged. 1. A method, comprising: using a computer including a processor, in response to establishing each incremental virtual copy from a source to a target, creating a target change recording structure for the target; while performing destage to a source data block at the source, determining that there is at least one incremental virtual copy target for this source data block; and for each incremental virtual copy relationship where the source data block is newer than the incremental virtual copy relationship and an indicator is set in a target inheritance structure on the target for a corresponding target data block, copying the source data block to each corresponding target data block, and setting an indicator in each target change recording structure on each target for the target data block corresponding to the source data block being destaged. 2. The method of claim 1, further comprising: merging the target change recording structure with the target inheritance structure to indicate source data blocks that are to be copied to corresponding target data blocks; performing a destage scan synchronous to the incremental virtual copy on the source; and ...

Подробнее
13-09-2012 дата публикации

Deleting relations between sources and space-efficient targets in multi-target architectures

Номер: US20120233136A1
Принадлежит: International Business Machines Corp

A method for deleting a relation between a source and a target in a multi-target architecture is described. The multi-target architecture includes a source and multiple space-efficient (SE) targets mapped thereto. In one embodiment, such a method includes initially identifying a relation for deletion from the multi-target architecture. A space-efficient (SE) target associated with the relation is then identified. A mapping structure maps data in logical tracks of the SE target to physical tracks of a repository. The method then identifies a sibling SE target that inherits data from the SE target. Once the SE target and the sibling SE target are identified, the method modifies the mapping structure to map the data in the physical tracks of the repository to the logical tracks of the sibling SE target. The relation is then deleted between the source and the SE target.

Подробнее
13-09-2012 дата публикации

DELETING RELATIONS IN MULTI-TARGET, POINT-IN-TIME-COPY ARCHITECTURES WITH DATA DEDUPLICATION

Номер: US20120233404A1

A method for deleting a relation between a source and a target in a multi-target architecture is described. The multi-target architecture includes a source and multiple targets mapped thereto. In one embodiment, such a method includes initially identifying a relation for deletion from the multi-target architecture. A target associated with the relation is then identified. The method then identifies a sibling target that inherits data from the target. Once the target and the sibling target are identified, the method copies the data from the target to the sibling target. The relation between the source and the target is then deleted. A corresponding computer program product is also disclosed and claimed herein. 1. A method for deleting a relation between a source and a target in a multi-target architecture, the multi-target architecture comprising a source and a plurality of targets mapped thereto, the method comprising: identifying a first relation for deletion from a multi-target architecture; identifying a target associated with the first relation; identifying a sibling target that inherits data from the target; copying the data from the target to the sibling target; and deleting the first relation. 2. The method of claim 1, wherein the sibling target is the closest older sibling (COS). 3. The method of claim 1, wherein copying the data from the target to the sibling target comprises (1) identifying data stored in the target that is not stored in the sibling target, and (2) copying the data that is stored in the target but not stored in the sibling target from the target to the sibling target. 4. The method of claim 3, further comprising identifying a second relation for deletion from the multi-target architecture. 5. The method of claim 4, further comprising determining whether the second relation is older than the first relation. 6. The method of claim 4, further comprising deleting the second relation before the first relation if the second relation is older ...

Подробнее
13-09-2012 дата публикации

INTELLIGENT WRITE CACHING FOR SEQUENTIAL TRACKS

Номер: US20120233408A1

Write caching for sequential tracks is performed by a processor device in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit. If a first track is determined to be sequential, and an earlier track is also determined to be sequential, a temporal bit associated with the earlier track is cleared to allow for destage of data of the earlier track. If a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, a stride associated with the one of the plurality of additional tracks is selected for a destage operation. If the NVS exceeds a predetermined storage threshold, a predetermined one of the plurality of strides is selected for the destage operation. 1. In a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit , a method for write caching for sequential tracks by a processor device , comprising:if a first track is determined to be sequential, and an earlier track is also determined to be sequential, clearing a temporal bit associated with the earlier track to allow for destage of data of the earlier track; andif a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, selecting a stride associated with the one of the plurality of additional tracks for a destage operation, wherein if the NVS exceeds a predetermined storage threshold, selecting a predetermined one of the plurality of strides for the destage operation.2. The method of claim 1 , further including examining the first track to determine if the first track is sequential.3. The method of claim 1 , wherein clearing the temporal bit associated with the earlier track is performed pursuant to a host write request.4. The method of claim 1 , wherein selecting the stride associated with the one of the plurality of additional tracks is performed pursuant to a ...

Подробнее
13-09-2012 дата публикации

CYCLIC POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION

Номер: US20120233421A1

A method for performing a write to a volume x in a cyclic point-in-time-copy architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume. The method then determines whether the target bit maps (TBMs) of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Once the HS volume is found, the method determines whether the HS volume and the child volume are the same volume. If the HS volume and the child volume are not the same volume, the method copies the data from the HS volume to the child volume. The method then performs the write to the volume x. 1. A method for performing a write to a volume x in a cyclic point-in-time-copy architecture , the method comprising:determining whether the volume x has a child volume, wherein each of the volume x and the child volume have a target bit map (TBM) associated therewith;determining whether the TBMs of both the volume x and the child volume are set:if the TBMs are set, finding a higher source (HS) volume from which to copy data to the child volume, wherein finding the HS volume comprises traveling up the cyclic architecture until the source of the data is found;determining whether the HS volume and the child volume are the same volume;copying the data from the HS volume to the child volume if the HS volume and the child volume are not the same volume; andperforming the write on the volume x.2. The method of claim 1 , wherein finding the HS volume further comprises finding the source volume associated with the volume x.3. The method of claim 2 , wherein finding the source volume comprises determining the mapping relationship between the volume x and the source volume.4. The method of claim 3 , wherein the mapping relationship is defined by a generation number (GN) associated with the volume x and a generation number (GN) associated with the source ...

Подробнее
04-10-2012 дата публикации

NEAR CONTINUOUS SPACE-EFFICIENT DATA PROTECTION

Номер: US20120254122A1

A method for providing rolling continuous data protection of source data is disclosed. In one embodiment, such a method includes enabling a user to select source data and establish a first interval when point-in-time copies of the source data are generated. The method further enables the user to specify a first number of point-in-time copies to retain at the first interval. The method further enables the user to specify a second number of point-in-time copies to retain at a second interval, wherein the second interval is a (n≧2) multiple of the first interval. The method further enables the user to specify a third number of point-in-time copies to retain at a third interval, wherein the third interval is a (n≧2) multiple of the second interval. A corresponding apparatus and computer program product are also disclosed. 1. A method for providing rolling continuous data protection of source data, the method comprising: enabling a user to select source data; enabling the user to establish a first interval when point-in-time copies of the source data are generated; enabling the user to specify a first number of point-in-time copies to retain at the first interval; enabling the user to specify a second number of point-in-time copies to retain at a second interval, wherein the second interval is a (n≧2) multiple of the first interval; and enabling the user to specify a third number of point-in-time copies to retain at a third interval, wherein the third interval is a (n≧2) multiple of the second interval. 2. The method of claim 1, wherein the source data comprises at least one source volume. 3. The method of claim 1, wherein the point-in-time copies are contained in one of conventional target volumes and space-efficient target volumes. 4. The method of claim 1, further comprising providing functionality to delete point-in-time copies that are not retained in accordance with the first, second, and third numbers. 5. The method of claim 4, wherein deleting point-in- ...
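A sketch of a retention computation consistent with the nested intervals described above; the "keep the most recent k copies at each interval" pruning rule and all numbers in the example are assumptions of this sketch.

```python
# Illustrative retention selection for rolling point-in-time copies; copies not
# returned by this function would be eligible for deletion under the sketch's rule.
def copies_to_retain(copy_times, first_interval, n, first_count, second_count, third_count):
    """copy_times: ascending timestamps (same unit as first_interval).
    Intervals: first_interval, n*first_interval, n*n*first_interval, with n >= 2."""
    keep = set()
    for interval, count in (
        (first_interval, first_count),
        (n * first_interval, second_count),
        (n * n * first_interval, third_count),
    ):
        on_interval = [t for t in copy_times if t % interval == 0]
        keep.update(on_interval[-count:])      # most recent `count` copies at this interval
    return sorted(keep)

if __name__ == "__main__":
    copies = list(range(0, 49, 4))   # a copy every 4 hours for two days
    # Keep 6 copies at the 4h interval, 4 at the 12h interval (n=3), 2 at the 36h interval.
    print(copies_to_retain(copies, first_interval=4, n=3,
                           first_count=6, second_count=4, third_count=2))
```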

04-10-2012 publication date

SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS

Number: US20120254544A1

A system includes a cache partitioned into multiple ranks configured to store multiple storage tracks and a processor coupled to the cache. The processor is configured to perform the following method. One method includes allocating an amount of storage space in the cache to each rank and monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank. The method further includes destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank.
1. A method for managing destage conflicts in a cache partitioned into a plurality of ranks configured to store a plurality of storage tracks, the method comprising: allocating, by a processor coupled to the cache, an amount of storage space in the cache to each rank; monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank; and destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank.
2. The method of claim 1, further comprising ceasing to destage storage tracks from each rank that includes the predetermined minimum amount of storage space.
3. The method of claim 2, wherein allocating the amount of storage space in the cache comprises allocating no more than twenty five percent (25%) of a total amount of storage space in the cache to any one rank.
4. The method of claim 3, wherein ceasing to destage storage tracks from each rank comprises ceasing to destage storage tracks from each rank that includes less than or equal to thirty percent (30%) of its respective allocated storage space.
5. The method of claim 3, wherein ...
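The following Python sketch, with hypothetical data structures and the 25%/30% figures taken from the claims, illustrates the per-rank space check and destage loop described above:

    # Each rank is limited to a share of the cache; tracks are destaged from a rank
    # until its usage falls to the minimum fraction of its allocation.
    def destage_pass(ranks, cache_size, share=0.25, minimum=0.30):
        allocation = cache_size * share
        for rank in ranks:                        # rank: dict with 'used' and 'tracks'
            while rank['used'] > allocation * minimum and rank['tracks']:
                track = rank['tracks'].pop(0)     # oldest track first (an assumption)
                rank['used'] -= track['size']
                destage_to_storage(track)

    def destage_to_storage(track):
        pass  # stand-in for the real destage I/O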

04-10-2012 publication date

SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS

Number: US20120254545A1

A system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle.
1. A method for background destaging storage tracks from a write cache configured to store a plurality of storage tracks when at least one host is idle, the method comprising: monitoring, by a processor coupled to the write cache, the write cache for write operations from the at least one host; determining if the at least one host is idle based on monitoring the write cache for write operations from the at least one host; destaging storage tracks from the write cache if the at least one host is idle; and refraining from destaging storage tracks from the write cache if the at least one host is not idle.
2. The method of claim 1, wherein the write cache is partitioned into a plurality of ranks each comprising a portion of the plurality of storage tracks, the method further comprising: monitoring each rank for write operations from the at least one host; and determining if the at least one host is idle with respect to each respective rank based on monitoring each rank for write operations from the at least one host such that the at least one host may be determined to be idle with respect to a first rank and not idle with respect to a second rank.
3. The method of claim 2, further comprising: destaging storage tracks from each respective rank when it is determined that the at least one host is idle with respect to a ...
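A brief Python sketch, with an assumed grace period and rank layout that the publication does not specify, of the idle test and background destage described above:

    import time

    IDLE_SECONDS = 2.0   # assumed grace period

    def record_write(rank, track):
        rank['tracks'].append(track)
        rank['last_write'] = time.monotonic()

    def destage_idle_ranks(ranks):
        now = time.monotonic()
        for rank in ranks:
            if now - rank.get('last_write', 0.0) >= IDLE_SECONDS:   # rank looks idle
                while rank['tracks']:
                    destage(rank['tracks'].pop(0))
            # ranks with recent host writes are left alone

    def destage(track):
        pass  # stand-in for the real destage I/O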

11-10-2012 publication date

FABRICATING KEY FIELDS

Number: US20120260043A1

Exemplary methods, computer systems, and computer program products for fabricating key fields by a processor device in a computer environment are provided. In one embodiment, the computer environment is configured for, as an alternative to reading Count-Key-Data (CKD) data in order to change the key field, providing a hint to fabricate a new key field, thereby overwriting a previous key field and updating the CKD data.

18-10-2012 publication date

COMPRESSION ON THIN PROVISIONED VOLUMES USING EXTENT BASED MAPPING

Number: US20120265766A1

For facilitating data compression, a set of logical extents, each having compressed logical tracks of data, is mapped to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent. Pursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, the write operation is mapped to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent. 1. A system for facilitating data compression , comprising: mapping a set of logical extents, each of the set of logical extents having compressed logical tracks of data, to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent, and', 'pursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, mapping the at least one subsequent write operation and destage operation to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent;', 'wherein the mapping of the at least one subsequent write operation and destage operation to the head physical extent is performed using a hash function incorporating a compressed volume identification (id) and an extent number in a compressed volume associated with the compressed volume id, 'a processor device in communication with a memory device, wherein the processor device is configured for2. The system of claim 1 , wherein the processor device is further configured for maintaining the mapping of the set of ...
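A minimal Python sketch of the kind of mapping the claims describe, using a generic hash in place of whatever hash function the implementation actually uses; all names and the spill policy are assumptions:

    import hashlib

    def head_extent(volume_id: int, extent_no: int, physical_extents: int) -> int:
        """Hash the compressed volume id and extent number to a head physical extent."""
        key = f"{volume_id}:{extent_no}".encode()
        return int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % physical_extents

    def place_write(volume_id, extent_no, length, extent_free, physical_extents):
        """Pick the physical extent for a write: the head extent, else a nearby overflow extent."""
        ext = head_extent(volume_id, extent_no, physical_extents)
        for _ in range(physical_extents):
            if extent_free[ext] >= length:
                extent_free[ext] -= length
                return ext
            ext = (ext + 1) % physical_extents   # spill into the adjacent (spatially close) extent
        raise RuntimeError("no physical extent has enough free space")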

29-11-2012 publication date

MAGNETIC DISK DRIVE USING A NON-VOLATILE STORAGE DEVICE AS CACHE FOR MODIFIED TRACKS

Number: US20120300329A1

Provided are a computer program product, system, and method for a magnetic disk drive. The disk drive has at least one disk platter having at least one recordable disk surface having an areal density of at least 200 gigabits per square inch. Either a diameter of the at least one disk platter is greater than 3.5 inches or the at least one disk platter rotates at less than 5400 RPMs. A read/write head reads and writes tracks of data with respect to the at least one disk surface. Modified tracks from write requests to write to the at least one disk surface on the at least one disk platter are cached in a non-volatile storage device for caching modified tracks. Modified tracks are cached in the non-volatile storage device to later destage to the at least one disk surface. 1. A disk drive assembly , comprising:at least one disk platter having at least one recordable disk surface having an areal density of at least 200 gigabits per square inch, wherein either a diameter of the at least one disk platter is greater than 3.5 inches or the at least one disk platter rotates at less than 5400 RPMs;an actuator assembly for controlling a movement of a read/write head with respect to the at least one disk platter to read and write tracks of data with respect to the at least one disk surface;a non-volatile storage device for caching modified tracks from write requests to write to the at least one disk surface on the at least one disk platter; anda disk controller for processing read and write requests and caching modified tracks in the non-volatile storage device to later destage to the at least one disk surface.2. The disk drive assembly of claim 1 , wherein the disk controller further performs:receiving a read request for requested tracks on the at least one disk surface;determining whether the requested tracks are in the non-volatile storage device; andaccessing the determined requested tracks from the at least one disk surface to return to the read request.3. The disk drive ...

29-11-2012 publication date

MAGNETIC DISK DRIVE USING A NON-VOLATILE STORAGE DEVICE AS CACHE FOR MODIFIED TRACKS

Number: US20120300336A1

Provided are a computer program product, system, and method for a magnetic disk drive. The disk drive has at least one disk platter having at least one recordable disk surface having an areal density of at least 200 gigabits per square inch. Either a diameter of the at least one disk platter is greater than 3.5 inches or the at least one disk platter rotates at less than 5400 RPMs. A read/write head reads and writes tracks of data with respect to the at least one disk surface. Modified tracks from write requests to write to the at least one disk surface on the at least one disk platter are cached in a non-volatile storage device for caching modified tracks. Modified tracks are cached in the non-volatile storage device to later destage to the at least one disk surface. 116-. (canceled)17. A method , comprising:accessing at least one disk platter having at least one recordable disk surface having an areal density of at least 200 gigabits per square inch, wherein either a diameter of the at least one disk platter is greater than 3.5 inches or the at least one disk platter rotates at less than 5400 RPMs;controlling a movement of a read/write head with respect to the at least one disk platter to read and write tracks of data with respect to the at least one disk surface;caching in a non-volatile storage device modified tracks from write requests to write to the at least one disk surface on the at least one disk platter; andcaching modified tracks in the non-volatile storage device to later destage to the at least one disk surface.18. The method of claim 17 , further comprising:receiving a read request for requested tracks on the at least one disk surface;determining whether the requested tracks are in the non-volatile storage device; andaccessing the determined requested tracks from the at least one disk surface to return to the read request.19. The method of claim 17 , further comprising:receiving a write request having modified tracks to write to the disk surface; ...

29-11-2012 publication date

POPULATING STRIDES OF TRACKS TO DEMOTE FROM A FIRST CACHE TO A SECOND CACHE

Number: US20120303861A1

Provided are a computer program product, system, and method for populating strides of tracks to demote from a first cache to a second cache. A first cache maintains modified and unmodified tracks from a storage system subject to Input/Output (I/O) requests. A determination is made to demote tracks from the first cache. A determination is made as to whether there are enough tracks ready to demote to form a stride, wherein tracks are written to a second cache in strides defined for a Redundant Array of Independent Disk (RAID) configuration. A stride is populated with tracks ready to demote in response to determining that there are enough tracks ready to demote to form the stride. The stride of tracks, to demote from the first cache, are promoted to the second cache. The tracks in the second cache that are modified are destaged to the storage system. 1. A computer program product for managing data in a cache system comprising a first cache , a second cache , and a storage system comprised of storage devices , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:maintaining in the first cache modified and unmodified tracks from the storage system subject to Input/Output (I/O) requests;determining to demote tracks from the first cache;determining whether there are enough tracks ready to demote to form a stride, wherein tracks are written to the second cache in strides defined for a Redundant Array of Independent Disk (RAID) configuration;populating a stride with tracks ready to demote in response to determining that there are enough tracks ready to demote to form the stride;promoting the stride of tracks, to demote from the first cache, to the second cache; anddestaging the tracks in the second cache that are modified to the storage system.2. The computer program product of claim 1 , wherein the first cache is a faster access device ...
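A short Python sketch, with an assumed stride width and simplified track objects, of the stride-population step described above:

    STRIDE_TRACKS = 8   # assumed RAID stride width

    def demote_ready_tracks(ready_to_demote, second_cache, storage):
        """Only demote once a full stride can be formed, so the second cache is written in stride units."""
        while len(ready_to_demote) >= STRIDE_TRACKS:
            stride = [ready_to_demote.pop(0) for _ in range(STRIDE_TRACKS)]
            second_cache.extend(stride)                      # promote the stride to the second cache
            for track in stride:
                if track.get("modified"):
                    storage[track["id"]] = track["data"]     # destage modified tracks to the storage system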

29-11-2012 publication date

CACHING DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE

Number: US20120303862A1

Provided are a computer program product, system, and method for caching data in a storage system having multiple caches. A sequential access storage device includes a sequential access storage medium and a non-volatile storage device integrated in the sequential access storage device, received modified tracks are cached in the non-volatile storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A spatial index indicates the modified tracks in the non-volatile storage device in an ordering based on their physical location in the sequential access storage medium. The modified tracks are destaged from the non-volatile storage device by comparing a current position of a write head to physical locations of the modified tracks on the sequential access storage medium indicated in the spatial index to select a modified track to destage from the non-volatile storage device to the storage device. 1. A computer program product for managing data in a cache system comprising a first cache , a second cache , and a storage device , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:maintaining in the first cache modified and unmodified tracks in the storage device subject to Input/Output (I/O) requests;demoting modified and unmodified tracks from the first cache;promoting the unmodified tracks demoted from the first cache to the second cache;destaging modified tracks demoted from the first cache to the storage device, bypassing the second cache; anddiscarding unmodified tracks demoted from the second cache.2. The computer program product of claim 1 , wherein the first cache comprises a Random Access Memory (RAM) claim 1 , the second cache comprises a flash device claim 1 , and the storage device comprises a sequential write device.3. The computer program product of claim 1 ...

29-11-2012 publication date

HANDLING HIGH PRIORITY REQUESTS IN A SEQUENTIAL ACCESS STORAGE DEVICE HAVING A NON-VOLATILE STORAGE CACHE

Number: US20120303869A1

Modified tracks for write requests to a sequential access storage medium in a sequential access storage device are cached in a non-volatile storage, which is a faster access device than the sequential access storage medium. A request queue includes destage requests to destage the modified tracks in the non-volatile storage device to the sequential access storage medium and read requests to access read requested tracks from the sequential access storage medium. A comparison is made of a current position of a read/write mechanism with respect to physical locations on the sequential access storage medium of the tracks subject to the destage requests indicated in the request queue. A determination is made of one of the destage requests to process based on the comparison. The modified track for the determined destage request is written from the non-volatile storage device to the sequential access storage medium. 1. A computer program product for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium , wherein the sequential access storage device includes a read/write mechanism to read and write data with respect to the sequential access storage medium , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:caching received modified tracks for write requests in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium;maintaining a request queue including destage requests to destage the modified tracks in the non-volatile storage device to the sequential access storage medium and read requests to access read requested tracks from the sequential access storage medium;comparing a ...
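The scheduling idea can be illustrated with a small Python sketch; the queue layout and the forward-distance rule are assumptions, not the publication's actual policy:

    # Among queued destage requests, pick the one whose physical location is closest
    # ahead of the current read/write head position.
    def next_destage(request_queue, head_position, medium_length):
        def forward_distance(req):
            return (req["location"] - head_position) % medium_length
        destages = [r for r in request_queue if r["kind"] == "destage"]
        if not destages:
            return None
        chosen = min(destages, key=forward_distance)
        request_queue.remove(chosen)
        return chosen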

29-11-2012 publication date

Populating strides of tracks to demote from a first cache to a second cache

Number: US20120303875A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for populating strides of tracks to demote from a first cache to a second cache. A first cache maintains modified and unmodified tracks from a storage system subject to Input/Output (I/O) requests. A determination is made to demote tracks from the first cache. A determination is made as to whether there are enough tracks ready to demote to form a stride, wherein tracks are written to a second cache in strides defined for a Redundant Array of Independent Disk (RAID) configuration. A stride is populated with tracks ready to demote in response to determining that there are enough tracks ready to demote to form the stride. The stride of tracks, to demote from the first cache, are promoted to the second cache. The tracks in the second cache that are modified are destaged to the storage system.

29-11-2012 publication date

USING AN ATTRIBUTE OF A WRITE REQUEST TO DETERMINE WHERE TO CACHE DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE

Number: US20120303877A1

Provided are a computer program product, system, and method for using an attribute of a write request to determine where to cache data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device. Received modified tracks are cached in the non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks. A write request having modified tracks is received. A determination is made as to whether an attribute of the received write request satisfies a condition. The received modified tracks for the write request are cached in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition. A destage request is added to a request queue for the received write request having the determined attribute not satisfying the condition. 110-. (canceled)11. A method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium , comprising:caching received modified tracks in a non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks;receiving a write request having modified tracks;determining whether an attribute of the received write request satisfies a condition;caching the received modified tracks for the write request in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition;adding a destage request to a request queue for the received write request having the determined attribute not satisfying the condition; andwriting the received modified tracks for the write request having the determined attribute satisfying the condition at a higher priority than modified tracks for write requests having the attribute not satisfying the condition.12. ...

29-11-2012 publication date

WRITING OF NEW DATA OF A FIRST BLOCK SIZE IN A RAID ARRAY THAT STORES BOTH PARITY AND DATA IN A SECOND BLOCK SIZE

Number: US20120303890A1

A Redundant Array of Independent Disks (RAID) controller receives new data that is to be written, wherein the new data is indicated in blocks of a first block size. The RAID controller reads old data, and old parity that corresponds to the old data, stored in blocks of a second block size that is larger in size than the first block size. The RAID controller computes new parity based on the new data, the old data, and the old parity. The RAID controller writes the new data and the new parity aligned to the blocks of the second block size, wherein portions of the old data that are not overwritten by the RAID controller are also written to the blocks of the second block size. 15-. (canceled)6. A system , comprising:a memory; and receiving new data that is to be written, wherein the new data is indicated in blocks of a first block size;', 'reading old data, and old parity that corresponds to the old data, stored in blocks of a second block size that is larger in size than the first block size;', 'computing new parity based on the new data, the old data, and the old parity; and', 'writing the new data and the new parity aligned to the blocks of the second block size, wherein portions of the old data that are not overwritten by the RAID controller are also written to the blocks of the second block size., 'a processor coupled to the memory, wherein the processor performs7. The system of claim 6 , wherein the system is a RAID controller that controls disks that are configured as RAID-5 claim 6 , wherein:the reading is performed via two sets of read operations from the disks, wherein a first set of read operations include reading the old data and a second set of read operations include reading the old parity; andthe writing is performed via two sets of write operations to the disks, wherein a first set of write operations include writing the new data and portions of the old data that are not overwritten, and a second set of write operations include writing the new parity.8. ...

29-11-2012 publication date

WRITING OF NEW DATA OF A FIRST BLOCK SIZE IN A RAID ARRAY THAT STORES BOTH PARITY AND DATA IN A SECOND BLOCK SIZE

Number: US20120303892A1

A Redundant Array of Independent Disks (RAID) controller receives new data that is to be written, wherein the new data is indicated in blocks of a first block size. The RAID controller reads old data, and old parity that corresponds to the old data, stored in blocks of a second block size that is larger in size than the first block size. The RAID controller computes new parity based on the new data, the old data, and the old parity. The RAID controller writes the new data and the new parity aligned to the blocks of the second block size, wherein portions of the old data that are not overwritten by the RAID controller are also written to the blocks of the second block size. 1. A method implemented in a device , the method comprising:receiving, by a Redundant Array of Independent Disks (RAID) controller, new data that is to be written, wherein the new data is indicated in blocks of a first block size;reading, by the RAID controller, old data, and old parity that corresponds to the old data, stored in blocks of a second block size that is larger in size than the first block size;computing, by the RAID controller, new parity based on the new data, the old data, and the old parity; andwriting, by the RAID controller, the new data and the new parity aligned to the blocks of the second block size, wherein portions of the old data that are not overwritten by the RAID controller are also written to the blocks of the second block size.2. The method of claim 1 , wherein the RAID controller is implemented in hardware claim 1 , and wherein the RAID controller controls disks that are configured as RAID-5 claim 1 , wherein:the reading is performed via two sets of read operations from the disks, wherein a first set of read operations include reading the old data and a second set of read operations include reading the old parity; andthe writing is performed via two sets of write operations to the disks, wherein a first set of write operations include writing the new data and ...
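The parity update itself is the standard read-modify-write identity (new parity = old parity XOR old data XOR new data); the sketch below, with assumed 512-byte and 4 KB sizes, shows how a small-block update is merged into the larger block before that identity is applied:

    SMALL, LARGE = 512, 4096

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def rmw_update(old_block: bytes, old_parity: bytes, offset_small: int, new_small: bytes):
        """offset_small: index of the 512-byte block inside the 4 KB block."""
        assert len(old_block) == LARGE and len(old_parity) == LARGE and len(new_small) == SMALL
        start = offset_small * SMALL
        new_block = old_block[:start] + new_small + old_block[start + SMALL:]   # keep not-overwritten data
        new_parity = xor_blocks(xor_blocks(old_parity, old_block), new_block)   # parity delta
        return new_block, new_parity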

29-11-2012 publication date

Writing of data of a first block size in a raid array that stores and mirrors data in a second block size

Number: US20120303893A1
Assignee: International Business Machines Corp

Data that is to be written is received, wherein the data is indicated in one or more blocks of a first block size. Each of the one or more blocks of the first block size is written in consecutive blocks of a second block size that is larger in size than the first block size, wherein each of the consecutive blocks of the second block size stores only one block of the first block size, and wherein each of the consecutive blocks of the second block size has empty space remaining, subsequent to the writing of each of the one or more blocks of the first block size. Filler data is written in the empty space remaining in each of the consecutive blocks of the second block size.
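A tiny Python sketch of that layout, assuming a zero-byte filler pattern (the publication does not specify one):

    SMALL, LARGE = 512, 4096
    FILLER = b"\x00"   # assumed filler pattern

    def pack_small_blocks(small_blocks):
        """One 4 KB block per 512-byte input block, padded out with filler data."""
        out = []
        for blk in small_blocks:
            assert len(blk) == SMALL
            out.append(blk + FILLER * (LARGE - SMALL))
        return out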

29-11-2012 publication date

HANDLING HIGH PRIORITY REQUESTS IN A SEQUENTIAL ACCESS STORAGE DEVICE HAVING A NON-VOLATILE STORAGE CACHE

Number: US20120303895A1

Provided are a computer program product, system, and method for handling high priority requests in a sequential access storage device. Received modified tracks for write requests are cached in a non-volatile storage device integrated with the sequential access storage device. A destage request is added to a request queue for a received write request having modified tracks for the sequential access storage medium cached in the non-volatile storage device. A read request indicating a priority is received. A determination is made of a priority of the read request as having a first priority or a second priority. The read request is added to the request queue in response to determining that the determined priority is the first priority. The read request is processed at a higher priority than the read and destage requests in the request queue in response to determining that the determined priority is the second priority.
1. A computer program product for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations, the operations comprising: caching received modified tracks for write requests in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium; adding a destage request to a request queue for a received write request having modified tracks for the sequential access storage medium cached in the non-volatile storage device; receiving a read request indicating a priority; determining a priority of the read request as having a first priority or a second priority; adding the read request to the request queue in response to determining that the determined priority ...

29-11-2012 publication date

Managing unmodified tracks maintained in both a first cache and a second cache

Number: US20120303898A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.

29-11-2012 publication date

MANAGING UNMODIFIED TRACKS MAINTAINED IN BOTH A FIRST CACHE AND A SECOND CACHE

Number: US20120303904A1

Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.
1.-17. (canceled)
18. A method, comprising: maintaining in a first cache unmodified tracks in a storage device subject to Input/Output (I/O) requests; demoting unmodified tracks from the first cache to a second cache; maintaining an inclusive list indicating unmodified tracks maintained in both the first cache and a second cache; maintaining an exclusive list indicating unmodified tracks maintained in the second cache but not the first cache; and using the inclusive list and the exclusive list to determine whether to promote to the second cache an unmodified track demoted from the first cache.
19. The method of claim 18, wherein the first cache is a faster access device than the second cache and wherein the second cache is a faster access device than the storage.
20. The method of claim 18, further comprising: selecting an unmodified track in the first cache to demote; demoting the selected unmodified track; determining whether the selected unmodified track is in the inclusive list; and promoting the selected unmodified track to the second cache in response to determining that the selected unmodified track is not in the inclusive list, wherein the selected unmodified track is not promoted to the second cache if the selected unmodified track is in the inclusive list.
21. The method of claim 18, further comprising: adding the ...
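A compact Python sketch, with hypothetical container types, of how the two lists steer the promotion decision on demotion from the first cache:

    def demote_from_first(track_id, first_cache, second_cache, inclusive, exclusive):
        first_cache.pop(track_id, None)             # track leaves the first cache
        if track_id in inclusive:                   # already in the second cache: do not promote again
            inclusive.discard(track_id)
            exclusive.add(track_id)
        else:                                       # not in the second cache yet: promote it
            second_cache[track_id] = "unmodified"
            exclusive.add(track_id)

    def promote_to_first(track_id, first_cache, inclusive, exclusive):
        first_cache[track_id] = "unmodified"
        if track_id in exclusive:                   # the copy stays in the second cache too
            exclusive.discard(track_id)
            inclusive.add(track_id)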

20-12-2012 publication date

Apparatus and Method to Copy Data

Number: US20120324171A1

An apparatus and method for copying data are disclosed. A data track to be replicated using a peer-to-peer remote copy (PPRC) operation is identified. The data track is encoded in a non-transitory computer readable medium disposed in a first data storage system. At a first time, a determination of whether the data track is stored in a data cache is made. At a second time, the data track is replicated to a non-transitory computer readable medium disposed in a second data storage system. The second time is later than the first time. If the data track was stored in the data cache at the first time, a cache manager is instructed to not demote the data track from the data cache. If the data track was not stored in the data cache at the first time, the cache manager is instructed that the data track may be demoted. 1. A method to copy data , comprising:identifying a data track to be replicated using a peer-to-peer remote copy (PPRC) operation, wherein the data track is encoded in a non-transitory computer readable medium disposed in a first data storage system;at a first time, determining whether the data track is stored in a data cache;at a second time, replicating the data track from the data cache to a non-transitory computer readable medium disposed in a second data storage system, wherein the second time is later than the first time;if the data track was stored in the data cache at the first time, instructing a cache manager to not demote the data track from the data cache; andif the data track was not stored in the data cache at the first time, instructing the cache manager that the data track may be demoted.2. The method of claim 1 , further comprising:assigning a different data identifier to one or more data tracks stored in the data cache at the first time; andarranging the one or more data identifiers in a least recently used (LRU) list.3. The method of claim 1 , further comprising priming the data track in the data cache if the data track is not stored in the ...

20-12-2012 publication date

EFFICIENT DISCARD SCANS

Number: US20120324173A1

Exemplary method, system, and computer program product embodiments for performing a discard scan operation are provided. In one embodiment, by way of example only, a plurality of tracks is examined for meeting criteria for a discard scan. In lieu of waiting for a completion of a track access operation, at least one of the plurality of tracks is marked for demotion. An additional discard scan may be subsequently performed for tracks not previously demoted. The discard and additional discard scans may proceed in two phases. Additional system and computer program product embodiments are disclosed and provide related advantages. 1. A method for performing a discard scan operation by a processor device in a computing storage environment , comprising:examining a plurality of tracks for meeting a criteria for a discard scan; andin lieu of waiting for a completion of a track access operation, marking at least one of the plurality of tracks for demotion.2. The method of claim 1 , further including performing the marking in a first phase.3. The method of claim 2 , further including performing a cleanup demotion by a subsequent discard scan for those of the at least one of the plurality of tracks not demoted previously claim 2 , wherein the subsequent discard scan is performed in a second phase.4. The method of claim 2 , further including performing at least one of:commencing the discard scan in a hash table by marking the beginning of the at least one of the plurality of tracks,continuing the marking until reaching an end of the at least one of the plurality of tracks,setting a bit in the at least one of the plurality of tracks, andupdating a first index and a last index in the hash table.5. The method of claim 3 , further including performing at least one of:scanning a hash table between a first index and a last index, andsetting a flag to indicate a first phase and the second phase.6. The method of claim 5 , further including claim 5 , in conjunction with the setting the ...

24-01-2013 publication date

PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS

Number: US20130024613A1

Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks. 1. A computer program product for processing a group of data tracks and parity data in at least one sequential access storage device and communicating with a first cache device and a second cache device , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:receiving a write request including at least one updated track to the group of tracks;storing the at least one updated track in the first cache device;sending a prefetch request to the at least one sequential access storage device to prefetch tracks in the group of tracks to the second cache device;generating a read request to read the prefetch tracks following the sending of the prefetch request;storing the read prefetch tracks returned to the read request from the second cache device in the first cache device; andcalculating new parity data from the at least one updated track and the read prefetch tracks.2. The computer program product of claim 1 , wherein the operations further comprise:destaging the group of tracks comprising the updated track and the read prefetch tracks and the new parity data to the at least one ...

24-01-2013 publication date

PREFETCHING TRACKS USING MULTIPLE CACHES

Number: US20130024625A1

Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request. 120-. (canceled)21. A method , comprising:managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium;receiving a prefetch request from the system indicating prefetch tracks in the sequential access storage medium;processing the prefetch request to read the prefetch tracks from the sequential access storage medium;caching the accessed prefetch tracks in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium;receiving a read request for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium; andreturning the prefetch tracks from the non-volatile storage device to the read request.22. The ...

24-01-2013 publication date

PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP

Number: US20130024626A1

A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage. 1. A computer program product for maintaining a copy relationship between tracks in a source storage and a target storage and communicating with a first cache device and a second cache device , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:maintaining a point-in-time copy relationship associating tracks in the source storage with tracks in the target storage, wherein the target storage stores the tracks in the source storage as of a point-in-time;receiving a write request including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship, wherein the point-in-time source track was in the source storage at the point-in-time the copy relationship was established;storing the updated source track in the first cache device;sending a prefetch request to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to the second cache ...

24-01-2013 publication date

PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS

Number: US20130024627A1

Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks. 119-. (canceled)20. A method for processing a group of data tracks and parity data in at least one sequential access storage device , comprising:receiving a write request including at least one updated track to the group of tracks;storing the at least one updated track in a first cache device;sending a prefetch request to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device;generating a read request to read the prefetch tracks following the sending of the prefetch request;storing the read prefetch tracks returned to the read request from the second cache device in the first cache device; andcalculating new parity data from the at least one updated track and the read prefetch tracks.21. The method of claim 20 , further comprising:destaging the group of tracks comprising the updated track and the read prefetch tracks and the new parity data to the at least one sequential access storage device, wherein the read prefetch tracks include the tracks in the group other than the at least one updated track in the at least one sequential access storage device.22. The method of claim 20 , wherein the prefetch request is generated in response to receiving the write ...

31-01-2013 publication date

ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS

Number: US20130031295A1

A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. 1. A method , comprising:receiving, by a storage controller, a request that corresponds to an access of a track;determining whether the track corresponds to data stored in a solid state disk; andperforming, record staging to a cache from the solid state disk, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.2. The method of claim 1 , the method further comprising: 'selecting among performing partial track staging, full track staging and record staging to the cache from the hard disk, based on a criterion maintained by the storage controller, in response to determining that the track corresponds to data stored in the hard disk, wherein in full track staging an entire track is staged, in partial track staging all sectors starting from the start of requested sectors to the end of the track are staged, and in record staging only the requested sectors are staged.', 'determining whether the track corresponds to data stored in a hard disk; and'}3. The method of claim 1 , wherein the record staging from the solid state disk is performed when the track has not been accessed relatively recently claim 1 , and selecting at least among performing partial track staging and performing full track staging when the track has been accessed relatively recently.4. The method of claim 1 , further comprising:maintaining a least recently used list for tracks, wherein each track in the list is numbered sequentially in a monotonically increasing order as each track is accessed in the cache and then ...
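A simplified Python sketch of the staging choice; the exact criterion the controller maintains is not reproduced here, so the recency rule below is an assumption:

    def choose_staging(on_ssd: bool, accessed_recently: bool) -> str:
        if on_ssd and not accessed_recently:
            return "record"     # only the requested records are staged
        if accessed_recently:
            return "full"       # assumed: recently used tracks get a full-track stage
        return "partial"        # requested sectors through the end of the track

    def stage(track_records, requested, mode):
        if mode == "record":
            return [track_records[i] for i in requested]
        if mode == "partial":
            return track_records[min(requested):]
        return list(track_records)

    # Example: request records 5 and 6 of a 16-record track held on a solid state disk.
    records = [f"rec{i}" for i in range(16)]
    staged = stage(records, [5, 6], choose_staging(on_ssd=True, accessed_recently=False))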

31-01-2013 publication date

ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS

Number: US20130031297A1

A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. 15-. (canceled)6. A system , comprising: a processor coupled to the memory, wherein the processor performs operations, the operations comprising:', 'receiving a request that corresponds to an access of a track;', 'determining whether the track corresponds to data stored in a solid state disk; and', 'performing, record staging to a cache from the solid state disk, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records., 'a memory; and'}7. The system of claim 6 , the operations further comprising: 'selecting among performing partial track staging, full track staging and record staging to the cache from the hard disk, based on a maintained criterion, in response to determining that the track corresponds to data stored in the hard disk, wherein in full track staging an entire track is staged, in partial track staging all sectors starting from the start of requested sectors to the end of the track are staged, and in record staging only the requested sectors are staged.', 'determining whether the track corresponds to data stored in a hard disk; and'}8. The system of claim 6 , wherein the record staging from the solid state disk is performed when the track has not been accessed relatively recently claim 6 , and selecting at least among performing partial track staging and performing full track staging when the track has been accessed relatively recently.9. The system of claim 6 , the operations further comprising:maintaining a least recently used list for tracks, wherein each track in ...

21-02-2013 publication date

INDICATION OF A DESTRUCTIVE WRITE VIA A NOTIFICATION FROM A DISK DRIVE THAT EMULATES BLOCKS OF A FIRST BLOCK SIZE WITHIN BLOCKS OF A SECOND BLOCK SIZE

Number: US20130046932A1

A disk drive receives a request to write at least one block of a first block size, wherein the disk drive is configured to store blocks of a second block size that is larger in size than the first block size, and wherein the disk drive stores via emulation a plurality of emulated blocks of the first block size in each block of the second block size. The disk drive generates a read error, in response to reading a selected block of the second block size in which the at least one block of the first block size is to be written via the emulation. The disk drive performs a destructive write of selected emulated blocks of the first block size that caused the read error to be generated. The disk drive writes the at least one block of the first block size in the selected block of the second block size. The disk drive sends a notification to indicate the performing of the destructive write.
1.-5. (canceled)
6. A disk drive, comprising: a memory; and a processor coupled to the memory, wherein the processor performs operations, the operations comprising: receiving a request to write at least one block of a first block size, wherein the disk drive is configured to store blocks of a second block size that is larger in size than the first block size, and wherein the disk drive stores via emulation a plurality of emulated blocks of the first block size in each block of the second block size; generating a read error, in response to reading a selected block of the second block size in which the at least one block of the first block size is to be written via the emulation; performing a destructive write of selected emulated blocks of the first block size that caused the read error to be generated; writing the at least one block of the first block size in the selected block of the second block size; and sending a notification to indicate the performing of the destructive write.
7. The disk drive of claim 6, wherein: the first block size is 512 bytes; and the second block size is 4 Kilobytes.
8. The disk ...

21-02-2013 publication date

INDICATION OF A DESTRUCTIVE WRITE VIA A NOTIFICATION FROM A DISK DRIVE THAT EMULATES BLOCKS OF A FIRST BLOCK SIZE WITHIN BLOCKS OF A SECOND BLOCK SIZE

Number: US20130047033A1

A disk drive receives a request to write at least one block of a first block size, wherein the disk drive is configured to store blocks of a second block size that is larger in size than the first block size. The disk drive stores a plurality of emulated blocks of the first block size in each block of the second block size. The disk drive generates a read error, in response to reading a selected block of the second block size in which the at least one block of the first block size is to be written via an emulation. The disk drive performs a destructive write of selected emulated blocks of the first block size that caused the read error to be generated. The disk drive writes the at least one block of the first block size in the selected block of the second block size.
1. A method, comprising: receiving, by a disk drive, a request to write at least one block of a first block size, wherein the disk drive is configured to store blocks of a second block size that is larger in size than the first block size, and wherein the disk drive stores via emulation a plurality of emulated blocks of the first block size in each block of the second block size; generating, by the disk drive, a read error, in response to reading a selected block of the second block size in which the at least one block of the first block size is to be written via the emulation; performing, by the disk drive, a destructive write of selected emulated blocks of the first block size that caused the read error to be generated; writing, by the disk drive, the at least one block of the first block size in the selected block of the second block size; and sending, by the disk drive, a notification to indicate the performing of the destructive write.
2. The method of claim 1, wherein: the first block size is 512 bytes; and the second block size is 4 Kilobytes.
3. The method of claim 1, wherein the notification is sent asynchronously to a controller, the method further comprising: maintaining, by the disk drive, an ...

02-05-2013 publication date

PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE

Number: US20130111106A1

Exemplary method, system, and computer program product embodiments for efficient track destage in secondary storage in a more effective manner are provided. In one embodiment, by way of example only, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. Additional system and computer program product embodiments are disclosed and provide related advantages.
1. A method for promoting partial data segments in a computing storage environment having lower and higher speed levels of cache by a processor, comprising: configuring a data moving mechanism adapted for performing at least one of: allowing the partial data segments to remain in the higher speed cache level for a time period longer than at least one whole data segment, and implementing a preference for movement of the partial data segments to the lower speed cache level based on at least one of an amount of holes and a data heat metric, wherein a first of the partial data segments having at least one of a lower amount of holes and a hotter data heat is moved to the lower speed cache level ahead of a second of the partial data segments having at least one of a higher amount of holes and a cooler data heat.
2. The method of claim 1, further including, pursuant to configuring the data mover mechanism, writing one of the partial data segments to the lower speed cache level as a whole data segment.
3. The method of claim 1, further including, pursuant to configuring the data mover mechanism, densely packing one of the partial data segments into a Cache Flash Element (CFE).
4. The method of claim 1, further including writing fixed portions of the partial data segment to portions of the lower speed cache corresponding to an associated storage ...

02-05-2013 publication date

DYNAMICALLY ADJUSTED THRESHOLD FOR POPULATION OF SECONDARY CACHE

Number: US20130111131A1

The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold. 1. A method for populating data into a secondary data storage cache of a computer-implemented cache data storage system , comprising:determining a heat metric of candidate data to be inserted into said secondary data storage cache;adjusting a heat metric threshold in accordance with caching efficiency of a present state of said secondary data storage cache;rejecting candidate data provided to said secondary data storage cache whose heat metric is less than said threshold; andadmitting to said secondary data storage cache, candidate data provided to said secondary data storage cache whose heat metric is equal to or greater than said heat metric threshold.2. The method of claim 1 , additionally comprising:maintaining a reference metric related to hits of data most recently inserted into said secondary data storage cache;maintaining a reference metric related to hits of data most recently evicted from said secondary data storage cache; andsaid adjusting step comprises adjusting said heat metric threshold in accordance with said reference metric ...
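A minimal Python sketch of the feedback loop described above, with unit increments and a dictionary of counters standing in for the real reference metrics:

    def admit(candidate_heat, state):
        """state: dict with 'threshold', 'recent_insert_hits', 'recent_evict_hits'."""
        if state["recent_insert_hits"] > state["recent_evict_hits"]:
            state["threshold"] = max(0, state["threshold"] - 1)   # recent insertions are earning hits: admit more
        elif state["recent_insert_hits"] < state["recent_evict_hits"]:
            state["threshold"] += 1                               # evicted data was hotter: admit less
        return candidate_heat >= state["threshold"]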

Подробнее
02-05-2013 дата публикации

Dynamically adjusted threshold for population of secondary cache

Номер: US20130111133A1
Принадлежит: International Business Machines Corp

The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.

Подробнее
02-05-2013 дата публикации

MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS

Номер: US20130111134A1

Various embodiments for movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor are provided. In one such embodiment, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes. Additional system and computer program product embodiments are disclosed and provide related advantages. 1. A method for movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor , comprising: requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache,', 'unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache, and', 'the unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes., 'promoting a whole data segment containing one of the partial data segments to both the lower and higher levels of cache, wherein2. The method of claim 1 , wherein promoting the whole data segment occurs pursuant to a read request for the one of the partial data segments.3. The method of claim 1 , further including claim 1 , previous to promoting the whole data segment claim 1 , determining if the one of the partial data segments should be cached on the lower level of cache.4. The method of claim 3 , wherein determining if the one of the partial ...
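The promotion described above splits a whole data segment so that the requested part sits at the MRU end of the higher-level demotion queue while the unrequested part sits at the LRU end, pinned until the write to the lower level finishes. A rough sketch under those assumptions, with hypothetical names:

```python
from collections import deque

class DemotionQueue:
    """Higher-level cache demotion queue; leftmost = LRU end, rightmost = MRU end (sketch)."""

    def __init__(self):
        self.queue = deque()          # entries: [segment_id, data, pinned]

    def promote_whole_segment(self, segment_id, requested, unrequested):
        # Unrequested data goes to the LRU end, pinned until the lower-level write completes.
        self.queue.appendleft([segment_id, unrequested, True])
        # Requested data goes to the MRU end and ages normally.
        self.queue.append([segment_id, requested, False])

    def lower_level_write_complete(self, segment_id):
        # Unpin the unrequested part once the whole segment is safely in the lower cache.
        for entry in self.queue:
            if entry[0] == segment_id and entry[2]:
                entry[2] = False

    def demote_one(self):
        # Demote from the LRU end, skipping entries that are still pinned.
        for entry in list(self.queue):
            if not entry[2]:
                self.queue.remove(entry)
                return entry
        return None


dq = DemotionQueue()
dq.promote_whole_segment("seg7", requested=b"req", unrequested=b"rest")
dq.lower_level_write_complete("seg7")
print(dq.demote_one())    # the unrequested (LRU) part, now unpinned, is demoted first
```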

Подробнее
02-05-2013 дата публикации

SELECTIVE POPULATION OF SECONDARY CACHE EMPLOYING HEAT METRICS

Номер: US20130111146A1

The population of data to be admitted into secondary data storage cache of a data storage system is controlled by determining heat metrics of data of the data storage system. If candidate data is submitted for admission into the secondary cache, data is selected to tentatively be evicted from the secondary cache; candidate data provided to the secondary data storage cache is rejected if its heat metric is less than the heat metric of the tentatively evicted data; and candidate data submitted for admission to the secondary data storage cache is admitted if its heat metric is equal to or greater than the heat metric of the tentatively evicted data. 1. A method for populating data into a secondary data storage cache of a computer-implemented data storage system , comprising:determining heat metrics of data of said data storage system;selecting data to tentatively be evicted from said secondary cache;comparing said heat metric of candidate data submitted for admission to said secondary cache, to said heat metric of said tentatively evicted data;rejecting candidate data provided to said secondary data storage cache whose heat metric is less than said heat metric of said tentatively evicted data; andadmitting to said secondary data storage cache, candidate data provided to said secondary data storage cache whose heat metric is equal to or greater than said heat metric of said tentatively evicted data.2. The method of claim 1 , wherein said cache data storage system additionally comprises a first data storage cache and data storage; and wherein said heat metrics are based on heat of said data while said data was stored in any of said first data storage cache claim 1 , said secondary data storage cache and said data storage claim 1 , of said data storage system.3. The method of claim 1 , wherein said tentatively evicted data is determined with an LRU algorithm claim 1 , and said heat metric of said tentatively evicted data is based on heat metrics of a plurality of data at ...
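Admission here is relative rather than absolute: the candidate's heat is compared with the heat of the entry that would be evicted to make room. A compact sketch of that comparison (dictionary-based bookkeeping and names are assumptions):

```python
from collections import OrderedDict

def try_admit(cache: OrderedDict, candidate_key, candidate_heat, heat):
    """Admit a candidate only if it is at least as hot as the LRU victim it would displace (sketch)."""
    if candidate_key in cache:
        cache.move_to_end(candidate_key)
        return True
    victim_key = next(iter(cache))            # tentatively evicted entry (LRU end)
    if candidate_heat < heat[victim_key]:
        return False                          # reject: the would-be victim is hotter
    cache.pop(victim_key)                     # evict for real and admit the candidate
    cache[candidate_key] = None
    return True

cache = OrderedDict.fromkeys(["a", "b", "c"])
heat = {"a": 5, "b": 2, "c": 9, "x": 1, "y": 7}
print(try_admit(cache, "x", heat["x"], heat))   # False: colder than LRU victim "a"
print(try_admit(cache, "y", heat["y"], heat))   # True: hotter, so "a" is evicted
```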

Подробнее
02-05-2013 дата публикации

SELECTIVE SPACE RECLAMATION OF DATA STORAGE MEMORY EMPLOYING HEAT AND RELOCATION METRICS

Номер: US20130111160A1

Space of a data storage memory of a data storage memory system is reclaimed by determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics of the data relating the heat metrics to the relocation metrics for the data; and making the data whose utility metric fails a utility metric threshold, available for space reclamation. Thus, data that otherwise may be evicted or demoted, but that meets or exceeds the utility metric threshold, is exempted from space reclamation and is instead maintained in the data storage memory. 1. A method for reclaiming space of a data storage memory of a data storage memory system , comprising:determining heat metrics of data stored in said data storage memory;determining relocation metrics related to relocation of said data within said data storage memory;determining utility metrics of said data relating said heat metrics to said relocation metrics for said data;making said data whose utility metric fails a utility metric threshold, available for space reclamation; andexempting from space reclamation said data whose utility metric meets or exceeds said utility metric threshold.2. The method of claim 1 , additionally comprising:exempting from space reclamation eligibility, data recently added to said data storage memory.3. The method of claim 1 , additionally comprising:exempting from space reclamation eligibility, data designated as ineligible by space management policy.4. The method of claim 1 , wherein said utility metric threshold is determined from an average of utility metrics for data of said data storage memory.5. The method of claim 4 , wherein said average of utility metrics for data of said data storage memory is determined over a period of time.6. The method of claim 4 , wherein said average of utility metrics for data of said data storage memory is determined over a predetermined number of space ...
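The utility metric relates how hot data is to how often it has had to be relocated; only data whose utility falls below a threshold (here, the average utility, as in one of the claims) is handed to space reclamation, and recently added or policy-protected data is exempt. A sketch under those assumptions, with invented field names:

```python
def reclaim_candidates(segments, now, grace_period=300.0):
    """Return segment ids eligible for space reclamation (illustrative sketch).

    Each segment is a dict with 'heat', 'relocations', 'added_at' and 'protected' keys.
    Utility = heat / relocations; the threshold is the average utility over all segments.
    """
    if not segments:
        return []
    utilities = {
        sid: s["heat"] / max(1, s["relocations"]) for sid, s in segments.items()
    }
    threshold = sum(utilities.values()) / len(utilities)   # average-utility threshold
    eligible = []
    for sid, s in segments.items():
        if now - s["added_at"] < grace_period:   # exempt recently added data
            continue
        if s["protected"]:                       # exempt data protected by policy
            continue
        if utilities[sid] < threshold:           # fails the utility threshold
            eligible.append(sid)
    return eligible
```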

Подробнее
16-05-2013 дата публикации

PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP

Номер: US20130124803A1

A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage. 119-. (canceled)20. A method , comprising:maintaining a point-in-time copy relationship associating tracks in a source storage with tracks in a target storage, wherein the target storage stores the tracks in the source storage as of a point-in-time;receiving a write request including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship, wherein the point-in-time source track was in the source storage at the point-in-time the copy relationship was established;storing the updated source track in a first cache device;sending a prefetch request to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device;generating a read request to read the source track in the source storage following the sending of the prefetch request; andcopying the read source track to a corresponding target track in the target storage.21. The method of claim 20 , further comprising:destaging the updated source track to the source track in the source volume in response to ...

Подробнее
23-05-2013 дата публикации

PERIODIC DESTAGES FROM INSIDE AND OUTSIDE DIAMETERS OF DISKS TO IMPROVE READ RESPONSE TIMES

Номер: US20130132664A1

A storage controller that includes a cache, receives a command from a host, wherein a set of criteria corresponding to read response times for executing the command have to be satisfied. A destage application that destages tracks based at least on recency of usage and spatial location of the tracks is executed, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks. Tracks are destaged from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command are satisfied. 16-. (canceled)7. A system , comprising: 'a processor coupled to the memory, wherein the processor performs operations, the operations comprising:', 'a memory; and'} executing a destage application that destages tracks based at least on recency of usage and spatial location of the tracks, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks; and', 'destaging tracks from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command are satisfied., 'receiving a command from a host, wherein a set of criteria corresponding to read response times for executing the command have to be satisfied;'}8. The system of claim 7 , wherein by destaging tracks from the inside and outside diameters of disks at the periodic intervals claim 7 , read tracks that are relatively distant from a current location of a head are serviced by overriding the spatial ordering.9. The system of claim 7 , wherein the set of criteria specifies:average read response time is to be less than a first threshold; anda predetermined percentage of reads are ...
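To keep reads aimed at the disk edges from waiting too long, the destage scan periodically jumps to the inside or outside diameter instead of strictly following the spatial order. A simplified generator sketch, where track numbers stand in for physical position and the period is an assumed parameter:

```python
def destage_order(tracks, period=8):
    """Yield tracks mostly in spatial order, but make every `period`-th destage come from
    the inside or outside diameter so distant reads are not starved (sketch)."""
    pending = sorted(tracks)              # spatial ordering by track number
    count = 0
    while pending:
        count += 1
        if count % period == 0:
            # Alternate between the outside (largest) and inside (smallest) diameter.
            idx = -1 if (count // period) % 2 else 0
            yield pending.pop(idx)
        else:
            yield pending.pop(0)          # normal spatially ordered destage

print(list(destage_order(range(1, 21), period=5)))
```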

Подробнее
23-05-2013 дата публикации

ADJUSTMENT OF DESTAGE RATE BASED ON READ AND WRITE RESPONSE TIME REQUIREMENTS

Номер: US20130132667A1

A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command have to be satisfied. The storage controller determines ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type. Destage rate corresponding to the ranks of the first type are adjusted to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command are satisfied. 16-. (canceled)7. A system , comprising:a memory; and receiving, by a storage controller that includes a cache, a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command have to be satisfied;', 'determining, by the storage controller, ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type; and', 'adjusting destage rate corresponding to the ranks of the first type to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command are satisfied., 'a processor coupled to the memory, wherein the processor performs operations, the operations comprising8. The system of claim 7 , wherein the adjusted destage rate corresponding to the ranks of the first type allow a rate of I/O operations to the ranks of the first type to be maximized subject to the read and write response times for executing the command being satisfied claim 7 , and wherein the set of criteria specifies:average read response time is to be less than a first threshold;a ...
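One way to read the adjustment is as a small feedback loop: ranks of the first type run below the default destage rate, and the rate is nudged up only while the response-time criteria still hold, so I/O to those ranks is maximized without breaking the criteria. A sketch with assumed thresholds and field names (not taken from the filing):

```python
DEFAULT_DESTAGE_RATE = 40        # destage tasks per interval for ordinary (second-type) ranks

def adjust_first_type_rate(current_rate, read_stats, write_stats):
    """Nudge the destage rate for first-type ranks, staying below the default (sketch)."""
    criteria_met = (
        read_stats["avg_ms"] < 5.0 and              # average read response time
        read_stats["pct_under_10ms"] >= 0.95 and    # share of sufficiently fast reads
        write_stats["avg_ms"] < 8.0                 # average write response time
    )
    if criteria_met:
        # Headroom: destage a little faster, but never reach the default rate.
        return min(DEFAULT_DESTAGE_RATE - 1, current_rate + 1)
    # Response times slipping: back the destage rate off further.
    return max(1, current_rate - 1)
```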

Подробнее
06-06-2013 дата публикации

MANAGING METADATA FOR DATA IN A COPY RELATIONSHIP

Номер: US20130145100A1

Provided is a method for managing metadata for data in a copy relationship copied from a source storage to a target storage. Information is maintained on a copy relationship of source data in the source storage and target data in the target storage. The source data is copied from the source storage to the cache to copy to target data in the target storage indicated in the copy relationship. Target metadata is generated for the target data comprising the source data copied to the cache. An access request to requested target data comprising the target data in the cache is processed and access is provided to the requested target data in the cache. The target metadata for the requested target data in the target storage is discarded in response to determining that the requested target data in the cache has not been destaged to the target storage. 1. A method , comprising:maintaining information on a copy relationship of source data in a source storage and target data in a target storage;copying source data from the source storage to a cache to copy to target data in the target storage indicated in the copy relationship;generating target metadata for the target data comprising the source data copied to the cache;processing an access request to requested target data comprising the target data in the cache;providing access to the requested target data in the cache;determining whether the requested target data in the cache has been destaged to the target storage; anddiscarding the target metadata for the requested target data in the target storage in response to determining that the requested target data in the cache has not been destaged to the target storage.2. The method of claim 1 , wherein the determining whether the requested target data in the cache has been destaged and discarding the target metadata are performed after the access request is processed.3. The method of claim 1 , wherein the request comprises a read request.4. The method of claim 3 , further comprising ...

Подробнее
27-06-2013 дата публикации

DESTAGING OF WRITE AHEAD DATA SET TRACKS

Номер: US20130166837A1

Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). 1. A method for efficient destaging of a write ahead data set (WADS) track in a volume by a processor device in a computing storage environment , comprising:preventing destage of a plurality of tracks in cache selected for writing to a storage device; and if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and', 'if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X)., 'for a track N in a stride Z of the selected plurality of tracks2. The method of claim 1 , further including prestaging data to the plurality of tracks such that the stride Z includes complete tracks claim 1 , enabling subsequent destage of complete WADS tracks.3. The method of claim 1 , further including incrementing the at least one temporal bit.4. The method of claim 1 , further including taking a track access to the WADS track and completing a write operation on the WADS track.5. The method of claim 1 , further including ending a track access to the WADS track upon a completion of a write operation and adding the WADS track to a wise order writing (WOW) list.6. The method of claim 5 , further including checking the WOW list and examining a left neighbor and a right ...
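The rule in the abstract clears temporal bits two strides behind the first write-ahead data set track, and X tracks behind a purely sequential track, so that only tracks known to be complete become eligible for destage. A small sketch with assumed data structures:

```python
def update_temporal_bits(temporal, track_n, stride_z, tracks_in_stride,
                         is_first_wads, is_sequential, x=4):
    """Clear temporal bits so older, complete tracks become eligible for destage (sketch).

    `temporal` maps a track id to its temporal bit count; all helper names are assumptions.
    """
    if is_first_wads:
        # First WADS track of stride Z: clear the bits of every track in stride Z-2.
        for track in tracks_in_stride.get(stride_z - 2, []):
            temporal[track] = 0
    elif is_sequential:
        # Sequential track N: clear the bits of track N-X.
        lagging = track_n - x
        if lagging in temporal:
            temporal[lagging] = 0
```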

Подробнее
27-06-2013 дата публикации

Storage in tiered environment for colder data segments

Номер: US20130166844A1
Принадлежит: International Business Machines Corp

Exemplary embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap.

Подробнее
04-07-2013 дата публикации

SOURCE-TARGET RELATIONS MAPPING

Номер: US20130173878A1

A data preservation function is provided which, in one embodiment, includes indicating by a map, usage of a particular map extent range by a relationship between a source extent range of storage locations on a source storage device containing data to be preserved in the source extent range, and a target extent range mapped to the map particular extent range. In another aspect, in response to receipt of a data preservation command, a data preservation operation is performed including determining whether a map indicates availability of a map extent range mapped to the identified target extent range. Upon determining that a particular map indicates availability of a map extent range mapped to the identified target extent range, a relationship between the identified source extent range and the identified target extent range is established. Other features and aspects may be realized, depending upon the particular application. 1. A method , comprising:mapping in a plurality of maps for a target storage device, map extent ranges of each map, to corresponding target extent ranges of storage locations on the target storage device; andindicating for a particular map extent range of a particular map, usage of the particular map extent range by a relationship between a source extent range of storage locations on a source storage device containing data to be preserved in the source extent range, and the target extent range mapped to the map particular extent range.2. The method of claim 1 , further comprising:receiving a data preservation command identifying a source extent range containing data to be preserved on the source storage device, and identifying a target extent range on the target storage device;in response to the command, performing a data preservation operation including:determining whether a map indicates availability of a map extent range mapped to the identified target extent range; andupon determining that a particular map indicates availability of a map extent ...

Подробнее
18-07-2013 дата публикации

DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING AN OCCUPANCY OF VALID TRACKS IN STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE

Номер: US20130185476A1

Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides. 1. A computer program product for managing data in a computer readable cache system comprising a first cache, a second cache, and a storage system comprised of storage devices, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations, the operations comprising: maintaining information on strides configured in the second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks, wherein a stride having no valid tracks is empty; determining tracks to demote from the first cache; forming a first stride including the determined tracks to demote; adding the tracks from the first stride to a second stride in the second cache having an occupancy count indicating the stride is empty; determining a target stride in the second cache based on the occupancy counts of the strides in the second cache; determining at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache; and populating the target stride with ...
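Consolidation here picks an empty stride as the target and copies the surviving valid tracks out of sparsely occupied source strides into it, freeing those sources for reuse. A compact sketch (the dict-based bookkeeping and choice of the two least-occupied strides are assumptions for illustration):

```python
def consolidate(strides):
    """Merge valid tracks from two sparsely occupied strides into an empty stride (sketch).

    `strides` maps stride id -> list of valid track ids; an empty list means a free stride.
    Returns (target_id, freed_source_ids) or None if consolidation is not possible.
    """
    free = [sid for sid, tracks in strides.items() if not tracks]
    occupied = sorted((sid for sid, t in strides.items() if t),
                      key=lambda sid: len(strides[sid]))
    if not free or len(occupied) < 2:
        return None
    target = free[0]
    sources = occupied[:2]                    # the two source strides with fewest valid tracks
    for sid in sources:
        strides[target].extend(strides[sid])  # copy valid tracks into the target stride
        strides[sid] = []                     # the source stride is now empty / reusable
    return target, sources

strides = {0: [], 1: ["t1", "t2"], 2: ["t7"], 3: ["t3", "t4", "t5"]}
print(consolidate(strides), strides)
```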

Подробнее
18-07-2013 дата публикации

Populating a first stride of tracks from a first cache to write to a second stride in a second cache

Номер: US20130185478A1
Принадлежит: International Business Machines Corp

Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted.

Подробнее
18-07-2013 дата публикации

DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE

Номер: US20130185489A1

Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride. 1. A method for managing data in a computer readable cache system comprising a first cache , a second cache , and a storage system comprised of storage devices , comprising:maintaining information on strides configured in the second cache, including information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data;determining tracks to demote from the first cache;forming a first stride including the determined tracks to demote;adding the tracks from the first stride to a second stride in the second cache that has no valid tracks;selecting a target stride in the second cache based on a stride most recently used to consolidate strides from at least two strides into one stride; andcopying data from the valid tracks from at least two source strides in the second cache to the target stride.2. The method of claim 1 , further comprising;maintaining indication of a number of free strides having no valid tracks;determining whether the number of free strides is below a free stride threshold, wherein the operations of selecting a target stride and copying the data from the valid tracks from the at least two source strides is performed in response to determining that the number of ...

Подробнее
18-07-2013 дата публикации

MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE

Номер: US20130185493A1

Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. 1. A computer program product for managing data in a first cache , a second cache , and a storage device , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:determining an eligible track in the first cache eligible for demotion to the second cache, wherein the tracks are stored in extents configured in the storage device, wherein each extent is comprised of a plurality of tracks;determining an extent including the eligible track;determining whether second cache caching for the determined extent is enabled or disabled;demoting the eligible track from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled; andselecting not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.2. The computer program product of claim 1 , wherein the first cache is a faster access device than the second cache and wherein the second cache is a faster access device than ...
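Demotion from the first cache to the second is gated per extent: if second-cache caching is disabled for the extent that owns the track, the track is simply not demoted there. A sketch with an assumed extent geometry and helper names:

```python
TRACKS_PER_EXTENT = 1024          # assumed extent geometry

def maybe_demote(track_id, second_cache_enabled_extents, second_cache):
    """Demote an eligible first-cache track to the second cache only when its extent allows it (sketch)."""
    extent_id = track_id // TRACKS_PER_EXTENT        # extent that contains the track
    if extent_id in second_cache_enabled_extents:
        second_cache.add(track_id)                   # demote into the second cache
        return True
    return False                                     # caching disabled for this extent: skip demotion

second_cache = set()
enabled = {0, 3}
print(maybe_demote(100, enabled, second_cache))      # extent 0 is enabled -> demoted
print(maybe_demote(2050, enabled, second_cache))     # extent 2 is disabled -> not demoted
```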

Подробнее
18-07-2013 дата публикации

POPULATING A FIRST STRIDE OF TRACKS FROM A FIRST CACHE TO WRITE TO A SECOND STRIDE IN A SECOND CACHE

Номер: US20130185494A1

Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted. 1. A computer program product for managing data in a cache system comprising a first cache , a second cache , and a storage system , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:determining tracks stored in the storage system to demote from the first cache;forming a first stride including the determined tracks to demote;determining a second stride in the second cache in which to include the tracks in the first stride;adding the tracks from the first stride to the second stride in the second cache;determining tracks in strides in the second cache to demote from the second cache; anddemoting the determined tracks to demote from the second cache.2. The computer program product of claim 1 , wherein the first cache is a faster access device than the second cache and wherein the second cache is a faster access device than the storage system.3. The computer program product of claim 1 , wherein the first cache comprises a Dynamic Random Access Memory (RAM) claim 1 , the second cache comprises a plurality of flash devices claim 1 , and the storage system is comprised of a plurality of slower access devices than the flash devices.4. The computer program ...

Подробнее
18-07-2013 дата публикации

DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE

Номер: US20130185495A1

Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride. 1. A computer program product for managing data in a cache system comprising a first cache , a second cache , and a storage system comprised of storage devices , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:maintaining information on strides configured in the second cache, including information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data;determining tracks to demote from the first cache;forming a first stride including the determined tracks to demote;adding the tracks from the first stride to a second stride in the second cache that has no valid tracks;selecting a target stride in the second cache based on a stride most recently used to consolidate strides from at least two strides into one stride; andcopying data from the valid tracks from at least two source strides in the second cache to the target stride.2. The computer program product of claim 1 , wherein the operations further comprise;maintaining indication of a number of free strides having no valid tracks;determining whether the number of free strides is ...

Подробнее
18-07-2013 дата публикации

MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE

Номер: US20130185497A1

Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. 1. A method for managing data , comprising:determining an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks;determining an extent including the eligible track;determining whether second cache caching for the determined extent is enabled or disabled;demoting the eligible track from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled; andselecting not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.2. The method of claim 1 , wherein the first cache is a faster access device than the second cache and wherein the second cache is a faster access device than the storage device.3. The method of claim 2 , wherein the first cache comprises at least one dynamic random access memory (DRAM) claim 2 , the second cache comprises at least one solid state storage device (SSD) claim 2 , and the storage device comprises at least one magnetic hard ...

Подробнее
18-07-2013 дата публикации

CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA

Номер: US20130185501A1

Provided are a computer program product, system, and method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted. 1. A computer program product for processing a read operation for a target block of data , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein , wherein the computer readable program code , when executed by a processor of a computer , performs operations , the operations comprising:receiving the read operation for the target block of data in target storage, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage;determining that the target block of data in the target storage is not consistent with the source block of data in the source storage;retrieving the source block of data;synthesizing data in the source block of data in a cache to make the data appear to be retrieved from the target storage;marking the target block of data as read from the source storage; andin response to the read operation completing, demoting the target block of data that was read from the source storage.2. The computer program product of claim 1 , wherein the operations further comprise:determining that the target block of data in the target ...
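For a read of a not-yet-copied target block, the source block is fetched, "synthesized" in cache so it appears to come from the target, marked as read from the source, and demoted once the read completes so the stale copy does not linger. A sketch under those assumptions, with hypothetical container names:

```python
def read_target_block(block_id, target_storage, source_storage, copied, cache):
    """Serve a read of a target block in an instant virtual copy relationship (sketch).

    `copied` is the set of blocks already copied to the target; all names are assumed.
    """
    if block_id in copied:
        return target_storage[block_id]              # target is consistent: read it directly
    data = source_storage[block_id]                  # retrieve the source block instead
    cache[block_id] = {"data": data, "read_from_source": True}   # synthesize and mark
    result = cache[block_id]["data"]
    # Read complete: demote the synthesized copy so it is not mistaken for real target data.
    del cache[block_id]
    return result

src = {7: b"source-data"}
print(read_target_block(7, {}, src, copied=set(), cache={}))
```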

Подробнее
18-07-2013 дата публикации

DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE

Номер: US20130185502A1

A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache. 1. A method for managing data in a cache system comprising a first cache, a second cache, and a storage system, comprising: determining a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors; determining whether the second cache includes a stale version of the track being demoted from the first cache; in response to determining that the second cache includes the stale version of the track, determining whether the stale version of the track includes track sectors not included in the track being demoted from the first cache; combining the sectors from the track demoted from the first cache with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track; and writing the new version of the track to the second cache. 2. The computer program product of claim 1, wherein the operations further comprise: invalidating the stale version of the track in the second cache in response to writing the new version of the track to the second cache. 3. The method of claim 1, wherein the operations further comprise: determining ...
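When the second cache already holds a stale copy of the track, the demoted (possibly partial) track is merged with whichever stale sectors it does not carry, and the merged track replaces the stale one. A sector-map sketch (representing a track as a dict of sector number to bytes is an assumption):

```python
def demote_partial_track(track_id, demoted_sectors, second_cache):
    """Merge a partial track being demoted with a stale second-cache copy (sketch).

    `demoted_sectors` and the cached track are dicts mapping sector number -> data.
    """
    stale = second_cache.get(track_id, {})
    merged = dict(stale)                 # start from the stale sectors...
    merged.update(demoted_sectors)       # ...and overwrite with the newer demoted sectors
    second_cache[track_id] = merged      # write the new version; the stale one is replaced

cache = {42: {0: b"old0", 1: b"old1", 2: b"old2"}}
demote_partial_track(42, {1: b"new1"}, cache)
print(cache[42])    # sector 1 is new, sectors 0 and 2 come from the stale version
```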

Подробнее
18-07-2013 дата публикации

DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE

Номер: US20130185504A1

A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache. 1. A computer program product for managing data in a cache system comprising a first cache, a second cache, and a storage system, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations, the operations comprising: determining a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors; determining whether the second cache includes a stale version of the track being demoted from the first cache; in response to determining that the second cache includes the stale version of the track, determining whether the stale version of the track includes track sectors not included in the track being demoted from the first cache; combining the sectors from the track demoted from the first cache with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track; and writing the new version of the track to the second cache. 2. The computer program product of claim 1, wherein the operations further comprise: invalidating the ...

Подробнее
18-07-2013 дата публикации

CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA

Номер: US20130185510A1

Provided is a method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted. 1. A method for processing a read operation for a target block of data , comprising:receiving, using a processor of a computer, the read operation for the target block of data in target storage, wherein the target block of data is in an instant virtual copy relationship with a source block of data in the source storage;determining that the target block of data in the target storage is not consistent with the source block of data in the source storage;retrieving the source block of data;synthesizing data in the source block of data in a cache to make the data appear to be retrieved from the target storage;marking the target block of data as read from the source storage; andin response to the read operation completing, demoting the target block of data that was read from the source storage.2. The method of claim 1 , further comprising:determining that the target block of data in the target storage is consistent with the corresponding source block of data in the source storage;reading the target block of data from the target storage; andcompleting the read operation.3. The method of claim 1 , further comprising:determining that the source block of data exists in cache, wherein the source block of data is ...

Подробнее
18-07-2013 дата публикации

MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS

Номер: US20130185512A1

For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes. 1. A method for movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor , comprising: requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache,', 'unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache, and', 'the unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes., 'promoting a whole data segment containing one of the partial data segments to both the lower and higher levels of cache, wherein2. The method of claim 1 , wherein promoting the whole data segment occurs pursuant to a read request for the one of the partial data segments.3. The method of claim 1 , further including claim 1 , previous to promoting the whole data segment claim 1 , determining if the one of the partial data segments should be cached on the lower level of cache.4. The method of claim 3 , wherein determining if the one of the partial data segments should be cached on the lower level of cache includes considering an Input/Output Performance (IOP) metric claim 3 , a bandwidth metric claim 3 , and ...

Подробнее
18-07-2013 дата публикации

CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE

Номер: US20130185513A1

In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application. 17-. (canceled)8. A computer program product for managing a cache , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein for execution by a processor to perform managing operations , the managing operations comprising:maintaining in a cache, tracks in the storage subject to Input/Output (I/O) requests;scanning a list of tracks in cache to identify candidates for cache removal processing which includes one of demoting an identified track from the cache, and destaging an identified track to storage;locking the list to prevent access to the list by other processors while the list of tracks is being scanned;identifying a track of the list of tracks to be cache removal processed;interrupting the scanning of the list of tracks;storing a pointer to a position in the list of tracks as a function of the position in the list at which the scanning was interrupted;releasing the locking of the list to allow access to the list by other processors while the identified track is being cache removal processed; andcache removal processing the identified track by one of demoting the identified track from the cache, and destaging the identified track to storage.9. The computer program ...
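The point of the scheme above is that the list lock is not held across the slow demote or destage of a found track: the scan remembers its position, drops the lock, processes the track, then re-acquires the lock and resumes from the remembered position. A threading sketch, with assumed predicate and processing callables:

```python
import threading

def scan_and_remove(tracks, list_lock: threading.Lock, is_candidate, cache_removal):
    """Scan a track list for removal candidates without holding the lock during processing (sketch)."""
    cursor = 0
    while True:
        with list_lock:                              # lock only while walking the list
            while cursor < len(tracks) and not is_candidate(tracks[cursor]):
                cursor += 1
            if cursor >= len(tracks):
                return
            track = tracks.pop(cursor)               # note the position, take the track off the list
        # Lock released: other processors may use the list while this track is demoted/destaged.
        cache_removal(track)
        # The loop re-acquires the lock and resumes scanning at the stored cursor position.

lock = threading.Lock()
tracks = list(range(10))
scan_and_remove(tracks, lock, is_candidate=lambda t: t % 3 == 0, cache_removal=print)
print(tracks)     # tracks 0, 3, 6, 9 have been removed and processed
```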

Подробнее
18-07-2013 дата публикации

CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE

Номер: US20130185514A1

In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application. 1. A method , comprising:maintaining in a cache, tracks in the storage subject to Input/Output (I/O) requests;scanning a list of tracks in cache to identify candidates for cache removal processing which includes one of demoting an identified track from the cache, and destaging an identified track to storage;locking the list to prevent access to the list by other processors while the list of tracks is being scanned;identifying a track of the list of tracks to be cache removal processed;interrupting the scanning of the list of tracks;storing a pointer to a position in the list of tracks as a function of the position in the list at which the scanning was interrupted;releasing the locking of the list to allow access to the list by other processors while the identified track is being cache removal processed; andcache removal processing the identified track by one of demoting the identified track from the cache, and destaging the identified track to storage.2. The method of further comprising:resuming locking of the list to prevent access to the list by other processors while the list of tracks is being scanned; andresuming the scanning of the list of tracks at a position in the list as a function of the stored pointer.3. The method of wherein ...

Подробнее
25-07-2013 дата публикации

ADJUSTMENT OF DESTAGE RATE BASED ON READ AND WRITE RESPONSE TIME REQUIREMENTS

Номер: US20130191596A1

A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command have to be satisfied. The storage controller determines ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type. Destage rate corresponding to the ranks of the first type are adjusted to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command are satisfied. 1. A method , comprising:receiving, by a storage controller that includes a cache, a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command have to be satisfied;determining, by the storage controller, ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type; andadjusting destage rate corresponding to the ranks of the first type to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command are satisfied.2. The method of claim 1 , wherein the adjusted destage rate corresponding to the ranks of the first type allow a rate of I/O operations to the ranks of the first type to be maximized subject to the read and write response times for executing the command being satisfied claim 1 , and wherein the set of criteria specifies:average read response time is to be less than a first threshold;a predetermined percentage of reads are to be performed in a time less than a second threshold;average write response time is to be less than a third ...

Подробнее
01-08-2013 дата публикации

MANAGING TRACK DISCARD REQUESTS TO INCLUDE IN DISCARD TRACK MESSAGES

Номер: US20130198461A1

Provided is a method for managing track discard requests. A backup copy of a track in a cache is maintained in a cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. If a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent to the cache backup device indicating the tracks indicated in the queued predetermined number of track discard requests to instruct the cache backup device to discard the tracks indicated in the discard multiple tracks message. If a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode. 1. A method , comprising:maintaining a backup copy of a track in a cache in a cache backup device;generating track discard request to discard tracks in the cache backup device removed from the cache;queuing track discard requests in a discard track queue;in response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, sending one discard multiple tracks message indicating the tracks indicated in the queued predetermined number of track discard requests to the cache backup device instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message; andin response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, switching to processing the track discard requests in a discard single track mode.2. The method of claim 1 , further comprising:in response to processing in the discard single track mode, sending a discard single track message indicating a single track comprising the track indicated in ...
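Discard requests are batched: in multi-track mode the queue is drained into a single message once enough requests accumulate, while repeated idle periods switch processing back to per-track messages. A sketch with assumed constants and message shapes:

```python
from collections import deque

BATCH_SIZE = 20            # assumed: requests per discard-multiple-tracks message
IDLE_LIMIT = 3             # assumed: idle periods before falling back to single-track mode

class DiscardManager:
    def __init__(self, send_message):
        self.queue = deque()
        self.multi_track_mode = True
        self.idle_periods = 0
        self.send = send_message          # callable that sends a message to the cache backup device

    def discard(self, track):
        if not self.multi_track_mode:
            self.send(("discard_single_track", [track]))
            return
        self.queue.append(track)
        if len(self.queue) >= BATCH_SIZE:
            batch = [self.queue.popleft() for _ in range(BATCH_SIZE)]
            self.send(("discard_multiple_tracks", batch))
            self.idle_periods = 0

    def tick(self):
        """Called once per period; repeated inactivity switches to single-track mode."""
        if self.multi_track_mode and not self.queue:
            self.idle_periods += 1
            if self.idle_periods >= IDLE_LIMIT:
                self.multi_track_mode = False

mgr = DiscardManager(send_message=print)
for t in range(20):
    mgr.discard(t)         # the 20th request triggers one discard_multiple_tracks message
```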

Подробнее
01-08-2013 дата публикации

INCREASED DESTAGING EFFICIENCY

Номер: US20130198751A1

For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to the calculating. 1-8. (canceled) 9. A system for increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, the system comprising: a processor device operable in the computing storage environment, wherein the processor device: calculates destaging tasks according to one of a standard time interval and a variable recomputed destaging task interval, and smoothes the destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks according to the calculating. 10. The system of claim 9, wherein the processor device further performs the smoothing based upon calculating the destaging tasks according to the variable recomputed destaging task interval when a delta value between the desired number of destaging tasks and the current number of destaging tasks is greater than a predetermined delta value. 11. The system of claim 9, wherein the processor device performs the smoothing based upon calculating the destaging tasks according to the standard time interval when a delta value between the desired number of destaging tasks and the current number of destaging tasks is less than a predetermined delta value. 12. The system of claim 9, wherein the variable recomputed destaging task interval is a time period equal to a variable time period obtained by historical data divided by a delta value between the desired number of destaging tasks and the current number of destaging tasks. 13. The system of claim 9, wherein the processor device performs ...
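Instead of jumping straight from the current number of destage tasks to the desired number, the count is ramped one task at a time; when the gap is large, the recomputation interval itself is shortened (a base interval divided by the gap) so the ramp completes sooner. A sketch with assumed units and parameter names:

```python
def smooth_destage_tasks(current, desired, base_interval=1.0, delta_limit=8):
    """Yield (task_count, seconds_until_next_recompute) while ramping toward `desired` (sketch)."""
    while current != desired:
        delta = abs(desired - current)
        if delta > delta_limit:
            interval = base_interval / delta          # large gap: shortened, variable interval
        else:
            interval = base_interval                  # small gap: standard interval
        current += 1 if desired > current else -1     # ramp up or down by one task
        yield current, interval

for step in smooth_destage_tasks(4, 16):
    print(step)
```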

Подробнее
01-08-2013 дата публикации

INCREASED DESTAGING EFFICIENCY

Номер: US20130198752A1

For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to the calculating. 1. A method for increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations by a processor device in a computing environment , the method comprising:calculating destaging tasks according to one of a standard time interval and a variable recomputed destaging task interval; andsmoothing the destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks according to the calculating.2. The method of claim 1 , further including claim 1 , performing the smoothing based upon the calculating the destaging tasks according to the variable recomputed destaging task interval when a delta value between the desired number of destaging tasks and the current number of destaging tasks is greater than a predetermined delta value.3. The method of claim 1 , further including claim 1 , performing the smoothing based upon the calculating the destaging tasks according to the standard time interval when a delta value between the desired number of destaging tasks and the current number of destaging tasks is less than a predetermined delta value.4. The method of claim 1 , wherein the variable recomputed destaging task interval is a time period equal to a variable time period obtained by historical data divided by a delta value between the desired number of destaging tasks and the current number of destaging tasks.5. The method of claim 1 , further including claim 1 , performing one of ramping up and ramping down the smoothing between the desired number of destaging tasks and ...

Подробнее
08-08-2013 дата публикации

PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE

Номер: US20130205077A1

For efficient track destage in secondary storage in a more effective manner, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. 1. A method for promoting partial data segments in a computing storage environment having lower and higher speed levels of cache by a processor , comprising: allowing the partial data segments to remain in the higher speed cache level for a time period longer that at least one whole data segment, and', 'implementing a preference for movement of the partial data segments to the lower speed cache level based on at least one of an amount of holes and a data heat metric, wherein a first of the partial data segments having at least one of a lower amount of holes and a hotter data heat is moved to the lower speed cache level ahead of a second of the partial data segments having at least one of a higher amount of holes and a cooler data heat., 'configuring a data moving mechanism adapted for performing at least one of2. The method of claim 1 , further including claim 1 , pursuant to configuring the data mover mechanism claim 1 , writing one of the partial data segments to the lower speed cache level as a whole data segment.3. The method of claim 1 , further including claim 1 , pursuant to configuring the data mover mechanism claim 1 , densely packing one of the partial data segments into a Cache Flash Element (CFE).4. The method of claim 1 , further including writing fixed portions of the partial data segment to portions of the lower speed cache corresponding to an associated storage device claim 1 , wherein the fixed portions are located using pointers in an affiliated Cache Flash Control Block (CFCB).5. The method of claim 2 , further including claim 2 , if the first of the partial data segments has a hotter ...
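
Claim 1 above orders partial data segments for movement to the lower-speed cache level by their number of holes and their data heat. The following sketch illustrates that ordering; the PartialSegment fields and the sort key are assumptions made for illustration only.

    from dataclasses import dataclass

    @dataclass
    class PartialSegment:
        track_id: int
        holes: int        # number of missing pieces in the partial segment
        heat: int         # higher = hotter (more frequently accessed)

    def demotion_order(segments):
        # fewer holes and hotter data move to the lower-speed level first
        return sorted(segments, key=lambda s: (s.holes, -s.heat))

    segs = [PartialSegment(1, holes=6, heat=2),
            PartialSegment(2, holes=1, heat=9),
            PartialSegment(3, holes=1, heat=3)]
    print([s.track_id for s in demotion_order(segs)])   # -> [2, 3, 1]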

Подробнее
08-08-2013 дата публикации

MULTI-STAGE CACHE DIRECTORY AND VARIABLE CACHE-LINE SIZE FOR TIERED STORAGE ARCHITECTURES

Номер: US20130205088A1

A method in accordance with the invention includes providing first, second, and third storage tiers, wherein the first storage tier acts as a cache for the second storage tier, and the second storage tier acts as a cache for the third storage tier. The first storage tier uses a first cache line size corresponding to an extent size of the second storage tier. The second storage tier uses a second cache line size corresponding to an extent size of the third storage tier. The second cache line size is significantly larger than the first cache line size. The method further maintains, in the first storage tier, a first cache directory indicating which extents from the second storage tier are cached in the first storage tier, and a second cache directory indicating which extents from the third storage tier are cached in the second storage tier. 17-. (canceled)8. A computer program product for improving the efficiency of a tiered storage architecture comprising at least three storage tiers , the computer program product comprising a non-transitory computer-readable storage medium having computer-usable program code embodied therein , the computer-usable program code comprising:computer-usable program code to manage first, second, and third storage tiers, wherein the first storage tier acts as a cache for the second storage tier, and the second storage tier acts as a cache for the third storage tier;computer-usable program code to use, in the first storage tier, a first cache line size corresponding to an extent size of the second storage tier;computer-usable program code to use, in the second storage tier, a second cache line size corresponding to an extent size of the third storage tier, wherein the second cache line size is significantly larger than the first cache line size;computer-usable program code to maintain, in the first storage tier, a first cache directory indicating which extents from the second storage tier are cached in the first storage tier; andcomputer- ...
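
A rough sketch of the tiering arrangement follows: the first tier's cache line equals the second tier's extent, the second tier's line equals the (much larger) third-tier extent, and both cache directories are kept in the first tier. The sizes and set-based directories are assumptions of this illustration.

    MB = 1 << 20
    GB = 1 << 30

    tier2_extent = 16 * MB            # assumed extent size of the second tier
    tier3_extent = 1 * GB             # assumed extent size of the third tier

    tier1_cache_line = tier2_extent   # first-tier line == second-tier extent
    tier2_cache_line = tier3_extent   # second-tier line == third-tier extent (much larger)

    # both directories are maintained in the first (fastest) storage tier
    dir_tier2_in_tier1 = set()        # extents of tier 2 currently cached in tier 1
    dir_tier3_in_tier2 = set()        # extents of tier 3 currently cached in tier 2

    def cache_in_tier1(tier2_extent_id):
        dir_tier2_in_tier1.add(tier2_extent_id)

    def cache_in_tier2(tier3_extent_id):
        dir_tier3_in_tier2.add(tier3_extent_id)

    cache_in_tier1(42); cache_in_tier2(7)
    print(len(dir_tier2_in_tier1), len(dir_tier3_in_tier2))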

Подробнее
08-08-2013 дата публикации

EFFICIENT TRACK DESTAGE IN SECONDARY STORAGE

Номер: US20130205094A1

For efficient track destage in secondary storage in a more effective manner, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. 1. A method for efficient track destage in secondary storage in a computing storage environment by a processor device , comprising: transferring the plurality of temporal bits and the plurality of sequential bits from the primary storage to the secondary storage, and', 'allowing the plurality of temporal bits to age on the secondary storage., 'for a plurality of temporal bits employed with a plurality of sequential bits for controlling the timing for destaging the track in a primary storage2. The method of claim 1 , further including claim 1 , in conjunction with the transferring claim 1 , performing on the primary storage at least one of:querying a cache for at least one of a determination of whether the track is sequential and, if the track is modified, the plurality of temporal bits, andsaving the at least one of the determination of whether the track is sequential and, if the track is modified, the plurality of temporal bits in a cache directory control block (CDB) before the track is transferred.3. The method of claim 1 , further including claim 1 , in conjunction with the transferring claim 1 , performing on the secondary storage at least one of:receiving at least one of the plurality of temporal bits, the plurality of sequential bits, and the CDB,creating a cache track, andwriting data to the cache track.4. The method of claim 3 , further including performing at least one of:marking the track if the track is sequential, andsetting, in the cache track, those of the plurality of temporal bits corresponding to the plurality of bits saved in the CDB.5. The method of claim 1 , further including ...
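
The sketch below carries the sequential flag and temporal bits with a track from primary to secondary storage and then lets the temporal bits age there. The CDB stand-in and the decrement-per-pass aging rule are assumptions, not the patent's structures.

    from dataclasses import dataclass

    @dataclass
    class CacheDirectoryControlBlock:     # simplified stand-in for the CDB
        sequential: bool
        temporal_bits: int                # small saturating counter, e.g. 0..3

    def transfer_track(primary_cdb):
        """Primary side: save the bits in the CDB and hand them to secondary."""
        return CacheDirectoryControlBlock(primary_cdb.sequential,
                                          primary_cdb.temporal_bits)

    def receive_track(cdb, secondary_cache, track_id, data):
        """Secondary side: create the cache track and seed it with the bits."""
        secondary_cache[track_id] = {"data": data,
                                     "sequential": cdb.sequential,
                                     "temporal": cdb.temporal_bits}

    def age_temporal_bits(secondary_cache):
        """Each pass of the destage clock lets the temporal bits age (decay)."""
        for entry in secondary_cache.values():
            entry["temporal"] = max(0, entry["temporal"] - 1)

    cache = {}
    receive_track(transfer_track(CacheDirectoryControlBlock(True, 3)), cache, 11, b"...")
    age_temporal_bits(cache)
    print(cache[11]["temporal"])   # 2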

Подробнее
08-08-2013 дата публикации

DATA ARCHIVING USING DATA COMPRESSION OF A FLASH COPY

Номер: US20130205109A1

Embodiments of the disclosure relate to archiving data in a storage system. An exemplary embodiment comprises making a flash copy of data in a source volume, compressing data in the flash copy wherein each track of data is compressed into a set of data pages, and storing the compressed data pages in a target volume. Data extents for the target volume may be allocated from a pool of compressed data extents. After each stride worth of data is compressed and stored in the target volume, data may be destaged to avoid destage penalties. Data from the target volume may be decompressed from a flash copy of the target volume in a reverse process to restore each data track, when the archived data is needed. Data may be compressed and uncompressed using a Lempel-Ziv-Welch process. 1. A computer implemented method for archiving data , comprising:making a first flash copy of data in a first storage volume while the first storage volume is off-line;compressing data in the first flash copy, wherein each track of data is compressed into a set of data pages; andstoring the set of compressed data pages into a second storage volume.2. The method of claim 1 , wherein the first flash copy is made in a background operation.3. The method of claim 1 , wherein making a first flash copy comprises:marking the second storage volume as write-inhibit;allocating for the second storage volume a data extent from a pool of compressed data extents; andallocating a new data extent from the compressed data extent pool when there is no more free space in the allocated data extent to store the compressed data.4. The method of claim 3 , further comprising updating a volume structure to indicate that the compressed data extent is allocated to the second storage volume.5. The method of claim 3 , wherein data in the first flash copy is compressed using Lempel-Ziv-Welch (LZW) compression.6. The method of claim 1 , further comprising reading data to be compressed from the first storage volume if the data to ...
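
A compact sketch of the archive loop: each track of the flash copy is compressed into fixed-size pages, appended to the target volume, and destaged once a stride's worth has accumulated. zlib is used here only as a stand-in for the LZW compressor named in the claims, and the track, page and stride sizes are assumed.

    import zlib

    PAGE = 4096
    TRACKS_PER_STRIDE = 8   # assumed stride width

    def compress_track(track_bytes):
        data = zlib.compress(track_bytes)
        # split the compressed track into fixed-size data pages
        return [data[i:i + PAGE] for i in range(0, len(data), PAGE)]

    def archive(flash_copy_tracks, target_volume, destage):
        for n, track in enumerate(flash_copy_tracks, 1):
            target_volume.extend(compress_track(track))
            if n % TRACKS_PER_STRIDE == 0:
                destage(target_volume)     # destage per stride to avoid destage penalties

    target = []
    archive([bytes(57344) for _ in range(16)], target, destage=lambda v: None)
    print(len(target), "compressed pages written")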

Подробнее
22-08-2013 дата публикации

EFFICIENT DISCARD SCANS

Номер: US20130219124A1

A plurality of tracks is examined for meeting criteria for a discard scan. In lieu of waiting for a completion of a track access operation, at least one of the plurality of tracks is marked for demotion. An additional discard scan may be subsequently performed for tracks not previously demoted. The discard and additional discard scans may proceed in two phases. 1. A method for performing a discard scan operation by a processor device in a computing storage environment , comprising:examining a plurality of tracks for meeting a criteria for a discard scan; andin lieu of waiting for a completion of a track access operation, marking at least one of the plurality of tracks for demotion.2. The method of claim 1 , further including performing the marking in a first phase.3. The method of claim 2 , further including performing a cleanup demotion by a subsequent discard scan for those of the at least one of the plurality of tracks not demoted previously claim 2 , wherein the subsequent discard scan is performed in a second phase.4. The method of claim 2 , further including performing at least one of:commencing the discard scan in a hash table by marking the beginning of the at least one of the plurality of tracks,continuing the marking until reaching an end of the at least one of the plurality of tracks,setting a bit in the at least one of the plurality of tracks, andupdating a first index and a last index in the hash table.5. The method of claim 3 , further including performing at least one of:scanning a hash table between a first index and a last index, andsetting a flag to indicate a first phase and the second phase.6. The method of claim 5 , further including claim 5 , in conjunction with the setting the flag to indicate the first phase and the second phase claim 5 , performing at least one of:taking a spin lock, andreleasing the spin lock.7. The method of claim 1 , further including performing at least one of:issuing the discard scan based on a discard scan ...
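
The two phases can be sketched as follows: the first pass marks matching tracks for demotion without waiting for in-flight track accesses, and a later cleanup pass demotes whatever was still busy. The dictionary-based track records are illustrative only.

    def phase1_mark(tracks, matches_criteria):
        """Mark matching tracks for demotion without waiting for track access."""
        for t in tracks.values():
            if matches_criteria(t):
                t["demote_pending"] = True        # the marking bit
                if not t["access_in_progress"]:
                    demote(t)                     # free it immediately

    def phase2_cleanup(tracks):
        """Later scan: demote whatever was marked but still busy in phase 1."""
        for t in tracks.values():
            if t.get("demote_pending") and not t["access_in_progress"]:
                demote(t)

    def demote(t):
        t["demoted"] = True
        t["demote_pending"] = False

    tracks = {i: {"id": i, "access_in_progress": i == 2} for i in range(4)}
    phase1_mark(tracks, matches_criteria=lambda t: True)
    tracks[2]["access_in_progress"] = False
    phase2_cleanup(tracks)
    print(all(t["demoted"] for t in tracks.values()))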

Подробнее
05-09-2013 дата публикации

ADAPTIVE CACHE PROMOTIONS IN A TWO LEVEL CACHING SYSTEM

Номер: US20130232294A1

Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for the page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (LRU) end of the LRU list in the first cache. 1. A computer program product for managing data in a first cache and a second cache , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:maintaining a reference count in the second cache for a page when the page is stored in the second cache;determining that the page is to be promoted from the second cache to the first cache;in response to determining that the reference count is greater than zero, adding the page to a Least Recently Used (LRU) end of an LRU list in the first cache; andin response to determining that the reference count is less than or equal to zero, adding the page to a Most Recently Used (LRU) end of the LRU list in the first cache.2. The computer program product of claim 1 , wherein the first cache and the second cache are coupled to storage.3. The computer program product of claim 2 , wherein the first cache is a faster access device than the second cache claim 2 , and wherein the second cache is a faster access device than the storage.4. The computer program product of claim 2 , wherein the first cache comprises a Random Access Memory (RAM) claim 2 , the second cache comprises a flash device claim 2 , and the storage comprises a sequential write device.5. The computer ...
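
The promotion rule itself is small; the sketch below uses an OrderedDict as a stand-in for the first cache's LRU list (first entry = LRU end, last entry = MRU end), which is an assumption of this illustration rather than the patent's structure.

    from collections import OrderedDict

    first_cache = OrderedDict()              # key -> page data
    second_cache_refcount = {"A": 2, "B": 0}

    def promote(page_id, data):
        refs = second_cache_refcount.get(page_id, 0)
        first_cache[page_id] = data
        if refs > 0:
            first_cache.move_to_end(page_id, last=False)   # LRU end
        else:
            first_cache.move_to_end(page_id, last=True)    # MRU end

    promote("A", b"...")
    promote("B", b"...")
    print(list(first_cache))   # ['A', 'B']: A sits at the LRU end, B at the MRU end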

Подробнее
05-09-2013 дата публикации

Adaptive cache promotions in a two level caching system

Номер: US20130232295A1
Принадлежит: International Business Machines Corp

Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for a page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache.

Подробнее
12-09-2013 дата публикации

PERIODIC DESTAGES FROM INSIDE AND OUTSIDE DIAMETERS OF DISKS TO IMPROVE READ RESPONSE TIMES

Номер: US20130235709A1

A storage controller that includes a cache, receives a command from a host, wherein a set of criteria corresponding to read response times for executing the command have to be satisfied. A destage application that destages tracks based at least on recency of usage and spatial location of the tracks is executed, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks. Tracks are destaged from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command are satisfied. 1. A method , comprising:receiving, by a storage controller that includes a cache, a command from a host, wherein a set of criteria corresponding to read response times for executing the command have to be satisfied;executing a destage application that destages tracks based at least on recency of usage and spatial location of the tracks, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks; anddestaging tracks from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command are satisfied.2. The method of claim 1 , wherein by destaging tracks from the inside and outside diameters of disks at the periodic intervals claim 1 , read tracks that are relatively distant from a current location of a head are serviced by overriding the spatial ordering.3. The method of claim 1 , wherein the set of criteria specifies:average read response time is to be less than a first threshold; anda predetermined percentage of reads are to be performed in a time less than a second threshold.4. The method of claim 1 , the method further ...
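
One way to picture the destage ordering is a spatial sweep that is periodically overridden to service the disk extremes; the sketch below models that with a sorted list, a cursor for the head position, and an assumed override period. It is illustrative only, not the controller's actual scheduler.

    import bisect

    def destage_sequence(locations, start=0, period=8):
        """Yield dirty-track locations: a circular spatial sweep from `start`,
        with the inside/outside extremes serviced every `period` destages."""
        pending = sorted(locations)
        cursor, count = start, 0
        while pending:
            count += 1
            if count % period == 0 and len(pending) >= 2:
                yield pending.pop(0)        # inside-diameter track
                yield pending.pop(-1)       # outside-diameter track
            else:
                i = bisect.bisect_left(pending, cursor) % len(pending)
                cursor = pending.pop(i)     # next track along the sweep
                yield cursor

    print(list(destage_sequence([5, 90, 40, 10, 70, 95, 1, 50, 60, 30],
                                start=45, period=4)))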

Подробнее
19-09-2013 дата публикации

Adaptive prestaging in a storage controller

Номер: US20130246691A1
Принадлежит: International Business Machines Corp

In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Other features and aspects may be realized, depending upon the particular application.
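
A small sketch of scaling the prestage trigger and prestage amount with drive speed follows; the reference speed and the linear scaling rule are assumptions, not the formula of the description.

    def adapt_prestage(base_trigger, base_amount, drive_mbps, reference_mbps=150):
        factor = drive_mbps / reference_mbps
        trigger = max(1, round(base_trigger * factor))   # tracks left before refill
        amount = max(1, round(base_amount * factor))     # tracks fetched per prestage
        return trigger, amount

    print(adapt_prestage(base_trigger=4, base_amount=16, drive_mbps=600))   # faster drive
    print(adapt_prestage(base_trigger=4, base_amount=16, drive_mbps=100))   # slower drive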

Подробнее
03-10-2013 дата публикации

INDICATION OF A DESTRUCTIVE WRITE VIA A NOTIFICATION FROM A DISK DRIVE THAT EMULATES BLOCKS OF A FIRST BLOCK SIZE WITHIN BLOCKS OF A SECOND BLOCK SIZE

Номер: US20130262763A1
Принадлежит:

A disk drive receives a request to write at least one block of a first block size, wherein the disk drive is configured to store blocks of a second block size that is larger in size than the first block size. The disk drive stores a. plurality of emulated blocks of the first block size in each block of the second block size. The disk drive generates a read error, in response to reading a selected block of the second block size in which the at least block of the first block size is to be written via, an emulation. The disk drive performs a destructive write of selected emulated blocks of the first block size that caused the read error to be generated. The disk drive writes the at least one block of the first block size in the selected block of the second block size. 1. A method comprising:performing, by a disk drive, a destructive write of selected emulated blocks of a first block size that causes a read error to be generated;writing, by the disk drive, at least one block of the first block size in a selected block of a second block size; andsending, by the disk drive, a notification to indicate the performing of the destructive write.2. The method of claim 1 , wherein:the first block size is 512 bytes; andthe second block size is 4 Kilobytes.3. The method of claim 1 , wherein the notification is sent asynchronously to a controller claim 1 , the method further comprising:maintaining, by the disk drive, an indicator that indicates those emulated blocks on which the destructive write is performed, wherein a request to write the at least one block of the first block size is satisfied, even in response to the read error being generated.4. The method of claim 1 , the method further comprising:receiving, by a controller, the notification sent by the disk drive; andrestoring, by the controller, data in the selected emulated blocks on which the destructive write was performed by the disk drive, by copying the data from mirrored data corresponding to the data in the selected ...
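
The emulation path can be sketched as a read-modify-write that, on a read error, destroys the unreadable emulated blocks, completes the requested write, and notifies the controller of the destroyed logical blocks (which the controller could then restore from mirrored data). The drive stub and callback are illustrative assumptions; only the 512-byte-in-4-KiB sizing comes from the claims.

    SMALL, LARGE = 512, 4096
    PER_BLOCK = LARGE // SMALL   # 8 emulated blocks per physical block

    def write_emulated(drive, lba512, data, notify):
        phys = lba512 // PER_BLOCK
        offset = (lba512 % PER_BLOCK) * SMALL
        try:
            block = drive.read(phys)                      # read-modify-write
        except IOError:
            # read error: destroy the unreadable emulated blocks and keep going
            block = bytearray(LARGE)
            destroyed = [phys * PER_BLOCK + i for i in range(PER_BLOCK)
                         if i != lba512 % PER_BLOCK]
            notify(destroyed)                             # asynchronous notification
        block[offset:offset + SMALL] = data
        drive.write(phys, block)

    class FlakyDrive:
        def __init__(self): self.blocks = {}
        def read(self, phys):
            if phys not in self.blocks: raise IOError("unreadable physical block")
            return bytearray(self.blocks[phys])
        def write(self, phys, block): self.blocks[phys] = bytes(block)

    d = FlakyDrive()
    write_emulated(d, lba512=9, data=b"x" * 512, notify=print)  # prints destroyed emulated LBAs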

Подробнее
14-11-2013 дата публикации

DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING AN OCCUPANCY OF VALID TRACKS IN STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE

Номер: US20130304968A1

Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides. 1. A method for managing data in a computer readable cache system comprising a first cache , a second cache , and a storage system comprised of storage devices , comprising:maintaining information on strides configured in the second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks, wherein a stride having no valid tracks is empty;determining tracks to demote from the first cache;forming a first stride including the determined tracks to demote;adding the tracks from the first stride to a second stride in the second cache having an occupancy count indicating the stride is empty;determining a target stride in the second cache based on the occupancy counts of the strides in the second cache;determining at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache; andpopulating the target stride with the valid tracks from the source strides.2. The method of claim 1 , further comprising:invalidating the tracks in the source strides added to the target stride; andreducing the occupancy count of each of ...
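
A sketch of the consolidation step follows: the occupancy counts pick an empty (or least-occupied) target stride and at least two source strides, whose valid tracks are moved into the target. The stride-slot count and the pick rules are assumptions of this illustration.

    STRIDE_SLOTS = 8

    def consolidate(strides):
        """strides: dict stride_id -> list of valid track ids (occupancy = len)."""
        # target: an empty (or least-occupied) stride to repopulate
        target = min(strides, key=lambda s: len(strides[s]))
        sources = [s for s in sorted(strides, key=lambda s: -len(strides[s]))
                   if s != target and strides[s]][:2]          # at least two sources
        for s in sources:
            while strides[s] and len(strides[target]) < STRIDE_SLOTS:
                strides[target].append(strides[s].pop())       # move a valid track
        return target, sources

    strides = {"s0": [], "s1": [1, 2, 3], "s2": [4, 5, 6, 7, 8], "s3": [9]}
    print(consolidate(strides), strides)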

Подробнее
12-12-2013 дата публикации

SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY

Номер: US20130332645A1

A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache. 1. A method , comprising:maintaining, via computational device, a first type of cache and a second type of cache;receiving, from a host, a command to release space;synchronously discarding tracks from the first type of cache; andasynchronously discarding tracks from the second type of cache.2. The method of claim 1 , wherein the first type of cache is smaller in size than the second type of cache.3. The method of claim 2 , wherein the first type of cache is a dynamic random access memory (DRAM) cache and the second type of cache is a flash cache.4. The method of claim 1 , the method further comprising:determining whether discard scans from the first type of cache on an average take a time than is greater than a threshold amount of time; andin response to determining that discard scans from the first type of cache on an average take a time that is greater than the threshold amount of time, set discard scans for the first type of cache to execute asynchronously with the command from the host.5. The method of claim 4 , the method further comprising:in response to determining that discard scans from the first type of cache on a average take a time that is less than or equal to the threshold amount of time, set discard scans for the first type of cache to execute synchronously with the command from the host.6. The method of claim 1 , the method further comprising:determining whether a cache directory corresponding to the first type of cache is greater than a threshold amount of space; andin response to determining that the cache directory corresponding to the first type of cache is greater than the threshold amount of space, set discard scans for the first type of ...
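
The routing of a release-space command can be sketched as below: discards from the first (DRAM) cache run synchronously with the command unless their average scan time exceeds a threshold, while flash-cache discards always run asynchronously. The threshold value and the use of threads are assumptions of this illustration.

    import threading

    SCAN_TIME_THRESHOLD = 0.5   # seconds, assumed

    def release_space(extent, dram_cache, flash_cache, avg_dram_scan_time):
        if avg_dram_scan_time <= SCAN_TIME_THRESHOLD:
            discard_tracks(dram_cache, extent)          # synchronous with the command
        else:
            threading.Thread(target=discard_tracks,
                             args=(dram_cache, extent)).start()
        # flash cache discards always run asynchronously
        threading.Thread(target=discard_tracks, args=(flash_cache, extent)).start()

    def discard_tracks(cache, extent):
        for track in [t for t in cache if t[0] == extent]:
            cache.discard(track)

    dram, flash = {(1, t) for t in range(4)}, {(1, t) for t in range(64)}
    release_space(1, dram, flash, avg_dram_scan_time=0.1)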

Подробнее
12-12-2013 дата публикации

PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS

Номер: US20130332646A1

A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations with respect to the area of the cache. 1. A method , comprising:receiving, by a controller, a request to perform staging or destaging operations with respect to an area of a cache;determining whether one or more discard scans are being performed or queued for the area of the cache; andin response to determining that one or more discard scans are being performed or queued for the area of the cache, avoiding satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache.2. The method of claim 1 , the method further comprising:in response to determining that one or more discard scans are not being performed or queued for the area of the cache, satisfying the request to perform the staging or the destaging operations or the read hit with respect to the area of the cache.3. The method of claim 1 , wherein the cache is a flash cache and discard scans are performed asynchronously with respect to a request from a host to the controller to release space in the flash cache.4. The method of claim 1 , wherein the area of the cache corresponds to an extent claim 1 , a track claim 1 , a volume claim 1 , a logical subsystem or any other representation of storage.5. The method of claim 1 , wherein the cache is a flash cache claim 1 , wherein the controller maintains a plurality of logical subsystems claim 1 , wherein each logical subsystem stores a plurality of volumes claim 1 , wherein a logical storage group is a plurality of logical subsystems that is owned for input/output (I/O) ...
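
The gating logic reduces to a per-area counter of active or queued discard scans; the sketch below shows it, with the bookkeeping structure assumed rather than taken from the controller.

    active_or_queued_scans = {}     # area id -> count of discard scans

    def request_stage_or_destage(area, work):
        if active_or_queued_scans.get(area, 0) > 0:
            return False            # avoid satisfying the request for now
        work(area)
        return True

    def begin_discard_scan(area):
        active_or_queued_scans[area] = active_or_queued_scans.get(area, 0) + 1

    def end_discard_scan(area):
        active_or_queued_scans[area] -= 1

    begin_discard_scan("extent-7")
    print(request_stage_or_destage("extent-7", work=lambda a: None))   # False: deferred
    end_discard_scan("extent-7")
    print(request_stage_or_destage("extent-7", work=lambda a: None))   # True: satisfied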

Подробнее
16-01-2014 дата публикации

Automatically Preventing Large Block Writes from Starving Small Block Writes in a Storage Device

Номер: US20140019707A1

A mechanism is provided in a storage device for performing a write operation. The mechanism configures a write buffer memory with a plurality of write buffer portions. Each write buffer portion is dedicated to a predetermined block size category within a plurality of block size categories. For each write operation from an initiator, the mechanism determines a block size category of the write operation. The mechanism performs each write operation by writing to a write buffer portion within the plurality of write buffer portions corresponding to the block size category of the write operation. 1. A computer program product comprising a computer readable storage medium having a computer readable program stored therein , wherein the computer readable program , when executed on a processor of a storage device , causes the processor to:configure a write buffer memory with a plurality of write buffer portions, wherein each write buffer portion is dedicated to a predetermined block size category within a plurality of block size categories;for each write operation from an initiator, determine a block size category of the write operation; andperform each write operation by writing to a write buffer portion within the plurality of write buffer portions corresponding to the block size category of the write operation.2. The computer program product of claim 1 , wherein the computer readable program further causes the processor to:responsive to a write buffer portion corresponding to a block size category of a given write operation being full, update a blocking delay value for the block size category of the given write operation.3. The computer program product of claim 2 , wherein the computer readable program further causes the processor to:adjust sizes of the plurality of write buffer portions based on blocking delay values of the block size categories.4. The computer program product of claim 1 , wherein configuring the write buffer memory comprises configuring a shared buffer ...
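
A sketch of the dedicated write-buffer portions follows: each block-size category gets its own capacity, and a blocking-delay value is accumulated whenever that portion is full, which a background task could later use to resize the portions. Categories, sizes and the timing mechanism are assumptions of this illustration.

    import time

    class WriteBuffer:
        def __init__(self):
            self.capacity = {"small": 64, "medium": 32, "large": 16}   # slots per category
            self.used = {k: 0 for k in self.capacity}
            self.blocking_delay = {k: 0.0 for k in self.capacity}

        @staticmethod
        def category(block_size):
            if block_size <= 8 * 1024:  return "small"
            if block_size <= 64 * 1024: return "medium"
            return "large"

        def write(self, block_size):
            cat = self.category(block_size)
            if self.used[cat] >= self.capacity[cat]:
                start = time.monotonic()
                # (waiting for space in this portion only is elided in this sketch)
                self.blocking_delay[cat] += time.monotonic() - start
                return False
            self.used[cat] += 1          # buffered in the portion for this category
            return True

    buf = WriteBuffer()
    print(buf.write(4 * 1024), buf.write(1 * 1024 * 1024))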

Подробнее
13-02-2014 дата публикации

ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS

Номер: US20140047187A1

A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to the perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress. 1. A method , comprising:receiving, by a controller, a request to perform a release space operation;determining that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation; anddetermining how many task control blocks are to be allocated to the perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.2. The method of claim 1 , the method further comprising:in response to determining that the already allocated number of task control blocks exceed a threshold, allocating only one task control block to perform the new discard scan.3. The method of claim 1 , the method further comprising:in response to determining that the already allocated number of task control blocks does not exceed a threshold, allocating a plurality of task control blocks to perform the new discard scan.4. The method of claim 3 , wherein the allocating of the plurality of task control blocks to perform the new discard scan further comprises:allocating an anchor task control block and then allocating subscan task control blocks to perform the new discard scan, wherein the anchor task control block and each of the subscan task control blocks are executed in parallel.5. The method of claim 1 , the method further comprising:determining, whether input/output (I/O) operations on the cache that cannot be performed exceed a threshold number of PO operations, ...
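
The allocation decision can be sketched in a few lines: if the task control blocks already busy with discard scans exceed a threshold, the new scan gets a single TCB; otherwise it gets an anchor TCB plus parallel subscan TCBs. The threshold and per-scan count are assumed values.

    TCB_THRESHOLD = 40          # assumed ceiling on TCBs busy with discard scans
    TCBS_PER_SCAN = 8           # assumed: 1 anchor + 7 subscan TCBs

    def tcbs_for_new_scan(already_allocated):
        if already_allocated > TCB_THRESHOLD:
            return 1                                # only a single TCB
        return TCBS_PER_SCAN                        # anchor + parallel subscan TCBs

    print(tcbs_for_new_scan(12))   # 8
    print(tcbs_for_new_scan(48))   # 1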

Подробнее
06-03-2014 дата публикации

PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS

Номер: US20140068163A1
Принадлежит:

A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache. 1. A method , comprising:receiving, by a controller, a request to perform staging or destaging operations with respect to an area of a cache;determining whether one or more discard scans are being performed or queued for the area of the cache; andin response to determining that one or more discard scans are being performed or queued for the area of the cache, avoiding satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache.2. The method of claim 1 , the method further comprising:in response to determining that one or more discard scans are not being performed or queued for the area of the cache, satisfying the request to perform the staging or the destaging operations or the read hit with respect to the area of the cache.3. The method of claim 1 , wherein the cache is a flash cache and discard scans are performed asynchronously with respect to a request from a host to the controller to release space in the flash cache.4. The method of claim 1 , wherein the area of the cache corresponds to an extent claim 1 , a track claim 1 , a volume claim 1 , a logical subsystem or any other representation of storage.5. The method of claim 1 , wherein the cache is a flash cache claim 1 , wherein the controller maintains a plurality of logical subsystems claim 1 , wherein each logical subsystem stores a plurality of volumes claim 1 , wherein a logical storage group is a plurality of logical subsystems that is owned for input/ ...

Подробнее
06-03-2014 дата публикации

ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS

Номер: US20140068189A1
Принадлежит:

A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to the perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress. 1. A method , comprising:receiving, by a controller, a request to perform a release space operation;determining that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation; anddetermining how many task control blocks are to be allocated to the perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.2. The method of claim 1 , the method further comprising:in response to determining that the already allocated number of task control blocks exceed a threshold, allocating only one task control block to perform the new discard scan.3. The method of claim 1 , the method further comprising:in response to determining that the already allocated number of task control blocks does not exceed a threshold, allocating a plurality of task control blocks to perform the new discard scan.4. The method of claim 3 , wherein the allocating of the plurality of task control blocks to perform the new discard scan further comprises:allocating an anchor task control block and then allocating subscan task control blocks to perform the new discard scan, wherein the anchor task control block and each of the subscan task control blocks are executed in parallel.5. The method of claim 1 , the method further comprising:determining, whether input/output (I/O) operations on the cache that cannot be performed exceed a threshold number of I/O operations, ...

Подробнее
06-03-2014 дата публикации

SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY

Номер: US20140068191A1
Принадлежит:

A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache. 1. A method , comprising:maintaining, via computational device, a first type of cache and a second type of cache;receiving, from a host, a command to release space;synchronously discarding tracks from the first type of cache; andasynchronously discarding tracks from the second type of cache.2. The method of claim 1 , wherein the first type of cache is smaller in size than the second type of cache.3. The method of claim 2 , wherein the first type of cache is a dynamic random access memory (DRAM) cache and the second type of cache is a flash cache.4. The method of claim 1 , the method further comprising:determining whether discard scans from the first type of cache on an average take a time than is greater than a threshold amount of time; andin response to determining that discard scans from the first type of cache on an average take a time that is greater than the threshold amount of time, set discard scans for the first type of cache to execute asynchronously with the command from the host.5. The method of claim 4 , the method further comprising:in response to determining that discard scans from the first type of cache on a average take a time that is less than or equal to the threshold amount of time, set discard scans for the first type of cache to execute synchronously with the command from the host.6. The method of claim 1 , the method further comprising:determining whether a cache directory corresponding to the first type of cache is greater than a threshold amount of space; andin response to determining that the cache directory corresponding to the first type of cache is greater than the threshold amount of space, set discard scans for the first type of ...

Подробнее
13-03-2014 дата публикации

Replicating tracks from a first storage site to a second and third storage sites

Номер: US20140075114A1
Принадлежит: International Business Machines Corp

Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage.
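
A sketch of the per-stride flow follows: stage the stride from the first storage into cache, destage it to the second storage, mirror it to the third, and only then demote it from cache. The storage and cache stand-ins are assumptions made for illustration.

    def replicate_stride(first, second, third, cache, stride_id):
        stride = first.read_stride(stride_id)               # stage per point-in-time copy
        cache[stride_id] = stride
        second.write_stride(stride_id, cache[stride_id])    # destage to the second storage
        third.write_stride(stride_id, cache[stride_id])     # mirror copy to the third storage
        del cache[stride_id]                                # demote only after both copies exist

    class Store(dict):
        def read_stride(self, sid): return self.get(sid, b"")
        def write_stride(self, sid, data): self[sid] = data

    f, s, t, cache = Store({7: b"stride7"}), Store(), Store(), {}
    replicate_stride(f, s, t, cache, 7)
    print(s[7] == t[7] == b"stride7", cache)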

Подробнее
20-03-2014 дата публикации

RECOVERY FROM CACHE AND NVS OUT OF SYNC

Номер: US20140082256A1

For cache/data management in a computing storage environment, incoming data segments into a Non Volatile Storage (NVS) device of the computing storage environment are validated against a bitmap to determine if the incoming data segments are currently in use. Those of the incoming data segments determined to be currently in use are designated to the computing storage environment to protect data integrity. 1. A method for data management in a computing storage environment by a processor device , comprising:validating incoming data segments into a Non Volatile Storage (NVS) device of the computing storage environment against a bitmap to determine if the incoming data segments are currently in use; anddesignating those of the incoming data segments determined to be currently in use to the computing storage environment to protect data integrity.2. The method of claim 1 , further including configuring the bitmap.3. The method of claim 1 , further including performing the validating by an NVS Network Adapter (NA) associated with the NVS device.4. The method of claim 1 , further including performing the validating by comparing an incoming Non Volatile Storage Control Block (NVSCB) against the bitmap.5. The method of claim 1 , further including claim 1 , pursuant to designating those of the incoming data segments claim 1 , performing at least one of pinning and reporting the designated incoming data segments as data loss.6. The method of claim 1 , further including claim 1 , at one of an Initial Memory Load (IML) and a Warmstart claim 1 , clearing and rebuilding the bitmap. This application is a Continuation of U.S. patent application Ser. No. 13/617,076, filed on Sep. 14, 2012.The present invention relates in general computing systems, and more particularly to, systems and methods for increased cache and data management efficiency in computing storage environments.In today's society, computer systems are commonplace. Computer systems may be found in the workplace, at home, ...
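
A sketch of the validation step: incoming segment numbers are checked against an in-use bitmap, colliding segments are reported (to be pinned or reported as data loss), and the rest are marked in use. The bitmap granularity and the reporting hook are assumptions of this illustration.

    class NVS:
        def __init__(self, segments):
            self.in_use = [False] * segments      # the bitmap, cleared and rebuilt at IML/warmstart

        def accept(self, segment_ids, report_data_loss):
            clashing = [s for s in segment_ids if self.in_use[s]]
            if clashing:
                report_data_loss(clashing)        # pin / report to protect data integrity
            for s in segment_ids:
                if s not in clashing:
                    self.in_use[s] = True
            return clashing

    nvs = NVS(16)
    nvs.accept([1, 2, 3], report_data_loss=print)
    print(nvs.accept([3, 4], report_data_loss=print))   # segment 3 is already in use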

Подробнее
20-03-2014 дата публикации

EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS

Номер: US20140082277A1

For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. A total number of I/O operations to be awoken at each of an iterated instance of the waking is limited. 1. A method for cache management by a processor device in a computing storage environment , the method comprising:for a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, initiating a process, separate from a process responsible for the data assembly into the complete data tracks, for waking a predetermined number of the waiting I/O operations, wherein a total number of I/O operations to be awoken at each of an iterated instance of the waking is limited.2. The method of claim 1 , further including performing the waking process for a first iteration subsequent to the data assembly process building at least one complete data track.3. The method of claim 2 , further including claim 2 , pursuant to the waking process claim 2 , removing claim 2 , by a first I/O waiter claim 2 , the at least one complete data track off of a free list.4. The method of claim 3 , further including claim 3 , pursuant to the waking process claim 3 , if additional complete data tracks are available on the free list claim 3 , waking at least a second I/O waiter to remove the additional complete data tracks off the free list.5. The method of claim 4 , further including iterating through at least one additional waking process corresponding to a predetermined wake up depth.6. The method of claim 1 , further including setting the predetermined number of waiting I/O operations to be awoken according to the waking process. This application is a Continuation of U.S. patent application Ser. No. 13/616,902, filed on Sep. 14, 2012.The present invention relates in ...
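
The bounded wake-up can be sketched as below: a separate pass hands complete tracks from the free list to waiting I/O operations, waking at most a fixed number of waiters per iteration. The depth value and the deque-based queues are assumptions, not the cache manager's structures.

    from collections import deque

    WAKE_UP_DEPTH = 4            # assumed cap on waiters awoken per iteration

    def wake_waiters(free_tracks, waiters):
        """free_tracks: deque of assembled complete tracks; waiters: deque of callbacks."""
        woken = 0
        while free_tracks and waiters and woken < WAKE_UP_DEPTH:
            track = free_tracks.popleft()      # the waiter takes the track off the free list
            waiters.popleft()(track)
            woken += 1
        return woken

    free = deque(["t1", "t2", "t3", "t4", "t5", "t6"])
    waiting = deque([lambda t, i=i: print("waiter", i, "got", t) for i in range(6)])
    print(wake_waiters(free, waiting), "waiters awoken this iteration")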

Подробнее