Total found: 296. Displayed: 116.
Publication date: 25-12-2013

Sharing aggregated cache hit and miss data in a storage area network

Number: GB0002503266A
Assignee:

A number of computer systems LS1, LS2 are connected to a shared data storage system CS to access data D. The computer systems each have a local cache CM1, CM2 and run applications A1, A2. The computer systems provide information about cache hits H and misses M to the storage system. The storage system aggregates the information and provides the aggregated information ACD to the computer systems. The computer systems then use the aggregated information to update the cached data. The computer system may populate the cache with one or more subsets of the data identified in the aggregated cache information. The computer system may immediately populate the cache with data identified in some of the subsets and may add data identified in other subsets to a watch list. Data corresponding to a local cache miss may also be put in the watch list.
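
The feedback loop in this abstract is easy to model; below is a minimal Python sketch, assuming per-item hit/miss counters and a simple split of the aggregated result into a "populate now" subset and a watch list. All class and method names are illustrative, not taken from the patent.

```python
from collections import Counter

class SharedStorage:
    """Shared data storage system (CS) that aggregates per-item cache statistics."""
    def __init__(self):
        self.hits, self.misses = Counter(), Counter()

    def report(self, hits, misses):
        # Each computer system periodically reports its per-item hit/miss counts.
        self.hits.update(hits)
        self.misses.update(misses)

    def aggregated(self, populate_now=2):
        # Items missed most often across all hosts, split into a subset to
        # populate immediately and a subset to put on a watch list.
        ranked = [item for item, _ in self.misses.most_common()]
        return ranked[:populate_now], ranked[populate_now:]

class LocalSystem:
    """Computer system (LS1, LS2) with a local cache (CM1, CM2)."""
    def __init__(self, storage):
        self.storage = storage
        self.cache, self.watch_list = set(), set()
        self.hits, self.misses = Counter(), Counter()

    def access(self, item):
        if item in self.cache:
            self.hits[item] += 1
        else:
            self.misses[item] += 1
            self.watch_list.add(item)   # a local cache miss also goes on the watch list

    def sync(self):
        self.storage.report(self.hits, self.misses)
        populate, watch = self.storage.aggregated()
        self.cache.update(populate)     # populate these immediately
        self.watch_list.update(watch)   # only watch these for now
        self.hits.clear(); self.misses.clear()

cs = SharedStorage()
ls1, ls2 = LocalSystem(cs), LocalSystem(cs)
for item in ("d1", "d2", "d1", "d3"):
    ls1.access(item); ls2.access(item)
ls1.sync(); ls2.sync()
print(ls1.cache)   # the items missed most often across both hosts
```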

Publication date: 05-06-2019

Workload optimized data deduplication using ghost fingerprints

Number: GB0002569060A
Assignee:

A controller of a data storage system generates fingerprints of data blocks written to the data storage system. The controller maintains, in a data structure, respective state information for each of a plurality of data blocks. The state information for each data block can be independently set to indicate any of a plurality of states, including at least one deduplication state and at least one non-deduplication state. At allocation of a data block, the controller initializes the state information for the data block to a non-deduplication state and, thereafter, in response to detection of a write of duplicate of the data block to the data storage system, transitions the state information for the data block to a deduplication state. The controller selectively performs data deduplication for data blocks written to the data storage system based on the state information in the data structure and by reference to the fingerprints.
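
A compact sketch of the per-block state machine described above, assuming SHA-256 fingerprints and an in-memory fingerprint index; the names, and the reduction of the patent's fingerprint bookkeeping to a plain dictionary, are illustrative only.

```python
import hashlib

NON_DEDUP, DEDUP = "non-dedup", "dedup"

class DedupController:
    """Per-block state machine: a block starts in a non-deduplication state and
    is transitioned to a deduplication state only once a duplicate write of its
    data is detected via the fingerprint index."""
    def __init__(self):
        self.state = {}         # block id -> NON_DEDUP or DEDUP
        self.fingerprints = {}  # fingerprint -> block id holding the first copy
        self.blocks = {}        # block id -> data (stands in for the media)

    def write(self, block_id, data: bytes):
        fp = hashlib.sha256(data).hexdigest()
        owner = self.fingerprints.get(fp)
        if owner is not None and self.blocks[owner] == data:
            # Duplicate write detected: deduplicate against the owning block
            # and transition that block's state.
            self.state[owner] = DEDUP
            return owner
        # First copy: allocate the block and initialize it to the non-dedup state.
        self.blocks[block_id] = data
        self.fingerprints[fp] = block_id
        self.state[block_id] = NON_DEDUP
        return block_id

ctl = DedupController()
first = ctl.write("blk-1", b"hello")
second = ctl.write("blk-2", b"hello")     # duplicate resolves to blk-1
print(first, second, ctl.state["blk-1"])  # blk-1 blk-1 dedup
```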

Publication date: 05-11-2014

Dynamically adjusted threshold for population of secondary cache

Number: GB0002513741A
Assignee:

The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
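
The admit-or-reject rule and the threshold adjustment map onto a few lines of Python; this sketch assumes integer heat values and a unit adjustment step.

```python
class SecondaryCacheAdmission:
    """Admit a candidate only if its heat clears a threshold; the threshold
    drifts depending on whether recently inserted data gets more hits than
    recently evicted data."""
    def __init__(self, threshold=2):
        self.threshold = threshold

    def adjust(self, hits_recently_inserted, hits_recently_evicted):
        if hits_recently_inserted > hits_recently_evicted:
            self.threshold -= 1   # the cache is choosing well: admit more
        elif hits_recently_inserted < hits_recently_evicted:
            self.threshold += 1   # useful data is being evicted: admit less

    def admit(self, heat):
        return heat >= self.threshold

policy = SecondaryCacheAdmission(threshold=2)
policy.adjust(hits_recently_inserted=10, hits_recently_evicted=3)
print(policy.threshold, policy.admit(heat=1))   # 1 True
```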

Publication date: 25-06-2014

Promotion of partial data segments in flash cache

Number: GB0002509289A
Assignee:

Exemplary method, system, and computer program product embodiments for efficient track destage in secondary storage in a more effective manner, are provided. In one embodiment, by way of example only, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. Additional system and computer program product embodiments are disclosed and provide related advantages.

Publication date: 26-11-2014

Adaptive cache promotions in a two level caching System

Number: GB0002514501A
Assignee:

Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for the page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache.
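
A minimal sketch of the promotion rule, treating the front of an OrderedDict as the LRU end of the list; the second cache's reference-count bookkeeping is reduced to a plain dictionary for illustration.

```python
from collections import OrderedDict, defaultdict

class TwoLevelCache:
    """The first cache is an LRU list; a page promoted from the second cache is
    placed at the LRU end if its second-cache reference count is positive,
    otherwise at the MRU end."""
    def __init__(self):
        self.first = OrderedDict()        # ordered front (LRU end) -> back (MRU end)
        self.refcount = defaultdict(int)  # maintained in the second cache

    def promote(self, page):
        self.first[page] = True
        if self.refcount[page] > 0:
            self.first.move_to_end(page, last=False)   # LRU end
        else:
            self.first.move_to_end(page, last=True)    # MRU end

cache = TwoLevelCache()
cache.refcount["p1"] = 3
cache.promote("p1")       # positive reference count: lands at the LRU end
cache.promote("p2")       # reference count 0: lands at the MRU end
print(list(cache.first))  # ['p1', 'p2']: p1 sits at the LRU end, closest to eviction
```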

Publication date: 31-10-2012

Integrating a flash cache into large storage systems

Number: GB0002490412A
Assignee:

An I/O enclosure module is provided with one or more I/O enclosures having a plurality of slots for receiving electronic devices. A host adapter is connected a first slot of the I/O enclosure module and is configured to connect a host to the I/O enclosure. A device adapter is connected to a second slot of the I/O enclosure module and is configured to connect a storage device to the I/O enclosure module. A flash cache is connected to a third slot of the I/O enclosure module and includes a flash-based memory configured to cache data associated with data requests handled through the I/O enclosure module. A primary processor complex manages data requests handled through the I/O enclosure module by communicating with the host adapter, device adapter, and flash cache to manage to the data requests.

Publication date: 30-01-2013

Determining hot data in a storage system using counting bloom filters

Number: GB0002493243A
Assignee:

Determining a characteristic of a data entity based on a frequency of access to said data entity in a storage system using a counting bloom filter (CBF) comprising a set (S) of counters (C1); and a data structure having a set of elements each corresponding to a counter. To avoid counter overflow the counting bloom filter is operated for an interval in time wherein the set of counters are reset at the start of the interval. Each time said data entity is accessed during the interval a value of at least one counter (C1) to which said data entity is mapped in the counting bloom filter is increased. At the end of the interval the values of the elements in the data structure are updated based on the current value of that element and the value of the counter to which it is assigned. The interval in time may be a predefined number of accesses. A plurality of counting bloom filters can be used. The method may produce a heat map which is used for selectively populating a cache with 'hot' data ...
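
A small Python model of the interval-based counting Bloom filter sketched above; the hash construction, filter size, and the decayed update of each element at the end of the interval are assumptions made for illustration.

```python
import hashlib

class HotDataDetector:
    """Interval-based counting Bloom filter: counters are reset at the start of
    each interval; at the end of the interval each element of a persistent
    structure is updated from its own current value and its assigned counter."""
    def __init__(self, size=64, k=3, decay=0.5):
        self.size, self.k, self.decay = size, k, decay
        self.counters = [0] * size      # the CBF counters, reset every interval
        self.elements = [0.0] * size    # one persistent element per counter

    def _slots(self, entity):
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{entity}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def start_interval(self):
        self.counters = [0] * self.size     # reset to avoid counter overflow

    def access(self, entity):
        for slot in self._slots(entity):
            self.counters[slot] += 1

    def end_interval(self):
        self.elements = [self.decay * e + c
                         for e, c in zip(self.elements, self.counters)]

    def heat(self, entity):
        # Bloom-style estimate: the minimum over the entity's elements.
        return min(self.elements[s] for s in self._slots(entity))

cbf = HotDataDetector()
cbf.start_interval()
for _ in range(5):
    cbf.access("track-42")
cbf.access("track-7")
cbf.end_interval()
print(cbf.heat("track-42") > cbf.heat("track-7"))   # expect True: track-42 is the hot entity
```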

Publication date: 30-07-2014

Method and system for selective space reclamation of data storage memory employing heat and relocation metrics

Number: GB0002510308A
Assignee:

A method and computer program product for reclaiming space of a data storage memory of a data storage memory system, and a computer-implemented data storage memory system are provided. The method includes: determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics of the data relating the heat metrics to the relocation metrics for the data; and making the data whose utility metric fails a utility metric threshold, available for space reclamation. Thus, data that otherwise may be evicted or demoted, but that meets or exceeds the utility metric threshold, is exempted from space reclamation and is instead maintained in the data storage memory.
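
A sketch of the utility test described above; the abstract does not say how the heat and relocation metrics are combined, so the ratio used below is an assumption.

```python
def select_for_reclamation(extents, utility_threshold):
    """Split extents into reclamation candidates and exempted (retained) data.
    Each extent is (name, heat, relocation_count)."""
    eligible, exempt = [], []
    for name, heat, relocations in extents:
        utility = heat / (1 + relocations)   # assumed way of relating the two metrics
        if utility >= utility_threshold:
            exempt.append(name)              # kept in the data storage memory
        else:
            eligible.append(name)            # made available for space reclamation
    return eligible, exempt

extents = [("e1", 8, 1), ("e2", 1, 6), ("e3", 4, 0)]
print(select_for_reclamation(extents, utility_threshold=1.0))
# (['e2'], ['e1', 'e3']): cold, frequently relocated data is reclaimed first
```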

Publication date: 16-09-2015

Storage device with 2D configuration of phase change memory integrated circuits

Number: GB0002524003A
Assignee:

A storage device 100 and method of operation (figure 5), comprising a channel controller 10 and phase change memory integrated circuits 20, (PCM ICs) arranged in sub-channels 30, wherein each of the sub-channels 30 comprises several PCM ICs connected by at least one data bus line 35, in which at least one data bus line connects to the channel controller which is configured to write data to and/or read data from the PCM ICs according to a matrix configuration of PCM ICs. The number of columns of this matrix configuration respectively corresponds to a number Ns of the sub-channels, Ns ≥ 2, the sub-channels 30 forming a channel; and a number of rows of this matrix configuration respectively corresponds to a number Nl of sub-banks (40), Nl ≥ 2, the sub-banks 40 forming a bank, wherein each of the sub-banks 40 comprises PCM ICs that belong, each, to a distinct sub-channel 30. The channel controller is configured to break data (71,72 figure 2) to be written to the PCM ICs into data chunks ...

Publication date: 18-03-2015

Method and system for allocating resources from storage device into stored optimization operations

Number: CN104424106A
Assignee:

The invention discloses a method for allocating resources from a storage device into stored optimization operations executed by a machine and a system thereof. The method comprises the following steps: monitoring available resources from the storage device; based on historical operation information of the machine and at least one predicated value about performance improvement extents of the stored optimization operations to the machine, determining the allocation proportion of resources allocated into the stored optimization operations; based on the available resources and the allocation proportion, allocating the resources of the storage device into the stored optimization operations. According to the method and the system, the resources can be rationally allocated into the stored optimization operations so that the stored optimization operation can be performed in the machine to improve the long-term storage performance, at the same time, the workload of the normal clients operated in ...

Publication date: 05-03-2015

SELECTIVELY ENABLING WRITE CACHING IN A STORAGE SYSTEM BASED ON PERFORMANCE METRICS

Number: US20150067271A1

According to a method of cache management in a data storage system including a write cache and bulk storage media, a storage controller of the data storage system caches, in the write cache, write data of write input/output operations (IOPs) received at the storage controller. In response to a first performance-related metric for the data storage system satisfying a first threshold, the storage controller decreases a percentage of write IOPs for which write data is cached in the write cache of the data storage system and increases a percentage of write IOPs for which write data is stored directly in the bulk storage media in lieu of the write cache. In response to a second performance-related metric for the data storage system satisfying a second threshold, the storage controller increases the percentage of write IOPs for which write data is cached in the write cache of the data storage system.
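
A sketch of the throttling behaviour, with hypothetical metric names, thresholds, and a ten-point step; the abstract only states that the cached percentage is decreased past a first threshold and increased past a second.

```python
import random

class WriteCacheGovernor:
    """Caches a tunable percentage of incoming write IOPs; the percentage is
    stepped down when a pressure metric crosses its threshold (those writes go
    straight to bulk storage) and stepped back up when a recovery metric
    crosses its threshold."""
    def __init__(self, cache_pct=100, step=10):
        self.cache_pct, self.step = cache_pct, step

    def on_metrics(self, pressure, pressure_limit, recovery, recovery_limit):
        if pressure >= pressure_limit:
            self.cache_pct = max(0, self.cache_pct - self.step)
        if recovery <= recovery_limit:
            self.cache_pct = min(100, self.cache_pct + self.step)

    def route(self):
        # Destination for the next write's data.
        return "write cache" if random.randrange(100) < self.cache_pct else "bulk storage"

gov = WriteCacheGovernor()
gov.on_metrics(pressure=95, pressure_limit=90, recovery=40, recovery_limit=30)
print(gov.cache_pct)   # 90: one in ten writes now bypasses the write cache
```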

Publication date: 11-04-2019

TECHNIQUES FOR RETENTION AND READ-DISTURB AWARE HEALTH BINNING

Number: US20190107959A1
Assignee:

A technique for performing health binning in a storage system includes in response to a block having a retention time below a first threshold and a read count below a second threshold, utilizing a first error metric for health binning. The first error metric is a current error metric for the block. In response to the block having a retention time above a third threshold that is greater than or equal to the first threshold or a read count above a fourth threshold that is greater than or equal to the second threshold, utilizing a second error metric that is not the same as the current block error metric for health binning. 1. A method of performing health binning in a storage system , comprising:in response to a block having a retention time below a first threshold and a read count below a second threshold, utilizing, by a controller, a first error metric for health binning, wherein the first error metric is a current error metric for the block; andin response to the block having a retention time above a third threshold that is greater than or equal to the first threshold or a read count above a fourth threshold that is greater than or equal to the second threshold, utilizing, by the controller, a second error metric that is not the same as the current block error metric for health binning.2. The method of claim 1 , wherein the second error metric is associated with a previous error metric for the block that is determined when the block had a retention time below the first threshold and a read count below the second threshold.3. The method of claim 1 , wherein the second error metric is an estimated error metric for the block.4. The method of claim 1 , wherein the second error metric is associated with a previous error metric for the block that is determined when the block had a retention time below the first threshold and a read count below the second threshold claim 1 , and wherein in response to a program/erase cycle count of the block increasing above a fifth ...

Publication date: 24-05-2012

WRITE CACHE STRUCTURE IN A STORAGE SYSTEM

Number: US20120131265A1

A method of writing data units to a storage device. The data units are cached in a first level cache sorted by logical address. A group (G) of sorted data units is transferred from the first level cache to a second level cache embodied in a solid state memory device. Data units of multiple groups (G) are sorted in the second level cache by logical address. The sorted data units stemming from the multiple groups are written to the storage device. 1. A method of writing data units to a storage device, comprising: caching the data units in a first level cache by logical address to form sorted data units in the first level cache; handing over a group (Gj) of the sorted data units in the first level cache to a second level cache embodied in a solid state memory device to form a group of data units residing in the second level cache; sorting, by logical address, data units of multiple groups (Gj) residing in the second level cache to form sorted data units of the multiple groups; and writing the sorted data units of the multiple groups to the storage device. 2. The method according to claim 1, wherein the group (Gj) of the sorted data units to be handed over to the second level cache is selected from the sorted data units in the first level cache according to the time passed since usage of the sorted data unit in the first level cache. 3. The method according to claim 1, wherein the first level cache has an ordered list structure comprising cells (ci), wherein each of the data units is sequentially cached in one of the cells (ci) by logical address; wherein each data unit stored in one of the cells (ci) is associated with a flag (fi) for indicating an update of the data unit since it has been stored in the cell (ci); and wherein the group (Gj) of sorted data units to be handed over to the second level cache is formed by selecting all data units not indicating the update. 4. The ...
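
A toy model of the two-level write path: a first-level cache sorted by logical address, hand-over of a sorted group, and a merge of several groups before they are written out; heapq.merge stands in for the second-level sort and all names are illustrative.

```python
import heapq

class TwoLevelWriteCache:
    """First-level cache keeps data units sorted by logical address; a sorted
    group is handed over to the second level, and several such groups are
    merge-sorted before being written to the storage device."""
    def __init__(self, storage):
        self.first_level = {}    # lba -> data
        self.second_level = []   # list of sorted groups handed over
        self.storage = storage   # destination dict, stands in for the disk

    def write(self, lba, data):
        self.first_level[lba] = data

    def hand_over(self):
        group = sorted(self.first_level.items())   # one sorted group
        self.first_level.clear()
        self.second_level.append(group)

    def destage(self):
        # Merge the sorted groups by logical address and write sequentially.
        for lba, data in heapq.merge(*self.second_level):
            self.storage[lba] = data
        self.second_level.clear()

disk = {}
cache = TwoLevelWriteCache(disk)
for lba in (30, 10, 20):
    cache.write(lba, f"unit-{lba}")
cache.hand_over()
cache.write(15, "unit-15")
cache.hand_over()
cache.destage()
print(list(disk))   # [10, 15, 20, 30]: units reach the disk in logical-address order
```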

Publication date: 22-11-2012

REDUCING ACCESS CONTENTION IN FLASH-BASED MEMORY SYSTEMS

Number: US20120297128A1

Exemplary embodiments include a method for reducing access contention in a flash-based memory system, the method including selecting a chip stripe in a free state, from a memory device having a plurality of channels and a plurality of memory blocks, wherein the chip stripe includes a plurality of pages, setting the ship stripe to a write state, setting a write queue head in each of the plurality of channels, for each of the plurality of channels in the flash stripe, setting a write queue head to a first free page in a chip belonging to the channel from the chip stripe, allocating write requests according to a write allocation scheduler among the channels, generating a page write and in response to the page write, incrementing the write queue head, and setting the chip stripe into an on-line state when it is full. 1. A method for reducing access contention in a memory chip having a channel and a memory blocks , the method comprising:setting a write state and a write queue head within the channel;setting the write queue head to a first free page in the channel;allocating write requests according to a write allocation scheduler for the channel;generating a page write;in response to the page write, incrementing the write queue head;andsetting an on-line state in the channel.2. The method as claimed in further comprising triggering garbage collection.3. The method as claimed in wherein garbage collection is triggered in response to a change to the write state in the channel.4. The method as claimed in wherein garbage collection is triggered in response to reaching a predetermined threshold in the channel.5. The method as claimed in wherein garbage collection is triggered in response to a memory block becoming full.6. The method as claimed in wherein garbage collection comprises:setting a cleaning state in the channel;reading page meta-data and LBA-to-PBA mapping information;in response to a page being valid:reading full page data;obtaining a target location for the page; ...
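
A minimal model of the per-channel write queue heads described above; round-robin allocation across channels and the stripe geometry are illustrative assumptions.

```python
import itertools

class ChipStripe:
    """One write queue head per channel; writes are spread over the channels
    round-robin, each write advancing that channel's head to the next free
    page, and the stripe goes on-line once every channel is full."""
    def __init__(self, channels, pages_per_chip):
        self.pages_per_chip = pages_per_chip
        self.heads = {ch: 0 for ch in range(channels)}   # next free page per channel
        self.state = "write"
        self._rr = itertools.cycle(range(channels))

    def allocate_write(self, data):
        if self.state != "write":
            raise RuntimeError("stripe is not accepting writes")
        for _ in self.heads:                   # skip channels that are already full
            ch = next(self._rr)
            if self.heads[ch] < self.pages_per_chip:
                page = (ch, self.heads[ch])
                self.heads[ch] += 1            # increment the write queue head
                if all(h == self.pages_per_chip for h in self.heads.values()):
                    self.state = "on-line"     # the stripe is full
                return page
        raise RuntimeError("stripe is full")

stripe = ChipStripe(channels=2, pages_per_chip=2)
print([stripe.allocate_write(b"x") for _ in range(4)], stripe.state)
# [(0, 0), (1, 0), (0, 1), (1, 1)] on-line
```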

Publication date: 21-02-2013

OPTIMIZING LOCATIONS OF DATA ACCESSED BY CLIENT APPLICATIONS INTERACTING WITH A STORAGE SYSTEM

Number: US20130046930A1

A method for optimizing locations of physical data accessed by one or more client applications interacting with a storage system, with the storage system comprising at least two redundancy groups having physical memory spaces and data bands. Each of the data bands corresponds to physical data stored on several of the physical memory spaces. A virtualized logical address space includes client data addresses utilizable by the one or more client applications. A storage controller is configured to map the client data addresses onto the data bands, such that a mapping is obtained, wherein the one or more client applications can access physical data corresponding to the data bands. 1. A method for optimizing locations of physical data accessed by one or more client applications interacting with a storage system , the storage system comprising:a set of at least two redundancy groups, wherein each of the redundancy groups comprises physical memory spaces and data bands, each of the data bands corresponding to physical data stored on several of the physical memory spaces of the each of the redundancy groups;a virtualized logical address space, comprising client data addresses utilizable by the one or more client applications; and 'swapping locations of physical data corresponding to one data band of one of the at least two redundancy groups with physical data of another data band of another one of the at least two redundancy groups, based on data access needs of the one or more client applications and accordingly updating the mapping.', 'a storage controller configured to map the client data addresses onto the data bands, such that a mapping is obtained, whereby the one or more client applications can access physical data corresponding to the data bands, the method comprising, at the storage controller2. The method of claim 1 , wherein at the step of swapping claim 1 , the one data band and the another data band are constrained to belong to a same data band group claim 1 , ...

Publication date: 21-02-2013

OPTIMIZING LOCATIONS OF DATA ACCESSED BY CLIENT APPLICATIONS INTERACTING WITH A STORAGE SYSTEM

Number: US20130046931A1

A method for optimizing locations of physical data accessed by one or more client applications interacting with a storage system, with the storage system comprising at least two redundancy groups having physical memory spaces and data bands. Each of the data bands corresponds to physical data stored on several of the physical memory spaces. A virtualized logical address space includes client data addresses utilizable by the one or more client applications. A storage controller is configured to map the client data addresses onto the data bands, such that a mapping is obtained, wherein the one or more client applications can access physical data corresponding to the data bands. 112-. (canceled)13. A storage system configured for optimizing locations of physical data accessed by one or more client applications , comprising:a set of at least two redundancy groups, wherein each of the redundancy groups comprises physical memory spaces and data bands, each of the data bands corresponding to physical data stored on several of the physical memory spaces of the each of the redundancy groups;a virtualized logical address space, comprising client data addresses utilizable by the one or more client applications; and 'map the client data addresses onto the data bands, such that a mapping is obtained, whereby the one or more client applications can access physical data corresponding to the data bands; and', 'a storage controller configured toswap locations of physical data corresponding to one data band of one of the at least two redundancy groups with physical data of another data band of another one of the at least two redundancy groups, based on data access needs of the one or more client applications, and accordingly update the mapping.14. The storage system of claim 13 , wherein the storage controller is further configured to constrain the one data band and the another data band to belong to a same data band group claim 13 , the latter defined within a set of data band ...

Publication date: 02-05-2013

PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE

Number: US20130111106A1

Exemplary method, system, and computer program product embodiments for efficient track destage in secondary storage in a more effective manner, are provided. In one embodiment, by way of example only, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. Additional system and computer program product embodiments are disclosed and provide related advantages. 1. A method for promoting partial data segments in a computing storage environment having lower and higher speed levels of cache by a processor , comprising: allowing the partial data segments to remain in the higher speed cache level for a time period longer that at least one whole data segment, and', 'implementing a preference for movement of the partial data segments to the lower speed cache level based on at least one of an amount of holes and a data heat metric, wherein a first of the partial data segments having at least one of a lower amount of holes and a hotter data heat is moved to the lower speed cache level ahead of a second of the partial data segments having at least one of a higher amount of holes and a cooler data heat., 'configuring a data moving mechanism adapted for performing at least one of2. The method of claim 1 , further including claim 1 , pursuant to configuring the data mover mechanism claim 1 , writing one of the partial data segments to the lower speed cache level as a whole data segment.3. The method of claim 1 , further including claim 1 , pursuant to configuring the data mover mechanism claim 1 , densely packing one of the partial data segments into a Cache Flash Element (CFE).4. The method of claim 1 , further including writing fixed portions of the partial data segment to portions of the lower speed cache corresponding to an associated storage ...

Publication date: 02-05-2013

DYNAMICALLY ADJUSTED THRESHOLD FOR POPULATION OF SECONDARY CACHE

Number: US20130111131A1

The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold. 1. A method for populating data into a secondary data storage cache of a computer-implemented cache data storage system , comprising:determining a heat metric of candidate data to be inserted into said secondary data storage cache;adjusting a heat metric threshold in accordance with caching efficiency of a present state of said secondary data storage cache;rejecting candidate data provided to said secondary data storage cache whose heat metric is less than said threshold; andadmitting to said secondary data storage cache, candidate data provided to said secondary data storage cache whose heat metric is equal to or greater than said heat metric threshold.2. The method of claim 1 , additionally comprising:maintaining a reference metric related to hits of data most recently inserted into said secondary data storage cache;maintaining a reference metric related to hits of data most recently evicted from said secondary data storage cache; andsaid adjusting step comprises adjusting said heat metric threshold in accordance with said reference metric ...

Publication date: 02-05-2013

Dynamically adjusted threshold for population of secondary cache

Number: US20130111133A1
Assignee: International Business Machines Corp

The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.

Publication date: 02-05-2013

MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS

Number: US20130111134A1

Various embodiments for movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor are provided. In one such embodiment, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes. Additional system and computer program product embodiments are disclosed and provide related advantages. 1. A method for movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor , comprising: requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache,', 'unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache, and', 'the unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes., 'promoting a whole data segment containing one of the partial data segments to both the lower and higher levels of cache, wherein2. The method of claim 1 , wherein promoting the whole data segment occurs pursuant to a read request for the one of the partial data segments.3. The method of claim 1 , further including claim 1 , previous to promoting the whole data segment claim 1 , determining if the one of the partial data segments should be cached on the lower level of cache.4. The method of claim 3 , wherein determining if the one of the partial ...

Publication date: 02-05-2013

SELECTIVE POPULATION OF SECONDARY CACHE EMPLOYING HEAT METRICS

Number: US20130111146A1

The population of data to be admitted into secondary data storage cache of a data storage system is controlled by determining heat metrics of data of the data storage system. If candidate data is submitted for admission into the secondary cache, data is selected to tentatively be evicted from the secondary cache; candidate data provided to the secondary data storage cache is rejected if its heat metric is less than the heat metric of the tentatively evicted data; and candidate data submitted for admission to the secondary data storage cache is admitted if its heat metric is equal to or greater than the heat metric of the tentatively evicted data. 1. A method for populating data into a secondary data storage cache of a computer-implemented data storage system , comprising:determining heat metrics of data of said data storage system;selecting data to tentatively be evicted from said secondary cache;comparing said heat metric of candidate data submitted for admission to said secondary cache, to said heat metric of said tentatively evicted data;rejecting candidate data provided to said secondary data storage cache whose heat metric is less than said heat metric of said tentatively evicted data; andadmitting to said secondary data storage cache, candidate data provided to said secondary data storage cache whose heat metric is equal to or greater than said heat metric of said tentatively evicted data.2. The method of claim 1 , wherein said cache data storage system additionally comprises a first data storage cache and data storage; and wherein said heat metrics are based on heat of said data while said data was stored in any of said first data storage cache claim 1 , said secondary data storage cache and said data storage claim 1 , of said data storage system.3. The method of claim 1 , wherein said tentatively evicted data is determined with an LRU algorithm claim 1 , and said heat metric of said tentatively evicted data is based on heat metrics of a plurality of data at ...

Publication date: 02-05-2013

SELECTIVE SPACE RECLAMATION OF DATA STORAGE MEMORY EMPLOYING HEAT AND RELOCATION METRICS

Number: US20130111160A1

Space of a data storage memory of a data storage memory system is reclaimed by determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics of the data relating the heat metrics to the relocation metrics for the data; and making the data whose utility metric fails a utility metric threshold, available for space reclamation. Thus, data that otherwise may be evicted or demoted, but that meets or exceeds the utility metric threshold, is exempted from space reclamation and is instead maintained in the data storage memory. 1. A method for reclaiming space of a data storage memory of a data storage memory system , comprising:determining heat metrics of data stored in said data storage memory;determining relocation metrics related to relocation of said data within said data storage memory;determining utility metrics of said data relating said heat metrics to said relocation metrics for said data;making said data whose utility metric fails a utility metric threshold, available for space reclamation; andexempting from space reclamation said data whose utility metric meets or exceeds said utility metric threshold.2. The method of claim 1 , additionally comprising:exempting from space reclamation eligibility, data recently added to said data storage memory.3. The method of claim 1 , additionally comprising:exempting from space reclamation eligibility, data designated as ineligible by space management policy.4. The method of claim 1 , wherein said utility metric threshold is determined from an average of utility metrics for data of said data storage memory.5. The method of claim 4 , wherein said average of utility metrics for data of said data storage memory is determined over a period of time.6. The method of claim 4 , wherein said average of utility metrics for data of said data storage memory is determined over a predetermined number of space ...

Publication date: 16-05-2013

LOGICAL TO PHYSICAL ADDRESS MAPPING IN STORAGE SYSTEMS COMPRISING SOLID STATE MEMORY DEVICES

Number: US20130124794A1

The present idea provides a high read and write performance from/to a solid state memory device. The main memory of the controller is not blocked by a complete address mapping table covering the entire memory device. Instead such table is stored in the memory device itself, and only selected portions of address mapping information are buffered in the main memory in a read cache and a write cache. A separation of the read cache from the write cache enables an address mapping entry being evictable from the read cache without the need to update the related flash memory page storing such entry in the flash memory device. By this design, the read cache may advantageously be stored on a DRAM even without power down protection, while the write cache may preferably be implemented in nonvolatile or other fail-safe memory. This leads to a reduction of the overall provisioning of nonvolatile or fail-safe memory and to an improved scalability and performance. 1. A storage controller for controlling a reading and writing of data from/to a solid state memory device , comprisinga read cache for buffering address mapping information representing a subset of address mapping information stored in the memory device, which address mapping information includes a mapping of logical address information for identifying data in a requesting host to physical address information for identifying data in the memory device, anda write cache for buffering address mapping information to be written to the memory device.2. A storage controller according to claim 1 , wherein the write cache is maintained as a unit separate from the read cache in that content buffered in the read cache is searchable independent from content buffered in the write cache and vice versa.3. A storage controller according to claim 1 , comprising one of a non-volatile memory and a volatile fail-safe memory including the write cache claim 1 , and a volatile memory including the read cache.4. A storage controller according to ...
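
A sketch of the split mapping cache, with a dictionary standing in for the on-flash mapping table; the point it illustrates is that read-cache entries can be dropped without write-back, while write-cache entries are dirty and must be flushed.

```python
from collections import OrderedDict

class MappingCache:
    """Only part of the logical-to-physical table lives in controller memory:
    read-cached entries can simply be dropped (the flash copy is still valid),
    while write-cached entries must be flushed to flash before being discarded."""
    def __init__(self, flash_table, read_capacity=2):
        self.flash_table = flash_table      # authoritative copy kept in flash
        self.read_cache = OrderedDict()     # clean entries, evictable
        self.write_cache = {}               # dirty entries, must be flushed
        self.read_capacity = read_capacity

    def lookup(self, lba):
        if lba in self.write_cache:
            return self.write_cache[lba]
        if lba not in self.read_cache:
            if len(self.read_cache) >= self.read_capacity:
                self.read_cache.popitem(last=False)       # evict, no write-back needed
            self.read_cache[lba] = self.flash_table[lba]  # demand-load from flash
        return self.read_cache[lba]

    def update(self, lba, pba):
        self.write_cache[lba] = pba         # buffered until the next flush
        self.read_cache.pop(lba, None)

    def flush(self):
        self.flash_table.update(self.write_cache)
        self.write_cache.clear()

flash = {0: "pba-7", 1: "pba-9", 2: "pba-3"}
cache = MappingCache(flash)
print(cache.lookup(0))      # pba-7, loaded into the read cache
cache.update(0, "pba-12")   # dirty entry sits in the write cache
cache.flush()
print(flash[0])             # pba-12
```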

Publication date: 30-05-2013

SCHEDULING REQUESTS IN A SOLID STATE MEMORY DEVICE

Number: US20130138912A1

An apparatus and method for a memory controller for managing scheduling requests in a solid state memory device. The memory includes a set of units wherein a unit within the set of units is erasable as a whole by a unit reclaiming process resulting in a free unit available for writing data to. The memory controller further includes a first queue for queuing user requests for reading and/or writing data from/to the memory, and a second queue for queuing unit reclaiming requests for executing the unit reclaiming process. A scheduler is provided for selecting user requests from the first queue and unit reclaiming requests from the second queue for execution according to a defined ratio. The defined ratio is a variable ratio, is dependent on the current number of free units, and permits the memory controller to select requests from both the first queue and the second queue. 1. Memory controller for managing a memory , the memory controller comprising:a first queue for queuing user requests which includes either reading data, writing data or reading and writing data from or to the memory;a second queue for queuing unit reclaiming requests for executing a unit reclaiming process; anda scheduler for selecting the user requests from the first queue and the unit reclaiming requests from the second queue for execution according to a defined ratio.2. The memory controller for managing a memory according to claim 1 , wherein the memory comprises a set of units wherein a unit within the set of units is erasable as a whole by the unit reclaiming process resulting in a free unit available for writing data to.3. The memory controller for managing a memory according to claim 1 , wherein the defined ratio is a variable ratio and is dependent on a current number of free units.4. The memory controller for managing a memory according to claim 3 , wherein the variable ratio permits the memory controller to select requests from both the first queue and the second queue.5. The memory ...
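
A sketch of a scheduler that interleaves the two queues with a ratio that depends on the current number of free units; the linear curve and the tick-based selection below are assumptions.

```python
from collections import deque

class Scheduler:
    """Interleaves user requests with unit-reclaiming (garbage-collection)
    requests; the fewer free units remain, the larger the share of reclaiming
    requests that gets scheduled."""
    def __init__(self, user_queue, reclaim_queue, total_units):
        self.user_queue, self.reclaim_queue = user_queue, reclaim_queue
        self.total_units = total_units

    def reclaim_share(self, free_units):
        # Variable ratio: near 0 when many units are free, near 1 when few are.
        return 1.0 - free_units / self.total_units

    def next_request(self, free_units, tick):
        prefer_reclaim = (tick % 10) / 10 < self.reclaim_share(free_units)
        if prefer_reclaim and self.reclaim_queue:
            return self.reclaim_queue.popleft()
        if self.user_queue:
            return self.user_queue.popleft()
        return self.reclaim_queue.popleft() if self.reclaim_queue else None

users = deque(f"user-{i}" for i in range(10))
reclaims = deque(f"reclaim-{i}" for i in range(10))
sched = Scheduler(users, reclaims, total_units=100)
picked = [sched.next_request(free_units=20, tick=t) for t in range(10)]
print(sum(p.startswith("reclaim-") for p in picked), "of 10 slots went to reclaiming")
# 8 of 10 slots when only 20% of the units are free
```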

Publication date: 06-06-2013

CACHE MEMORY MANAGEMENT IN A FLASH CACHE ARCHITECTURE

Number: US20130145089A1

Provided is a method for managing cache memory to cache data units in at least one storage device. A cache controller is coupled to at least two flash bricks, each comprising a flash memory. Metadata indicates a mapping of the data units to the flash bricks caching the data units, wherein the metadata is used to determine the flash bricks on which the cache controller caches received data units. The metadata is updated to indicate the flash brick having the flash memory on which data units are cached. 1. A method for managing cache memory to cache data units in at least one storage device , comprising:providing a cache controller and at least two flash bricks, each comprising a flash memory, coupled to the cache controller;maintaining metadata indicating a mapping of the data units to the flash bricks caching the data units, wherein the metadata is used to determine the flash bricks on which the cache controller caches received data units; andupdating the metadata to indicate the flash brick having the flash memory on which the cache controller caches the data units.2. The method of claim 1 , wherein the cache controller maintains the metadata claim 1 , further comprising:deciding whether to cache received data units; andupdating the metadata to indicate one of the flash bricks on which the received data units are to be cached.3. The method of claim 2 , wherein deciding whether to cache the received data units further comprises deciding which cached data units should be removed from the flash brick to make room for the received data units to be stored on the flash brick.4. The method of claim 1 , further comprising:selecting the flash brick to use to cache the received data units in response to deciding to cache the received data units;determining whether the selected flash brick has decided to accept to cache the received data units; andupdating the metadata for the data units to indicate the selected flash brick as caching the received data units in response to ...

Publication date: 20-06-2013

PROCESSING UNIT RECLAIMING REQUESTS IN A SOLID STATE MEMORY DEVICE

Number: US20130159609A1
Authors: Haas Robert, Pletka Roman

An apparatus and method for processing unit reclaiming requests in a solid state memory device. The present invention provides a method of managing a memory which includes a set of units. The method includes selecting a unit from the set of units having plurality of subunits. The method further includes determining a number of valid subunits m to be relocated from the units selected for a batch operation where m is at least 2. The selecting is carried out by a unit reclaiming process. 1. A method of managing a memory which includes a set of units , the method comprising the steps of:selecting a unit from the set of units having plurality of subunits; anddetermining a number of valid subunits m to be relocated from the units selected for a batch operation;wherein the selecting is carried out by a unit reclaiming process; andwherein m is at least two.2. The method of claim 1 , further comprising:writing data updates to outdated data;wherein the data updates are written to the subunits so that the updated data is written to a different subunit from the subunit that contains the outdated data; andwherein the subunit that contains the outdated data is an invalid subunit and the subunit that contains the updated data is a valid subunit.3. The method of claim 1 , further comprising:determining the number of the valid subunits m to be relocated in the batch operation based on at least one of:a total number of valid subunits n contained in the selected unit;a number of free units currently available; andat lease one system parameter comprising at least one parameter selected from the group consisting of: a read operation, a write operation, an erase operation, and a queue operation for placing relocation requests for relocating the valid subunits m.4. The method of claim 1 , further comprising:determining a number of valid subunits n contained in the unit selected; andrelocating the valid subunits n by operating relocation requests for the valid subunits n in batches of size ...

Publication date: 27-06-2013

WEAR-LEVEL OF CELLS/PAGES/SUB-PAGES/BLOCKS OF A MEMORY

Number: US20130166827A1

The invention is directed to a method for wear-leveling cells or pages or sub-pages or blocks of a memory such as a flash memory, the method comprising:—receiving (S) a chunk of data to be written on a cell or page or sub-page or block of the memory;—counting (S) in the received chunk of data the number of times a given type of binary data ‘0’ or ‘I’ is to be written; and—distributing (S) the writing of the received chunk of data amongst cells or pages or sub-pages or blocks of the memory such as to wear-level the memory with respect to the number of the given type of binary data ‘0’ or ‘I’ counted in the chunk of data to be written. 115-. (canceled)16. A method for wear-leveling a cell/page/sub-page/block portion of a memory , the method comprising:receiving a chunk of data to be written on the cell/page/sub-page/block portion of the memory;counting, in a received chunk of data, a number of times a given type of binary data ‘0’ or ‘1’ is to be written; anddistributing writing of the received chunk of data among one of cells, pages, sub-pages and blocks of the memory such as to wear-level the memory with respect to a number of a given type of binary data ‘0’ or ‘1’ counted in the chunk of data to be written.17. The method of claim 16 , wherein the cell/page/sub-page/block portion of the memory includes at least one of: a cell claim 16 , a sub-page claim 16 , a page and a block of the memory.18. The method of claim 16 , wherein the memory is a flash memory.19. The method of claim 16 , wherein distributing the writing of the received chunk of data is further carried out with respect to wear-leveling information associated with each one of the cells or pages or sub-pages or blocks of the memory.20. The method of claim 19 , wherein the wear-leveling information is the number of ‘0s’ already written on a cell or page or sub-page or block of the memory.21. The method of claim 20 , further comprising maintaining a pool of received chunks of data.22. The method of claim 21 ...
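
A sketch of the counting-and-distributing idea, assuming the per-block wear-leveling information is the total number of '0' bits already written to it and that distribution is a simple greedy choice of the least-worn block.

```python
class ZeroAwareWearLeveler:
    """Tracks how many '0' bits each block has already absorbed and steers a
    new chunk to the block where writing it keeps the per-block totals most
    even (the distribution policy here is an assumed, simple greedy one)."""
    def __init__(self, num_blocks):
        self.zeros_written = [0] * num_blocks   # wear-leveling information per block

    @staticmethod
    def count_zero_bits(chunk: bytes):
        return 8 * len(chunk) - sum(bin(byte).count("1") for byte in chunk)

    def place(self, chunk: bytes):
        zeros = self.count_zero_bits(chunk)
        # Greedy choice: the block that has absorbed the fewest zeros so far.
        block = min(range(len(self.zeros_written)), key=lambda b: self.zeros_written[b])
        self.zeros_written[block] += zeros
        return block, zeros

wl = ZeroAwareWearLeveler(num_blocks=3)
print(wl.place(b"\x00\xff"))   # (0, 8): eight '0' bits land on block 0
print(wl.place(b"\x0f"))       # (1, 4): the next chunk goes to a less-worn block
print(wl.zeros_written)        # [8, 4, 0]
```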

Publication date: 18-07-2013

MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS

Number: US20130185512A1

For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes. 1. A method for movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor , comprising: requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache,', 'unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache, and', 'the unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes., 'promoting a whole data segment containing one of the partial data segments to both the lower and higher levels of cache, wherein2. The method of claim 1 , wherein promoting the whole data segment occurs pursuant to a read request for the one of the partial data segments.3. The method of claim 1 , further including claim 1 , previous to promoting the whole data segment claim 1 , determining if the one of the partial data segments should be cached on the lower level of cache.4. The method of claim 3 , wherein determining if the one of the partial data segments should be cached on the lower level of cache includes considering an Input/Output Performance (IOP) metric claim 3 , a bandwidth metric claim 3 , and ...

Publication date: 08-08-2013

PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE

Number: US20130205077A1

For efficient track destage in secondary storage in a more effective manner, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. 1. A method for promoting partial data segments in a computing storage environment having lower and higher speed levels of cache by a processor , comprising: allowing the partial data segments to remain in the higher speed cache level for a time period longer that at least one whole data segment, and', 'implementing a preference for movement of the partial data segments to the lower speed cache level based on at least one of an amount of holes and a data heat metric, wherein a first of the partial data segments having at least one of a lower amount of holes and a hotter data heat is moved to the lower speed cache level ahead of a second of the partial data segments having at least one of a higher amount of holes and a cooler data heat., 'configuring a data moving mechanism adapted for performing at least one of2. The method of claim 1 , further including claim 1 , pursuant to configuring the data mover mechanism claim 1 , writing one of the partial data segments to the lower speed cache level as a whole data segment.3. The method of claim 1 , further including claim 1 , pursuant to configuring the data mover mechanism claim 1 , densely packing one of the partial data segments into a Cache Flash Element (CFE).4. The method of claim 1 , further including writing fixed portions of the partial data segment to portions of the lower speed cache corresponding to an associated storage device claim 1 , wherein the fixed portions are located using pointers in an affiliated Cache Flash Control Block (CFCB).5. The method of claim 2 , further including claim 2 , if the first of the partial data segments has a hotter ...

Publication date: 05-09-2013

ADAPTIVE CACHE PROMOTIONS IN A TWO LEVEL CACHING SYSTEM

Number: US20130232294A1

Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for the page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (LRU) end of the LRU list in the first cache. 1. A computer program product for managing data in a first cache and a second cache , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:maintaining a reference count in the second cache for a page when the page is stored in the second cache;determining that the page is to be promoted from the second cache to the first cache;in response to determining that the reference count is greater than zero, adding the page to a Least Recently Used (LRU) end of an LRU list in the first cache; andin response to determining that the reference count is less than or equal to zero, adding the page to a Most Recently Used (LRU) end of the LRU list in the first cache.2. The computer program product of claim 1 , wherein the first cache and the second cache are coupled to storage.3. The computer program product of claim 2 , wherein the first cache is a faster access device than the second cache claim 2 , and wherein the second cache is a faster access device than the storage.4. The computer program product of claim 2 , wherein the first cache comprises a Random Access Memory (RAM) claim 2 , the second cache comprises a flash device claim 2 , and the storage comprises a sequential write device.5. The computer ...

Publication date: 05-09-2013

Adaptive cache promotions in a two level caching system

Number: US20130232295A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for the page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache.

Publication date: 26-12-2013

MANAGING CACHE MEMORIES

Number: US20130346538A1
Assignee:

A method for managing cache memories includes providing a computerized system including a shared data storage system (CS) configured to interact with several local servers that serve applications using respective cache memories, and access data stored in the shared data storage system; providing cache data information from each of the local servers to the shared data storage system, the cache data information comprising cache hit data representative of cache hits of each of the local servers, and cache miss data representative of cache misses of each of the local servers; aggregating, at the shared data storage system, at least part of the cache hit and miss data received and providing the aggregated cache data information to one or more of the local servers; and at the local servers, updating respective one or more cache memories used to serve respective one or more applications based on the aggregated cache data information. 1. A method for managing cache memories , the method comprising:providing a computerized system comprising a shared data storage system (CS) and several local servers, wherein the shared data storage system is configured to interact with the local servers, the local servers serve applications using respective cache memories, and each of the local servers accesses data stored in the shared data storage system;providing cache data information from each of the local servers to the shared data storage system, the cache data information comprising cache hit data representative of cache hits of each of the local servers; and cache miss data representative of cache misses of each of the local servers;aggregating, at the shared data storage system, at least part of the cache hit data and the cache miss data received into aggregated cache data information and providing the aggregated cache data information to one or more of the local servers; andat the one or more of the local servers, updating respective one or more cache memories used to serve ...

Publication date: 05-01-2017

Wear leveling of a memory array

Number: US20170003880A1
Assignee: International Business Machines Corp

In at least one embodiment, a controller of a non-volatile memory array including a plurality of subdivisions stores write data within the non-volatile memory array utilizing a plurality of block stripes of differing numbers of blocks, where all of the blocks within each block stripe are drawn from different ones of the plurality of subdivisions. The controller builds new block stripes for storing write data from blocks selected based on estimated remaining endurances of blocks in each of the plurality of subdivisions.

Publication date: 07-01-2016

SELECTIVE SPACE RECLAMATION OF DATA STORAGE MEMORY EMPLOYING HEAT AND RELOCATION METRICS

Number: US20160004456A1

Space of a data storage memory of a data storage memory system is reclaimed by determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics of the data relating the heat metrics to the relocation metrics for the data; and making the data whose utility metric fails a utility metric threshold, available for space reclamation. 1. A method for reclaiming space of a data storage memory of a data storage memory system , comprising:determining heat metrics of data stored in said data storage memory;determining relocation metrics related to relocation of said data within said data storage memory, said relocation metrics comprising a count of the number of times said data has been relocated during reclamation process iterations;determining utility metrics of said data relating said heat metrics to said relocation metrics for said data; andmaking said data whose utility metric fails a utility metric threshold, available for space reclamation.2. The method of claim 1 , additionally comprising:exempting from space reclamation said data whose utility metric meets or exceeds said utility metric threshold; andexempting from space reclamation eligibility, data recently added to said data storage memory.3. The method of claim 1 , additionally comprising:exempting from space reclamation eligibility, data designated as ineligible by space management policy.4. The method of claim 1 , wherein said utility metric threshold is determined from an average of utility metrics for data of said data storage memory.5. The method of claim 4 , wherein said average of utility metrics for data of said data storage memory is determined over a period of time.6. The method of claim 4 , wherein said average of utility metrics for data of said data storage memory is determined over a predetermined number of space reclamation requests processed.7. The method of claim 1 , ...

Подробнее
07-01-2021 дата публикации

ADAPTING MEMORY BLOCK POOL SIZES USING HYBRID CONTROLLERS

Номер: US20210004158A1
Принадлежит:

A computer-implemented method, according to one embodiment, includes: determining whether a number of blocks included in a first ready-to-use (RTU) queue is in a first range of the first RTU queue. In response to determining that the number of blocks included in the first RTU queue is in the first range, a determination is made as to whether a number of blocks included in a second RTU queue is in a second range of the second RTU queue. Moreover, in response to determining that the number of blocks included in the second RTU queue is not in the second range, valid data is relocated from one of the blocks in a first pool which corresponds to the first RTU queue. The block in the first pool is erased, and transferred from the first pool to the second RTU queue which corresponds to a second pool. 1. A computer-implemented method for adapting block pool sizes in a storage system , comprising:determining whether a number of blocks included in a first ready-to-use (RTU) queue is in a first range of the first RTU queue;in response to determining that the number of blocks included in the first RTU queue is in the first range of the first RTU queue, determining whether a number of blocks included in a second RTU queue is in a second range of the second RTU queue;in response to determining that the number of blocks included in the second RTU queue is not in the second range of the second RTU queue, relocating valid data from one of the blocks in a first pool which corresponds to the first RTU queue;erasing the block in the first pool; andtransferring the block from the first pool to the second RTU queue which corresponds to a second pool,wherein the blocks in the first pool are configured in single-level cell (SLC) mode,wherein the blocks in the second pool are configured in multi-bit-per-cell mode.2. The computer-implemented method of claim 1 , wherein transferring the block from the first pool to the second RTU queue which corresponds to the second pool includes: ...
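A condensed sketch of the pool-rebalancing decision described above, assuming the first pool is SLC and the second is multi-bit, with placeholder helpers for relocation and erase; queue bounds and names are illustrative.

```python
def relocate_valid_data(block):
    """Placeholder: copy the block's still-valid pages to another open block."""
    pass

def erase(block):
    """Placeholder: issue a physical erase of the block."""
    pass

def rebalance_pools(slc_rtu, mlc_rtu, slc_pool, slc_low, slc_high, mlc_low):
    """If the SLC ready-to-use (RTU) queue is within its range but the
    multi-bit RTU queue has fallen below its range, take a block from the SLC
    pool, relocate its valid data, erase it, and hand it to the multi-bit RTU
    queue, shrinking the SLC pool and growing the multi-bit pool."""
    if not (slc_low <= len(slc_rtu) <= slc_high):
        return None              # first RTU queue not in its range: handled elsewhere
    if len(mlc_rtu) >= mlc_low:
        return None              # second RTU queue already in range: nothing to do
    if not slc_pool:
        return None
    block = slc_pool.pop()
    relocate_valid_data(block)
    erase(block)
    mlc_rtu.append(block)
    return block

slc_rtu, mlc_rtu, slc_pool = ["s1", "s2"], [], ["s9"]
print(rebalance_pools(slc_rtu, mlc_rtu, slc_pool, slc_low=1, slc_high=4, mlc_low=1))  # -> 's9'
```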

Подробнее
07-01-2021 дата публикации

BLOCK MODE TOGGLING USING HYBRID CONTROLLERS

Номер: US20210004159A1
Принадлежит:

A computer-implemented method, according to one embodiment, includes: maintaining a block switching metric for each block of memory in the storage system. A determination is made as to whether a first block in a first pool should be transferred to a second pool according to a block switching metric which corresponds to the first block. In response to determining that the first block in the first pool should be transferred to the second pool according to the block switching metric which corresponds to the first block, the first block is erased. The first block is then transferred from the first pool to a second RTU queue which corresponds to the second pool. A second block in the second pool is also erased and transferred from the second pool to a first RTU queue which corresponds to the first pool. 1. A computer-implemented method for toggling block modes in a storage system , comprising:maintaining a block switching metric for each block of memory in the storage system;determining whether a first block in a first pool should be transferred to a second pool according to a block switching metric which corresponds to the first block;in response to determining that the first block in the first pool should be transferred to the second pool according to the block switching metric which corresponds to the first block, erasing the first block;transferring the first block from the first pool to a second ready-to-use (RTU) queue which corresponds to the second pool;erasing a second block; andtransferring the second block from the second pool to a first RTU queue which corresponds to the first pool,wherein the blocks in the first pool are configured in single-level cell (SLC) mode,wherein the blocks in the second pool are configured in multi-bit-per-cell mode.2. The computer-implemented method of claim 1 , wherein the block switching metric includes information selected from the group consisting of: a program/erase (P/E) cycle count for the respective block claim 1 , the P/E ...

Подробнее
08-01-2015 дата публикации

MANAGING METADATA FOR CACHING DEVICES DURING SHUTDOWN AND RESTART PROCEDURES

Номер: US20150012706A1
Принадлежит:

A computer program product, system, and method for managing metadata for caching devices during shutdown and restart procedures. Fragment metadata for each fragment of data from the storage server stored in the cache device is generated. The fragment metadata is written to at least one chunk of storage in the cache device in a metadata directory in the cache device. For each of the at least one chunk in the cache device to which the fragment metadata is written, chunk metadata is generated for the chunk and writing the generated chunk metadata to the metadata directory in the cache device. Header metadata having information on access of the storage server is written to the metadata directory in the cache device. The written header metadata, chunk metadata, and fragment metadata are used to validate the metadata directory and the fragment data in the cache device during a restart operation. 1. A computer program product for caching data from a storage device managed by a storage server in a cache device providing non-volatile storage , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations , the operations comprising:generating fragment metadata for each fragment of data from the storage server stored in the cache device;writing the fragment metadata to at least one chunk of storage in the cache device in a metadata directory in the cache device;for each of the at least one chunk in the cache device to which the fragment metadata is written, generating chunk metadata for the chunk and writing the generated chunk metadata to the metadata directory in the cache device;writing header metadata having information on access of the storage server to the metadata directory in the cache device; andusing the written header metadata, chunk metadata, and fragment metadata to validate the metadata directory and the fragment data in the cache device during a restart ...
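A possible shape for the three-level metadata directory (header, chunk, fragment) and its restart validation, assuming checksums are used to validate chunks; the field names and the checksum scheme are assumptions for illustration only.

```python
import hashlib
from dataclasses import dataclass, field
from typing import List

@dataclass
class FragmentMeta:          # one entry per cached fragment of storage-server data
    storage_offset: int
    cache_offset: int
    length: int

@dataclass
class ChunkMeta:             # describes one chunk of fragment metadata on the cache device
    checksum: str
    fragment_count: int

@dataclass
class HeaderMeta:            # identifies the storage server this cache belongs to
    server_id: str
    generation: int
    chunks: List[ChunkMeta] = field(default_factory=list)

def chunk_checksum(fragments):
    data = "".join(f"{f.storage_offset}:{f.cache_offset}:{f.length}" for f in fragments)
    return hashlib.sha256(data.encode()).hexdigest()

def validate_on_restart(header, chunks_of_fragments, expected_server_id):
    """Accept the cached data only if the header matches the storage server and
    every chunk's recomputed checksum matches what was written at shutdown."""
    if header.server_id != expected_server_id:
        return False
    if len(header.chunks) != len(chunks_of_fragments):
        return False
    return all(cm.checksum == chunk_checksum(frags) and cm.fragment_count == len(frags)
               for cm, frags in zip(header.chunks, chunks_of_fragments))
```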

Подробнее
14-01-2016 дата публикации

RELIABILITY SCHEME USING HYBRID SSD/HDD REPLICATION WITH LOG STRUCTURED MANAGEMENT

Номер: US20160011784A1
Принадлежит:

In one embodiment, a method of managing data includes storing a first copy of data in a solid state memory using a controller of the solid state memory, and storing a second copy of the data in a hard disk drive memory using the controller. Write requests are served substantially simultaneously at both the solid state memory and the hard disk drive memory under control of the controller. In another embodiment, a system for storing data includes a solid state memory, at least one hard disk drive memory, and a controller for controlling storage of data in both the solid state memory and the hard disk drive memory. Other methods, systems, and computer program products are also described according to various embodiments. 1. A computer program product for managing data on a storage system , the computer program product comprising a computer readable storage medium having program instructions embodied therewith , wherein the computer readable storage medium is not a transitory signal per se , the program instructions executable by a controller to cause the controller to perform a method comprising:managing, by the controller, a first copy of data in a solid state memory;managing, by the controller, a second copy of the data in a hard disk drive memory using the controller;receiving, by the controller, a request to erase the data; andcausing, by the controller, erasing of the first copy of the data from the solid state memory in response to receiving the request to erase the data, wherein the second copy of the data is not also immediately erased from the hard disk drive memory.2. The computer program product as recited in claim 1 , wherein the second copy of the data is managed by the controller in a same way as the first copy is managed by the controller.3. The computer program product as recited in claim 1 , wherein read requests are served by the solid state memory.4. The computer program product as recited in claim 1 , wherein write requests are served substantially ...
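A toy model of the dual-copy behaviour described above: writes land on both media, reads are served from the flash copy, and an erase immediately frees only the flash copy while the disk copy survives in an append-only log. Class and method names are illustrative, not from the patent.

```python
class HybridStore:
    """Sketch of SSD/HDD replication under one controller."""
    def __init__(self):
        self.ssd = {}
        self.hdd_log = []          # HDD side is managed in append (log-structured) mode

    def write(self, key, value):
        self.ssd[key] = value                  # first copy on flash
        self.hdd_log.append((key, value))      # second copy appended on disk

    def read(self, key):
        if key in self.ssd:
            return self.ssd[key]
        # SSD copy lost or erased: fall back to the most recent HDD copy.
        for k, v in reversed(self.hdd_log):
            if k == key:
                return v
        raise KeyError(key)

    def erase(self, key):
        self.ssd.pop(key, None)    # HDD copy is not erased immediately

store = HybridStore()
store.write("a", b"payload")
store.erase("a")
print(store.read("a"))             # still recoverable from the HDD log
```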

Подробнее
14-01-2021 дата публикации

DATA PLACEMENT IN WRITE CACHE ARCHITECTURE SUPPORTING READ HEAT DATA SEPARATION

Номер: US20210011852A1
Принадлежит:

A computer-implemented method, according to one approach, includes: receiving write requests, accumulating the write requests in a destage buffer, and determining a current read heat value of each logical page which corresponds to the write requests. Each of the write requests is assigned to a respective write queue based on the current read heat value of each logical page which corresponds to the write requests. Moreover, each of the write queues correspond to a different page stripe which includes physical pages, the physical pages included in each of the respective page stripes being of a same type. Furthermore, data in the write requests is destaged from the write queues to their respective page stripes. Other systems, methods, and computer program products are described in additional approaches. 1. A computer-implemented method , comprising:receiving write requests;accumulating the write requests in a destage buffer;determining a current read heat value of each logical page which corresponds to the write requests;assigning each of the write requests to a respective write queue based on the current read heat value of each logical page which corresponds to the write requests, wherein each of the write queues correspond to a different page stripe which includes physical pages, wherein the physical pages included in each of the respective page stripes are of a same type; anddestaging data in the write requests from the write queues to their respective page stripes.2. The computer-implemented method of claim 1 , comprising:determining whether a given write queue includes enough data in the respective write requests to fill a next page stripe which corresponds thereto;in response to determining the given write queue does not include enough data in the respective write requests to fill the next page stripe which corresponds thereto, determining whether an adjacent write queue includes enough data in the respective write requests to complete filling the next page ...
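A simplified sketch of read-heat separation at destage time: buffered writes are binned into per-heat write queues and only complete page stripes are emitted. The heat buckets, stripe size, and function name are assumptions made for the example.

```python
from collections import defaultdict

def destage(write_requests, read_heat, heat_buckets=(0, 2, 8), stripe_size=2):
    """Group buffered writes into per-heat write queues and emit full page
    stripes.  `read_heat` maps a logical page to its current read-heat value;
    `heat_buckets` are the lower bounds of the heat classes (one queue each)."""
    queues = defaultdict(list)
    for lpage, data in write_requests:
        heat = read_heat.get(lpage, 0)
        bucket = max(b for b in heat_buckets if heat >= b)  # hottest class the page reaches
        queues[bucket].append((lpage, data))

    stripes = []
    for bucket, queue in queues.items():
        while len(queue) >= stripe_size:       # only destage complete page stripes
            stripes.append((bucket, [queue.pop(0) for _ in range(stripe_size)]))
    return stripes, queues

reqs = [("p1", b"x"), ("p2", b"y"), ("p3", b"z"), ("p4", b"w")]
heat = {"p1": 9, "p2": 0, "p3": 10, "p4": 1}
print(destage(reqs, heat))
```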

Подробнее
21-01-2016 дата публикации

PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE

Номер: US20160019000A1

For efficient track destage to secondary storage, temporal bits are employed together with sequential bits to control the timing of destaging the track from primary storage. If a first bit has at least one of a lower amount of holes and a hotter data heat metric, it is moved to the lower speed cache level. If the first bit has a hotter data heat and greater than a predetermined number of holes, the first bit is discarded.

Подробнее
08-02-2018 дата публикации

SELECTIVELY DE-STRADDLING DATA PAGES IN NON-VOLATILE MEMORY

Номер: US20180039536A1
Принадлежит:

A computer-implemented method, according to one embodiment, includes: detecting at least one read of a logical page straddled across codewords, storing an indication of a number of detected reads of the straddled logical page, and relocating the straddled logical page to a different physical location in response to the number of detected reads of the straddled logical page. When relocated, the logical page is written to the different physical location in a non-straddled manner. Other systems, methods, and computer program products are described in additional embodiments. 1. A computer-implemented method , comprising:detecting at least one read of a logical page straddled across codewords;storing an indication of a number of detected reads of the straddled logical page; andrelocating the straddled logical page to a different physical location in response to the number of detected reads of the straddled logical page, wherein the logical page is written to the different physical location in a non-straddled manner.2. The computer-implemented method as recited in claim 1 , comprising: detecting reading of the logical page straddling across multiple codewords in a same physical page.3. The computer-implemented method as recited in claim 1 , comprising: detecting reading of the logical page straddling across multiple physical pages.4. The computer-implemented method as recited in claim 1 , comprising:storing the indication of the number of detected reads by increment a straddled page read counter in response to detecting each read of the straddled logical page; andrelocating at least the straddled logical page in response to the read counter exceeding a threshold.5. The computer-implemented method as recited in claim 4 , wherein the straddled page read counter indicates a number of reads of all straddled logical pages on a single physical page.6. The computer-implemented method as recited in claim 4 , wherein the straddled page read counter indicates a number of reads of ...
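A small sketch of the read-counting logic: each read of a straddled logical page increments a counter, and once the counter reaches a threshold the page is flagged for relocation so it can be rewritten in a non-straddled manner. The class name and threshold value are illustrative.

```python
class StraddleTracker:
    """Counts reads of logical pages that straddle codeword (or physical page)
    boundaries and flags them for relocation once a read threshold is hit."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.read_counts = {}

    def on_read(self, logical_page, is_straddled):
        if not is_straddled:
            return False
        n = self.read_counts.get(logical_page, 0) + 1
        self.read_counts[logical_page] = n
        return n >= self.threshold      # True => relocate and rewrite non-straddled

tracker = StraddleTracker(threshold=3)
for _ in range(3):
    relocate = tracker.on_read("lp42", is_straddled=True)
print(relocate)   # True: the page has earned a non-straddled rewrite
```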

Подробнее
15-02-2018 дата публикации

ADAPTIVE ASSIGNMENT OF OPEN LOGICAL ERASE BLOCKS TO DATA STREAMS

Номер: US20180046376A1
Принадлежит:

A computer-implemented method, according to one embodiment, includes: assigning data having a first heat to a first data stream, assigning data having a second heat to a second data stream, determining an anticipated throughput of each of the first and second data streams, assigning a first number of logical erase blocks of non-volatile memory to the first data stream based on the anticipated throughput of the first data stream, and assigning a second number of logical erase blocks of non-volatile memory to the second data stream based on the anticipated throughput of the second data stream. Other systems, methods, and computer program products are described in additional embodiments. 1. A computer-implemented method , comprising:assigning data having a first heat to a first data stream;assigning data having a second heat to a second data stream;determining an anticipated throughput of each of the first and second data streams;assigning a first number of logical erase blocks of non-volatile memory to the first data stream based on the anticipated throughput of the first data stream; andassigning a second number of logical erase blocks of non-volatile memory to the second data stream based on the anticipated throughput of the second data stream.2. The computer-implemented method of claim 1 , wherein the first and second numbers of logical erase blocks assigned to the first and second data streams are proportional to the anticipated throughput of the first and second data streams claim 1 , respectively.3. The computer-implemented method of claim 1 , wherein the first and/or second number of logical erase blocks are statically assigned to each of the first and/or second data streams.4. The computer-implemented method of claim 1 , wherein the first and/or second number of logical erase blocks assigned to the first and/or second data streams are adjusted dynamically based on a measurement of stream stall events of the first and/or second data streams.5. The computer- ...
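A minimal example of assigning open logical erase blocks in proportion to anticipated stream throughput, as in claim 2; the throughput figures and the minimum of one LEB per stream are assumptions, and a real allocator would also reconcile rounding against the total budget.

```python
def assign_open_lebs(anticipated_throughput, total_open_lebs):
    """Split the available open logical erase blocks among heat-based data
    streams in proportion to each stream's anticipated throughput,
    giving every stream at least one open LEB."""
    total = sum(anticipated_throughput.values())
    return {stream: max(1, round(total_open_lebs * tput / total))
            for stream, tput in anticipated_throughput.items()}

print(assign_open_lebs({"hot": 300.0, "cold": 100.0}, total_open_lebs=8))
# -> {'hot': 6, 'cold': 2}
```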

Подробнее
13-02-2020 дата публикации

CALIBRATION OF OPEN BLOCKS IN NAND FLASH MEMORY

Номер: US20200051621A1
Принадлежит:

Performing a calibration of a NAND flash memory block that is in an open state. An open state of the NAND flash memory block is detected, the NAND flash memory block comprising a plurality of memory pages, each of which comprising a plurality of memory cells. A group of pages of the NAND flash memory block being in an open state having comparable characteristics is identified. A calibration of read voltage values to pages of the group of identified pages is performed. 1. (canceled)2. The method according to claim 10 , wherein said calibration is a delta calibration reflecting temporary changes in said programmed threshold voltage distributions only.3. The method according to claim 10 , wherein calibrating pages of said identified group of page is done using a delta calibration based on said block state information indicating said page group being affected by temporary changes in said programmed threshold voltage distributions claim 10 , and wherein calibrating pages of said identified page group is done using a base calibration based on said block state information indicating said page group not being affected by temporary changes in said programmed threshold voltage distributions.4. The method according to claim 10 , further comprising determining a time said NAND flash memory block has remained in an open state.5. The method according to claim 4 , further comprising writing a part or all of not yet programmed pages of said NAND flash memory block being in an open state if a predetermined time period is elapsed.6. The method according to claim 5 , wherein said predetermined time period corresponds to said time claim 5 , said NAND flash memory has remained in an open state.7. The method according to claim 5 , wherein said predetermined time period corresponds to said time claim 5 , a programmed page group has remained in an open state.8. The method according to claim 5 , wherein said writing comprises writing a predetermined data pattern or a random data pattern to ...

Подробнее
01-03-2018 дата публикации

BACKGROUND THRESHOLD VOLTAGE SHIFTING USING BASE AND DELTA THRESHOLD VOLTAGE SHIFT VALUES IN NON-VOLATILE MEMORY

Номер: US20180059940A1
Принадлежит:

A computer program product according to one embodiment includes a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by a processing circuit to cause the circuitry to perform a method including determining, after writing data to a non-volatile memory block, one or more delta threshold voltage shift (TVSΔ) values. One or more overall threshold voltage shift values for the data written to the non-volatile memory block are calculated, the values being a function of the one or more TVSΔ values to be used when writing data to the non-volatile memory block. The overall threshold voltage shift values are stored. A base threshold voltage shift (TVSbase) value, the one or more TVSΔ values, or both the TVSbase value and the one or more TVSΔ values are re-calibrated during a background health check after a predetermined number of background health checks without calibration are performed. 1. A computer program product, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, the program instructions executable by a processing circuit to cause the processing circuit to perform a method comprising: determining, by the processing circuit, after writing data to a non-volatile memory block, one or more delta threshold voltage shift (TVSΔ) values configured to track temporary changes with respect to changes in the underlying threshold voltage distributions due to retention and/or read disturb errors; calculating, by the processing circuit, one or more overall threshold voltage shift values for the data written to the non-volatile memory block, the one or more overall threshold voltage shift values being a function of the one or more TVSΔ values to be used when writing data to the non-volatile memory block; and storing, by the processing circuit, the one ...

Подробнее
01-03-2018 дата публикации

BACKGROUND THRESHOLD VOLTAGE SHIFTING USING BASE AND DELTA THRESHOLD VOLTAGE SHIFT VALUES IN NON-VOLATILE MEMORY

Номер: US20180059941A1
Принадлежит:

In one embodiment, a computer program product includes a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by a processing circuit to cause the processing circuit to perform a method that includes determining, after writing data to a non-volatile memory block, one or more delta threshold voltage shift (TVSΔ) values. One or more overall threshold voltage shift values are calculated for the data written to the non-volatile memory block. The one or more overall threshold voltage shift values are stored. The method also includes reading one or more TVSΔ values from a non-volatile controller memory, and resetting a program/erase cycle count since last calibration after calibrating the one or more overall threshold voltage shift values. The one or more TVSΔ values and the program/erase cycle count since last calibration are stored to the non-volatile controller memory. 1. A computer program product, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, the program instructions executable by a processing circuit to cause the processing circuit to perform a method comprising: determining, by the processing circuit, after writing data to a non-volatile memory block, one or more delta threshold voltage shift (TVSΔ) values configured to track temporary changes with respect to changes in the underlying threshold voltage distributions due to retention and/or read disturb errors; calculating, by the processing circuit, one or more overall threshold voltage shift values for the data written to the non-volatile memory block, the one or more overall threshold voltage shift values being a function of the one or more TVSΔ values to be used when writing data to the non-volatile memory block; storing, by the processing circuit, the one or more ...

Подробнее
01-03-2018 дата публикации

WORKLOAD OPTIMIZED DATA DEDUPLICATION USING GHOST FINGERPRINTS

Номер: US20180060367A1
Принадлежит:

A controller of a data storage system generates fingerprints of data blocks written to the data storage system. The controller maintains, in a data structure, respective state information for each of a plurality of data blocks. The state information for each data block can be independently set to indicate any of a plurality of states, including at least one deduplication state and at least one non-deduplication state. At allocation of a data block, the controller initializes the state information for the data block to a non-deduplication state and, thereafter, in response to detection of a write of duplicate of the data block to the data storage system, transitions the state information for the data block to a deduplication state. The controller selectively performs data deduplication for data blocks written to the data storage system based on the state information in the data structure and by reference to the fingerprints. 1. A method of controlling a data storage system , the method comprising:a controller generating fingerprints of data blocks written to the data storage system; at allocation of a data block, initializing the state information for the data block to a non-deduplication state among the plurality of states; and', 'thereafter, in response to detection of a write of duplicate of the data block to the data storage system, transitioning the state information for the data block to a deduplication state among the plurality of states; and, 'the controller maintaining, in a data structure, respective state information for each of a plurality of data blocks in the data storage system, wherein the state information for each data block can be independently set to indicate any of a plurality of states, and wherein the plurality of states includes at least one deduplication state in which deduplication is performed for the associated data block and at least one non-deduplication state in which deduplication is not performed for the associated data block, wherein ...
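A compact sketch of the per-block state machine: blocks start in a non-deduplication state, and the first detected duplicate write promotes the original block to the deduplication state. The state names, the hash used as fingerprint, and the return values are assumptions for the example.

```python
import hashlib

NON_DEDUP, DEDUP = "non-dedup", "dedup"

class DedupController:
    """Blocks start in a non-deduplication state; the first duplicate write
    promotes the matching block to the deduplication state, after which
    duplicates are replaced by references."""
    def __init__(self):
        self.state = {}            # block id -> NON_DEDUP / DEDUP
        self.by_fingerprint = {}   # fingerprint -> block id

    def write(self, block_id, data):
        fp = hashlib.sha256(data).hexdigest()
        existing = self.by_fingerprint.get(fp)
        if existing is not None and existing != block_id:
            # Duplicate detected: promote the original block and dedup this write.
            self.state[existing] = DEDUP
            return ("ref", existing)
        self.by_fingerprint[fp] = block_id
        self.state[block_id] = NON_DEDUP       # freshly allocated blocks start here
        return ("stored", block_id)

ctl = DedupController()
print(ctl.write("b1", b"hello"))   # ('stored', 'b1')
print(ctl.write("b2", b"hello"))   # ('ref', 'b1')
print(ctl.state["b1"])             # 'dedup'
```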

Подробнее
20-02-2020 дата публикации

PAGE RETIREMENT IN A NAND FLASH MEMORY SYSTEM

Номер: US20200057702A9
Принадлежит:

In a data storage system including a non-volatile random access memory (NVRAM) array, a page is a smallest granularity of the NVRAM array that can be accessed by read and write operations, and a memory block containing multiple pages is a smallest granularity of the NVRAM array that can be erased. Data are stored in the NVRAM array in page stripes distributed across multiple memory blocks. In response to detection of an error in a particular page of a particular block of the NVRAM array, only the particular page of the particular block is retired, such that at least two of the multiple memory blocks across which a particular one of the page stripes is distributed include differing numbers of active (non-retired) pages. 1. A method of page retirement in a data storage system including a non-volatile random access memory (NVRAM) array , the method comprising:storing data in the NVRAM array in page stripes distributed across multiple memory blocks, wherein at least two of the multiple memory blocks across which one of the page stripes is distributed include differing numbers of active physical pages, wherein a physical page is a smallest granularity that can be accessed in the NVRAM array and a memory block containing multiple physical pages is a smallest granularity that can be erased in the NVRAM array;in response to detection of an error, retiring only a particular physical page of a particular block in which the error occurred and recording retirement of the particular physical page in a page status data structure; andthereafter, allocating a plurality of page stripes across a group of memory blocks including the particular block such that each of the plurality of page stripes is formed at a respective one of a plurality of different physical page indices, wherein the allocating includes skipping the particular page in the particular block when allocating a first page stripe at a first physical page index based on the page status data structure indicating the ...

Подробнее
20-02-2020 дата публикации

TECHNIQUES FOR IMPROVING DEDUPLICATION EFFICIENCY IN A STORAGE SYSTEM WITH MULTIPLE STORAGE NODES

Номер: US20200057814A9
Принадлежит:

Techniques for selecting a storage node of a storage system to store data include applying a first function to at least some data chunks of an extent to provide respective first values for each of the at least some data chunks. A storage node, included within multiple storage nodes of a storage system, is selected to store the extent based on a majority vote derived from the respective first values. 1. A method of selecting a storage node of a storage system to store data , comprising:applying, by a mapping node, a first function to at least some data chunks of an extent to provide respective first values for each of the at least some data chunks;selecting, by the mapping node, a storage node, included within multiple storage nodes of a storage system, to store the extent based on a majority vote derived from the respective first values; andperforming, by the storage system, deduplication of the data chunks of the extent in the selected storage node using fingerprints computed on each of the data chunks, wherein one or more of the data chunks within the extent do not include useful data for a vote and the one or more of the data chunks within the extent that do not include useful data for the vote are not utilized in selecting the storage node to store the extent.2. The method of claim 1 , wherein each of the first values are mapped to one of the storage nodes and the selecting is based on the majority vote of the mapped first values.3. The method of claim 1 , wherein the first values are dynamically calculated upon processing the extent and not stored persistently.4. The method of claim 1 , wherein at least some of the data chunks within the extent are not updated and the method further comprisesloading data for the data chunks within the extent that are not updated prior to the applying;applying a second function to the respective first values to generate respective storage node votes for each of the data chunks; andselecting the storage node to store the extent ...
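A minimal sketch of the majority-vote placement: each useful chunk of the extent is mapped to a node by a hash-based first function, and the extent goes to the node with the most votes so that identical chunks tend to land together and deduplicate. The choice of hash, the 4-byte truncation, and the all-zero "not useful" test are assumptions for the example.

```python
import hashlib
from collections import Counter

def select_node(extent_chunks, num_nodes):
    """Vote each useful chunk onto a node and place the extent on the winner."""
    votes = Counter()
    for chunk in extent_chunks:
        if not chunk or chunk == b"\x00" * len(chunk):
            continue                     # chunks without useful data do not vote
        value = int.from_bytes(hashlib.sha256(chunk).digest()[:4], "big")
        votes[value % num_nodes] += 1
    if not votes:
        return 0                         # no useful chunks: fall back to a default node
    return votes.most_common(1)[0][0]

extent = [b"A" * 16, b"A" * 16, b"B" * 16, b"\x00" * 16]
print(select_node(extent, num_nodes=4))
```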

Подробнее
05-03-2015 дата публикации

METHOD AND SYSTEM FOR ALLOCATING A RESOURCE OF A STORAGE DEVICE TO A STORAGE OPTIMIZATION OPERATION

Номер: US20150067294A1
Принадлежит:

Allocating a resource of a storage device to a storage optimization operation. An available resource of the storage device is monitored. Determining an allocation proportion of the resource allocated to the storage optimization operation, based on at least one of historical running information and a predicted value of a performance improvement caused by the storage optimization operation. Allocating the resource of the storage device to the storage optimization operation based on the available resource and the allocation proportion. 1. A method for allocating a resource of a storage device to a storage optimization operation , the method comprising:monitoring an available resource of the storage device;determining an allocation proportion of the resource allocated to the storage optimization operation, based on at least one of historical running information and a predicted value of a performance improvement caused by the storage optimization operation; andallocating the resource of the storage device to the storage optimization operation based on the available resource and the allocation proportion.2. The method of claim 1 ,wherein the historical running information includes information identifying customer workload in a past predetermined time period; and setting the allocation proportion to an initial value; and', 'adjusting the allocation proportion from the initial value to an upper limit value in response to the customer workload not exceeding a first threshold., 'wherein determining an allocation proportion further comprises3. The method of claim 2 ,wherein the historical running information of the machine further includes conflict information identifying a degree of data I/O conflicts which occur between the customer workload and the storage optimization operation in the past predetermined time period; and 'changing the allocation proportion based on the conflict information in response to the customer workload exceeding the first threshold.', 'wherein ...

Подробнее
04-03-2021 дата публикации

HYBRID READ VOLTAGE CALIBRATION IN NON-VOLATILE RANDOM ACCESS MEMORY

Номер: US20210065813A1
Принадлежит:

A computer-implemented method, according to one embodiment, includes: determining a current operating state of a block of memory. The block includes more than one type of page therein, and at least one read voltage is associated with each of the page types. The current operating state of the block is further used to produce a hybrid calibration scheme for the block which identifies a first subset of the read voltages, and a second subset of the read voltages. The read voltages in the second subset are further organized in one or more groupings. A unique read voltage offset value is calculated for each of the read voltages in the first subset, and a common read voltage offset value is also calculated for each grouping of read voltages in the second subset. 1. A computer-implemented method for calibrating read voltages for a block of memory , comprising:determining a current operating state of a block of memory, wherein the block includes more than one type of page therein, wherein at least one read voltage is associated with each of the page types;using the current operating state of the block to produce a hybrid calibration scheme for the block, wherein the hybrid calibration scheme identifies a first subset of the read voltages, as well as a second subset of the read voltages, wherein the read voltages in the second subset are organized in one or more groupings;calculating a unique read voltage offset value for each of the read voltages in the first subset;calculating a common read voltage offset value for each grouping of read voltages in the second subset;saving each of the unique read voltage offset values and each of the common read voltage offset values in a metadata storage area of the given block; andsaving the hybrid calibration scheme in the metadata storage area of the given block.2. The computer-implemented method of claim 1 , wherein the current operating state of a block of memory is selected from the group consisting of: a retention state claim 1 , a ...

Подробнее
28-02-2019 дата публикации

REDUCING WRITE AMPLIFICATION IN SOLID-STATE DRIVES BY SEPARATING ALLOCATION OF RELOCATE WRITES FROM USER WRITES

Номер: US20190065058A1
Принадлежит:

A computer program product, according to one embodiment, includes a computer readable storage medium having program instructions embodied therewith. The computer readable storage medium is not a transitory signal per se. Moreover, the program instructions are readable and/or executable by a processor to cause the processor to perform a method which includes: maintaining a first open logical erase block for user writes, and a second open logical erase block for relocate writes. A first data stream having the user writes is received, and transferred to the first open logical erase block. A second data stream having the relocate writes is also received, and transferred to the second open logical erase block. Furthermore, a third data stream is received, and is mixed with the first, second, and/or another data stream in response to determining that an open logical erase block is not available for assignment to the third data stream. 1. A computer program product comprising a computer readable storage medium having program instructions embodied therewith , wherein the computer readable storage medium is not a transitory signal per se , the program instructions readable and/or executable by a processor to cause the processor to perform a method comprising:maintaining, by the processor, a first open logical erase block for user writes;maintaining, by the processor, a second open logical erase block for relocate writes, wherein the first and second open logical erase blocks are different logical erase blocks;receiving, by the processor, a first data stream having the user writes;transferring, by the processor, the first data stream to the first open logical erase block;receiving, by the processor, a second data stream having the relocate writes;transferring, by the processor, the second data stream to the second open logical erase block;receiving, by the processor, a third data stream; andmixing, by the processor, the third data stream with the first data stream, the second ...

Подробнее
10-03-2016 дата публикации

STORING DATA IN A DISTRIBUTED FILE SYSTEM

Номер: US20160070715A1
Принадлежит:

A device for storing data in a distributed file system, the distributed file system including a plurality of deduplication storage devices, includes a determination unit configured to determine a characteristic of first data to be stored in the distributed file system; an identification unit configured to identify one of the deduplication storage devices of the distributed file system as deduplication storage device for the first data based on the characteristic of the first data; and a storing unit configured to store the first data in the identified deduplication storage device such that the first data and second data being redundant to the first data are deduplicatable within the identified deduplication storage device. 1. A device for storing data in a distributed file system , the distributed file system including a plurality of deduplication storage devices , the device comprising:a determination unit configured to determine a characteristic of first data to be stored in the distributed file system;an identification unit configured to identify one of the deduplication storage devices of the distributed file system as deduplication storage device for the first data based on the characteristic of the first data; anda storing unit configured to store the first data in the identified deduplication storage device such that the first data and second data being redundant to the first data are deduplicatable within the identified deduplication storage device.2. The device of claim 1 , wherein the determination unit is configured to compare metadata of the first data and the second data.3. The device of claim 2 , wherein the metadata includes one or more of a digest and a fingerprint of one or more of the first data and the second data.4. The device of claim 2 , wherein claim 2 , when the result of the comparison is negative claim 2 , the identification unit is configured to identify any deduplication storage device of the plurality of deduplication storage devices as ...

Подробнее
27-02-2020 дата публикации

Adaptive read voltage threshold calibration in non-volatile memory

Номер: US20200066353A1
Принадлежит: International Business Machines Corp

A non-volatile memory includes a plurality of physical pages each assigned to one of a plurality of page groups. A controller of the non-volatile memory performs a first calibration read of a sample physical page of a page group of the non-volatile memory. The controller determines if an error metric observed for the first calibration read of the sample physical page satisfies a calibration threshold. The controller calibrates read voltage thresholds of the page group utilizing a first calibration technique based on a determination that the error metric satisfies the calibration threshold and calibrates read voltage thresholds of the page group utilizing a different second calibration technique based on a determination that the error metric does not satisfy the calibration threshold.

Подробнее
27-02-2020 дата публикации

Error recovery of data in non-volatile memory during read

Номер: US20200066354A1
Принадлежит: International Business Machines Corp

A method of optimizing a read threshold voltage shift value for non-volatile memory units organized as memory pages may be provided. An ECC check is performed for active page reads. The method comprises, as part of the read operation, determining a status of the memory page and reading the memory page with a current threshold voltage shift (TVS) value. Additionally, the method comprises, upon determining that the read memory page command passed the ECC check, returning the corrected data, and upon determining that the read memory page did not pass the ECC check, adjusting the current TVS value based on the status of the memory page to be read. Furthermore, the method comprises, while the read memory page continues to fail the ECC check, repeating the adjustment of the current TVS value and the ECC check until a stop condition is met.
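A toy version of the retry loop: read with the current TVS value, and while the ECC check fails, adjust the value and retry until it passes or a stop condition (here a retry budget) is reached. The callables, the adjustment step, and the retry limit are placeholders, not the patent's specifics.

```python
def read_with_tvs_recovery(read_page, ecc_ok, tvs, adjust, max_retries=8):
    """Read a page with the current threshold-voltage-shift value; while the
    ECC check fails, adjust the TVS value (e.g. based on the page's status)
    and retry until the check passes or the stop condition is met."""
    for _ in range(max_retries):
        data = read_page(tvs)
        if ecc_ok(data):
            return data, tvs            # corrected data plus the TVS that worked
        tvs = adjust(tvs)               # e.g. step towards more negative shifts
    raise IOError("uncorrectable page: stop condition reached")

# Toy usage: the page only decodes once the shift reaches -3.
data, tvs = read_with_tvs_recovery(
    read_page=lambda tvs: b"ok" if tvs <= -3 else b"",
    ecc_ok=lambda d: d == b"ok",
    tvs=0,
    adjust=lambda tvs: tvs - 1,
)
print(data, tvs)    # b'ok' -3
```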

Подробнее
27-02-2020 дата публикации

SELECTIVE PAGE CALIBRATION BASED ON HIERARCHICAL PAGE MAPPING

Номер: US20200066355A1
Принадлежит:

A computer-implemented method, according to one embodiment, includes: detecting that a calibration of a first page group has been triggered, and evaluating a hierarchical page mapping to determine whether the first page group correlates to one or more other page groups in non-volatile memory. In response to determining that the first page group does correlate to one or more other page groups in the non-volatile memory, a determination is made as to whether to promote at least one of the one or more other page groups for calibration. In response to determining to promote at least one of the one or more other page groups for calibration, the first page group and the at least one of the one or more other page groups are calibrated. Moreover, each of the page groups includes one or more pages in non-volatile memory. 1. A computer-implemented method , comprising:detecting that a calibration of a first page group has been triggered;evaluating a hierarchical page mapping to determine whether the first page group correlates to one or more other page groups in non-volatile memory;in response to determining that the first page group does correlate to one or more other page groups in the non-volatile memory, determining whether to promote at least one of the one or more other page groups for calibration; andin response to determining to promote at least one of the one or more other page groups for calibration, calibrating the first page group as well as the at least one of the one or more other page groups,wherein each of the page groups includes one or more pages in non-volatile memory.2. The computer-implemented method of claim 1 , wherein the one or more pages in non-volatile memory included in each of the respective page groups have similar bit error rate characteristics.3. The computer-implemented method of claim 1 , wherein the calibration of the first page group is triggered by a bit error rate of the first page group claim 1 , wherein a number of the one or more other ...

Подробнее
27-02-2020 дата публикации

METHODS FOR READ THRESHOLD VOLTAGE SHIFTING IN NON-VOLATILE MEMORY

Номер: US20200066361A1
Принадлежит:

A method for optimizing a read threshold voltage shift value in a NAND flash memory may be provided. The method comprises selecting a group of memory pages, determining a current threshold voltage shift (TVS) value, and determining a negative and a positive threshold voltage shift offset value. Then, the method comprises repeating a loop process comprising reading all memory pages with different read TVS values, determining maximum raw bit error rates for the group of memory pages, determining a direction of change for the current TVS value, determining a new current TVS value by applying a function to the current TVS value using as parameters the current threshold voltage, the direction of change and the positive and the negative TVS value, until a stop condition is fulfilled such that a lowest possible number of read errors per group of memory pages is reached. 1. (canceled)2. A computer-implemented method for optimizing a read threshold voltage shift value in a NAND flash memory , said method comprisingselecting a group of at least one memory pages, each of said memory pages comprising a plurality of memory cells,determining a current threshold voltage shift value (gTVS),determining a positive threshold voltage shift offset value (Δ1) and a negative threshold voltage shift offset value (Δ2), reading all memory pages in said selected group with read threshold voltage shift values of gTVS, gTVS+Δ1, gTVS+Δ2,', 'determining for each of said read threshold shift values said maximum raw bit error rates for said group of memory pages being read,', 'determining a direction of change for said current threshold voltage shift value using said maximum raw bit error rates obtained from reading said memory pages in said selected group with said read threshold voltage shift values gTVS, gTVS+Δ1, and gTVS+Δ2,', 'determining a new current threshold voltage shift value by applying a function to said current threshold voltage shift value using as parameters said current threshold ...

Подробнее
22-03-2018 дата публикации

DATA DEDUPLICATION WITH REDUCED HASH COMPUTATIONS

Номер: US20180081752A1
Принадлежит:

Techniques for data deduplication in a data storage system include comparing a first attribute of a received data page to first attributes of one or more stored data pages. In response to the first attribute matching one of the first attributes, a second attribute of the received data page is compared to second attributes of the one or more data pages. In response to the second attribute of the received data page matching one of the second attributes, a fingerprint of the received data page is compared to fingerprints of the one or more data pages. In response to the fingerprint of the received data page matching one of the fingerprints, the received data page is discarded and replaced with a reference to the corresponding data page already stored in the storage system. In response to first attribute, the second attribute, or the fingerprint of the received data page not matching, the received data page is stored. 1. A method of data deduplication for a data storage system , comprising:comparing, by a controller, a first attribute of a received data page to one or more corresponding first attributes of one or more data pages stored in the storage system;in response to the first attribute of the received data page not being the same as one or more of the first attributes, storing, by the controller, the received data page in the storage system;in response to the first attribute being the same as one or more of the first attributes, comparing, by the controller, a second attribute of the received data page to one or more corresponding second attributes of the one or more data pages stored in the storage system;in response to the second attribute of the received data page not being the same as one or more of the second attributes, storing, by the controller, the received data page in the storage system;in response to the second attribute of the received data page being the same as one or more of the second attributes, comparing, by the controller, a fingerprint of the ...
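A small sketch of the attribute-cascade idea: cheap first and second attributes are compared before any fingerprint is computed, so most incoming pages never pay for a hash. The choice of attributes (length and first byte), the lazy fingerprinting of stored pages, and the return values are assumptions for the example.

```python
import hashlib

class PageStore:
    """Deduplication with reduced hash computations."""
    def __init__(self):
        self.pages = []   # each page: {"id", "attr1", "attr2", "fp", "data"}

    def put(self, data):
        attr1, attr2 = len(data), data[:1]
        fp = None
        for page in self.pages:
            if page["attr1"] != attr1 or page["attr2"] != attr2:
                continue                                  # cheap mismatch: no hashing
            fp = fp or hashlib.sha256(data).hexdigest()   # hash the incoming page once
            if page["fp"] is None:                        # lazily fingerprint stored page
                page["fp"] = hashlib.sha256(page["data"]).hexdigest()
            if page["fp"] == fp:
                return ("dedup-ref", page["id"])          # duplicate: keep a reference only
        page = {"id": len(self.pages), "attr1": attr1, "attr2": attr2, "fp": fp, "data": data}
        self.pages.append(page)
        return ("stored", page["id"])

store = PageStore()
print(store.put(b"abcd"))   # ('stored', 0) -- no fingerprint computed
print(store.put(b"wxyz"))   # ('stored', 1) -- cheap attributes differ, still no hashing
print(store.put(b"abcd"))   # ('dedup-ref', 0) -- attributes match, fingerprints compared
```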

Подробнее
22-03-2018 дата публикации

LOGICAL TO PHYSICAL TABLE RESTORATION FROM STORED JOURNAL ENTRIES

Номер: US20180081765A1
Принадлежит:

A controller-implemented method, according to one embodiment, includes: examining, by the controller, each of a plurality of journal entries from at least one journal beginning with a most recent one of the journal entries in a most recent one of the at least one journal and working towards an oldest one of the journal entries in an oldest one of the at least one journal, the journal entries corresponding to one or more updates made to one or more logical to physical table (LPT) entries of a LPT; determining, by the controller, whether a current LPT entry, which corresponds to a currently examined journal entry, has already been updated; and discarding, by the controller, the currently examined journal entry in response to determining that the current LPT entry has already been updated. 1. A controller-implemented method , comprising:examining, by the controller, each of a plurality of journal entries from at least one journal beginning with a most recent one of the journal entries in a most recent one of the at least one journal and working towards an oldest one of the journal entries in an oldest one of the at least one journal, the journal entries corresponding to one or more updates made to one or more logical to physical table (LPT) entries of a LPT;determining, by the controller, whether a current LPT entry, which corresponds to a currently examined journal entry, has already been updated; anddiscarding, by the controller, the currently examined journal entry in response to determining that the current LPT entry has already been updated.2. The controller-implemented method of claim 1 , wherein the journal entries include a physical address and a logical address.3. The controller-implemented method of claim 1 , wherein a flag is used to indicate whether each of the LPT entries have already been updated claim 1 , wherein determining whether the current LPT entry has already been updated includes inspecting a flag corresponding to the current LPT entry.4. The ...
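A minimal sketch of the newest-to-oldest journal replay: once a logical address has been restored, any older entry for it is stale and is discarded. Journal representation and names are assumptions for the example.

```python
def restore_lpt(journals):
    """Rebuild a logical-to-physical table by replaying journal entries from
    the newest journal/entry towards the oldest."""
    lpt = {}
    updated = set()                                   # per-entry "already updated" flag
    for journal in reversed(journals):                # newest journal first
        for logical, physical in reversed(journal):   # newest entry first
            if logical in updated:
                continue                              # stale entry: discard
            lpt[logical] = physical
            updated.add(logical)
    return lpt

old = [("L1", "P3"), ("L2", "P5")]
new = [("L1", "P9")]
print(restore_lpt([old, new]))   # {'L1': 'P9', 'L2': 'P5'}
```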

Подробнее
12-03-2020 дата публикации

Addressing page-correlated read issues using intra-block parity

Номер: US20200081661A1
Принадлежит: International Business Machines Corp

A method for intra-block recovery of an Erasure Code protected memory page stripe may be provided. The method comprises providing a data storage device comprising a plurality of EC protected memory page stripes, each of which comprises a plurality of memory pages, wherein corresponding memory pages of the plurality of page stripes are organized as a plurality of blocks, each comprising the corresponding pages, each memory page comprises a plurality of non-volatile memory cells, and each page stripe comprises at least one stripe parity page; grouping memory pages of a block into at least one window, each window comprising a plurality of memory pages of the block; and maintaining at least one parity page for each window of the block, such that a page read failure is recoverable even if multiple memory pages per page stripe experience a read failure concurrently.

Подробнее
12-03-2020 дата публикации

ADDRESSING PAGE-CORRELATED READ ISSUES USING INTRA-BLOCK PARITY

Номер: US20200081831A1
Принадлежит:

A method for intra-block recovery from memory page read failures of memory pages is provided. The method comprises providing a data storage device comprising a plurality of memory pages. Corresponding memory pages are physically organized as a plurality of blocks comprising each the corresponding pages, each memory page comprising a plurality of non-volatile memory cells. The method comprises grouping memory pages of a block into at least one window. Each window comprises a plurality of memory pages of the block. The method further comprises determining a window parity page for each window of the block for a recovery of page read failures of the memory pages of the block, and upon determining that a predefined number of memory pages of the window is written or not yet written, maintaining the determined window parity page as part of the related window of memory pages of the block or not. 1. A method for intra-block recovery from memory page read failures of memory pages , said method comprising:providing a data storage device comprising a plurality of memory pages, wherein the plurality of memory pages are organized as a plurality of blocks, wherein each of said plurality of memory pages comprises a plurality of non-volatile memory cells;grouping a set of memory pages of a block into at least one window, each window comprising a plurality of memory pages of said block, anddetermining a window parity page for each window of said block for a recovery of page read failures of said memory pages of said block,in response to determining that a predefined number of memory pages of said window is not yet written, maintaining said determined window parity page as part of a corresponding window of memory pages of said block, andin response to determining that a predefined number of memory pages of said window is written, refraining from maintaining said determined parity page as part of said corresponding window of memory pages of said block.2. The method according to claim 1 ...

Подробнее
12-03-2020 дата публикации

Calibration of open blocks in nand flash memory

Номер: US20200082878A1
Принадлежит: International Business Machines Corp

Performing a calibration of a NAND flash memory block that is in an open state. An open state of the NAND flash memory block is detected, the NAND flash memory block comprising a plurality of memory pages, each of which comprising a plurality of memory cells. A group of pages of the NAND flash memory block being in an open state having comparable characteristics is identified. A calibration of read voltage values to pages of the group of identified pages is performed.

Подробнее
31-03-2016 дата публикации

REDUCING WRITE AMPLIFICATION IN SOLID-STATE DRIVES BY SEPARATING ALLOCATION OF RELOCATE WRITES FROM USER WRITES

Номер: US20160092352A1
Принадлежит:

In one embodiment, a method includes maintaining a first open logical erase block for user writes, maintaining a second open logical erase block for relocate writes, wherein the first and second open logical erase blocks are different logical erase blocks, receiving a first data stream having the user writes, transferring the first data stream to the first open logical erase block, receiving a second data stream having the relocate writes, and transferring the second data stream to the second open logical erase block. Other systems, methods, and computer program products are described in additional embodiments. 1. A method , comprising:maintaining, by a processor, a first open logical erase block for user writes;maintaining, by the processor, a second open logical erase block for relocate writes, wherein the first and second open logical erase blocks are different logical erase blocks;receiving, by the processor, a first data stream having the user writes;transferring, by the processor, the first data stream to the first open logical erase block;receiving, by the processor, a second data stream having the relocate writes; andtransferring, by the processor, the second data stream to the second open logical erase block.2. The method of claim 1 , further comprising:assigning a first timeout value to the first open logical erase block; andassigning a second timeout value to the second open logical erase block.3. The method of claim 2 , further comprising:reassigning at least one of the open logical erase blocks to a different data stream when the timeout of the open logical erase block expires.4. The method of claim 2 , wherein the first and second timeout values are determined based on a logical erase block write interval.5. The method of claim 4 , wherein the first and second timeout values are between about two and about five times a length of the logical erase block write interval.6. The method of claim 1 , comprising:performing heat segregation on the data streams ...

Подробнее
02-04-2015 дата публикации

PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE

Номер: US20150095561A1

For efficient track destage in secondary storage in a more effective manner, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, a preference of movement to lower speed cache level is implemented based on at least one of an amount of holes and a data heat metric. If a first bit has at least one of a lower amount of holes and a hotter data heat metric, it is moved to the lower speed cache level ahead of a second bit that has at least one of a higher amount of holes and a cooler data heat. If the first bit has a hotter data heat and greater than a predetermined number of holes, the first bit is discarded. 1. A method for promoting partial data segments in a computing storage environment having lower and higher speed levels of cache by a processor , comprising: a first of the partial data segments having at least one of a lower amount of holes and a hotter data heat metric is moved to the lower speed cache level ahead of a second of the partial data segments having at least one of a higher amount of holes and a cooler data heat; and', 'if the first of the partial data segments has a hotter data heat and greater than a predetermined number of holes, the first of the partial data segments is discarded., 'implementing a preference for movement of the partial data segments to the lower speed cache level based on at least one of an amount of holes and a data heat metric, wherein, 'configuring a data moving mechanism adapted for performing2. The method of claim 1 , further including claim 1 , pursuant to configuring the data mover mechanism claim 1 , allowing the partial data segments to remain in the higher speed cache level for a time period longer that at least one whole data segment.3. The method of claim 1 , further including claim 1 , pursuant to configuring the data mover mechanism claim 1 , writing one of the partial data segments to the lower speed cache level as a whole data segment.4. The method ...

Подробнее
30-03-2017 дата публикации

ADAPTIVE ASSIGNMENT OF OPEN LOGICAL ERASE BLOCKS TO DATA STREAMS

Номер: US20170090759A1
Принадлежит:

A computer-implemented method, according to one embodiment, includes: assigning data having a first heat to a first data stream, assigning data having a second heat to a second data stream, determining an anticipated throughput of each of the first and second data streams, assigning a first number of logical erase blocks of non-volatile memory to the first data stream based on the anticipated throughput of the first data stream, and assigning a second number of logical erase blocks of non-volatile memory to the second data stream based on the anticipated throughput of the second data stream. Other systems, methods, and computer program products are described in additional embodiments. 1. A computer-implemented method , comprising:assigning data having a first heat to a first data stream;assigning data having a second heat to a second data stream;determining an anticipated throughput of each of the first and second data streams;assigning a first number of logical erase blocks of non-volatile memory to the first data stream based on the anticipated throughput of the first data stream; andassigning a second number of logical erase blocks of non-volatile memory to the second data stream based on the anticipated throughput of the second data stream.2. The computer-implemented method of claim 1 , wherein the first and second numbers of logical erase blocks assigned to the first and second data streams are proportional to the anticipated throughput thereof.3. The computer-implemented method of claim 1 , wherein the first and/or second number of logical erase blocks are statically assigned to each of the first and/or second data streams.4. The computer-implemented method of claim 1 , wherein the first and/or second number of logical erase blocks assigned to the first and/or second data streams are adjusted dynamically based on a measurement of at least one of: a temporal stream throughput claim 1 , and stream stall events of the first and/or second data streams.5. The ...

More details
30-03-2017 publication date

Detecting error count deviations for non-volatile memory blocks for advanced non-volatile memory block management

Number: US20170091006A1
Assignee: International Business Machines Corp

Non-volatile memory block management. A method according to one embodiment includes calculating an error count margin threshold for each of the at least some non-volatile memory blocks of a plurality of non-volatile memory blocks. A determination is made as to whether the error count margin threshold of any of the at least some of the non-volatile memory blocks has been exceeded. A memory block management function is triggered upon determining that the error count margin threshold of any of the at least some of the non-volatile memory blocks has been exceeded.
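
The triage described above can be pictured as a per-block margin derived from block health, with a management action fired once the observed error count exceeds it. The Python sketch below is purely illustrative; the health-to-margin formula and the correctable_limit parameter are assumptions, not values from the application.

def error_count_margin_threshold(block_health, correctable_limit):
    """Derive a per-block threshold: healthier blocks get a larger margin
    below the ECC correction limit before management is triggered."""
    margin = int(correctable_limit * (0.5 + 0.4 * block_health))  # assumed formula
    return min(margin, correctable_limit - 1)

def check_block(observed_errors, block_health, correctable_limit, on_exceeded):
    threshold = error_count_margin_threshold(block_health, correctable_limit)
    if observed_errors > threshold:
        on_exceeded()  # e.g. set an indicator or schedule a calibration

check_block(observed_errors=52, block_health=0.2, correctable_limit=60,
            on_exceeded=lambda: print("trigger memory block management"))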

More details
26-06-2014 publication date

RELIABILITY SCHEME USING HYBRID SSD/HDD REPLICATION WITH LOG STRUCTURED MANAGEMENT

Number: US20140181383A1

In one embodiment, a method of managing data includes managing a first copy of data in a solid state memory using a controller of the solid state memory, and managing a second copy of the data in a hard disk drive memory using the controller. In another embodiment, a system for storing data includes a solid state memory, at least one hard disk drive memory, and a controller for controlling storage of data in both the solid state memory and the hard disk drive memory. Other methods, systems, and computer program products are also described according to various embodiments. 1. A method of managing data , the method comprising:managing a first copy of data in a solid state memory using a controller of the solid state memory; andmanaging a second copy of the data in a hard disk drive memory using the controller.2. The method as recited in claim 1 , wherein the second copy of the data is managed by the controller in a same way as the first copy is managed by the controller.3. The method as recited in claim 1 , wherein read requests are served by the solid state memory.4. The method as recited in claim 1 , wherein write requests are served substantially simultaneously at both the solid state memory and the hard disk drive memory under control of the controller.5. The method as recited in claim 4 , comprising virtually associating each physical solid state block with a physical chunk of disk space.6. The method as recited in claim 5 , wherein the physical chunk of disk space has a same size as the physical solid state block associated therewith.7. The method as recited in claim 4 , wherein the write requests are served at the hard disk drive memory in an append mode controlled by the controller.8. The method as recited in claim 1 , comprising serving a read request for the data from the hard disk drive memory when the solid state memory fails.9. The method as recited in claim 1 , comprising replicating the data from the solid state memory to a second hard disk drive memory ...
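
A minimal sketch of the hybrid replication idea, assuming an in-memory dictionary stands in for the solid state copy and an append-only list for the log-structured disk copy; reads go to the SSD copy first and fall back to the HDD copy, mirroring the behaviour described above.

class HybridReplicatingController:
    """Illustrative controller keeping one copy in SSD and one in HDD.
    Reads are served from the SSD copy; the HDD copy is the fallback."""

    def __init__(self):
        self.ssd = {}   # stands in for solid state memory
        self.hdd = []   # append-only log standing in for the disk chunk

    def write(self, lba, data):
        # Both copies are updated under control of the same controller;
        # the HDD copy is appended (log-structured / append mode).
        self.ssd[lba] = data
        self.hdd.append((lba, data))

    def read(self, lba):
        if lba in self.ssd:                          # normal path: SSD serves reads
            return self.ssd[lba]
        for entry_lba, data in reversed(self.hdd):   # SSD copy missing or failed
            if entry_lba == lba:
                return data
        raise KeyError(lba)

c = HybridReplicatingController()
c.write(7, b"payload")
print(c.read(7))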

More details
21-04-2016 publication date

DETECTING ERROR COUNT DEVIATIONS FOR NON-VOLATILE MEMORY BLOCKS FOR ADVANCED NON-VOLATILE MEMORY BLOCK MANAGEMENT

Number: US20160110124A1
Assignee:

Non-volatile memory block management. A method according to one embodiment includes determining a block health of at least some non-volatile memory blocks of a plurality of non-volatile memory blocks that are configured to store data. An error count margin threshold is calculated for each of the at least some non-volatile memory blocks. A determination is made as to whether the error count margin threshold of any of the at least some non-volatile memory blocks has been exceeded. A memory block management function is triggered upon determining that the error count margin threshold of any of the non-volatile memory blocks has been exceeded. 1. A system , comprising:a plurality of non-volatile memory blocks configured to store data; and determine a block health of each non-volatile memory block;', 'calculate an error count margin threshold for each non-volatile memory block;', 'determine whether the error count margin threshold of any of the non-volatile memory blocks has been exceeded; and', 'trigger a memory block management function upon determining that the error count margin threshold of any of the non-volatile memory blocks has been exceeded., 'a controller and logic integrated with and/or executable by the controller, the logic being configured to, for at least some of the plurality of non-volatile memory blocks2. A system as recited in claim 1 , wherein the memory block management function includes setting an indicator.3. A system as recited in claim 1 , wherein the memory block management function includes performing an immediate calibration of at least the non-volatile memory blocks having the exceeded threshold.4. A system as recited in claim 1 , wherein the memory block management function includes scheduling a calibration of at least the non-volatile memory blocks having the exceeded threshold claim 1 , wherein the calibration is scheduled to be performed during a subsequent background health check.5. A system as recited in claim 1 , wherein the memory ...

More details
21-04-2016 publication date

STORAGE ARRAY MANAGEMENT EMPLOYING A MERGED BACKGROUND MANAGEMENT PROCESS

Number: US20160110248A1

In at least one embodiment, a controller of a non-volatile memory array iteratively performs a merged background management process independently of any host system's demand requests targeting the memory array. During an iteration of the merged background management process, the controller performs a read sweep by reading data from each of a plurality of page groups within the memory array and recording page group error statistics regarding errors detected by the reading for each page group, where each page group is formed of a respective set of one or more physical pages of storage in the memory array. During the iteration of the merged background management process, the controller employs the page group error statistics recorded during the read sweep in another background management function. 1. A method in a data storage system including a non-volatile memory array controlled by a controller , the method comprising:the controller iteratively performing a merged background management process;during an iteration of the merged background management process, the controller performing a read sweep by reading data from each of a plurality of page groups within the memory array and recording page group error statistics regarding errors detected by the reading for each page group, wherein each page group is formed of a respective set of one or more physical pages of storage in the memory array; andduring the iteration of the merged background management process, the controller employing the page group error statistics recorded during the read sweep in another background management function.2. The method of claim 1 , wherein said another background management function is a page group calibration.3. The method of claim 2 , wherein employing the page group error statistics recorded during the read sweep in another background management function includes determining whether to adjust a read threshold voltage shift for each of the plurality of page groups based at least in ...

More details
29-04-2021 publication date

WORKLOAD BASED RELIEF VALVE ACTIVATION FOR HYBRID CONTROLLER ARCHITECTURES

Number: US20210124488A1
Assignee:

A computer-implemented method, according to one embodiment, includes: maintaining a first subset of the plurality of blocks in a first pool, where the blocks maintained in the first pool are configured in SLC mode. A second subset of the plurality of blocks is maintained in a second pool, where the blocks maintained in the second pool are configured in multi-bit-per-cell mode. A current I/O rate for the memory is identified during runtime, and a determination is made as to whether the current I/O rate is outside a first range. In response to determining that the current I/O rate is not outside the first range, the blocks maintained in the first pool are used to satisfy incoming host writes. Moreover, in response to determining that the current I/O rate is outside the first range, the blocks maintained in the second pool are used to satisfy incoming host writes. 1. A computer-implemented method for managing a plurality of blocks of memory in two or more pools , comprising:maintaining a first subset of the plurality of blocks in a first pool, wherein the blocks maintained in the first pool are configured in single-level cell (SLC) mode;maintaining a second subset of the plurality of blocks in a second pool, wherein the blocks maintained in the second pool are configured in multi-bit-per-cell mode;identifying a current input/output (I/O) rate for the memory during runtime;determining whether the current I/O rate is outside a first predetermined range;in response to determining that the current I/O rate is not outside the first predetermined range, using the blocks maintained in the first pool to satisfy incoming host writes; andin response to determining that the current I/O rate is outside the first predetermined range, using the blocks maintained in the second pool to satisfy incoming host writes.2. The computer-implemented method of claim 1 , comprising:identifying an updated input/output (I/O) rate for the memory while satisfying the incoming host writes; ...
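
The relief-valve decision reduces to a range check on the measured I/O rate. The sketch below is an interpretation only; the pool names and the example range bounds are assumptions.

def choose_pool(current_io_rate, low, high):
    """Pick which block pool absorbs host writes.
    Inside [low, high] the SLC-mode pool is used; outside that range the
    multi-bit-per-cell pool acts as the relief valve (bounds are assumed)."""
    if low <= current_io_rate <= high:
        return "slc_pool"
    return "multi_bit_pool"

for rate in (50, 900, 20_000):
    print(rate, "->", choose_pool(rate, low=100, high=10_000))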

More details
29-04-2021 publication date

SELECTIVELY STORING PARITY DATA IN DIFFERENT TYPES OF MEMORY

Number: US20210124643A1
Assignee:

A computer-implemented method, according to one embodiment, is for selectively storing parity data in different types of memory which include a higher performance memory and a lower performance memory. The computer-implemented method includes: receiving a write request, and determining whether the write request includes parity data. In response to determining that the write request includes parity data, a determination is made as to whether a write heat of the parity data is in a predetermined range. In response to determining that the write heat of the parity data is in the predetermined range, another determination is made as to whether the parity data has been read since a last time the parity data was updated. Furthermore, in response to determining that the parity data has been read since a last time the parity data was updated, the parity data is stored in the higher performance memory.
1. A computer-implemented method for selectively storing parity data in different types of memory, comprising: receiving a write request; determining whether the write request includes parity data; in response to determining that the write request includes parity data, determining whether a write heat of the parity data is in a predetermined range; in response to determining that the write heat of the parity data is in the predetermined range, determining whether the parity data has been read since a last time the parity data was updated; and in response to determining that the parity data has been read since a last time the parity data was updated, storing the parity data in higher performance memory, wherein the different types of memory include the higher performance memory and a lower performance memory.
2. The computer-implemented method of claim 1, comprising: in response to determining that the write heat of the parity data is not in the predetermined range, storing the parity data in the lower performance memory.
3. The computer-implemented method of claim 1, comprising: in ...
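
The placement decision described above can be written as a small predicate. The following sketch is illustrative; heat_range as a (low, high) tuple is an assumed representation of the predetermined range.

def place_parity(write_heat, heat_range, read_since_last_update):
    """Decide where a parity write lands, following the decision flow above:
    only parity that is in the write-heat range and has been read since its
    last update goes to the higher performance memory."""
    low, high = heat_range
    if low <= write_heat <= high and read_since_last_update:
        return "higher_performance_memory"
    return "lower_performance_memory"

print(place_parity(write_heat=7, heat_range=(5, 10), read_since_last_update=True))
print(place_parity(write_heat=2, heat_range=(5, 10), read_since_last_update=True))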

More details
29-04-2021 publication date

CALIBRATING PAGES OF MEMORY USING PARTIAL PAGE READ OPERATIONS

Number: US20210124685A1
Assignee:

A computer-implemented method, according to one embodiment, is for calibrating read voltages for a block of memory. The computer-implemented method includes: determining a calibration read mode of the block, and using the calibration read mode to determine whether pages in the block should be read using full page read operations. In response to determining that the pages in the block should not be read using full page read operations, a current value of a partial page read indicator for the block is determined. The block is further calibrated by reading only a portion of each page in the block, where the current value of the partial page read indicator determines which portion of each respective page in the block is read. Moreover, the current value of the partial page read indicator is incremented. 1. A computer-implemented method for calibrating read voltages for a block of memory , comprising:determining a calibration read mode of the block;using the calibration read mode to determine whether pages in the block should be read using full page read operations;in response to determining that the pages in the block should not be read using full page read operations, determining a current value of a partial page read indicator for the block;calibrating the block by reading only a portion of each page in the block, wherein the current value of the partial page read indicator determines which portion of each respective page in the block is read; andincrementing the current value of the partial page read indicator.2. The computer-implemented method of claim 1 , wherein calibrating the block by reading only a portion of each page in the block includes claim 1 , for each page in the block:using the current value of the partial page read indicator to determine which portion of the given page should be read;reading the determined portion of the given page; anddetermining a read voltage shift value using results of reading the determined portion of the given page.3. The ...
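
A compact sketch of the partial-page calibration loop, under the assumption that each page is split into a fixed number of portions and that a caller-supplied read_portion callback returns an error count for one slice of a page; the suggest_shift policy is a placeholder, not the application's algorithm.

def calibrate_block_partially(block_pages, read_portion, partial_read_indicator,
                              num_portions=4):
    """Calibrate a block by reading only one portion of every page.
    The indicator selects which slice is read and is advanced afterwards,
    so successive calibrations cover different slices of each page."""
    shifts = []
    for page in block_pages:
        errors = read_portion(page, partial_read_indicator)
        shifts.append(suggest_shift(errors))          # per-page read voltage shift
    next_indicator = (partial_read_indicator + 1) % num_portions
    return shifts, next_indicator

def suggest_shift(error_count, budget=40):
    # Toy policy: nudge the read voltage when the slice looks noisy.
    return 0 if error_count < budget else 1

shifts, indicator = calibrate_block_partially(
    block_pages=range(8),
    read_portion=lambda page, portion: (page * 13 + portion) % 90,
    partial_read_indicator=1)
print(shifts, indicator)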

More details
27-04-2017 publication date

STORAGE DEVICE WITH 2D CONFIGURATION OF PHASE CHANGE MEMORY INTEGRATED CIRCUITS

Number: US20170117040A1
Assignee:

A storage device, apparatus, and method to write and/or read data from such storage device. The storage device, comprises a channel controller and phase change memory integrated circuits (PCM ICs) arranged in sub-channels, wherein each of the sub-channels comprises several PCM ICs connected by at least one data bus line, which at least one data bus line connects to the channel controller. The channel controller is configured to write data to and/or read data from the PCM ICs according to a matrix configuration of PCM ICs, wherein: a number of columns of the matrix configuration respectively corresponds to a number of the sub-channels, the sub-channels forming a channel, and a number of rows of the matrix configuration respectively corresponds to a number of sub-banks, the sub-banks forming a bank, wherein each of the sub-banks comprises PCM ICs that belong, each, to distinct sub-channels of the sub-channels. 1. A method for optimizing a storage device , comprising:connecting, by at least one data bus line, phase change memory integrated circuits (PCM ICs) to a channel controller, wherein the PCM ICs are arranged in sub-channels and each of the sub-channels comprises several PCM ICs corresponding to a respective finite state machine within the channel controller; andoptimizing a plurality of the sub-channels in a matrix configuration, based on characteristics of the PCM ICs;wherein the channel controller is configured to write data to and/or read data from the PCM ICs according to the matrix configuration of PCM ICs, wherein a plurality of columns of the matrix configuration respectively corresponds to the plurality of the sub-channels, the plurality of the sub-channels forming a channel.2. The method of claim 1 , wherein optimizing is furthermore carried out based on characteristics of a bus that comprises said at least one data bus line.3. The method of claim 1 , further comprising optimizing a plurality of sub-banks in the matrix configuration claim 1 , based on ...

More details
18-04-2019 publication date

CORRUPT LOGICAL BLOCK ADDRESSING RECOVERY SCHEME

Number: US20190114217A1
Assignee:

Technology for handling page size mismatches when DPL-CLR is performed at multiple levels of a data storage system (for example, RAID level and flash card level). A “corrective DPL” corrects only a portion of the data that would make up a page at the level at which the data is stored (that is, the “initial DPL level”), and, after that, a partially corrected page of data is formed and stored in data storage, with the partially corrected page: (i) having a page size characteristic of the initial DPL; (ii) including the part of the data corrected by the corrective DPL; and (iii) further including other data. In some embodiments, the other data has a pattern that indicates that it is invalid, erroneous data, such that an error message will be returned if this portion of the data is attempted to be read. 1. A computer-implemented method comprising:detecting, by an initial data protection layer, corruption within a first DPL-CLR (data protection layer corrupt logical block addressing recovery) data unit, with the first DPL-CLR data unit being characterized by a first data unit size;correcting, by a corrective data protection layer, a portion of data of the first DPL-CLR data unit to obtain a second DPL-CLR data unit, with the second DPL-CLR data unit being characterized by a second data unit size; andapplying a data replacement strategy.2. The method of wherein the data replacement strategy includes creation and use of a simple marker.3. The method of wherein the data replacement strategy includes creation of an E-page.4. The method of further comprising:assembling a third DPL-CLR data unit based upon the second DPL-CLR data unit, with the second DPL-CLR data unit: (i) having a data unit size equal to the first data unit size, (ii) including corrected data of the second DPL-CLR data unit, and (iii) further including other data to fill out the disparity between the first data unit size characterizing the third DPL-CLR data unit and the second data unit size characterizing ...

More details
17-07-2014 publication date

MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS

Number: US20140201448A1

For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. 1. A method for movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor , comprising:promoting a whole data segment containing one of the partial data segments to both the lower and higher levels of cache, wherein requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache.2. The method of claim 1 , further wherein:unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache, andthe unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes.3. The method of claim 1 , wherein promoting the whole data segment occurs pursuant to a read request for the one of the partial data segments.4. The method of claim 1 , further including claim 1 , previous to promoting the whole data segment claim 1 , determining if the one of the partial data segments should be cached on the lower level of cache.5. The method of claim 4 , wherein determining if the one of the partial data segments should be cached on the lower level of cache includes considering an Input/Output Performance (IOP) metric claim 4 , a bandwidth metric claim 4 , and a garbage collection metric.6. The method of claim 4 , wherein determining if the one of the partial data segments should be cached on the lower level of cache includes considering if the one of the partial data segments is sequential with ...

More details
03-05-2018 publication date

Wear leveling of a memory array

Number: US20180121094A1
Assignee: International Business Machines Corp

In at least one embodiment, a controller of a non-volatile memory array including a plurality of subdivisions stores write data within the non-volatile memory array utilizing a plurality of block stripes of differing numbers of blocks, where all of the blocks within each block stripe are drawn from different ones of the plurality of subdivisions. The controller builds new block stripes for storing write data from blocks selected based on estimated remaining endurances of blocks in each of the plurality of subdivisions.
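
One way to picture stripe building from estimated remaining endurance is sketched below; the per-subdivision (block_id, remaining_endurance) representation and the take-the-healthiest selection are assumptions made for illustration.

def build_block_stripe(subdivisions, stripe_width):
    """Pick one free block from each of up to `stripe_width` different
    subdivisions, preferring blocks with the most estimated remaining
    endurance. Subdivisions with no free blocks are skipped, so stripes
    may end up with differing numbers of blocks."""
    candidates = []
    for sub_id, blocks in subdivisions.items():
        if blocks:
            block_id, endurance = max(blocks, key=lambda b: b[1])
            candidates.append((endurance, sub_id, block_id))
    chosen = sorted(candidates, reverse=True)[:stripe_width]
    return [(sub_id, block_id) for _, sub_id, block_id in chosen]

subs = {0: [(3, 900), (8, 120)], 1: [(1, 450)], 2: [], 3: [(6, 700)]}
print(build_block_stripe(subs, stripe_width=2))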

More details
04-05-2017 publication date

BACKGROUND THRESHOLD VOLTAGE SHIFTING USING BASE AND DELTA THRESHOLD VOLTAGE SHIFT VALUES IN NON-VOLATILE MEMORY

Number: US20170123660A1
Assignee:

In one embodiment, a computer-implemented method includes determining, by a processor, after the writing of data to a non-volatile memory block, one or more delta threshold voltage shift (TVS) values configured to track temporary changes with respect to changes in the underlying threshold voltage distributions due to retention and/or read disturb errors. One or more overall threshold voltage shift values is calculated for the data written to the non-volatile memory block, the one or more overall threshold voltage shift values being a function of the one or more TVS values to be used when writing data to the non-volatile memory block. The one or more overall threshold voltage shift values are stored. 1. A computer-implemented method , comprising:{'sub': 'Δ', 'determining, by a processor, after writing data to a non-volatile memory block, one or more delta threshold voltage shift (TVS) values configured to track temporary changes with respect to changes in the underlying threshold voltage distributions due to retention and/or read disturb errors;'}{'sub': 'Δ', 'calculate one or more overall threshold voltage shift values for the data written to the non-volatile memory block, the one or more overall threshold voltage shift values being a function of the one or more TVS values to be used when writing data to the non-volatile memory block;'}storing the one or more overall threshold voltage shift values;reading one or more TVS values from a non-volatile controller memory;resetting a program/erase cycle count since last calibration after calibrating the one or more overall threshold voltage shift values; and{'sub': 'Δ', 'storing, to the non-volatile controller memory, the one or more TVS values and the program/erase cycle count since last calibration.'}2. The method as recited in claim 1 , comprising resetting the one or more TVS values when the non-volatile memory block is erased.3. The method as recited in claim 1 , comprising applying the one or more overall threshold ...

More details
16-04-2020 publication date

REDUCING BLOCK CALIBRATION OVERHEAD USING READ ERROR TRIAGE

Number: US20200117527A1

A computer-implemented method, according to one embodiment, includes: detecting that an error count resulting from reading a first page in a block of storage space in memory is above a first threshold, and reading a second page in the block of storage space. The second page is one which had a highest error count of the pages in the block of storage space following a last calibration of the block of storage space. Moreover, a determination is made as to whether an error count resulting from reading the second page is above the first threshold. In response to determining that the error count resulting from reading the second page is above the first threshold, the block of storage space is calibrated. Other systems, methods, and computer program products are described in additional embodiments. 1. A computer-implemented method , comprising:detecting that an error count resulting from reading a first page in a block of storage space in memory is above a first threshold;reading a second page in the block of storage space, wherein the second page had a highest error count of the pages in the block of storage space following a last calibration of the block of storage space;determining whether an error count resulting from reading the second page is above the first threshold; andcalibrating the block of storage space in response to determining that the error count resulting from reading the second page is above the first threshold.2. The computer-implemented method of claim 1 , comprising:in response to detecting that the error count resulting from reading the first page in the block of storage space in memory is above the first threshold, determining whether a predetermined amount of time has passed since the first page was read;in response to determining that the predetermined amount of time has passed since the first page was read, re-reading the first page;determining whether an error count resulting from re-reading the first page is above the first threshold; ...
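
The two-step triage can be expressed as a short guard around the expensive calibration, as in the sketch below; read_page and calibrate are hypothetical callbacks standing in for the controller's own routines.

def maybe_calibrate(read_page, first_page, worst_page_since_last_cal,
                    threshold, calibrate):
    """Triage flow sketched above: only if both the page that just exceeded
    the threshold and the historically worst page of the block exceed it is
    a (comparatively expensive) block calibration performed."""
    if read_page(first_page) <= threshold:
        return False
    if read_page(worst_page_since_last_cal) > threshold:
        calibrate()
        return True
    return False

errors = {0: 75, 13: 82}
maybe_calibrate(read_page=lambda p: errors.get(p, 0),
                first_page=0, worst_page_since_last_cal=13,
                threshold=60, calibrate=lambda: print("calibrating block"))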

More details
27-05-2021 publication date

DATA PLACEMENT IN WRITE CACHE ARCHITECTURE SUPPORTING READ HEAT DATA SEPARATION

Number: US20210157735A1
Assignee:

A computer-implemented method, according to one approach, includes: determining a current read heat value of each logical page which corresponds to write requests that have accumulated in a destage buffer. Each of the write requests is assigned to a respective write queue based on the current read heat value of each logical page which corresponds to the write requests. Moreover, each of the write queues correspond to a different page stripe which includes physical pages, the physical pages included in each of the respective page stripes being of a same type. Other systems, methods, and computer program products are described in additional approaches. 1. A computer-implemented method , comprising:determining a current read heat value of each logical page which corresponds to write requests that have accumulated in a destage buffer; andassigning each of the write requests to a respective write queue based on the current read heat value of each logical page which corresponds to the write requests, wherein each of the write queues correspond to a different page stripe which includes physical pages, wherein the physical pages included in each of the respective page stripes are of a same type.2. The computer-implemented method of claim 1 , comprising:determining whether a given write queue includes enough data in the respective write requests to fill a next page stripe which corresponds thereto;in response to determining the given write queue does not include enough data in the respective write requests to fill the next page stripe which corresponds thereto, determining whether an adjacent write queue includes enough data in the respective write requests to complete filling the next page stripe which corresponds to the given write queue; anddestaging the data in the write requests from the given write queue and the adjacent write queue to the next page stripe which corresponds to the given write queue in response to determining that the adjacent write queue includes ...
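
A minimal sketch of the read-heat separation step: buffered writes are binned by the current read heat of their logical pages, and each bin would then be destaged to a page stripe built from one physical page type. The bin count and the dictionary-based buffer format are assumptions.

from collections import defaultdict

def assign_to_write_queues(destage_buffer, read_heat_of, num_heat_bins=4):
    """Group buffered write requests by the current read heat of their
    logical pages; each resulting queue feeds one page stripe whose
    physical pages are all of the same type."""
    queues = defaultdict(list)
    for write in destage_buffer:
        heat_bin = min(read_heat_of(write["lpn"]), num_heat_bins - 1)
        queues[heat_bin].append(write)
    return queues

buffer = [{"lpn": 10, "data": b"a"}, {"lpn": 11, "data": b"b"}, {"lpn": 42, "data": b"c"}]
heat = {10: 3, 11: 0, 42: 3}
print({k: [w["lpn"] for w in v]
       for k, v in assign_to_write_queues(buffer, heat.get).items()})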

More details
12-05-2016 publication date

NON-VOLATILE MEMORY DATA STORAGE WITH LOW READ AMPLIFICATION

Number: US20160132392A1
Assignee:

In one embodiment, an apparatus includes one or more memory devices, each memory device having non-volatile memory configured to store data, and a memory controller connected to the one or more memory devices, the memory controller being configured to receive data to be stored to the one or more memory devices, store read-hot data within one error correction code (ECC) codeword as aligned data, and store read-cold data to straddle two or more ECC codewords as non-aligned data and/or dispersed data. According to another embodiment, a method for storing data to non-volatile memory includes receiving data to store to one or more memory devices, each memory device including non-volatile memory configured to store data, storing read-hot data within one ECC codeword as aligned data, and storing read-cold data to straddle two or more ECC codewords as non-aligned data and/or dispersed data. 1. An apparatus , comprising:one or more memory devices, each memory device comprising non-volatile memory configured to store data; and receive data to be stored to the one or more memory devices;', 'store read-hot data within one error correction code (ECC) codeword as aligned data; and', 'store read-cold data to straddle two or more ECC codewords as non-aligned data and/or dispersed data., 'a memory controller connected to the one or more memory devices, the memory controller being configured to2. The apparatus as recited in claim 1 , wherein the memory controller utilizes a skewed packing scheme to store the data to the one or more memory devices claim 1 , the skewed packing scheme comprising:primarily storing aligned data within a plurality of ECC codewords; andstoring dispersed data into remaining space of a plurality of ECC codewords,wherein dispersed data comprises read-cold data.3. The apparatus as recited in claim 2 , wherein the memory controller is further configured to preferentially store metadata associated with user data as the dispersed data and read-hot user data as the ...

More details
19-05-2016 publication date

BACKGROUND THRESHOLD VOLTAGE SHIFTING USING BASE AND DELTA THRESHOLD VOLTAGE SHIFT VALUES IN NON-VOLATILE MEMORY

Number: US20160141048A1
Assignee:

In one embodiment, a computer-implemented method includes determining, by a processor, after the writing of data to a non-volatile memory block, one or more delta threshold voltage shift (TVS) values configured to track temporary changes with respect to changes in the underlying threshold voltage distributions due to retention and/or read disturb errors. One or more overall threshold voltage shift values is calculated for the data written to the non-volatile memory block, the one or more overall threshold voltage shift values being a function of the one or more TVS values to be used when writing data to the non-volatile memory block. The one or more overall threshold voltage shift values are stored. 1. A computer-implemented method , comprising:{'sub': 'Δ', 'determining, by a processor, after writing data to a non-volatile memory block, one or more delta threshold voltage shift (TVS) values configured to track temporary changes with respect to changes in the underlying threshold voltage distributions due to retention and/or read disturb errors;'}{'sub': 'Δ', 'calculate one or more overall threshold voltage shift values for the data written to the non-volatile memory block, the one or more overall threshold voltage shift values being a function of the one or more TVS values to be used when writing data to the non-volatile memory block; and'}storing the one or more overall threshold voltage shift values.2. The method as recited in claim 1 , comprising resetting the one or more TVS values when the non-volatile memory block is erased.3. The method as recited in claim 1 , comprising applying the one or more overall threshold voltage shift values to a read operation of the data stored to the non-volatile memory block upon receiving a read request.4. The method as recited in claim 1 , comprising:reading one or more TVS values from a non-volatile controller memory;resetting a program/erase cycle count since last calibration after calibrating the one or more overall ...

More details
18-05-2017 publication date

SELECTIVELY DE-STRADDLING DATA PAGES IN NON-VOLATILE MEMORY

Number: US20170139768A1
Assignee:

An apparatus, according to one embodiment, includes: one or more memory devices, each memory device comprising non-volatile memory configured to store data, and a memory controller connected to the one or more memory devices. The memory controller is configured to: detect at least one read of a logical page straddled across codewords, store an indication of a number of detected reads of the straddled logical page, and relocate the straddled logical page to a different physical location in response to the number of detected reads of the straddled logical page, wherein the logical page is written to the different physical location in a non-straddled manner. Other systems, methods, and computer program products are described in additional embodiments. 1. An apparatus , comprising:one or more memory devices, each memory device comprising non-volatile memory configured to store data; and detect at least one read of a logical page straddled across codewords;', 'store an indication of a number of detected reads of the straddled logical page; and', 'relocate the straddled logical page to a different physical location in response to the number of detected reads of the straddled logical page, wherein the logical page is written to the different physical location in a non-straddled manner., 'a memory controller connected to the one or more memory devices, the memory controller being configured to2. The apparatus as recited in claim 1 , wherein the memory controller is configured to detect reading of the logical page straddling across multiple codewords in a same physical page.3. The apparatus as recited in claim 1 , wherein the memory controller is configured to detect reading of the logical page straddling across multiple physical pages.4. The apparatus as recited in claim 1 , wherein the memory controller is configured to store the indication of the number of detected reads by increment a straddled page read counter in response to detecting each read of the straddled logical ...

More details
18-05-2017 publication date

Logical to physical table restoration from stored journal entries

Number: US20170139781A1
Assignee: International Business Machines Corp

A controller-implemented method, according to one embodiment, includes: restoring a valid snapshot of a LPT from the non-volatile random access memory, examining each journal entry from at least one journal beginning with a most recent one of the journal entries in a most recent one of the at least one journal and working towards an oldest one of the journal entries in an oldest one of the at least one journal, the journal entries corresponding to updates made to one or more entries of the LPT, determining whether a current LPT entry which corresponds to a currently examined journal entry has already been updated, using the currently examined journal entry to update the current LPT entry in response to determining that the current LPT entry has not already been updated, and discarding the currently examined journal entry in response to determining that the current LPT entry has already been updated.
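
The restoration order described above (newest journal entry wins, older updates to the same entry are discarded) can be sketched as follows; the dictionary-based LPT and journal layout are simplifications, not the on-flash format.

def restore_lpt(snapshot, journals):
    """Rebuild a logical-to-physical table: start from the last valid snapshot,
    then walk journal entries newest-first and keep only the first (newest)
    update seen for each logical address, discarding older ones."""
    lpt = dict(snapshot)
    updated = set()
    for journal in reversed(journals):              # most recent journal first
        for lba, ppa in reversed(journal):          # most recent entry first
            if lba in updated:
                continue                            # newer update already applied
            lpt[lba] = ppa
            updated.add(lba)
    return lpt

snapshot = {1: "P10", 2: "P11"}
journals = [[(1, "P20")], [(2, "P30"), (1, "P31")]]
print(restore_lpt(snapshot, journals))   # {1: 'P31', 2: 'P30'}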

More details
04-06-2015 publication date

PAGE RETIREMENT IN A NAND FLASH MEMORY SYSTEM

Number: US20150154061A1

In a data storage system including a non-volatile random access memory (NVRAM) array, a page is a smallest granularity of the NVRAM array that can be accessed by read and write operations, and a memory block containing multiple pages is a smallest granularity of the NVRAM array that can be erased. Data are stored in the NVRAM array in page stripes distributed across multiple memory blocks. In response to detection of an error in a particular page of a particular block of the NVRAM array, only the particular page of the particular block is retired, such that at least two of the multiple memory blocks across which a particular one of the page stripes is distributed include differing numbers of active (non-retired) pages. 1. A method of page retirement in a data storage system including a non-volatile random access memory (NVRAM) array , the method comprising:storing data in the NVRAM array in page stripes distributed across multiple memory blocks, wherein at least two of the multiple memory blocks across which a particular one of the page stripes is distributed include differing numbers of active pages, wherein a page is a smallest granularity that can be accessed in the NVRAM array and a memory block containing multiple pages is a smallest granularity that can be erased in the NVRAM array;detecting an error in a particular page of a particular block of the NVRAM array; andin response to detecting the error, retiring only the particular page of the particular block.2. The method of claim 1 , wherein at least two of the multiple page stripes are distributed across differing numbers of blocks.3. The method of claim 1 , wherein the detecting includes detecting a mismatch between a code computed for the particular page and a code stored in the NVRAM array in association with the particular page.4. The method of claim 1 , and further comprising:thereafter, retiring a physical memory region in the NVRAM array containing the particular page and multiple other pages in ...

More details
16-05-2019 publication date

Background threshold voltage shifting using base and delta threshold voltage shift values in non-volatile memory

Number: US20190146671A1
Assignee: International Business Machines Corp

A computer-implemented method according to one embodiment includes determining, after writing data to a non-volatile memory block, one or more delta threshold voltage shift (TVSΔ) values. One or more overall threshold voltage shift values for the data written to the non-volatile memory block are calculated, the values being a function of the one or more TVSΔ values to be used when writing data to the non-volatile memory block. The overall threshold voltage shift values are stored. A base threshold voltage shift (TVSBASE) value, the one or more TVSΔ values, or both the TVSBASE value and the one or more TVSΔ values are re-calibrated during a background health check after a predetermined number of background health checks without calibration are performed.

More details
09-06-2016 publication date

PAGE RETIREMENT IN A NAND FLASH MEMORY SYSTEM

Number: US20160162196A1
Assignee:

In a data storage system including a non-volatile random access memory (NVRAM) array, a page is a smallest granularity of the NVRAM array that can be accessed by read and write operations, and a memory block containing multiple pages is a smallest granularity of the NVRAM array that can be erased. Data are stored in the NVRAM array in page stripes distributed across multiple memory blocks. In response to detection of an error in a particular page of a particular block of the NVRAM array, only the particular page of the particular block is retired, such that at least two of the multiple memory blocks across which a particular one of the page stripes is distributed include differing numbers of active (non-retired) pages. 1. A method of page retirement in a data storage system including a non-volatile random access memory (NVRAM) array , the method comprising:storing data in the NVRAM array in page stripes distributed across multiple memory blocks, wherein at least two of the multiple memory blocks across which one of the page stripes is distributed include differing numbers of active physical pages, wherein a physical page is a smallest granularity that can be accessed in the NVRAM array and a memory block containing multiple physical pages is a smallest granularity that can be erased in the NVRAM array;detecting an error in a particular physical page of a particular block of the NVRAM array;in response to detecting the error, retiring only the particular physical page of the particular block and recording retirement of the particular physical page in a page status data structure;thereafter, allocating a plurality of page stripes across a group of memory blocks including the particular block such that each of the plurality of page stripes is formed at a respective one of a plurality of different physical page indices, wherein the allocating includes skipping the particular page in the particular block when allocating a first page stripe at a first physical page index ...

More details
08-06-2017 publication date

EFFICIENT MANAGEMENT OF PAGE RETIREMENT IN NON-VOLATILE MEMORY UTILIZING PAGE RETIREMENT CLASSES

Number: US20170160960A1
Assignee:

In at least one embodiment, a non-volatile memory array including a plurality of blocks each including a plurality of physical pages is controlled by a controller. The controller implements a plurality of nested page retirement classes each defined by a respective one of a plurality of different nested subsets of page indices of physical pages within the plurality of blocks that are to be considered retired from use. For each block among the plurality of blocks, the controller updating an indication of a page retirement class to which the block belongs in response to detection of a retirement-causing error in a data page stored in a physical page of the block. The controller forms block stripes for storing data from the plurality of blocks based on the page retirement classes of the blocks. 1. A method in a data storage system including a non-volatile memory array controlled by a controller , wherein the non-volatile memory array includes a plurality of blocks each including a plurality of physical pages , the method comprising:the controller implementing a plurality of nested page retirement classes each defined by a respective one of a plurality of different nested subsets of page indices of physical pages within the plurality of blocks that are to be considered retired from use;for each block among the plurality of blocks, the controller updating an indication of a page retirement class to which the block belongs in response to detection of a retirement-causing error in a data page stored in a physical page of the block; andthe controller forming block stripes for storing data from selected ones of the plurality of blocks identified in ready-to-use queues based on the page retirement classes of the blocks.2. The method of claim 1 , wherein the updating includes imprecisely retiring physical pages of the plurality of blocks based on page retirement class.3. The method of claim 2 , wherein:for at least a particular block among the plurality of blocks, the updating ...

More details
18-06-2015 publication date

METHOD AND DEVICE FOR MANAGING A MEMORY

Number: US20150169237A1
Assignee:

A method for managing a memory is disclosed, the memory including a set of units and a unit comprising a set of pages, wherein a unit of the set of units is erasable as a whole by a unit reclaiming process resulting in a free unit available for writing data to. The method includes maintaining a first pool of units available for reclamation by the unit reclaiming process; maintaining a second pool of units not available for reclamation by the unit reclaiming process; moving a first unit from the first pool to the second pool in response to invalidating a first one of the pages contained in the first unit; returning the first unit from the second pool to the first pool after a defined number of units of the set have been written; and selecting a unit out of the first pool for reclamation by the unit reclaiming process. 1. A method for managing a memory , the memory comprising a set of units and a unit comprising a set of pages , wherein a unit of the set of units is erasable as a whole by a unit reclaiming process resulting in a free unit available for writing data to , and wherein data updates are performed by writing data updates out-of-place , wherein data updates to outdated data are written to a page different from a page containing the outdated data , and wherein the page containing the outdated data is invalid , while a page containing up-to-date data is a valid page , the method comprising:maintaining a first pool of units available for reclamation by the unit reclaiming process;maintaining a second pool of units not available for reclamation by the unit reclaiming process;moving a first unit from the first pool to the second pool in response to invalidating a first one of the pages contained in the first unit;returning the first unit from the second pool to the first pool after a defined number of units of the set have been written; andselecting a unit out of the first pool for reclamation by the unit reclaiming process.2. The method of claim 1 , wherein a ...

More details
16-06-2016 publication date

NON-VOLATILE MEMORY SYSTEM HAVING AN INCREASED EFFECTIVE NUMBER OF SUPPORTED HEAT LEVELS

Number: US20160170870A1
Assignee:

A method, according to one embodiment, includes assigning data having a first heat to a first data stream, assigning data having a second heat to a second data stream, and writing the data streams in parallel to page-stripes having a same index across a series of planes of memory. Other systems, methods, and computer program products are described in additional embodiments.
1. A method, comprising: assigning data having a first heat to a first data stream; assigning data having a second heat to a second data stream; and writing the data streams in parallel to page-stripes having a same index across a series of planes of memory.
2. The method of claim 1, wherein a junction is defined in a page-stripe of a plane where the data of the data streams meet.
3. The method of claim 1, wherein the writing of each data stream begins at opposite ends of the series of planes, the writing of the streams being towards one another.
4. The method of claim 1, wherein the writing of each data stream begins at a starting position between ends of the series of planes.
5. The method of claim 4, wherein each data stream is written from the starting position towards a respective end of the series of planes.
6. The method of claim 4, wherein the starting position is advanced after at least some writing is performed.
7. The method of claim 1, wherein the first and second heats are more similar to each other than a third heat with which data is associated.
8. The method of claim 1, wherein the memory is in a write cache.
9. The method of claim 1, wherein the type of memory is non-volatile memory.
10. A system, comprising: a memory; and a controller configured to assign data having a first heat to a first data stream, assign data having a second heat to a second data stream, and write the data streams in parallel to page-stripes having a same index across a series of planes of the memory.
11. The system of claim 10, wherein a junction is defined in a page-stripe of a plane where ...

More details
14-06-2018 publication date

Adaptive health grading for a non-volatile memory

Number: US20180165021A1
Assignee: International Business Machines Corp

A data storage system includes a controller that controls a non-volatile memory array including a plurality of blocks. The controller assigns blocks to a plurality of different health grades. The controller maintains a plurality of ready-to-use queues identifying blocks that do not currently hold valid data and are ready for use for data storage. Each of the ready-to-use queues is associated with a respective one of the health grades. The controller monitors fill levels in the ready-to-use queues, and based on the monitoring, adjusts at least one health grade block distribution for the plurality of blocks. Based on the adjustment of the at least one health grade block distribution, the controller thereafter re-grades blocks and assigns blocks to the plurality of ready-to-use queues in accordance with the at least one health grade block distribution that was adjusted, such that distribution of blocks within the plurality of ready-to-use queues is improved.

More details
14-06-2018 publication date

Health-aware garbage collection in a memory system

Number: US20180165022A1
Assignee: International Business Machines Corp

A data storage system includes a controller that controls a non-volatile memory array including a plurality of garbage collection units of physical memory. For each of the plurality of garbage collections units storing valid data, the controller determines an invalidation metric and a health-based adjustment of the invalidation metric. The controller selects a garbage collection unit on which to perform garbage collection from among a plurality of garbage collections units predominately based on the invalidation metric for the garbage collection unit and also based on the health-based adjustment for the garbage collection unit. In response to selection of the garbage collection unit, the controller performing garbage collection for the garbage collection unit.
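
The selection rule, invalidation metric first with a health-based nudge, might look like the following sketch; the 0.1 weighting constant is an assumed tuning parameter, not a value from the application.

def pick_gc_unit(units):
    """Choose a garbage collection unit mainly by how much invalid data it
    holds, nudged by a health-based adjustment so worn units are relieved
    earlier. `units` maps unit id -> (invalid_fraction, health_adjustment)."""
    def score(item):
        _, (invalid_fraction, health_adjustment) = item
        return invalid_fraction + 0.1 * health_adjustment
    unit_id, _ = max(units.items(), key=score)
    return unit_id

# u1 wins over u0 despite slightly less invalid data, because it is less healthy.
print(pick_gc_unit({"u0": (0.62, 0.0), "u1": (0.60, 0.5), "u2": (0.30, 1.0)}))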

More details
30-05-2019 publication date

PAGE RETIREMENT IN A NAND FLASH MEMORY SYSTEM

Number: US20190163592A1
Assignee:

In a data storage system including a non-volatile random access memory (NVRAM) array, a page is a smallest granularity of the NVRAM array that can be accessed by read and write operations, and a memory block containing multiple pages is a smallest granularity of the NVRAM array that can be erased. Data are stored in the NVRAM array in page stripes distributed across multiple memory blocks. In response to detection of an error in a particular page of a particular block of the NVRAM array, only the particular page of the particular block is retired, such that at least two of the multiple memory blocks across which a particular one of the page stripes is distributed include differing numbers of active (non-retired) pages. 1. A method of page retirement in a data storage system including a non-volatile random access memory (NVRAM) array , the method comprising:storing data in the NVRAM array in page stripes distributed across multiple memory blocks, wherein at least two of the multiple memory blocks across which one of the page stripes is distributed include differing numbers of active physical pages, wherein a physical page is a smallest granularity that can be accessed in the NVRAM array and a memory block containing multiple physical pages is a smallest granularity that can be erased in the NVRAM array;in response to detection of an error, retiring only a particular physical page of a particular block in which the error occurred and recording retirement of the particular physical page in a page status data structure; andthereafter, allocating a plurality of page stripes across a group of memory blocks including the particular block such that each of the plurality of page stripes is formed at a respective one of a plurality of different physical page indices, wherein the allocating includes skipping the particular page in the particular block when allocating a first page stripe at a first physical page index based on the page status data structure indicating the ...

More details
30-05-2019 publication date

TECHNIQUES FOR IMPROVING DEDUPLICATION EFFICIENCY IN A STORAGE SYSTEM WITH MULTIPLE STORAGE NODES

Number: US20190163764A1
Assignee:

Techniques for selecting a storage node of a storage system to store data include applying a first function to at least some data chunks of an extent to provide respective first values for each of the at least some data chunks. A storage node, included within multiple storage nodes of a storage system, is selected to store the extent based on a majority vote derived from the respective first values. 1. A method of selecting a storage node of a storage system to store data , comprising:applying, by a mapping node, a first function to at least some data chunks of an extent to provide respective first values for each of the at least some data chunks;selecting, by the mapping node, a storage node, included within multiple storage nodes of a storage system, to store the extent based on a majority vote derived from the respective first values; andperforming, by the storage system, deduplication of the data chunks of the extent in the selected storage node using fingerprints computed on each of the data chunks, wherein one or more of the data chunks within the extent do not include useful data for a vote and the one or more of the data chunks within the extent that do not include useful data for the vote are not utilized in selecting the storage node to store the extent.2. The method of claim 1 , wherein each of the first values are mapped to one of the storage nodes and the selecting is based on the majority vote of the mapped first values.3. The method of claim 1 , wherein the first values are dynamically calculated upon processing the extent and not stored persistently.4. The method of claim 1 , wherein at least some of the data chunks within the extent are not updated and the method further comprisesloading data for the data chunks within the extent that are not updated prior to the applying;applying a second function to the respective first values to generate respective storage node votes for each of the data chunks; andselecting the storage node to store the extent ...
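
A toy version of the majority vote: each useful chunk is mapped to a candidate node by a first function (a hash is used here as an illustrative stand-in), chunks without useful data do not vote, and the extent lands on the node with the most votes.

from collections import Counter
import hashlib

def select_storage_node(extent_chunks, num_nodes):
    """Apply a first function to each useful chunk of an extent to obtain a
    candidate node, then place the whole extent on the majority-vote winner."""
    votes = Counter()
    for chunk in extent_chunks:
        if not chunk:                      # chunks without useful data don't vote
            continue
        digest = hashlib.sha1(chunk).digest()
        votes[digest[0] % num_nodes] += 1
    node, _ = votes.most_common(1)[0]
    return node

extent = [b"alpha", b"beta", b"", b"alpha", b"gamma"]
print(select_storage_node(extent, num_nodes=4))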

More details
25-06-2015 publication date

EXTENDING USEFUL LIFE OF A NON-VOLATILE MEMORY BY HEALTH GRADING

Number: US20150177995A1

In at least one embodiment, a controller of a non-volatile memory array determines, for each of a plurality of regions of physical memory in the memory array, an associated health grade among a plurality of health grades and records the associated health grade. The controller also establishes a mapping between access heat and the plurality of health grades. In response to a write request specifying an address, the controller selects a region of physical memory to service the write request from a pool of available regions of physical memory based on an access heat of the address and the mapping and writes data specified by the write request to the selected region of physical memory. 1. A method in a data storage system including a non-volatile memory array controlled by a controller , the method comprising:for each of a plurality of regions of physical memory in the memory array, the controller determining an associated health grade among a plurality of health grades and recording the associated health grade;the controller establishing a mapping between access heat and the plurality of health grades;in response to a write request specifying an address, the controller selecting a region of physical memory to service the write request from a pool of available regions of physical memory based on an access heat of the address and the mapping; andthe controller writing data specified by the write request to the selected region of physical memory.2. The method of claim 1 , wherein the determining includes determining the associated health grade for each of the plurality of regions of physical memory based on at least one error metric for each of the plurality of regions of physical memory.3. The method of claim 2 , and further comprising changing which health grade is associated with a particular region of physical memory based on a change of the at least one error metric for the particular region of physical memory.4. The method of claim 2 , wherein the determining ...
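
The heat-to-health-grade mapping and pool selection described above might be sketched as follows; the fallback to the nearest non-empty grade is an assumption added for completeness, not part of the described method.

def select_region(address_heat, pools_by_grade, heat_to_grade):
    """Serve a write from the pool whose health grade the address's access
    heat maps to: hot data goes to healthy regions, cold data to weaker ones.
    Falls back to the nearest non-empty grade if the preferred pool is empty."""
    preferred = heat_to_grade[min(address_heat, len(heat_to_grade) - 1)]
    for grade in sorted(pools_by_grade, key=lambda g: abs(g - preferred)):
        if pools_by_grade[grade]:
            return pools_by_grade[grade].pop()
    raise RuntimeError("no free regions")

pools = {0: ["r5", "r6"], 1: ["r9"], 2: []}      # 0 = healthiest grade
print(select_region(address_heat=3, pools_by_grade=pools, heat_to_grade=[2, 1, 0, 0]))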

More details
23-06-2016 publication date

Cooperative data deduplication in a solid state storage array

Number: US20160179395A1
Assignee: International Business Machines Corp

Deduplication of data on a set of non-volatile memory by performing the following operations: receiving a first dataset; determining whether the first dataset is already present in data written to a first set of non-volatile memory; and on condition that the first dataset is determined to have already been present in the data written to the first set of non-volatile memory, providing a linking mechanism to associate the received first dataset with the already present data written to the first set of non-volatile memory.

More details
23-06-2016 publication date

Two-Level Hierarchical Log Structured Array Architecture Using Coordinated Garbage Collection for Flash Arrays

Number: US20160179398A1
Assignee:

A mechanism is provided in an array controller of a two-level hierarchical log structured array architecture for a non-volatile memory array for coordinated garbage collection. The two-level hierarchical log structured array (LSA) architecture comprises an array-level LSA in the array controller and a node-level LSA in each node of the non-volatile memory array. The array controller maintains host logical block address (LBA) to node LBA mapping in an array controller connected to a plurality of nodes. A host data processing system issues access requests to host LBA. The mapping maps the host LBA space to a node LBA space of a plurality of nodes. The mechanism makes overprovisioned space in the node LBA space of the plurality of nodes available to the array-level LSA. The mechanism adds additional overprovisioned space at each node LBA space. The array controller initiates array-level garbage collection at the array-level LSA. 1. A method , in an array controller of a two-level hierarchical log structured array architecture for a non-volatile memory array , wherein the two-level hierarchical log structured array (LSA) architecture comprises an array-level LSA in the array controller and a node-level LSA in each node of the non-volatile memory array , for coordinated garbage collection , the method comprising:maintaining host logical block address (LBA) to node LBA mapping in an array controller connected to a plurality of nodes, wherein a host data processing system issues access requests to host LBA and wherein the mapping maps the host LBA space to a node LBA space of a plurality of nodes;making overprovisioned space in the node LBA space of the plurality of nodes available to the array-level LSA;adding additional overprovisioned space at each node LBA space; andinitiating array-level garbage collection at the array-level LSA.2. The method of claim 1 , wherein the additional overprovisioned space at each node LBA space is not visible in the host LBA space.3. The ...

Подробнее
23-06-2016 дата публикации

Two-Level Hierarchical Log Structured Array Architecture with Minimized Write Amplification

Номер: US20160179410A1
Принадлежит:

A mechanism is provided for coordinated garbage collection in an array controller of a two-level hierarchical log structured array architecture for a non-volatile memory array. The two-level hierarchical log structured array (LSA) architecture comprises an array-level LSA in the array controller and a node-level LSA in each node of the non-volatile memory array. The array controller writes logical pages of data to containers in memory of the array-level storage controller at node logical block addresses in an array-level LSA. The array-level LSA maps the host logical block addresses to node logical block addresses in a node-level LSA in a plurality of nodes. Responsive to initiating array-level garbage collection in the array controller, the mechanism identifies a first container to reclaim according to a predetermined garbage collection policy. Responsive to determining the first container has at least a first valid logical page of data, the mechanism moves the first valid logical page of data to a location assigned to the same node in a target container in the memory of the array-level storage controller, remaps the first valid logical page of data in a corresponding node, and reclaims the first container.

1. A method for coordinated garbage collection in an array controller of a two-level hierarchical log structured array architecture for a non-volatile memory array, wherein the two-level hierarchical log structured array (LSA) architecture comprises an array-level LSA in the array controller and a node-level LSA in each node of the non-volatile memory array, the method comprising: writing, by the array controller, logical pages of data to containers in memory of the array-level storage controller at node logical block addresses in an array-level LSA; mapping the host logical block addresses to node logical block addresses in a node-level LSA in a plurality of nodes; responsive to initiating array-level garbage collection in the array controller, identifying a ...
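A condensed sketch of the array-level collection step follows. The greedy "fewest valid pages" victim policy and the container layout are assumptions (the publication only requires some predetermined policy); what matters is that surviving pages stay associated with their node, so only the node-level map needs remapping.

```python
from collections import defaultdict

# A container holds (node, logical_page) slots; None marks invalidated data.
containers = {
    "c0": [("n0", "p0"), None, ("n1", "p3"), None],
    "c1": [("n0", "p7"), ("n1", "p8"), ("n1", "p9"), ("n0", "p2")],
}

def collect(containers, target):
    # Assumed policy: reclaim the container with the fewest valid pages.
    victim = min((c for c in containers if c != target),
                 key=lambda c: sum(e is not None for e in containers[c]))
    moved = defaultdict(list)
    for entry in containers[victim]:
        if entry is not None:
            node, page = entry
            containers[target].append(entry)   # page stays on the same node
            moved[node].append(page)           # node must remap page in its LSA
    containers[victim] = []                    # container reclaimed
    return victim, dict(moved)

print(collect(containers, target="c1"))
```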

Подробнее
23-06-2016 дата публикации

ENDURANCE ENHANCEMENT SCHEME USING MEMORY RE-EVALUATION

Номер: US20160179412A1
Принадлежит:

An apparatus, according to one embodiment, includes non-volatile memory configured to store data, and a controller and logic integrated with and/or executable by the controller, the logic being configured to: determine, by the controller, that at least one block of the non-volatile memory and/or portion of a block of the non-volatile memory meets a retirement condition, re-evaluate, by the controller, the at least one block and/or the portion of a block to determine whether to retire the at least one block and/or the portion of a block, indicate, by the controller, that the at least one block and/or the portion of a block remains usable when a result of the re-evaluation is not to retire the block, and indicate, by the controller, that the at least one block and/or the portion of a block is retired when the result of the re-evaluation is to retire the block.

1. An apparatus, comprising: non-volatile memory configured to store data; and a controller and logic integrated with and/or executable by the controller, the logic being configured to: determine, by the controller, that at least one block of the non-volatile memory and/or portion of a block of the non-volatile memory meets a retirement condition; re-evaluate, by the controller, the at least one block and/or the portion of a block to determine whether to retire the at least one block and/or the portion of a block; indicate, by the controller, that the at least one block and/or the portion of a block remains usable when a result of the re-evaluation is not to retire the block; and indicate, by the controller, that the at least one block and/or the portion of a block is retired when the result of the re-evaluation is to retire the block.
2. The apparatus as recited in claim 1, wherein the re-evaluating includes assigning the at least one block and/or the portion of a block to a delay queue for at least a dwell time.
3. The apparatus as recited in claim 1, wherein the re-evaluating includes performing one ...
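A minimal sketch of the re-evaluation flow, assuming a delay queue with a dwell time as in claim 2; the retirement threshold, the zero dwell time and the random "re-read" callback are placeholders for the controller's real error checks.

```python
import collections
import random
import time

RETIRE_ERRORS = 100        # assumed retirement threshold
DWELL_SECONDS = 0.0        # would be hours or days on a real device

delay_queue = collections.deque()   # blocks waiting to be re-evaluated

def on_retirement_condition(block_id):
    delay_queue.append((block_id, time.time() + DWELL_SECONDS))

def re_evaluate_ready_blocks(read_error_count):
    results = {}
    while delay_queue and delay_queue[0][1] <= time.time():
        block_id, _ = delay_queue.popleft()
        errors = read_error_count(block_id)     # e.g. re-read / recalibrate the block
        results[block_id] = "retired" if errors >= RETIRE_ERRORS else "usable"
    return results

on_retirement_condition(42)
print(re_evaluate_ready_blocks(lambda b: random.randrange(200)))
```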

Подробнее
23-06-2016 дата публикации

NON-VOLATILE MEMORY CONTROLLER CACHE ARCHITECTURE WITH SUPPORT FOR SEPARATION OF DATA STREAMS

Номер: US20160179678A1
Принадлежит:

A system according to one embodiment includes non-volatile memory, and a non-volatile memory controller having a cache. An architecture of the cache supports separation of data streams, and the cache architecture supports parallel writes to different non-volatile memory channels. Additionally, the cache architecture supports pipelining of the parallel writes to different non-volatile memory planes. Furthermore, the non-volatile memory controller is configured to perform a direct memory lookup in the cache based on a physical block address. Other systems, methods, and computer program products are described in additional embodiments.

1. A system, comprising: non-volatile memory; and a non-volatile memory controller having a cache, wherein an architecture of the cache supports separation of data streams, wherein the cache architecture supports parallel writes to different non-volatile memory channels, wherein the cache architecture supports pipelining of the parallel writes to different non-volatile memory planes, wherein the non-volatile memory controller is configured to perform a direct memory lookup in the cache based on a physical block address.
2. The system of claim 1, comprising: logic integrated with and/or executable by the non-volatile memory controller, the logic being configured to: receive a logical block address write request; retrieve a previous physical block address and heat value associated with the logical block address from memory; increment the heat value; compute, by the non-volatile memory controller, a stream for the logic block address based on the incremented heat value; increment a fill pointer of the stream; write data of the logic block address write request to a page indexed by the incremented fill pointer; and retrieve an updated physical block address of the page indexed by the incremented fill pointer.
3. The system of claim 2, comprising logic configured to: update a logical to physical table with the updated physical ...
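The write path in claim 2 can be sketched directly; the number of streams, the heat-bucket rule and the table layouts below are illustrative assumptions, not the patent's parameters.

```python
NUM_STREAMS = 4
lpt = {}                               # LBA -> (physical page address, heat)
streams = [{"fill": -1, "pages": []} for _ in range(NUM_STREAMS)]

def stream_for_heat(heat):
    return min(heat // 4, NUM_STREAMS - 1)     # hotter data -> higher stream

def write(lba, data):
    _, heat = lpt.get(lba, (None, 0))
    heat += 1                                  # increment heat on every update
    stream_id = stream_for_heat(heat)
    s = streams[stream_id]
    s["fill"] += 1                             # next free page in that stream
    s["pages"].append(data)
    pba = (stream_id, s["fill"])               # "updated physical block address"
    lpt[lba] = (pba, heat)                     # update logical-to-physical table
    return pba

print(write(10, b"a"), write(10, b"b"), write(10, b"c"))
```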

Подробнее
06-06-2019 дата публикации

REDUCING UNNECESSARY CALIBRATION OF A MEMORY UNIT FOR WHICH THE ERROR COUNT MARGIN HAS BEEN EXCEEDED

Номер: US20190171381A1
Принадлежит:

A controller sets an error count margin for each of multiple units of a non-volatile memory and detects whether the error count margin of any of the multiple units has been exceeded. In response to detecting that the error count margin of a memory unit is exceeded, the controller determines whether calibration of the memory unit would improve a bit error rate of the memory unit sufficiently to warrant calibration. If so, the controller performs calibration of the memory unit. In some implementations, the controller refrains from performing the calibration in response to determining that calibration of the memory unit would not improve the bit error rate of the memory unit sufficiently to warrant calibration, but instead relocates a desired part or all valid data within the memory unit and, if all valid data has been relocated from it, erases the memory unit.

1. A method of calibration in a non-volatile memory, the method comprising: a controller of the non-volatile memory setting an error count margin for each of multiple units of the non-volatile memory; the controller detecting whether the error count margin of any of the multiple units has been exceeded; in response to detecting that the error count margin of a memory unit among the multiple memory units is exceeded, the controller determining whether calibration of the memory unit would improve a bit error rate of the memory unit sufficiently to warrant calibration; and the controller performing calibration of the memory unit in response to determining that calibration of the memory unit would improve the bit error rate of the memory unit sufficiently to warrant calibration.
2. The method of claim 1, and further comprising refraining from performing the calibration in response to determining that calibration of the memory unit would not improve the bit error rate of the memory unit sufficiently to warrant calibration.
3. The method of claim 2, and further comprising: in response to determining that calibration of ...
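A compact sketch of the decision: only calibrate a unit when calibration is expected to pay off, otherwise relocate its valid data and erase it. The margin, the minimum-gain threshold and `estimate_gain` are assumed stand-ins for the controller's internal model.

```python
MARGIN = 500          # assumed error count margin per unit
MIN_GAIN = 0.2        # assumed required relative bit-error-rate improvement

def handle_unit(unit, errors, estimate_gain, calibrate, relocate_valid, erase):
    if errors <= MARGIN:
        return "ok"                       # margin not exceeded, nothing to do
    if estimate_gain(unit) >= MIN_GAIN:
        calibrate(unit)                   # calibration expected to help enough
        return "calibrated"
    relocate_valid(unit)                  # move the desired valid data elsewhere
    erase(unit)                           # unit erased once all valid data is gone
    return "relocated"

print(handle_unit("blk7", errors=730,
                  estimate_gain=lambda u: 0.05,
                  calibrate=print, relocate_valid=print, erase=print))
```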

Подробнее
30-06-2016 дата публикации

MANAGING METADATA FOR CACHING DEVICES DURING SHUTDOWN AND RESTART PROCEDURES

Номер: US20160188478A1
Принадлежит:

A computer program product, system, and method for managing metadata for caching devices during shutdown and restart procedures. Fragment metadata for each fragment of data from the storage server stored in the cache device is generated. The fragment metadata is written to at least one chunk of storage in the cache device in a metadata directory in the cache device. For each of the at least one chunk in the cache device to which the fragment metadata is written, chunk metadata is generated for the chunk and the generated chunk metadata is written to the metadata directory in the cache device. Header metadata having information on access of the storage server is written to the metadata directory in the cache device. The written header metadata, chunk metadata, and fragment metadata are used to validate the metadata directory and the fragment data in the cache device during a restart operation.

1. A computer program product for caching data from a storage device managed by a storage server in a cache device providing non-volatile storage, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations, the operations comprising: for each of at least one chunk in the cache device having fragment metadata for each fragment of data from the storage server stored in the cache device, generating chunk metadata for the chunk; using the chunk metadata to validate the fragment metadata in the cache device during a restart operation.
2. The computer program product of claim 1, wherein the operations performed as part of the restart operation further comprise: using the chunk metadata to validate header metadata; constructing a cache directory from information in response to validating the header metadata and the fragment metadata; indicating the header metadata as invalid; and requesting permission from the storage server to access the cached fragment data.
3. The computer ...
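A loose sketch of the shutdown/restart flow: fragment metadata is grouped into chunks, each chunk gets its own metadata, and a header ties the directory together. Using CRC checksums as the validation mechanism, and the specific field names, are assumptions made for the example.

```python
import json
import zlib

def checksum(obj):
    return zlib.crc32(json.dumps(obj, sort_keys=True).encode())

def shutdown(fragments, chunk_size=2):
    # Group fragment metadata into chunks and describe each chunk.
    chunks = [fragments[i:i + chunk_size] for i in range(0, len(fragments), chunk_size)]
    chunk_meta = [{"frags": c, "crc": checksum(c)} for c in chunks]
    header = {"server": "storage-server-1", "crc": checksum(chunk_meta)}
    return {"header": header, "chunks": chunk_meta}       # the metadata directory

def restart(directory):
    ok = directory["header"]["crc"] == checksum(directory["chunks"])
    ok = ok and all(c["crc"] == checksum(c["frags"]) for c in directory["chunks"])
    return "rebuild cache directory" if ok else "discard cached data"

directory = shutdown([{"lba": 1}, {"lba": 2}, {"lba": 3}])
print(restart(directory))
```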

Подробнее
29-06-2017 дата публикации

REDUCING WRITE AMPLIFICATION IN SOLID-STATE DRIVES BY SEPARATING ALLOCATION OF RELOCATE WRITES FROM USER WRITES

Номер: US20170185298A1
Принадлежит:

A computer-implemented method, according to one embodiment, includes: maintaining, by a processor, a first open logical erase block for user writes; maintaining, by the processor, a second open logical erase block for relocate writes; receiving, by the processor, a first data stream having the user writes; transferring, by the processor, the first data stream to the first open logical erase block; receiving, by the processor, a second data stream having the relocate writes; and transferring, by the processor, the second data stream to the second open logical erase block. Moreover, the first and second open logical erase blocks are different logical erase blocks. Other systems, methods, and computer program products are described in additional embodiments.

1. A computer-implemented method, comprising: maintaining, by a processor, a first open logical erase block for user writes; maintaining, by the processor, a second open logical erase block for relocate writes, wherein the first and second open logical erase blocks are different logical erase blocks; receiving, by the processor, a first data stream having the user writes; transferring, by the processor, the first data stream to the first open logical erase block; receiving, by the processor, a second data stream having the relocate writes; and transferring, by the processor, the second data stream to the second open logical erase block.
2. The computer-implemented method of claim 1, further comprising: assigning a first timeout value to the first open logical erase block; and assigning a second timeout value to the second open logical erase block.
3. The computer-implemented method of claim 2, further comprising: reassigning at least one of the first and second open logical erase blocks to a different data stream in response to at least one of the first and second timeouts of the respective first and second open logical erase blocks expiring.
4. The computer-implemented method of claim 2, wherein the first and second ...
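The core of the separation is tiny: keep one open logical erase block per stream and route writes by their origin. The dictionary layout below is an illustrative assumption, not the publication's data structure.

```python
# Two open logical erase blocks, one per write stream.
open_lebs = {"user": {"id": 0, "pages": []}, "relocate": {"id": 1, "pages": []}}

def write(stream, data):
    """Append data to the open LEB of the given stream ('user' or 'relocate')."""
    leb = open_lebs[stream]
    leb["pages"].append(data)
    return leb["id"], len(leb["pages"]) - 1    # (LEB id, page index)

print(write("user", b"host data"))
print(write("relocate", b"valid page moved by garbage collection"))
```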

Подробнее
04-06-2020 дата публикации

Relocating and/or re-programming blocks of storage space based on calibration frequency and resource utilization

Номер: US20200174664A1
Принадлежит: International Business Machines Corp

A computer-implemented method, according to one embodiment, includes: calibrating a first block of storage space in memory, identifying a page in the calibrated first block having the highest RBER, and determining whether the RBER of the identified page is greater than an error correction code limit. In response to determining that the RBER of the identified page is not greater than the error correction code limit, a determination is made as to whether the RBER of the identified page is greater than a relocation limit. In response to determining that the RBER of the identified page is not greater than a relocation limit, another determination is made as to whether the first block has been excessively calibrated. Furthermore, in response to determining that the first block has been excessively calibrated, data in the first block is relocated to a second block of storage space in the memory.
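The decision chain can be shown in a few lines; the two RBER limits and the calibration-count threshold below are made-up numbers, and the returned strings merely name the branch taken.

```python
ECC_LIMIT, RELOCATION_LIMIT, MAX_CALIBRATIONS = 120, 80, 10   # assumed values

def after_calibration(block):
    worst = max(block["page_rber"])                  # page with the highest RBER
    if worst > ECC_LIMIT:
        return "RBER above ECC limit: handle immediately"
    if worst > RELOCATION_LIMIT:
        return "RBER above relocation limit: relocate data"
    if block["calibrations"] > MAX_CALIBRATIONS:     # excessively calibrated
        return "relocate data to a second block"
    return "keep block in service"

print(after_calibration({"page_rber": [12, 45, 77], "calibrations": 14}))
```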

Подробнее
18-06-2020 дата публикации

SELECTIVELY PERFORMING MULTI-PLANE READ OPERATIONS IN NON-VOLATILE MEMORY

Номер: US20200192735A1
Принадлежит:

A computer-implemented method, according to one embodiment, includes: receiving a multi-page read request and predicting whether using a multi-plane read operation to read pages of storage space in memory which correspond to the multi-page read request will result in a bit error rate that is in a predetermined range. In response to predicting that using the multi-plane read operation to read the pages will not result in a bit error rate that is in the predetermined range, a threshold voltage shift (TVS) value is computed for the multi-plane read operation. Furthermore, the pages are read using the multi-plane read operation with the computed TVS. Other systems, methods, and computer program products are described in additional embodiments.

1. A computer-implemented method, comprising: receiving a multi-page read request; predicting whether using a multi-plane read operation to read pages of storage space in memory which correspond to the multi-page read request will result in a bit error rate that is in a predetermined range; in response to predicting that using the multi-plane read operation to read the pages will not result in a bit error rate that is in the predetermined range, computing a threshold voltage shift (TVS) value for the multi-plane read operation; and reading the pages using the multi-plane read operation with the computed TVS.
2. The computer-implemented method of claim 1, comprising: determining whether the multi-plane read operation was unable to read at least one of the pages; and in response to determining that the multi-plane read operation was unable to read at least one of the pages, re-reading the at least one of the pages using a sequential read operation.
3. The computer-implemented method of claim 1, comprising: in response to predicting that using the multi-plane read operation to read the pages will result in a bit error rate that is in a predetermined range, examining blocks of storage space in the memory which include the pages; determining ...
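A hedged sketch of the flow: the prediction rule here (use one TVS for all planes only when the per-block optimal TVS values nearly agree) is an assumed stand-in for the publication's predictor, and the read callbacks are hypothetical.

```python
def serve_multi_page_read(blocks, read_multi_plane, read_sequential):
    tvs_values = [b["optimal_tvs"] for b in blocks]
    if max(tvs_values) - min(tvs_values) <= 1:         # BER expected to stay low
        tvs = round(sum(tvs_values) / len(tvs_values)) # one TVS for the whole operation
        pages, failed = read_multi_plane(blocks, tvs)
        pages += [read_sequential(b) for b in failed]  # re-read any stragglers
        return pages
    return [read_sequential(b) for b in blocks]        # fall back to sequential reads

blocks = [{"optimal_tvs": 3}, {"optimal_tvs": 4}]
print(serve_multi_page_read(
    blocks,
    read_multi_plane=lambda bs, tvs: ([f"page@tvs={tvs}" for _ in bs], []),
    read_sequential=lambda b: "page(sequential)"))
```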

Подробнее
18-06-2020 дата публикации

ADAPTIVE DATA AND PARITY PLACEMENT USING COMPRESSION RATIOS OF STORAGE DEVICES

Номер: US20200192758A1

Embodiments for adaptive placement of parity information within Redundant Array of Independent Disks (RAID) stripes in a computer storage environment. A RAID controller periodically collects a physical capacity usage of each of a plurality of storage devices within the RAID. The RAID controller determines a placement of data and the parity information within at least one of the plurality of storage devices according to at least one of a plurality of factors associated with the physical capacity usage. The RAID controller writes the data and the parity information to the at least one of the plurality of storage devices according to the determined placement.

1. A method for adaptive placement of parity information within Redundant Array of Independent Disks (RAID) stripes, by a processor, comprising: periodically collecting, by a RAID controller, a physical capacity usage of each of a plurality of storage devices within the RAID; determining, by the RAID controller, a placement of data and the parity information within at least one of the plurality of storage devices according to at least one of a plurality of factors associated with the physical capacity usage; and writing the data and the parity information by the RAID controller to the at least one of the plurality of storage devices according to the determined placement.
2. The method of claim 1, wherein the plurality of factors are selected from a list comprising the physical capacity usage, a current parity location of each stripe, and a current data location of each stripe.
3. The method of claim 2, further including maintaining, by the RAID controller, a first table recording the physical capacity usage of each of the plurality of storage devices and a second table indicating the current parity location of each stripe; wherein the RAID controller references the first table and the second table when determining the placement of the data and the parity information.
4. The method ...
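One possible placement policy is sketched below as an assumption: put each stripe's parity strip on the device with the lowest physical capacity usage, since parity tends to compress poorly. The two dictionaries loosely mirror the "first table" (usage) and "second table" (current parity location) mentioned in the claims.

```python
# Physical capacity usage per device, collected periodically (assumed values).
usage = {"dev0": 0.71, "dev1": 0.55, "dev2": 0.62, "dev3": 0.80}
parity_location = {}                   # stripe id -> device holding its parity

def place_stripe(stripe_id):
    parity_dev = min(usage, key=usage.get)          # least-full device gets parity
    parity_location[stripe_id] = parity_dev
    return {"parity": parity_dev,
            "data": [d for d in usage if d != parity_dev]}

print(place_stripe("stripe-17"))       # parity lands on dev1, data on the rest
```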

Подробнее
28-07-2016 дата публикации

PROCESSING UNIT RECLAIMING REQUESTS IN A SOLID STATE MEMORY DEVICE

Номер: US20160217070A1
Автор: Haas Robert, Pletka Roman

An apparatus and method for processing unit reclaiming requests in a solid state memory device. The present invention provides a method of managing a memory which includes a set of units. The method includes selecting a unit from the set of units having a plurality of subunits. The method further includes determining a number of valid subunits m to be relocated from the unit selected for a batch operation, where m is at least 2. The selecting is carried out by a unit reclaiming process.

1. A memory controller for managing a memory, comprising: a processor device configured: to select at least one unit out of a set of units in a solid state memory device for reclamation by a unit reclaiming process, where each of the set of units is a fixed size solid state memory block, where each of the set of units is configured to store data and is erasable as a whole by the unit reclaiming process, and where each of the set of units comprises a plurality of subunits, where each of the plurality of subunits is a solid state memory page that is of a fixed size; to identify a set of valid subunits n within the at least one selected unit; to determine a number m of valid subunits within each of the set of valid subunits n to be relocated by a batch operation from the at least one unit that has been selected, wherein m is at least two; and to perform the batch operation, the batch operation relocating each of the set of valid subunits n by operating relocation requests for each of the set of valid subunits in batches of size m in which the last batch to be relocated is not required to be equal to m subunits, wherein operating relocation requests further comprises: storing each of a plurality of read requests for the m subunits to be relocated in a queue configured to store at least user requests and read requests for relocation requests associated with subunits, each of the plurality of read requests being stored in a row for execution and are included within one of a plurality of ...
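A small sketch of the batching idea: relocation reads for the valid subunits (pages) of a reclaimed unit are issued in groups of m into the same queue that serves user requests, with the last group allowed to be smaller than m. The queue contents and the tuple format are illustrative assumptions.

```python
from collections import deque

def relocate_in_batches(valid_pages, m, queue: deque):
    """Enqueue relocation reads for valid_pages in batches of size m."""
    for i in range(0, len(valid_pages), m):
        batch = valid_pages[i:i + m]               # last batch may be shorter than m
        for page in batch:
            queue.append(("relocation-read", page))
    return queue

q = deque([("user-read", "p99")])                  # user requests share the queue
print(list(relocate_in_batches(["p1", "p2", "p3", "p4", "p5"], m=2, queue=q)))
```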

Подробнее
27-07-2017 дата публикации

Reducing Read Access Latency by Straddling Pages Across Non-Volatile Memory Channels

Номер: US20170212692A1
Принадлежит:

A mechanism is provided in a non-volatile memory controller for reducing read access latency by straddling pages across non-volatile memory channels. Responsive to a request to write a logical page to a non-volatile memory array, the non-volatile memory controller determines whether the logical page fits into a current physical page. Responsive to determining the logical page does not fit into the current physical page, the non-volatile memory controller writes a first portion of the logical page to a first physical page in a first block and writes a second portion of the logical page to a second physical page in a second block. The first physical page and the second physical page are on different non-volatile memory channels.

1. A method, in a non-volatile memory controller, for reducing read access latency by straddling pages across non-volatile memory channels, the method comprising: responsive to a request to write a logical page to a non-volatile memory array, determining whether the logical page fits into a current physical page; and responsive to determining the logical page does not fit into the current physical page, writing a first portion of the logical page to a first physical page in a first block and writing a second portion of the logical page to a second physical page in a second block, wherein the first physical page and the second physical page are on different non-volatile memory channels.
2. The method of claim 1, wherein the first physical page and the second physical page are in a same physical page stripe.
3. The method of claim 1, further comprising: responsive to determining the logical page does not fit into the current physical page and writing the first portion of the logical page to the first physical page and writing the second portion of the logical page to the second physical page, setting a straddle bit in a logical page table entry corresponding to the logical page.
4. The method of claim 1, further comprising: responsive to ...
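The split itself is easy to picture; the toy page size, the two-channel layout and the logical page table format below are assumptions for the sketch, with the returned boolean playing the role of the straddle bit from claim 3.

```python
PHYS_PAGE = 16                       # toy physical page size in bytes
channels = {0: bytearray(), 1: bytearray()}
lpt = {}                             # logical page -> list of (channel, offset, length)

def write_logical(lp_id, payload: bytes, current_channel=0):
    used = len(channels[current_channel]) % PHYS_PAGE
    room = PHYS_PAGE - used                          # space left in current physical page
    if len(payload) <= room:                         # fits: no straddle needed
        lpt[lp_id] = [(current_channel, len(channels[current_channel]), len(payload))]
        channels[current_channel] += payload
        return False
    other = 1 - current_channel                      # straddle onto another channel
    lpt[lp_id] = [(current_channel, len(channels[current_channel]), room),
                  (other, len(channels[other]), len(payload) - room)]
    channels[current_channel] += payload[:room]
    channels[other] += payload[room:]
    return True                                      # "straddle bit" for this logical page

print(write_logical("lp0", b"x" * 20))               # True: page straddles two channels
```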

Подробнее
11-07-2019 дата публикации

INCREASING STORAGE EFFICIENCY OF A DATA PROTECTION TECHNIQUE

Номер: US20190212949A1
Принадлежит:

A technique for operating a data storage system includes receiving uncompressed data. The uncompressed data is organized into data strips of a stripe. The data strips are compressed subsequent to the organizing. Parity information for the compressed data strips is calculated. Storage of the compressed data strips and the parity information for the stripe is initiated on respective storage devices of the data storage system.

1. A method of operating a data storage system that includes multiple storage devices, comprising: receiving, by a first controller, uncompressed data; organizing, by the first controller, the uncompressed data into data strips of a stripe; compressing, by second controllers that are each associated with a different one of the storage devices, the data strips of the stripe; calculating, by the first controller, parity information for the data strips subsequent to the compressing, wherein calculating the parity information for the data strips subsequent to the compressing increases efficiency of the data storage system by reducing a size of the parity information; and initiating, by the second controllers, storing of the compressed data strips and the parity information for the stripe on the storage devices of the data storage system.
2. The method of claim 1, further comprising: determining a size of a largest one of the data strips in the stripe; and padding remaining ones of the data strips in the stripe with zeroes such that all of the data strips in the stripe are a same size prior to calculating the parity information.
3. The method of claim 1, further comprising: performing log-structured array (LSA) data organization on the uncompressed data prior to the uncompressed data being organized into the data strips of the stripe.
4. The method of claim 3, wherein the received uncompressed data is organized into the data strips using a Redundant Array of Independent Disks (RAID) engine.
5. The method of claim 3, wherein the first controller is ...
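The gain comes from compressing before the parity calculation, so the parity strip only needs to be as large as the largest compressed data strip. The sketch below uses zlib and single-parity XOR purely as stand-ins for the system's compressor and RAID engine.

```python
import zlib

def build_stripe(strips):
    compressed = [zlib.compress(s) for s in strips]
    size = max(len(c) for c in compressed)           # size of the largest data strip
    padded = [c.ljust(size, b"\0") for c in compressed]   # pad strips to a common size
    parity = bytearray(size)
    for strip in padded:                             # XOR parity over the padded strips
        for i, byte in enumerate(strip):
            parity[i] ^= byte
    return padded, bytes(parity)

data, parity = build_stripe([b"a" * 400, b"b" * 400, b"abc" * 150])
print(len(data[0]), len(parity))    # parity is only as large as the biggest compressed strip
```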

Подробнее
11-07-2019 дата публикации

ENDURANCE ENHANCEMENT SCHEME USING MEMORY RE-EVALUATION

Номер: US20190213124A1
Принадлежит:

An apparatus, according to one embodiment, includes non-volatile memory configured to store data, and a controller and logic integrated with and/or executable by the controller, the logic being configured to: determine, by the controller, that at least one block of the non-volatile memory and/or portion of a block of the non-volatile memory meets a retirement condition, re-evaluate, by the controller, the at least one block and/or the portion of a block to determine whether to retire the at least one block and/or the portion of a block, indicate, by the controller, that the at least one block and/or the portion of a block remains usable when a result of the re-evaluation is not to retire the block, and indicate, by the controller, that the at least one block and/or the portion of a block is retired when the result of the re-evaluation is to retire the block.

1. An apparatus, comprising: non-volatile memory configured to store data; and a controller and logic integrated with and/or executable by the controller, the logic being configured to: determine, by the controller, that at least one block of the non-volatile memory and/or portion of a block of the non-volatile memory meets a retirement condition; re-evaluate, by the controller, the at least one block and/or the portion of a block to determine whether to retire the at least one block and/or the portion of a block; indicate, by the controller, that the at least one block and/or the portion of a block remains usable when a result of the re-evaluation is not to retire the block; and indicate, by the controller, that the at least one block and/or the portion of a block is retired when the result of the re-evaluation is to retire the block.
2. The apparatus as recited in claim 1, wherein the re-evaluating includes assigning the at least one block and/or the portion of a block to a delay queue for at least a dwell time.
3. The apparatus as recited in claim 1, wherein the re-evaluating includes performing one ...

Подробнее
16-08-2018 дата публикации

SELECTIVE SPACE RECLAMATION OF DATA STORAGE MEMORY EMPLOYING HEAT AND RELOCATION METRICS

Номер: US20180232318A1
Принадлежит:

Space of a data storage memory of a data storage memory system is reclaimed by determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics of the data relating the heat metrics to the relocation metrics for the data; and making the data whose utility metric fails a utility metric threshold, available for space reclamation.

1. A method for reclaiming space of a data storage memory of a data storage memory system, comprising: determining at least one relocation metric related to relocation of data within said data storage memory, said at least one relocation metric comprising a count of the number of times said data has been relocated during reclamation process iterations; and performing, based at least in part on the determined at least one relocation metric, at least one action selected from the group consisting of: (i) making said data available for space reclamation and (ii) making said data exempted from space reclamation.
2. The method of claim 1, additionally comprising: exempting from space reclamation eligibility, data recently added to said data storage memory.
3. The method of claim 1, additionally comprising: exempting from space reclamation eligibility, data designated as ineligible by space management policy.
4. A computer-implemented data storage memory system comprising: at least one data storage memory; and a processor configured to obtain instructions that cause the processor to perform a method for reclaiming space of said data storage memory, said method comprising: determining at least one relocation metric related to relocation of data within said data storage memory, said at least one relocation metric comprising a count of the number of times said data has been relocated during reclamation process iterations; and performing, based at least in part on the determined at least one relocation metric, at least one action ...
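A short sketch of the utility idea: relate how hot data still is to how many times garbage collection has already dragged it around, and free the extents whose utility falls below a threshold. The ratio used as the utility metric, the threshold value and the extent records are assumptions for illustration.

```python
UTILITY_THRESHOLD = 0.5    # assumed cut-off

def reclaimable(extents, recently_added, policy_exempt):
    victims = []
    for name, ext in extents.items():
        if name in recently_added or name in policy_exempt:
            continue                                  # exempt from reclamation
        utility = ext["heat"] / (1 + ext["relocations"])   # heat vs. relocation count
        if utility < UTILITY_THRESHOLD:
            victims.append(name)                      # available for space reclamation
    return victims

extents = {"e1": {"heat": 8, "relocations": 1},
           "e2": {"heat": 1, "relocations": 5},
           "e3": {"heat": 0, "relocations": 0}}
print(reclaimable(extents, recently_added={"e3"}, policy_exempt=set()))
```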

Подробнее
24-08-2017 дата публикации

TECHNIQUES FOR DYNAMICALLY ADJUSTING OVER-PROVISIONING SPACE OF A FLASH CONTROLLER BASED ON WORKLOAD CHARACTERISTICS

Номер: US20170242592A1
Принадлежит:

A technique for adapting over-provisioning space in a storage system includes determining one or more workload characteristics in the storage system. Over-provisioning space in the storage system is then adjusted to achieve a target write amplification for the storage system, based on the workload characteristics.

1. A method of adapting over-provisioning space in a storage system, comprising: determining, by a controller, a data reduction ratio in the storage system; determining, by the controller, one or more workload characteristics in the storage system; and adjusting by the controller, based on the workload characteristics and the data reduction ratio, over-provisioning space in the storage system to achieve a target write amplification for the storage system and thereby improve performance of the storage system.
2. The method of claim 1, wherein the workload characteristics include one or more of a dynamic read/write ratio, a dynamic write amplification, and a dynamic write access distribution experienced by the storage system.
3. (canceled)
4. The method of claim 1, wherein the data reduction ratio is based on at least one of a dynamic data compression ratio and a dynamic data deduplication ratio experienced by the storage system.
5. The method of claim 1, wherein the storage system is a thin provisioned storage system.
6. The method of claim 1, wherein the target write amplification is increased for read-dominated workloads.
7. The method of claim 1, wherein the target write amplification is decreased for write-dominated workloads.
8. A storage system, comprising: a flash controller memory; and determine a data reduction ratio in the storage system; determine one or more workload characteristics in the storage system; and adjust, based on the workload characteristics and the data reduction ratio, over-provisioning space in the storage system to achieve a target write amplification for the storage system and thereby improve ...
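A minimal sketch of the adaptation loop: nudge the amount of over-provisioning until measured write amplification approaches a workload-dependent target. The step size, the target values per workload type and the bounds are illustrative guesses, not parameters from the publication.

```python
def adjust_op(op_fraction, measured_wa, read_write_ratio, data_reduction_ratio):
    # Read-dominated workloads tolerate a higher write-amplification target.
    target_wa = 3.0 if read_write_ratio > 2.0 else 1.5
    step = 0.01 * data_reduction_ratio           # reclaimable space scales with reduction
    if measured_wa > target_wa:
        op_fraction = min(op_fraction + step, 0.5)    # more OP lowers write amplification
    elif measured_wa < target_wa:
        op_fraction = max(op_fraction - step, 0.07)   # give space back to the user
    return op_fraction

print(adjust_op(op_fraction=0.10, measured_wa=4.2,
                read_write_ratio=0.5, data_reduction_ratio=2.0))
```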

Подробнее