Total found: 10985. Displayed: 200.

05-10-2018 publication date

STORAGE SYSTEMS AND MEMORY WITH BINDINGS

Number: RU2669008C2

The invention relates to computing technology. The technical result is a reduction in the operational overhead associated with copying when working with data in memory. The memory controller contains circuitry for accessing data in non-volatile memory; an interface configured to receive from a file system driver a request to create a binding between a storage-system memory block and a main-memory block, where the storage-system memory block is intended for use in storing storage-system data and the main-memory block is intended for use with the computer's main memory; binding circuitry for creating the binding between the storage-system memory block and the main-memory block; and retrieval circuitry for obtaining data from the storage-system block in response to a read request for the data of the main-memory block, where the binding circuitry, being configured to create ...

04-11-1993 publication date

Hard disk controller

Number: DE0004121974C2

02-05-2013 publication date

Managing partial data segments in dual-cache systems

Number: DE102012219098A1

Various exemplary embodiments of methods, systems, and computer program products are provided for moving partial data segments within a data processing storage environment that has, by means of a processor, lower and higher cache levels. In one such embodiment, merely as an example, an entire data segment containing one of the partial data segments is promoted to both the lower and the higher cache level. Requested data of the entire data segment is split off and placed at a most recently used (MRU) portion of a demotion queue of the higher cache level. Unrequested data of the entire data segment is split off and placed at a least recently used (LRU) portion of the demotion queue of the higher cache level. The unrequested data is pinned until a write operation ...

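The split placement described in the abstract above (requested parts of a promoted track go to the MRU end of the higher level's demotion queue, unrequested parts to the LRU end) can be sketched roughly as follows; the class and method names are illustrative, not from the patent.

```python
from collections import deque

class DemotionQueue:
    """Demotion queue at the higher cache level: the left end is the LRU
    end (demoted first), the right end is the MRU end (kept longest)."""

    def __init__(self):
        self.q = deque()

    def promote_track(self, track_parts, requested):
        # Split the promoted track: requested parts go to the MRU end,
        # unrequested parts go to the LRU end so they are demoted first.
        for part in track_parts:
            if part in requested:
                self.q.append(part)       # MRU end
            else:
                self.q.appendleft(part)   # LRU end

    def demote(self):
        # Demotion always takes from the LRU end of the queue.
        return self.q.popleft()

dq = DemotionQueue()
dq.promote_track(["t0", "t1", "t2", "t3"], requested={"t1", "t2"})
first = dq.demote()   # "t3": an unrequested part is demoted first
```

The effect is that unrequested data leaves the higher cache level before any requested data, without needing a second queue.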
26-09-2013 publication date

Managing cache destage scan times

Number: DE112011104314T5

Systems and methods are provided for managing destage scan times in a cache. A system includes a cache and a processor. The processor is configured to use a first thread to continuously determine a desired scan time for scanning the plurality of storage tracks in the cache, and to use a second thread to continuously control an actual scan time of the plurality of storage tracks in the cache based on the continuously determined desired scan time. A method includes using a first thread to continuously determine a desired scan time for scanning the plurality of storage tracks in the cache, and using a second thread to continuously control an actual scan time of the plurality of storage tracks in the cache based on the continuously determined ...

20-11-2008 publication date

Storage system with cache memory

Number: DE0060323969D1
Assignee: HITACHI LTD

15-04-2010 publication date

NAND error handling

Number: DE102009031125A1

Techniques are disclosed for handling various errors in memories, such as NAND memories, in electronic devices. In some embodiments, erase, read, and program errors are handled.

08-08-2002 publication date

CONTROL OF SHARED DISK DATA IN A DUPLEX COMPUTER UNIT

Number: DE0069617709T2
Assignee: NOKIA NETWORKS OY, ESPOO

30-04-2009 publication date

RAID with high-power and low-power disk drives

Number: DE602005013322D1

27-01-1988 publication date

METHOD OF RAPIDLY OPENING DISK FILES IDENTIFIED BY PATH NAMES

Number: GB0008728924D0

15-06-1994 publication date

Disk meshing and flexible storage mapping with enhanced flexible caching

Number: GB0009408375D0

27-02-2013 publication date

Demoting Partial Tracks From A First Cache To A Second Cache

Number: GB0201300444D0

16-03-2016 publication date

Data storage

Number: GB0201601655D0

24-11-2004 publication date

Adaptive pre-fetching of data from a disk

Number: GB0002383658B
Assignee: EMC CORPORATION

14-09-2005 publication date

Storage device control unit and method of controlling the same

Number: GB0002406929B
Author: MORI KENJI
Assignee: HITACHI LTD

25-12-2013 publication date

Sharing aggregated cache hit and miss data in a storage area network

Number: GB0002503266A

A number of computer systems LS1, LS2 are connected to a shared data storage system CS to access data D. The computer systems each have a local cache CM1, CM2 and run applications A1, A2. The computer systems provide information about cache hits H and misses M to the storage system. The storage system aggregates the information and provides the aggregated information ACD to the computer systems. The computer systems then use the aggregated information to update the cached data. The computer system may populate the cache with one or more subsets of the data identified in the aggregated cache information. The computer system may immediately populate the cache with data identified in some of the subsets and may add data identified in other subsets to a watch list. Data corresponding to a local cache miss may also be put in the watch list.

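A toy sketch of the aggregation step above, assuming the storage system simply sums per-host hit/miss counts per data subset and the hosts populate their caches from the most frequently missed subsets; all names and the threshold rule are invented for illustration.

```python
from collections import Counter

def aggregate_cache_info(reports):
    """Sum the cache hit/miss counts reported by each host, per data subset."""
    hits, misses = Counter(), Counter()
    for report in reports:
        hits.update(report["hits"])
        misses.update(report["misses"])
    return hits, misses

def subsets_to_populate(misses, threshold):
    """Subsets missed at least `threshold` times across all hosts become
    candidates for immediate cache population; the rest go on a watch list."""
    populate = {s for s, n in misses.items() if n >= threshold}
    watch = set(misses) - populate
    return populate, watch

reports = [
    {"hits": {"A": 5}, "misses": {"B": 3, "C": 1}},
    {"hits": {"B": 2}, "misses": {"B": 4, "C": 1}},
]
hits, misses = aggregate_cache_info(reports)
populate, watch = subsets_to_populate(misses, threshold=5)
```

Here subset B, missed on both hosts, would be populated immediately, while C lands on the watch list.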
25-01-2017 publication date

Invalidation data area for cache

Number: GB0002540681A

Disclosed is a method 400 of invalidating blocks in a cache by determining a journal block 402 tracking a memory address associated with a received write operation, the journal block stored in a journal of the cache. The method determines a mapped journal block based on the determined journal block and on an invalidation record 404. The mapped journal block and the invalidation record are stored in an invalidation data area of the cache. Then, whether any write operations are outstanding is determined 406: if any are, they are aggregated and a single aggregated write operation is performed 408; if no write operations are outstanding, the received write operation is performed 410. Also disclosed is a method of recovering a cache by determining an initial reconstruction of the cache based on the cache journal. The recovery works, for each mapped journal block in an invalidation record, by determining whether a corresponding data block tracked in the journal is valid, and if ...

26-03-2003 publication date

Non-volatile cache integrated with mass storage device

Number: GB0002380031A

Apparatus and methods relating to a non-volatile mass storage device including a non-volatile cache.

03-06-2015 publication date

Process control systems and methods

Number: GB0002520808A

A method comprises: operating a first cluster including first virtual machines and first servers, and a second cluster including second virtual machines and second servers; storing 2210 first data from the first virtual machines at a first data store of the first cluster and a replica of the first data at a second data store of the second cluster; storing 2212 second data from the second virtual machines at the second data store and a replica of the second data at the first data store; identifying 2222 a failure of the first cluster and, in response, restarting 2224 the first virtual machines using the second servers and the replica of the first data at the second data store. After resolving the failure, a migration 2228 from the second servers back to the first servers may be performed. A method comprises: selecting 2202 a first mode to operate 2206 a first cluster, comprising writing to a cache of a first data store to store first data from first virtual machines; and selecting 2214 a ...

21-01-1981 publication date

Disc cache subsystem

Number: GB0002052118A

A data memory sub-system for connection to a data processing system includes a magnetic disc store 14, and a disc controller 20 incorporating a high speed semiconductor cache buffer 16. The memory operates in a broadly conventional manner. The cache may be divided into two levels. The first level may be arranged to store complete disc tracks. The second (slower) level may be formed of CCDs. The controller also includes a microprocessor arranged to manage the stores. ...

25-11-2020 publication date

Process control systems and methods

Number: GB0002584232A

A cluster of virtual machines and servers can operate in a first mode, in which disc writes are cached, and a second mode, in which disc writes are not cached. The cached mode may be a configuration mode and the uncached mode may be a resilient mode. When operating in the uncached mode, a replica of the data in a first cluster may be stored in a second cluster. In the event of a failure of the first cluster, the virtual machines of the first cluster may be restarted using the servers of the second cluster and the duplicated data. Once the failure is resolved, the virtual machines may be migrated from the second cluster to the first cluster. Data from the second cluster may be duplicated at the first cluster in parallel with the duplication of the data from the first cluster to the second cluster. The data may be duplicated at a file system level.

07-09-2011 publication date

Method and apparatus to manage non-volatile disk cache

Number: GB0002478434A

The present invention provides a method and an apparatus to manage non-volatile (NV) memory as cache on a hard disk drive for storage.

18-05-2005 publication date

Data storage system with shared cache address space

Number: GB0000507160D0

02-03-1994 publication date

Flash memory system with arbitrary block size

Number: GB0009326499D0

22-02-2023 publication date

Preemptive staging for full-stride destage

Number: GB0002610121A

There is a method for improving destage performance to a RAID array. The method periodically scans a cache for first strides that are ready to be destaged to a RAID array. While scanning the cache, the method identifies second strides that are not currently ready to be destaged to the RAID array, but will likely be ready to be destaged during a subsequent scan of the cache. The method initiates preemptive staging of any missing data of the second strides from the RAID array into the cache in preparation for the subsequent scan. Upon occurrence of the subsequent scan, the method destages the second strides from the cache to the RAID array.

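The periodic scan described above can be sketched as follows, assuming a stride is "ready" when all of its tracks are present in cache and "nearly ready" above some fill fraction; the 0.75 threshold and all names are invented for illustration, not taken from the patent.

```python
def scan_cache(strides, nearly_ready=0.75):
    """One periodic scan: return stride ids to destage now, and stride ids
    whose missing tracks should be prestaged before the next scan."""
    destage_now, prestage = [], []
    for stride in strides:
        present = sum(1 for t in stride["tracks"] if t is not None)
        total = len(stride["tracks"])
        if present == total:
            destage_now.append(stride["id"])   # full stride: destage now
        elif present / total >= nearly_ready:
            prestage.append(stride["id"])      # stage the missing tracks
    return destage_now, prestage

strides = [
    {"id": "s0", "tracks": ["a", "b", "c", "d"]},     # complete
    {"id": "s1", "tracks": ["a", None, "b", "c"]},    # nearly complete
    {"id": "s2", "tracks": ["a", None, None, None]},  # mostly missing
]
destage_now, prestage = scan_cache(strides)
```

By the next scan, the prestaged stride is complete, so it too can be destaged as a single full-stride write rather than a partial one.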
15-01-2011 publication date

COMPUTER MEMORY SYSTEM

Number: AT0000492846T

15-07-1979 publication date

HIERARCHICAL MEMORY ARRANGEMENT

Number: AT0000407675A

15-02-2007 publication date

ADAPTIVE CACHE ALGORITHM FOR TEMPERATURE-SENSITIVE MEMORY

Number: AT0000352065T

15-08-2006 publication date

HYBRID DATA MEMORY SYSTEM

Number: AT0000333697T

23-05-2019 publication date

Downhole vibration and impact data recording method

Number: AU2018253514B1
Assignee: Shelston IP Pty Ltd.

Disclosed is a downhole vibration and impact data recording method, comprising: performing analog-to-digital conversion on analog data, outputting digital format data obtained at a sampling rate fi, and performing sampling storage processing and analysis storage processing on the same. The sampling storage processing includes outputting the digital format data obtained at a sampling rate fn through multiple samplings and storing the same continuously into a storage module. The analysis storage processing includes: buffering the digital format data obtained at the sampling rate fi into a memory; analyzing the same to determine whether an impact event occurs, and if yes, storing the current data in the memory into the storage module, and then jumping back to the buffering step; otherwise, jumping directly back to the buffering step. With the above method, the amount of data storage can be effectively reduced, while the characteristics of the vibration and impact data can be ...

26-11-2001 publication date

System and method for high-speed substitute cache

Number: AU0005934201A

30-03-1992 publication date

COMPUTER MEMORY ARRAY CONTROL

Number: AU0008508191A

01-04-1997 publication date

Controlling shared disk data in a duplexed computer unit

Number: AU0006932996A

10-05-2018 publication date

Systems and methods for storing and transferring message data

Number: AU2016335746A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for storing and transferring messages. An example method includes providing a queue having an ordered plurality of storage blocks. Each storage block stores one or more respective messages and is associated with a respective time. The times increase from a block designating a head of the queue to a block designating a tail of the queue. The method also includes reading, by each of a plurality of first sender processes, messages from one or more blocks in the queue beginning at the head of the queue. The read messages are sent, by each of the plurality of first sender processes, to a respective recipient. One or more of the blocks are designated as old when they have associated times that are earlier than a first time. A block is designated as a new head of the queue when the block is associated with a time later than or equal to the first time. One or more of the first sender processes is ...

07-08-2001 publication date

Method and apparatus for increasing the battery life of portable electronic devices

Number: AU0002535801A

07-08-1984 publication date

SCHEDULING DEVICE OPERATIONS IN A BUFFERED PERIPHERAL SUBSYSTEM

Number: CA1172378A

Data transfers between respective buffer segments and data source-sinks, such as peripheral data storage devices, are scheduled as a series of transfers based upon most recent, next most recent, etc., usage of the buffer segments by a utilization device. A "most recently used - least recently used" list of segments ordered by such usage is dynamically maintained. Replacement of segment allocations among devices proceeds from the least recently used, next least recently used, etc., segments. Therefore the single list controls replacement, read-ahead (pre-fetch) of data from devices to the buffer, and transfer of data from the buffer to the devices, all based on utilization of the buffer by the utilization device.

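The single "most recently used - least recently used" list that drives replacement in the abstract above can be modeled with an ordered dictionary; this is a simplified sketch of the bookkeeping, not the patented controller, and the names are illustrative.

```python
from collections import OrderedDict

class SegmentList:
    """Buffer segments kept in one list ordered from least recently
    used (front) to most recently used (back)."""

    def __init__(self, segments):
        self.order = OrderedDict((s, None) for s in segments)

    def touch(self, segment):
        # The utilization device accessed this segment: move it to the MRU end.
        self.order.move_to_end(segment)

    def replace(self):
        # Replacement of segment allocations proceeds from the LRU end.
        lru, _ = self.order.popitem(last=False)
        return lru

segs = SegmentList(["s1", "s2", "s3"])
segs.touch("s1")          # order is now s2, s3, s1
victim = segs.replace()   # "s2" is reallocated first
```

Because read-ahead and destage decisions consult the same ordering, one structure suffices for all three scheduling policies described.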
14-10-1986 publication date

CONTROL OF CACHE BUFFER FOR MEMORY SUBSYSTEM

Number: CA1212780A

A solid-state cache memory subsystem configured to be used in conjunction with disk drives for prestaging of data in advance of its being called for by a host computer is disclosed, featuring means for establishing and maintaining precise correspondence between storage locations in the solid-state array and on the disk memory, for use in establishing a reoriented position on a disk in the event of error detection, and in order to determine when a predetermined quantity of data has been read from the disk into the cache in a staging operation.

31-12-1985 publication date

ADAPTIVE DOMAIN PARTITIONING OF CACHE MEMORY SPACE

Number: CA0001198813A1

07-08-1984 publication date

SCHEDULING DEVICE OPERATIONS IN A BUFFERED PERIPHERAL SUBSYSTEM

Number: CA0001172378A1

07-10-1986 publication date

STORAGE MEDIUM CONTROLLER

Number: CA0001212482A1

05-11-2015 publication date

SYSTEMS, DEVICES AND METHODS FOR GENERATING LOCALITY-INDICATIVE DATA REPRESENTATIONS OF DATA STREAMS, AND COMPRESSIONS THEREOF

Number: CA0002947158A1

Described are various embodiments of systems, devices and methods for generating locality-indicative data representations of data streams, and compressions thereof. In one such embodiment, a method is provided for determining an indication of locality of data elements in a data stream communicated over a communication medium. This method comprises determining, for at least two sample times, count values of distinct values for each of at least two distinct value counters, wherein each of the distinct value counters has a unique starting time; and comparing corresponding count values for at least two of the distinct value counters to determine an indication of locality of data elements in the data stream at one of the sample times.

17-09-2015 publication date

PAGE CACHE WRITE LOGGING AT BLOCK-BASED STORAGE

Number: CA0002940246A1

A block-based storage system may implement page cache write logging. Write requests for a data volume maintained at a storage node may be received at the storage node. A page cache may be updated in accordance with the request. A log record describing the page cache update may be stored in a page cache write log maintained in a persistent storage device. Once the write request is performed in the page cache and recorded in a log record in the page cache write log, the write request may be acknowledged. Upon recovery from a system failure where data in the page cache is lost, log records in the page cache write log may be replayed to restore the page cache to its state prior to the system failure.

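The log-then-acknowledge and replay-on-recovery flow above can be sketched in a few lines; the class is a toy model (a plain list stands in for the persistent log device) and all names are illustrative.

```python
class PageCache:
    """Volatile page cache backed by a persistent write log (modeled here
    as a plain list standing in for the persistent storage device)."""

    def __init__(self):
        self.pages = {}   # volatile page cache contents
        self.log = []     # persistent page cache write log

    def write(self, page, data):
        self.pages[page] = data         # apply the write to the page cache
        self.log.append((page, data))   # store a log record describing it
        return "ack"                    # acknowledge only after logging

    def crash(self):
        self.pages = {}                 # volatile page cache is lost

    def recover(self):
        # Replay log records in order to restore the pre-failure state.
        for page, data in self.log:
            self.pages[page] = data

pc = PageCache()
pc.write(1, "alpha")
pc.write(2, "beta")
pc.write(1, "alpha-v2")
pc.crash()
pc.recover()   # pages are back to their pre-crash state
```

Replaying in log order is what makes the last write to page 1 win, reproducing the cache state at the moment of the crash.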
23-01-2014 publication date

A METHOD AND PROCESS FOR ENABLING DISTRIBUTING CACHE DATA SOURCES FOR QUERY PROCESSING AND DISTRIBUTED DISK CACHING OF LARGE DATA AND ANALYSIS REQUESTS

Number: CA0002918472A1

A method and system for large data and distributed disk cache processing in a Pneuron platform (100). The system and method include three specific interoperable but distributed functions: the adapter/cache Pneuron (14) and distributed disk files (34), a dynamic memory mapping tree (50), and distributed disk file cleanup (28). The system allows for large data processing considerations and the ability to access and acquire information from large data files (102) and rapidly distribute and provide the information to subsequent Pneurons (104) for processing. The system also provides the ability to store large result sets, the ability to deal with sequential as well as asynchronous parallel processing, the ability to address large unstructured data (web logs, email, web pages, etc.), as well as the ability to handle failures in large block processing.

29-05-2018 publication date

A METHOD AND PROCESS FOR ENABLING DISTRIBUTING CACHE DATA SOURCES FOR QUERY PROCESSING AND DISTRIBUTED DISK CACHING OF LARGE DATA AND ANALYSIS REQUESTS

Number: CA0002918472C

A method and system for large data and distributed disk cache processing in a Pneuron platform (100). The system and method include three specific interoperable but distributed functions: the adapter/cache Pneuron (14) and distributed disk files (34), a dynamic memory mapping tree (50), and distributed disk file cleanup (28). The system allows for large data processing considerations and the ability to access and acquire information from large data files (102) and rapidly distribute and provide the information to subsequent Pneurons (104) for processing. The system also provides the ability to store large result sets, the ability to deal with sequential as well as asynchronous parallel processing, the ability to address large unstructured data (web logs, email, web pages, etc.), as well as the ability to handle failures in large block processing.

05-05-2020 publication date

VIRTUAL MACHINE EXCLUSIVE CACHING

Number: CA0002871919C

Techniques, systems and an article of manufacture for caching in a virtualized computing environment. A method includes enforcing a host page cache on a host physical machine to store only base image data, and enforcing each of at least one guest page cache on a corresponding guest virtual machine to store only data generated by the guest virtual machine after the guest virtual machine is launched, wherein each guest virtual machine is implemented on the host physical machine.

06-12-1993 publication date

DISK DRIVE CONTROLLER WITH A POSTED WRITE CACHE MEMORY

Number: CA0002097762A1

A disk array controller includes a local microprocessor, a bus master interface, a compatible interface, buffer memory and a disk interface. The controller includes a DMA controller between the microprocessor, the bus master interface, the compatibility interface and the buffer memory. DMA controllers are also provided between the disk interface and the buffer memory. One of these DMA channels includes an XOR engine used to develop parity information used with the disk array. The various DMA controllers are cycled to allow access to the buffer memory and the disk interface. A posted write memory system is connected as a selectable disk drive to the disk interface. The posted write memory system includes mirrored, parity-checked and battery-backed semiconductor memory to allow posted write data to be retained during power-down conditions with only a very small chance of data loss.

23-09-2003 publication date

SYSTEM FOR ACCESSING DISTRIBUTED DATA CACHE CHANNEL AT EACH NETWORK NODE TO PASS REQUESTS AND DATA

Number: CA0002136727C
Assignee: PITTS, WILLIAM M.

Network Distributed Caches ("NDCs") (50) permit accessing a named dataset stored at an NDC server terminator site (22) in response to a request submitted to an NDC client terminator site (24) by a client workstation (42). In accessing the dataset, the NDCs (50) form an NDC data conduit (62) that provides an active virtual circuit ("AVC") from the NDC client site (24) through intermediate NDC sites (26B, 26A) to the NDC server site (22). Through the AVC provided by the conduit (62), the NDC sites (22, 26A and 26B) project an image of the requested portion of the named dataset into the NDC client site (24). The NDCs (50) maintain absolute consistency between the source dataset and its projections at all NDC client terminator sites (24, 204B and 206) at which client workstations access the dataset. Channels (116) in each NDC (50) accumulate profiling data from the requests to access the dataset for which they have been claimed. The NDCs (50) use the profile data stored in channels (116 ...

15-11-1976 publication date

Number: CH0000581864A5
Assignee: INTERNATIONAL BUSINESS MACHINES CORP.

13-09-1974 publication date

STORAGE DEVICE

Number: CH0000554048A
Assignee: INTERNATIONAL BUSINESS MACHINES CORP.

19-04-2017 publication date

Supplemental write cache command for bandwidth compression

Number: CN0106575262A

14-09-2018 publication date

Power-gating recovery mechanism for a multi-core data array

Number: CN0104572335B

08-06-2016 publication date

Information processing device

Number: CN0103377162B

21-09-2011 publication date

Computer system and method of data cache management

Number: CN0102193959A

The invention provides a computer system and a method of data cache management. When a conventional method is used to manage the metadata of a file on a cache server, the metadata cannot be moved from that cache server to another cache server while the file is being accessed by one of the terminals. The computer system includes a file server, cache servers, and a cache management server that manages authority information indicating the administration right over the cache data of each of the plurality of files. A first cache server updates the locking management information so that the locking state of the file is moved from the first cache server to a third cache server. The third cache server updates the third locking management information so that the locking state of the file is moved to the third cache server. The cache management server updates the authority information so that the administration right over the cache data of the first file is changed from the first cache server to ...

01-12-2004 publication date

Read-priority cache system and method

Number: CN0001550993A

... The expected access time (EAT) of a write request to a disk drive is essentially a measure of the expected service time of that write request. Write requests to the disk drive generated by the cache storage controller are essentially a maintenance function used to clean the cache memory. The disk drive modifies the EAT of write commands with a penalty so that read requests that require disk access are satisfied preferentially. The penalty may be a constant or may be established based on one or more factors, and may even be negative if a cache full of write requests needing demotion to disk must be cleaned. ...

08-06-2018 publication date

Distributed cache dynamic migration

Number: CN0108139974A

29-03-2019 publication date

Systems, methods, and interfaces for adaptive persistence

Number: CN0104903872B

09-11-2011 publication date

Data storage device and method thereof

Number: CN0101038532B

A microprocessor 18 in a control device 13 of a data storage device determines that a read request has a sequential access property when the transfer size of the data specified by the read request from a host computer 11 equals a preset pre-fetch determination size, and sends the data for the read request to the host computer 11. The microprocessor 18 also reads data in succeeding areas contiguous with the data designated by the read request from a storage device 12 into a cache memory 20. The data storage device thereby reduces the number of accesses from the control device 13 to the storage device 12, improving the response time as well as the throughput of the data storage device.

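The sequential-access heuristic above (prefetch the succeeding area only when the request's transfer size equals a preset determination size) can be sketched as follows; the 64-block setting, the `read_blocks` callback, and all names are invented for illustration.

```python
PREFETCH_DETERMINATION_SIZE = 64  # blocks; the preset size is illustrative

def handle_read(start, size, read_blocks, cache):
    """Serve a read request; when the transfer size equals the pre-fetch
    determination size, treat the access as sequential and stage the
    succeeding contiguous area into the cache."""
    data = read_blocks(start, size)
    if size == PREFETCH_DETERMINATION_SIZE:
        cache[start + size] = read_blocks(start + size, size)  # read ahead
    return data

read_blocks = lambda start, size: list(range(start, start + size))
cache = {}
data = handle_read(0, 64, read_blocks, cache)   # matching size: prefetches
```

A request of any other size leaves the cache untouched, so random reads pay no prefetch cost.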
19-12-2003 publication date

VIRTUAL STORAGE SYSTEM

Number: FR0002812736B1
Assignee: NEARTEK INC.

18-10-2019 publication date

INVALIDATION DATA AREA FOR CACHE

Number: FR0003023030B1

31-05-2019 publication date

SYSTEM AND METHOD FOR CACHING SOLID-STATE DEVICE READ QUERY RESULTS

Number: FR0003020885B1

23-04-1982 publication date

CACHE MEMORY SUBSYSTEM FOR A MAGNETIC DISK STORAGE ASSEMBLY

Number: FR0002458846B3

26-08-2005 publication date

DISK ARRAY SYSTEM

Number: FR0002866732A1

The present invention relates to a disk array system (600) including storage devices (300), a storage device control unit (100), a connection unit (150) connected to the control unit (100), channel control units (110), a shared memory (120), and a cache memory (130). Each unit (110) includes a first processor for converting file data, received via a local area network (400) outside the system (600) to which the units (110) belong, into block data and requesting storage of the converted data in the plurality of devices (300), and a second processor for transferring the block data to the devices (300) via the connection unit (150) and the control unit (100) in response to a request sent by the first processor, and is connected to the unit (150) and the network (400).

05-11-2003 publication date

METHOD AND APPARATUS FOR IMPLEMENTING AUTOMATIC CACHE VARIABLE UPDATE

Number: KR0100404374B1

08-06-2009 publication date

METHOD AND APPARATUS FOR PERFORMING BLOCK CACHING IN A NON-VOLATILE MEMORY SYSTEM

Number: KR0100901551B1

30-12-2011 publication date

MEMORY SYSTEM

Number: KR0101101655B1

09-10-2012 publication date

Storage device and information processing system

Number: KR0101189259B1

27-03-2017 publication date

Apparatus, system, and method for caching data on a solid-state storage device

Number: KR0101717644B1

... An apparatus, system, and method for caching data using a solid-state storage device are described. The solid-state storage device maintains metadata relating to cache operations performed on the device and to the device's storage operations. The metadata indicates which data in the cache is valid, as well as which data in the non-volatile cache is stored in the backing store. A backup engine works through units of the non-volatile cache device and backs up valid data to the backing store. During a grooming operation, a groomer determines whether data is valid and whether it is discardable; data that is both valid and discardable may be removed during the grooming operation. The groomer may determine whether data is cold when deciding whether to remove it from the cache device. The cache device may present to clients a logical space that is the same size as the backing store, and the cache device may be transparent to clients.

12-11-2003 publication date

HYBRID DATA STORAGE SYSTEM

Number: KR20030087059A

A hybrid data storage system that includes a controller that communicates with an outside source of information, and at least one holographic drive engine that communicates with the controller. The information is transmitted between the controller and the outside source of information and between the holographic drive engine and the controller.

09-12-2011 publication date

TIME BUDGETING FOR NON-DATA TRANSFER OPERATIONS IN DRIVE UNITS

Number: KR1020110133059A

19-08-2010 publication date

PROGRAMMING METHOD AND DEVICE FOR A BUFFER CACHE IN A SOLID-STATE DISK SYSTEM

Number: KR2010093114A1

A programming method and device for a buffer cache in a solid-state disk system are proposed. A buffer cache programming device in a solid-state disk system according to the present invention comprises: a buffer cache for storing pages; a memory unit comprising a plurality of memory chips; and a control unit for selecting at least one of the pages as a victim page, with reference to a wait time which can occur when storing data in at least one target memory chip from among the plurality of memory chips.

12-12-2016 publication date

Page cache write logging at block-based storage

Number: KR1020160142311A

... The block-based storage system may implement page cache write logging. A write request for a data volume maintained at a storage node may be received at the storage node. The page cache may be updated according to the request. A log record describing the page cache update may be stored in a page cache write log maintained on a persistent storage device. Once the write request has been performed in the page cache and recorded in a log record of the page cache write log, the write request may be acknowledged. Upon recovery from a system failure in which the data in the page cache is lost, the log records of the page cache write log may be replayed to restore the page cache to its state before the system failure.

09-05-2011 publication date

METHODS TO COMMUNICATE A TIMESTAMP TO A STORAGE SYSTEM

Number: KR1020110048066A

24-08-2007 дата публикации

DISTRIBUTED STORAGE ARCHITECTURE BASED ON BLOCK MAP CACHING AND VFS STACKABLE FILE SYSTEM MODULES

Номер: KR1020070083489A
Принадлежит:

A distributed storage architecture and tiered caching system are employed in a video-on-demand or streaming media application. An illustrative embodiment of a distributed storage architecture, based on block map caching and virtual file system (VFS) stackable file system modules, includes a controller, a first computer and a second computer, first and second switches, and a storage device. The first computer includes a local file system and uses it to store asset files on the first storage device. The first computer employs a process to create a block map for each asset file, the block map including information concerning boundaries where an asset file is stored on the first storage device.

Подробнее
16-01-2011 дата публикации

Memory system, method of controlling memory system, and information processing apparatus

Номер: TW0201102816A
Принадлежит:

A WC resource usage is compared with an auto flush (AF) threshold Caf that is smaller than an upper limit Clmt, and when the WC resource usage exceeds the AF threshold Caf, the organizing state of a NAND memory 10 is checked. When the organizing of the NAND memory 10 has proceeded sufficiently, data is flushed from a write cache (WC) 21 to the NAND memory 10 early, so that the response to subsequent write commands is improved.
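The two-threshold decision above can be sketched directly: below Caf do nothing, between Caf and Clmt flush early only if the NAND side has finished enough internal organizing, and at Clmt flush unconditionally. Threshold values and the `nand_organized` flag are illustrative assumptions.

```python
def should_flush(wc_usage, caf, clmt, nand_organized):
    """Early-flush policy for a write cache (WC).

    caf  -- auto-flush threshold (Caf), below the hard limit
    clmt -- hard upper limit (Clmt) on WC resource usage
    """
    if wc_usage >= clmt:            # at the hard limit: must flush
        return True
    if wc_usage > caf:              # above the auto-flush threshold:
        return nand_organized       # flush early only if NAND is ready
    return False                    # plenty of room: keep caching

print(should_flush(80, caf=70, clmt=100, nand_organized=True))   # -> True
print(should_flush(80, caf=70, clmt=100, nand_organized=False))  # -> False
print(should_flush(100, caf=70, clmt=100, nand_organized=False)) # -> True
```

The point of the lower threshold is to spend idle NAND bandwidth early, so the hard limit (and the stall it implies) is rarely reached.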

Подробнее
11-02-2014 дата публикации

Adaptive storage system including hard disk drive with flash interface

Номер: TWI426444B
Автор: SEHAT SUTARDJA, YUN YANG

Подробнее
22-11-2007 дата публикации

ADAPTIVE STORAGE SYSTEM INCLUDING HARD DISK DRIVE WITH FLASH INTERFACE

Номер: WO2007133646A2
Принадлежит:

Various types of data storage systems employ low power disk drives to cache data to/from high power disk drives to reduce power consumption and access times. Some of the disk drives may communicate with a host device via a non-volatile semiconductor memory interface such as a flash memory interface.

Подробнее
12-07-2007 дата публикации

CACHE DISASSOCIATION DETECTION

Номер: WO000002007078588A2
Автор: GRIMSRUD, Knut, S.
Принадлежит:

In some embodiments an expected value is compared with a number of times a storage device has been powered up and/or spun up. A cache disassociation is detected in response to the comparing. Other embodiments are described and claimed.
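The comparison described above amounts to: remember the disk's power-up (or spin-up) count as of the last time the cache was known valid, and at boot compare it with the count the drive now reports; a mismatch suggests the disk was used without the cache, so the cached contents must be treated as disassociated. A minimal sketch with illustrative names:

```python
def cache_disassociated(expected_power_cycles, reported_power_cycles):
    """True if the drive's power-up count no longer matches the value
    the cache recorded, i.e. the disk may have changed behind the cache."""
    return reported_power_cycles != expected_power_cycles

# The drive was powered up once more than the cache ever observed:
print(cache_disassociated(41, 42))   # -> True
print(cache_disassociated(42, 42))   # -> False
```

On a `True` result the safe action is to invalidate the cache rather than serve possibly stale blocks.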

Подробнее
04-04-2013 дата публикации

APPARATUS, METHOD AND SYSTEM THAT STORES BIOS IN NON-VOLATILE RANDOM ACCESS MEMORY

Номер: WO2013048491A1
Принадлежит:

A non- volatile random access memory (NVRAM) is used in a computer system to perform multiple roles in the platform storage hierarchy. The NVRAM is byte-rewritable and byte-erasable by the processor. The NVRAM is coupled to the processor to be directly accessed by the processor without going through an I/O subsystem. The NVRAM stores a Basic Input and Output System (BIOS). During a Pre-Extensible Firmware Interface (PEI) phase of the boot process, the cache within the processor can be used in a write-back mode for execution of the BIOS.

Подробнее
06-03-2014 дата публикации

TRANSPARENT HOST-SIDE CACHING OF VIRTUAL DISKS LOCATED ON SHARED STORAGE

Номер: WO2014036005A1
Принадлежит:

Techniques for using a host-side cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, the hypervisor of a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The hypervisor can then process the I/O request by accessing a host-side cache that resides on one or more cache devices distinct from the shared storage device, where the accessing of the host-side cache is transparent to the VM.

Подробнее
24-06-2010 дата публикации

REDUNDANT DATA STORAGE FOR UNIFORM READ LATENCY

Номер: WO2010071655A1
Принадлежит:

A memory apparatus (100, 200, 300, 500, 600, 700) has a plurality of memory banks (d0 to d7, m0 to m3, p, p0, p1), wherein a write or erase operation to the memory banks (d0 to d7, m0 to m3, p, p0, p1) is substantially slower than a read operation to the banks (d0 to d7, m0 to m3, p, p0, p1). The memory apparatus (100, 200, 300, 500, 600, 700) is configured to read a redundant storage of data instead of a primary storage location in the memory banks (d0 to d7, m0 to m3, p, p0, p1) for the data or reconstruct requested data in response to a query for the data when the primary storage location is undergoing at least one of a write operation and an erase operation.

Подробнее
27-07-2000 дата публикации

PRELOADING DATA IN A CACHE MEMORY ACCORDING TO USER-SPECIFIED PRELOAD CRITERIA

Номер: WO2000043854A2
Принадлежит:

A device and method for loading data into a computer system storage unit, in the form of an intermediate volume storage unit used as a user-configurable cache. Access requests to a mass storage device such as a disk or tape are intercepted by a device driver that compares the access requests against a directory of the contents of the user-configurable cache. If the cache contains the requested data, the access requests are serviced from the cache instead of being passed on to the device driver for the target mass storage device. Because the cache operates in memory with an access time considerably shorter than that of most mass storage devices, the access requests are satisfied far more quickly than they would be by accessing the mass storage device directly. Data is thus preloaded into the cache as needed ...

Подробнее
04-11-2004 дата публикации

METHOD AND SYSTEM FOR IMPROVING THE PERFORMANCE OF A PROCESSING SYSTEM

Номер: WO2004093815A3
Автор: ROBERTI, Paolo F.
Принадлежит:

A method and system for improving the performance of a processing system is disclosed. The processing system comprises a plurality of host computers and at least one control unit (CU) coupled to the host computers. The control unit comprises a cache, with a disk array coupled to the CU. The method and system comprise querying an operating system of at least one host computer to determine the storage medium that contains an object to be cached and providing the data in the portion of the disk array to be cached. The method and system further comprise providing a channel command sequence and sending the channel command sequence to the CU via an I/O operation at predetermined time intervals until the object is deactivated. A method and system in accordance with the present invention instructs a control unit (CU) or a storage medium to keep some objects constantly in its cache, so as to improve the overall response time of transaction systems running on one or more host computers and accessing data ...

Подробнее
07-02-1991 дата публикации

VARIABLE CAPACITY CACHE MEMORY

Номер: WO1991001525A1
Автор: ESBENSEN, Daniel, M.
Принадлежит:

A variable length cache system (58, 60) keeps track of the amount of available space on an output device (10). The capacity of the cache system (60) is continuously increased so long as it is less than the available output space on the output unit (10). Once the size of the cache system exceeds the available output space on the output unit (10), which is less than the total space available on the output unit (10) by a predetermined amount, the contents of the cache memory (60) are flushed or written to the output device (10) and the size of the cache memory (60) is reduced to zero.
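The grow-then-flush behavior above can be captured in a few lines: the cache grows freely while it stays below the device's available space minus a reserve, and once a write would push it past that bound, the contents are written out and the cache size drops back to zero. Names and sizes are illustrative.

```python
class VariableCache:
    def __init__(self, device_free, margin):
        self.buffer = []
        self.size = 0
        self.device_free = device_free   # space left on the output unit
        self.margin = margin             # reserve kept free on the device

    def write(self, data):
        self.buffer.append(data)
        self.size += len(data)
        # Cache may not exceed available output space minus the margin.
        if self.size > self.device_free - self.margin:
            self.flush()

    def flush(self):
        self.device_free -= self.size    # contents written to the device
        self.buffer.clear()
        self.size = 0

c = VariableCache(device_free=100, margin=10)
c.write(b"x" * 50)                 # 50 <= 90: kept in the cache
print(c.size)                      # -> 50
c.write(b"y" * 45)                 # 95 > 90: cache flushed to the device
print((c.size, c.device_free))     # -> (0, 5)
```

Tying the cache's capacity to the device's remaining space is what makes the flush safe: everything cached is guaranteed to fit when written out.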

Подробнее
21-05-2004 дата публикации

METHOD AND DEVICE FOR PERSISTENT-MEMORY MANAGEMENT

Номер: WO2004042584A3
Принадлежит:

The present invention relates to a method for managing memory space of a persistent-memory device and to a memory management device. The memory management method of the invention comprises a step of allocating (S14) at least one first part of said memory space to a file system (74) upon request from said file system (74) or from an application (70). The method and the device of the present invention enable a dynamical allocation of persistent-memory space to a file system. This way, the memory space of a persistent memory is effectively used also for write-caching. At the same time, write-caching and storing steps can be accelerated.

Подробнее
22-07-1993 дата публикации

COMPUTER MEMORY ARRAY CONTROL

Номер: WO1993014455A1
Принадлежит:

A computer memory controller for interfacing to a host computer comprises a buffer memory (26) for interfacing to a plurality of memory units (42) and for holding data transferred to and from them, and a central controller (22) operative to control the transfer of data to and from the host computer and the memory units (42). The buffer memory (26) is controlled to form a plurality of buffer segments for addressably storing data read from or written to the memory units (42). The central controller (22) is operative to allocate, for a read or write request from the host computer, a buffer segment of a size sufficient for the data. The central controller (22) is also operative, in response to data requests from the host computer, to control the memory units (42) to seek data stored in different memory units (42) simultaneously.

Подробнее
17-11-2005 дата публикации

ADAPTIVE CACHE ENGINE FOR STORAGE AREA NETWORK INCLUDING SYSTEMS AND METHODS RELATED THERETO

Номер: WO2005109213A1
Принадлежит:

Featured is a data storage back-up system for replication, mirroring and/or backing up data, including one or more first and second data storage devices that embody iSCSI, FC or similar principles and that are operably coupled to each other, preferably via a WAN. The first data storage device (100) is configured and arranged so that there are two writes of data: one write to a persistent storage device (150) from which reads are done, and another write to a SAPS device wherein the data is saved using log-structured file system (LSF) techniques. After saving data to the first storage device, the data logs in the SAPS device (140) are communicated to the second data storage device, where a de-staging process is conducted so as to de-stage the data logs and write the de-staged data to a persistent storage device in the second data storage device.

Подробнее
25-08-2005 дата публикации

STORAGE SYSTEM INCLUDING HIERARCHICAL CACHE METADATA

Номер: WO2005078588A1
Автор: RAO, Raghavendra, J.
Принадлежит:

A storage system including hierarchical cache metadata storages includes a cache, a first metadata storage, and a second metadata storage. In one embodiment, the cache may store a plurality of data blocks in a first plurality of locations. The first metadata storage may include a plurality of entries that stores metadata including block addresses of data blocks within the cache. The second metadata storage may include a second plurality of locations for storing metadata including the block addresses identifying the data blocks within the cache. The metadata stored within the second metadata storage may also include pointers to the data blocks within the cache. The cache and the first metadata storage are non-volatile storages. However, the second metadata storage may be a volatile storage.

Подробнее
06-07-1995 дата публикации

SOLID STATE MEMORY SYSTEM

Номер: WO1995018407A1
Принадлежит:

A storage device that connects to a standard computer system and communicates with a computer system bus, being capable of receiving, storing and retrieving data intended for a magnetic disk drive; the storage device comprising: a main memory; a cache memory; block transfer buffer means to store 512 bytes of data; interface means to receive and store address, data and control information from computer system; look-up table means for storing one physical block address corresponding to each address sent by computer system, and to indicate whether the physical block address is for cache or main memory; additional memory means for storing information on the addresses of cache and also main memory which are unused and suitable for writing to; control means to access a record in look-up table means using address sent by computer system, and to determine a physical block address corresponding to standard address format.

Подробнее
09-02-2012 дата публикации

Semiconductor storage device with volatile and nonvolatile memories

Номер: US20120033496A1
Принадлежит: Individual

A semiconductor storage device includes a first memory area configured in a volatile semiconductor memory, second and third memory areas configured in a nonvolatile semiconductor memory, and a controller which executes following processing. The controller executes a first processing for storing a plurality of data by the first unit in the first memory area, a second processing for storing data outputted from the first memory area by a first management unit in the second memory area, and a third processing for storing data outputted from the first memory area by a second management unit in the third memory area.

Подробнее
16-02-2012 дата публикации

Intelligent cache management

Номер: US20120042123A1
Автор: Curt Kolovson
Принадлежит: Curt Kolovson

An exemplary storage network, storage controller, and methods of operation are disclosed. In one embodiment, a method of managing cache memory in a storage controller comprises receiving, at the storage controller, a cache hint generated by an application executing on a remote processor, wherein the cache hint identifies a memory block managed by the storage controller, and managing a cache memory operation for data associated with the memory block in response to the cache hint received by the storage controller.

Подробнее
23-02-2012 дата публикации

Computer system, control apparatus, storage system and computer device

Номер: US20120047502A1
Автор: Akiyoshi Hashimoto
Принадлежит: HITACHI LTD

The computer system includes a server being configured to manage a first virtual machine to which a first part of a server resource included in the server is allocated and a second virtual machine to which a second part of the server resource is allocated. The computer system also includes a storage apparatus including a storage controller and a plurality of storage devices and being configured to manage a first virtual storage apparatus to which a first storage area on the plurality of storage devices is allocated and a second virtual storage apparatus to which a second storage area on the plurality of storage devices is allocated. The first virtual machine can access the first virtual storage apparatus but not the second virtual storage apparatus, and the second virtual machine can access the second virtual storage apparatus but not the first virtual storage apparatus.

Подробнее
12-04-2012 дата публикации

Method for managing and tuning data movement between caches in a multi-level storage controller cache

Номер: US20120089782A1
Принадлежит: LSI Corp

A method for managing data movement in a multi-level cache system having a primary cache and a secondary cache. The method includes determining whether an unallocated space of the primary cache has reached a minimum threshold; selecting at least one outgoing data block from the primary cache when the primary cache reached the minimum threshold; initiating a de-stage process for de-staging the outgoing data block from the primary cache; and terminating the de-stage process when the unallocated space of the primary cache has reached an upper threshold. The de-stage process further includes determining whether a cache hit has occurred in the secondary cache before; storing the outgoing data block in the secondary cache when the cache hit has occurred in the secondary cache before; generating and storing metadata regarding the outgoing data block; and deleting the outgoing data block from the primary cache.
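The threshold-driven de-stage loop above can be sketched as: start de-staging when free space in the primary cache falls to a minimum threshold, stop once it recovers to an upper threshold, and keep an outgoing block in the secondary cache only if it has hit there before. Thresholds, the hit-history set, and all names are illustrative assumptions.

```python
def destage(primary, secondary, hit_history, capacity, min_free, upper_free):
    """De-stage blocks from primary to secondary between two thresholds.

    primary / secondary -- dicts: block -> data
    hit_history         -- blocks that have hit in the secondary cache before
    """
    if capacity - len(primary) > min_free:
        return                                   # enough room: do nothing
    while primary and capacity - len(primary) < upper_free:
        block, data = primary.popitem()          # pick an outgoing block
        if block in hit_history:                 # hit in secondary before?
            secondary[block] = data              # keep it one level down
        # else: dropped entirely (only metadata would be recorded)

primary = {1: "a", 2: "b", 3: "c", 4: "d"}
secondary = {}
destage(primary, secondary, hit_history={2, 3},
        capacity=5, min_free=1, upper_free=3)
print((sorted(primary), secondary))   # -> ([1, 2], {3: 'c'})
```

The gap between the two thresholds gives the de-stage process hysteresis, so it runs in bursts rather than oscillating on every write.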

Подробнее
19-04-2012 дата публикации

System and Method for the Synchronization of a File in a Cache

Номер: US20120096228A1
Автор: David Thomas, Scott Wells
Принадлежит: Individual

The present invention provides a system and method for bi-directional synchronization of a cache. One embodiment of the system of this invention includes a software program stored on a computer readable medium. The software program can be executed by a computer processor to receive a database asset from a database; store the database asset as a cached file in a cache; determine if the cached file has been modified; and if the cached file has been modified, communicate the cached file directly to the database. The software program can poll a cached file to determine if the cached file has changed. Thus, bi-directional synchronization can occur.

Подробнее
07-06-2012 дата публикации

Recommendation based caching of content items

Номер: US20120144117A1
Принадлежит: Microsoft Corp

Content item recommendations are generated for users based on metadata associated with the content items and a history of content item usage associated with the users. Each content item recommendation identifies a user and a content item and includes a score that indicates how likely the user is to view the content item. Based on the content item recommendations, and constraints of one or more caches, the content items are selected for storage in one or more caches. The constraints may include users that are associated with each cache, the geographical location of each cache, the size of each cache, and/or costs associated with each cache such as bandwidth costs. The content items stored in a cache are recommended to users associated with the cache.
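The selection step above can be sketched greedily: keep only the recommendations for users a given cache serves, then admit items in descending score order until the cache's size constraint is hit. Scores, item sizes, and names are illustrative assumptions; the patent's selection could be more elaborate.

```python
def fill_cache(recommendations, item_sizes, cache_users, cache_size):
    """Choose content items for one cache under its size constraint."""
    # Keep only recommendations for users associated with this cache.
    relevant = [r for r in recommendations if r["user"] in cache_users]
    relevant.sort(key=lambda r: r["score"], reverse=True)
    chosen, used = [], 0
    for r in relevant:
        size = item_sizes[r["item"]]
        if r["item"] not in chosen and used + size <= cache_size:
            chosen.append(r["item"])
            used += size
    return chosen

recs = [
    {"user": "u1", "item": "movie_a", "score": 0.9},
    {"user": "u2", "item": "movie_b", "score": 0.7},
    {"user": "u3", "item": "movie_c", "score": 0.95},  # u3 not on this cache
]
sizes = {"movie_a": 4, "movie_b": 3, "movie_c": 2}
print(fill_cache(recs, sizes, cache_users={"u1", "u2"}, cache_size=5))
# -> ['movie_a']
```

Filtering by the cache's associated users is what lets geographically or organizationally scoped caches hold different content under the same global recommendation model.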

Подробнее
14-06-2012 дата публикации

Systems and methods for background destaging storage tracks

Номер: US20120151148A1
Принадлежит: International Business Machines Corp

Systems and methods for background destaging storage tracks from cache when one or more hosts are idle are provided. One system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. Also provided are physical computer storage mediums including a computer program product for performing the above method.
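The idle-detection rule above reduces to: if no host has written to the write cache within some window, treat the hosts as idle and de-stage cached tracks in the background; otherwise leave the cache alone. The idle window and names are illustrative assumptions.

```python
def destage_if_idle(write_cache, last_write_time, now, idle_window):
    """De-stage all tracks only when the host(s) appear idle."""
    if now - last_write_time < idle_window:
        return []                     # a host wrote recently: do not destage
    destaged = list(write_cache)      # hosts idle: destage in background
    write_cache.clear()
    return destaged

cache = ["track1", "track2"]
print(destage_if_idle(cache, last_write_time=100, now=104, idle_window=10))
# -> [] (host still active)
print(destage_if_idle(cache, last_write_time=100, now=120, idle_window=10))
# -> ['track1', 'track2']
```

Destaging only during idle periods keeps the de-stage traffic from competing with foreground host writes.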

Подробнее
14-06-2012 дата публикации

Systems and methods for managing cache destage scan times

Номер: US20120151151A1
Принадлежит: International Business Machines Corp

Systems and methods for managing destage scan times in a cache are provided. One system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. Physical computer storage mediums including a computer program product for performing the above method are also provided.

Подробнее
14-06-2012 дата публикации

System and method for maintaining a data redundancy scheme in a solid state memory in the event of a power loss

Номер: US20120151253A1
Автор: Robert L. Horn
Принадлежит: Western Digital Technologies Inc

Embodiments of the invention are directed to systems and methods for reducing an amount of backup power needed to provide power fail safe preservation of a data redundancy scheme such as RAID that is implemented in solid state storage devices where new write data is accumulated and written along with parity data. Because new write data cannot be guaranteed to arrive in integer multiples of stripe size, a full stripe's worth of new write data may not exist when power is lost. Various embodiments use truncated RAID stripes (fewer storage elements per stripe) to save cached write data when a power failure occurs. This approach allows the system to maintain RAID parity data protection in a power fail cache flush case even though a full stripe of write data may not exist, thereby reducing the amount of backup power needed to maintain parity protection in the event of power loss.

Подробнее
19-07-2012 дата публикации

Method and system for cache endurance management

Номер: US20120185638A1
Принадлежит: Sandisk IL Ltd

A system and method for cache endurance management is disclosed. The method may include the steps of querying a storage device with a host to acquire information relevant to a predicted remaining lifetime of the storage device, determining a download policy modification for the host in view of the predicted remaining lifetime of the storage device and updating the download policy database of a download manager in accordance with the determined download policy modification.

Подробнее
26-07-2012 дата публикации

Managing Access to a Cache Memory

Номер: US20120191917A1
Принадлежит: International Business Machines Corp

Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area.
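The partitioned-lock scheme above can be sketched as a cache split into areas, each guarded by its own lock, so threads touching different areas never contend. Mapping keys to areas by hash is an assumption made for illustration.

```python
import threading

class PartitionedCache:
    def __init__(self, n_areas):
        self.areas = [{} for _ in range(n_areas)]
        self.locks = [threading.Lock() for _ in range(n_areas)]

    def _area(self, key):
        return hash(key) % len(self.areas)   # illustrative area mapping

    def put(self, key, value):
        i = self._area(key)
        with self.locks[i]:          # only the owning area is locked
            self.areas[i][key] = value

    def get(self, key):
        i = self._area(key)
        with self.locks[i]:
            return self.areas[i].get(key)

c = PartitionedCache(n_areas=4)
c.put("k1", 10)
c.put("k2", 20)
print((c.get("k1"), c.get("k2")))    # -> (10, 20)
```

With one global lock, every update serializes; with per-area locks, contention is limited to threads that happen to hit the same area.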

Подробнее
16-08-2012 дата публикации

Managing read requests from multiple requestors

Номер: US20120210022A1
Автор: Alexander B. Beaman
Принадлежит: Apple Computer Inc

Techniques are disclosed for managing data requests from multiple requestors. According to one implementation, when a new data request is received, a determination is made as to whether a companion relationship should be established between the new data request and an existing data request. Such a companion relationship may be appropriate under certain conditions. If a companion relationship is established between the new data request and an existing data request, then when data is returned for one request, it is used to satisfy the other request as well. This helps to reduce the number of data accesses that need to be made to a data storage, which in turn enables system efficiency to be improved.
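The companion relationship above can be sketched as: if a new read targets a block with an in-flight read, attach it as a companion rather than issuing a second storage access; when the data returns, satisfy every attached requestor at once. Names are illustrative assumptions.

```python
class RequestManager:
    def __init__(self):
        self.inflight = {}           # block -> list of waiting requestors

    def read(self, block, requestor):
        if block in self.inflight:               # companion relationship
            self.inflight[block].append(requestor)
            return "joined"
        self.inflight[block] = [requestor]       # first requestor: issue I/O
        return "issued"

    def complete(self, block, data):
        waiters = self.inflight.pop(block)
        return {r: data for r in waiters}        # one access serves all

m = RequestManager()
print(m.read(5, "A"))            # -> 'issued'
print(m.read(5, "B"))            # -> 'joined'
print(m.complete(5, b"payload")) # -> {'A': b'payload', 'B': b'payload'}
```

Collapsing duplicate in-flight reads is what reduces the number of accesses to the underlying storage, which the abstract cites as the efficiency gain.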

Подробнее
23-08-2012 дата публикации

Secure management of keys in a key repository

Номер: US20120213369A1
Принадлежит: International Business Machines Corp

A method for managing keys in a computer memory including receiving a request to store a first key to a first key repository, storing the first key to a second key repository in response to the request, and storing the first key from the second key repository to the first key repository within said computer memory based on a predetermined periodicity.

Подробнее
23-08-2012 дата публикации

Recycling of cache content

Номер: US20120215981A1
Принадлежит: International Business Machines Corp

A method of operating a storage system comprises detecting a cut in an external power supply, switching to a local power supply, preventing receipt of input/output commands, copying content of cache memory to a local storage device and marking the content of the cache memory that has been copied to the local storage device. When a resumption of the external power supply is detected, the method continues by charging the local power supply, copying the content of the local storage device to the cache memory, processing the content of the cache memory with respect to at least one storage volume and receiving input/output commands. When detecting a second cut in the external power supply, the system switches to the local power supply, prevents receipt of input/output commands, and copies to the local storage device only the content of the cache memory that is not marked as present.

Подробнее
20-09-2012 дата публикации

Flash storage device with read disturb mitigation

Номер: US20120239990A1
Принадлежит: Stec Inc

A method for managing a flash storage device includes initiating a read request and reading requested data from a first storage block of a plurality of storage blocks in the flash storage device based on the read request. The method further includes incrementing a read count for the first storage block and moving the data in the first storage block to an available storage block of the plurality of storage blocks when the read count reaches a first threshold value.
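The mitigation above can be sketched as: every read increments the block's read counter, and once the counter reaches the threshold, the data is moved to an available block (refreshing the stored charge before read disturb corrupts it). The threshold value and class layout are illustrative assumptions.

```python
class FlashDevice:
    def __init__(self, blocks, threshold):
        self.blocks = blocks             # block_id -> data
        self.read_counts = {b: 0 for b in blocks}
        self.threshold = threshold
        self.next_free = max(blocks) + 1 # illustrative free-block allocator

    def read(self, block):
        data = self.blocks[block]
        self.read_counts[block] += 1
        if self.read_counts[block] >= self.threshold:
            new_block = self.next_free           # move to an available block
            self.next_free += 1
            self.blocks[new_block] = data
            del self.blocks[block]
            del self.read_counts[block]
            self.read_counts[new_block] = 0      # counter restarts at zero
            return data, new_block
        return data, block

dev = FlashDevice({0: b"d"}, threshold=2)
print(dev.read(0))   # -> (b'd', 0)
print(dev.read(0))   # -> (b'd', 1)  data relocated after 2 reads
```

Relocating heavily read data bounds the number of disturb events any physical block's neighbors can accumulate.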

Подробнее
27-09-2012 дата публикации

Communication device, communication method, and computer- readable recording medium storing program

Номер: US20120246402A1
Автор: Shunsuke Akimoto
Принадлежит: NEC Corp

A communication device that reduces the processing time to install data from a disc storage medium onto multiple servers is provided. A protocol serializer 10 of a communication device 5 serializes read requests received from servers A1 to A2 for target data stored on a disc storage medium K in a processing order. A cache controller 11 determines whether the target data corresponding to the read requests are present in a cache memory 4, in the order of the serialized read requests, and, if present, receives the target data from the cache memory 4 via a memory controller 12. If not present, the cache controller 11 acquires the target data from the disc storage medium K via a DVD/CD controller 13. Then, the protocol serializer 10 sends the target data acquired by the cache controller 11 to the server that was the transmission source of the read request corresponding to the target data.

Подробнее
04-10-2012 дата публикации

Method for giving read commands and reading data, and controller and storage system using the same

Номер: US20120254522A1
Автор: Chih-Kang Yeh
Принадлежит: Phison Electronics Corp

A method for giving a read command to a flash memory chip to read data to be accessed by a host system is provided. The method includes receiving a host read command; determining whether the received host read command follows a last host read command; if yes, giving a cache read command to read data from the flash memory chip; and if no, giving a general read command and the cache read command to read data from the flash memory chip. Accordingly, the method can effectively reduce time needed for executing the host read commands by using the cache read command to combine the host read commands which access continuous physical addresses and pre-read data stored in a next physical address.
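The command-selection rule above can be sketched as: if the new host read continues the previous one at the next physical address, only a cache read command is needed, because the chip has already pre-read the following page; otherwise a general read command must precede the cache read. The command names are illustrative stand-ins.

```python
def commands_for(host_read_addr, last_read_addr):
    """Choose flash commands for a host read based on address continuity."""
    if last_read_addr is not None and host_read_addr == last_read_addr + 1:
        return ["CACHE_READ"]                 # sequential: cache read only
    return ["GENERAL_READ", "CACHE_READ"]     # non-sequential: full sequence

print(commands_for(101, last_read_addr=100))  # -> ['CACHE_READ']
print(commands_for(200, last_read_addr=100))  # -> ['GENERAL_READ', 'CACHE_READ']
```

Skipping the general read on sequential runs is where the claimed time saving comes from: the array-to-register transfer for the next page overlaps the data-out of the current one.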

Подробнее
22-11-2012 дата публикации

Optimized flash based cache memory

Номер: US20120297113A1
Принадлежит: International Business Machines Corp

Embodiments of the invention relate to throttling accesses to a flash memory device. The flash memory device is part of a storage system that includes the flash memory device and a second memory device. The throttling is performed by logic that is external to the flash memory device and includes calculating a throttling factor responsive to an estimated remaining lifespan of the flash memory device. It is determined whether the throttling factor exceeds a threshold. Data is written to the flash memory device in response to determining that the throttling factor does not exceed the threshold. Data is written to the second memory device in response to determining that the throttling factor exceeds the threshold.
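The wear-based routing above can be sketched as: derive a throttling factor from the flash device's estimated remaining lifespan, and divert writes to the second memory device whenever the factor exceeds a threshold. The particular formula below is an illustrative assumption, not the patent's.

```python
def route_write(target_lifetime_days, estimated_remaining_days, threshold):
    """Route a write to flash or the second device based on wear.

    A higher factor means the flash is wearing out faster than planned.
    """
    factor = target_lifetime_days / max(estimated_remaining_days, 1)
    return "flash" if factor <= threshold else "second_device"

# Flash is on track to outlive its target: keep writing to it.
print(route_write(365, estimated_remaining_days=400, threshold=1.5))
# -> 'flash'
# Flash is wearing out ~3.7x too fast: divert writes.
print(route_write(365, estimated_remaining_days=100, threshold=1.5))
# -> 'second_device'
```

Because the logic lives outside the flash device, it can be applied to commodity parts that expose only wear statistics, not throttling controls.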

Подробнее
29-11-2012 дата публикации

Populating strides of tracks to demote from a first cache to a second cache

Номер: US20120303875A1
Принадлежит: International Business Machines Corp

Provided are a computer program product, system, and method for populating strides of tracks to demote from a first cache to a second cache. A first cache maintains modified and unmodified tracks from a storage system subject to Input/Output (I/O) requests. A determination is made to demote tracks from the first cache. A determination is made as to whether there are enough tracks ready to demote to form a stride, wherein tracks are written to a second cache in strides defined for a Redundant Array of Independent Disks (RAID) configuration. A stride is populated with tracks ready to demote in response to determining that there are enough tracks ready to demote to form the stride. The stride of tracks to demote from the first cache is promoted to the second cache. The tracks in the second cache that are modified are destaged to the storage system.

Подробнее
29-11-2012 дата публикации

Implementing storage adapter performance optimization with hardware chains to select performance path

Номер: US20120303886A1
Принадлежит: International Business Machines Corp

A method and controller for implementing storage adapter performance optimization with a predefined chain of hardware operations configured to implement a particular performance path minimizing hardware and firmware interactions, and a design structure on which the subject controller circuit resides are provided. The controller includes a plurality of hardware engines; and a data store configured to store a plurality of control blocks selectively arranged in one of a plurality of predefined chains. Each predefined chain defines a sequence of operations. Each control block is designed to control a hardware operation in one of the plurality of hardware engines. A resource handle structure is configured to select a predefined chain based upon a particular characteristic of the system. Each predefined chain is configured to implement a particular performance path to maximize performance.

Подробнее
29-11-2012 дата публикации

Intelligent caching

Номер: US20120303896A1
Принадлежит: International Business Machines Corp

Intelligent caching includes defining a cache policy for a data source, selecting parameters of data in the data source to monitor, the parameters forming a portion of the cache policy, and monitoring the data source for an event based on the cache policy. Upon an occurrence of an event, the intelligent caching also includes retrieving target data subject to the cache policy from a first location and moving the target data to a second location.

Подробнее
29-11-2012 дата публикации

Managing track discard requests to include in discard track messages

Номер: US20120303899A1
Принадлежит: International Business Machines Corp

Provided are a computer program product, system, and method for managing track discard requests to include in discard track messages. A backup copy of a track in a cache is maintained in the cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. In response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent indicating the tracks indicated in the queued predetermined number of track discard requests to the cache backup device instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message. In response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode.
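The batching behavior above can be sketched as: queue discard requests, and in multi-track mode emit one "discard multiple tracks" message once a predetermined number accumulate; after enough idle periods, fall back to single-track discards. Batch size, idle limit, and message names are illustrative assumptions.

```python
class DiscardBatcher:
    def __init__(self, batch_size, idle_limit):
        self.queue = []
        self.batch_size = batch_size
        self.idle_periods = 0
        self.idle_limit = idle_limit
        self.multi_track_mode = True

    def discard(self, track):
        self.idle_periods = 0
        if not self.multi_track_mode:
            return ("DISCARD_SINGLE", [track])
        self.queue.append(track)
        if len(self.queue) >= self.batch_size:
            batch, self.queue = self.queue, []
            return ("DISCARD_MULTIPLE", batch)   # one message, many tracks
        return None                              # still accumulating

    def tick_idle(self):
        self.idle_periods += 1
        if self.idle_periods >= self.idle_limit:
            self.multi_track_mode = False        # revert to single-track mode

b = DiscardBatcher(batch_size=3, idle_limit=2)
print(b.discard(1))          # -> None
print(b.discard(2))          # -> None
print(b.discard(3))          # -> ('DISCARD_MULTIPLE', [1, 2, 3])
```

Batching amortizes the per-message cost to the cache backup device during bursts, while the idle fallback keeps latency low when discards are sparse.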

Подробнее
06-12-2012 дата публикации

Storage system comprising microprocessor load distribution function

Number: US20120311204A1
Assignee: HITACHI LTD

Among a plurality of microprocessors 12, 32, when the load on a microprocessor 12 which performs I/O task processing of received I/O requests is equal to or greater than a first load, the microprocessor assigns at least an I/O task portion of the I/O task processing to another microprocessor 12 or 32, and the other microprocessor 12 or 32 executes at least the I/O task portion. The I/O task portion is a task processing portion comprising cache control processing, comprising the securing in cache memory 20 of a cache area, which is one area in cache memory 20, for storage of data.

03-01-2013 publication date

Browser Storage Management

Number: US20130007371A1
Assignee: Individual

Browser storage management techniques are described. In one or more implementations, inputs are received at a computing device that specify maximum aggregate sizes of application and database caches, respectively, of browser storage to be used to locally store data at the computing device. For example, the inputs may be provided using a policy, by an administrator of the computing device, and so on. The maximum aggregate sizes of the application and database caches, respectively, of browser storage at the computing device are then set to the sizes specified by the inputs.

17-01-2013 publication date

Handheld imaging device with image processor provided with multiple parallel processing units

Number: US20130016232A1
Author: Kia Silverbrook
Assignee: Google LLC

A handheld imaging device includes an image sensor for sensing an image; a micro-controller integrating therein a dedicated image processor for processing the sensed image, a bus interface, and an image sensor interface; and a plurality of processing units connected in parallel by a crossbar switch, the plurality of processing units provided within the micro-controller to form a multi-core processing unit for the processor. The image sensor interface provides communication between the micro-controller and the image sensor. The bus interface provides communication between the micro-controller and devices external to the micro-controller other than the image sensor.

17-01-2013 publication date

Handheld imaging device with multi-core image processor integrating image sensor interface

Number: US20130016236A1
Author: Kia Silverbrook
Assignee: Google LLC

A handheld imaging device includes an image sensor for sensing an image; a processor for processing the sensed image; a multi-core processing unit provided in the processor, the multi-core processing unit having a plurality of processing units connected in parallel by a crossbar switch; and an image sensor interface for converting signals from the image sensor to a format readable by the multi-core processing unit, the image sensor interface sharing a wafer substrate with the processor. A transfer of data from the image sensor interface to the plurality of processing units is conducted entirely on the shared wafer substrate.

17-01-2013 publication date

Handheld imaging device with vliw image processor

Number: US20130016266A1
Author: Kia Silverbrook
Assignee: Google LLC

A handheld imaging device includes an image sensor for sensing an image; a Very Long Instruction Word (VLIW) processor for processing the sensed image; a plurality of processing units provided in the VLIW processor, the plurality of processing units connected in parallel by a crossbar switch to form a multi-core processing unit for the VLIW processor; and an image sensor interface for receiving signals from the image sensor and converting the signals to a format readable by the VLIW processor, the image sensor interface sharing a wafer substrate with the VLIW processor. A transfer of data from the image sensor interface to the VLIW processor is conducted entirely on the shared wafer substrate.

17-01-2013 publication date

Method and system for ensuring cache coherence of metadata in clustered file systems

Number: US20130019067A1
Assignee: VMware LLC

Metadata of a shared file in a clustered file system is changed in a way that ensures cache coherence amongst servers that can simultaneously access the shared file. Before a server changes the metadata of the shared file, it waits until no other server is attempting to access the shared file, and all I/O operations to the shared file are blocked. After writing the metadata changes to the shared file, local caches of the other servers are updated, as needed, and I/O operations to the shared file are unblocked.

24-01-2013 publication date

Camera system with color display and processor for reed-solomon decoding

Number: US20130021443A1
Author: Kia Silverbrook
Assignee: Google LLC

A camera system including: a substrate having a coding pattern printed thereon and a handheld digital camera device. The camera device includes: a digital camera unit having a first image sensor for capturing images and a color display for displaying captured images to a user; an integral processor configured for: controlling operation of the first image sensor and color display; decoding an imaged coding pattern printed on a substrate, the printed coding pattern employing Reed-Solomon encoding; and performing an action in the handheld digital camera device based on the decoded coding pattern. The decoding includes the steps of: detecting target structures defining the extent of the data area; determining the data area using the detected target structures; and Reed-Solomon decoding the coding pattern contained in the determined data area.

14-03-2013 publication date

Caching for a file system

Number: US20130067168A1
Assignee: Microsoft Corp

Aspects of the subject matter described herein relate to caching data for a file system. In aspects, in response to requests from applications and storage and cache conditions, cache components may adjust throughput of writes from cache to the storage, adjust priority of I/O requests in a disk queue, adjust cache available for dirty data, and/or throttle writes from the applications.

28-03-2013 publication date

Storage caching/tiering acceleration through staggered asymmetric caching

Number: US20130080696A1
Author: Luca Bert
Assignee: LSI Corp

A multi-tiered system of data storage includes a plurality of data storage solutions. The data storage solutions are organized such that each progressively faster, more expensive solution serves as a cache for the previous solution, and each solution includes a dedicated data block to store individual data sets, newly written in a plurality of write operations, for later migration to slower data storage solutions in a single write operation.
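The staggered write-back idea above — accumulate many small writes in a dedicated block, then migrate the whole block to the slower tier in one write — can be sketched as follows. Tier names, block sizes, and the `Tier` structure are illustrative assumptions.

```python
class Tier:
    """One level of a multi-tiered store: small writes land in a dedicated
    block and are migrated to the slower tier below in a single bulk write
    once the block fills (illustrative sketch)."""

    def __init__(self, name, block_size, lower=None):
        self.name, self.block_size, self.lower = name, block_size, lower
        self.dedicated_block = []   # individual sets from many small writes
        self.store = []             # data settled in this tier
        self.bulk_writes = 0        # migrations performed as single writes

    def write(self, data_set):
        self.dedicated_block.append(data_set)
        if len(self.dedicated_block) >= self.block_size:
            self.flush()

    def flush(self):
        if self.lower is not None:
            # one write operation moves the whole block down a tier
            self.lower.store.extend(self.dedicated_block)
            self.lower.bulk_writes += 1
        else:
            self.store.extend(self.dedicated_block)
        self.dedicated_block.clear()

hdd = Tier("hdd", block_size=8)
ssd = Tier("ssd", block_size=3, lower=hdd)
for i in range(6):   # six small writes to the fast tier ...
    ssd.write(i)     # ... become two bulk writes to the slow tier
```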

18-04-2013 publication date

STORAGE DEVICE AND REBUILD PROCESS METHOD FOR STORAGE DEVICE

Number: US20130097375A1
Author: Iida Takashi
Assignee: NEC Corporation

A storage device includes a plurality of magnetic disk devices each having a write cache, a processor unit that redundantly stores data, a rebuild execution control unit that performs a rebuild process, a write cache control unit that, at the time of the rebuild process, enables a write cache of a storage device that stores rebuilt data, and a rebuild progress management unit that is configured using a nonvolatile memory and manages progress information of the rebuild process. In the case where power discontinuity is caused during the rebuild process and then power is restored, the rebuild execution control unit calculates an address that is before an address of last written rebuilt data by an amount corresponding to the capacity of the write cache based on the progress information of the rebuild process managed by the progress management unit and resumes the rebuild process from that calculated address. 1. A storage device comprising: a storage unit including a plurality of memory devices each having a write cache; a first control unit that redundantly stores data in the plurality of memory devices; a second control unit that performs a rebuild process of rebuilding the data; a write cache control unit that, at a time of the rebuild process, enables a write cache of a memory device that stores rebuilt data; and a progress management unit that is configured using a nonvolatile memory and manages, as progress information of the rebuild process, an address of rebuilt data for which rebuilding is completed and which is written in the write cache, wherein, in a case where power discontinuity is caused during the rebuild process and then power is restored, the second control unit calculates an address that is before an address of last written rebuilt data by an amount corresponding to a capacity of the write cache based on the progress information of the rebuild process managed by the progress management unit and resumes the rebuild process from that calculated address. 2.
...

25-04-2013 publication date

METHOD AND APPARATUS FOR IMPLEMENTING PROTECTION OF REDUNDANT ARRAY OF INDEPENDENT DISKS IN FILE SYSTEM

Number: US20130103902A1
Author: WEI Mingchang, ZHANG Wei
Assignee: Huawei Technologies Co., Ltd.

Embodiments of the present invention disclose a method and an apparatus for implementing protection of RAID in a file system, and are applied in the field of communications technologies. In the embodiments of the present invention, after receiving a file operation request, the file system needs to determine the type of a file to be operated as requested by the file operation request, and perform file operations in a hard disk drive of the file system directly according to a file operation method corresponding to the determined file type, that is, a RAID data protection method. Therefore, corresponding file operations may be performed in a proper operation method according to each different file type, and data of an important file type is primarily protected, thereby improving reliability of data storage. 1. In a file system of one or more computers, a method for implementing protection of a redundant array of independent disks (RAID), comprising: receiving a file operation request; determining a type of a file to be operated as requested by the file operation request, wherein the type of the file comprises at least one of file metadata and file data; selecting a file operation method according to the determined file type, wherein the file operation method is a RAID data protection method; and performing file operations on one or more hard disk drives according to the selected file operation method. 2. The method according to claim 1, wherein the file operation method selected according to the determined file type is a multi-mirroring redundant algorithm if the determined file type is the file metadata, and the file metadata is backed up with multiple copies and storing the multiple copies in at least two hard disks according to the multi-mirroring redundant algorithm. 35. The method according to claim 1, wherein the file operation method selected according to the determined file type is a data protection method of RAID if the type of the file is the file ...

02-05-2013 publication date

Dynamically adjusted threshold for population of secondary cache

Number: US20130111133A1
Assignee: International Business Machines Corp

The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
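The admission and feedback rules above translate into a compact sketch: admit a candidate only when its heat metric meets the threshold, and adjust the threshold by comparing hit metrics of recently inserted data against recently evicted data. Function names and signatures are illustrative assumptions.

```python
def admit(heat, threshold):
    # Admit candidate data only when its heat metric meets the threshold.
    return heat >= threshold

def adjust_threshold(threshold, inserted_hits, evicted_hits, step=1, floor=0):
    """Feedback rule from the abstract (illustrative signature): recently
    inserted data outperforming recently evicted data suggests the bar is
    too high, so lower it; the reverse suggests raising it."""
    if inserted_hits > evicted_hits:
        return max(floor, threshold - step)
    if inserted_hits < evicted_hits:
        return threshold + step
    return threshold

threshold = 5
assert not admit(4, threshold) and admit(5, threshold)
threshold = adjust_threshold(threshold, inserted_hits=30, evicted_hits=10)  # lowered to 4
threshold = adjust_threshold(threshold, inserted_hits=10, evicted_hits=30)  # raised back to 5
```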

23-05-2013 publication date

Storage system, storage apparatus and method of controlling storage system

Number: US20130132673A1
Assignee: HITACHI LTD

A storage system enables a core storage apparatus to execute processing requiring securing of data consistency, while providing high write performance to a host computer. The storage system includes an edge storage apparatus 20 configured to communicate with a host computer 10 and including a cache memory 25, and a core storage apparatus 30 that communicates with the edge storage apparatus 20 and performs I/O processing on a storage device 39. When receiving a write request from the host computer 10, the edge storage apparatus 20 processes the write request by writeback. When about to execute storage function control processing on condition that data consistency is to be secured, such as pair split processing of a local copy function, the core storage apparatus 30 requests the edge storage apparatus 20 to perform forced destage of dirty data in the cache memory 25 and then executes the storage function control processing after the completion of the forced destage.

30-05-2013 publication date

Systems, methods, and devices for running multiple cache processes in parallel

Number: US20130138865A1
Assignee: SEAGATE TECHNOLOGY LLC

Certain embodiments of the present disclosure relate to systems, methods, and devices for increasing data access speeds. In certain embodiments, a method includes running multiple cache retrieval processes in parallel, in response to a read command. In certain embodiments, a method includes initiating a first cache retrieval process and a second cache retrieval process to run in parallel, in response to a single read command.

06-06-2013 publication date

Information Processing Apparatus and Driver

Number: US20130145094A1
Author: Kurashige Takehiko

According to one embodiment, an information processing apparatus includes a memory that comprises a buffer area, a first external storage, a second external storage and a driver. The driver is configured to control the first and second external storages in units of predetermined blocks. The driver comprises a cache reservation module configured to (i) reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage and (ii) manage the cache area. The cache area operates as a primary cache for the second external storage and a cache for the first external storage. Part or the entire first external storage is used as a secondary cache for the second external storage. The buffer area is used to transfer data between the driver and a host system that requests data reads/writes. 1. An information processing apparatus comprising: a memory comprising a buffer area; a first external storage separate from the memory; a second external storage separate from the memory; and a driver configured to control the first and second external storages in units of predetermined blocks, wherein the driver comprises a cache reservation module configured to reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage, and the cache reservation module is configured to manage the cache area in units of the predetermined blocks, using the cache area, secured on the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and using part or the entire first external storage as a secondary cache for the second external storage, the buffer area being reserved in order to transfer data between the driver and a host system that requests for data writing and data reading. 2.
A driver stored in a ...

13-06-2013 publication date

Fast startup hybrid memory module

Number: US20130148457A1
Assignee: Sanmina SCI Corp

A memory device is provided comprising: a volatile memory device, a non-volatile memory device, a memory control circuit coupled to the volatile memory device and the non-volatile memory device, and a backup power source. The backup power source may be arranged to temporarily power the volatile memory device and the memory control circuit upon a loss of power from the external power source. Additionally, a switch may serve to selectively couple: (a) a host memory bus to either the volatile memory device or non-volatile memory device; and (b) the volatile memory device to the non-volatile memory device. Upon reestablishment of power by an external power source after a power loss event, the memory control circuit is configured to restore data from the non-volatile memory device to the volatile memory device before a host system, to which the memory device is coupled, completes boot-up.

13-06-2013 publication date

Information Processing Apparatus and Driver

Number: US20130151775A1
Author: Kurashige Takehiko

According to one embodiment, an information processing apparatus includes a memory including a buffer area, a first storage, a second storage, and a driver. The driver controls the first and second external storages and comprises a cache reservation module configured to reserve a cache area in the memory. The cache area is logically between the buffer area and the first external storage and between the buffer area and the second external storage. The driver is configured to use the cache area, secured on the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and uses part or the entire first external storage as a secondary cache for the second external storage. The buffer area is reserved in order to transfer data between the driver and a host system that requests data writing and data reading. 1. An information processing apparatus comprising: a memory comprising a buffer area; a first external storage separate from the memory; a second external storage separate from the memory; and a driver configured to control the first and second external storages, wherein the driver comprises a cache reservation module configured to reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage, the driver being configured to use the cache area, secured on the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and uses part or the entire first external storage as a secondary cache for the second external storage, the buffer area being reserved in order to transfer data between the driver and a host system that requests data writing and data reading. 2. A driver stored in a non-transitory computer readable medium which operates in an information processing apparatus comprising a memory ...

Подробнее
20-06-2013 дата публикации

Optimized execution of interleaved write operations in solid state drives

Номер: US20130159626A1
Автор: Oren Golov, Shachar Katz
Принадлежит: Apple Inc

A method for data storage includes receiving a plurality of data items for storage in a memory, including at least first data items that are associated with a first data source and second data items that are associated with a second data source, such that the first and second data items are interleaved with one another over time. The first data items are de-interleaved from the second data items, by identifying a respective data source with which each received data item is associated. The de-interleaved first data items and the de-interleaved second data items are stored in the memory.
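The de-interleaving step described above — identify the data source of each received item and regroup the interleaved streams — reduces to a group-by over a source tag. The tuple representation and function name are illustrative assumptions.

```python
from collections import defaultdict

def deinterleave(items):
    """Split a time-interleaved sequence of (source, data) items into one
    stream per source, preserving each source's arrival order
    (illustrative sketch of the de-interleaving described in the abstract)."""
    streams = defaultdict(list)
    for source, data in items:
        streams[source].append(data)
    return dict(streams)

# Items from sources "A" and "B" arrive interleaved over time.
received = [("A", 1), ("B", 10), ("A", 2), ("B", 20), ("A", 3)]
streams = deinterleave(received)
# each source's items can now be stored contiguously in the memory
```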

27-06-2013 publication date

DESTAGING OF WRITE AHEAD DATA SET TRACKS

Number: US20130166837A1

Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). 1. A method for efficient destaging of a write ahead data set (WADS) track in a volume by a processor device in a computing storage environment, comprising: preventing destage of a plurality of tracks in cache selected for writing to a storage device; and for a track N in a stride Z of the selected plurality of tracks: if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). 2. The method of claim 1, further including prestaging data to the plurality of tracks such that the stride Z includes complete tracks, enabling subsequent destage of complete WADS tracks. 3. The method of claim 1, further including incrementing the at least one temporal bit. 4. The method of claim 1, further including taking a track access to the WADS track and completing a write operation on the WADS track. 5. The method of claim 1, further including ending a track access to the WADS track upon a completion of a write operation and adding the WADS track to a wise order writing (WOW) list. 6. The method of claim 5, further including checking the WOW list and examining a left neighbor and a right ...

18-07-2013 publication date

Systems and methods for cache profiling

Number: US20130185475A1
Assignee: Fusion IO LLC

A cache module leverages a logical address space and storage metadata of a storage module (e.g., virtual storage module) to cache data of a backing store. The cache module maintains access metadata to track access characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not currently in the cache. The access metadata may be separate from the storage metadata maintained by the storage module. The cache module may calculate a performance metric of the cache based on profiling metadata, which may include portions of the access metadata. The cache module may determine predictive performance metrics of different cache configurations. An optimal cache configuration may be identified based on the predictive performance metrics.

18-07-2013 publication date

DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE

Number: US20130185502A1

A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache. 1. A method for managing data in a cache system comprising a first cache, a second cache, and a storage system, comprising: determining a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors; determining whether the second cache includes a stale version of the track being demoted from the first cache; in response to determining that the second cache includes the stale version of the track, determining whether the stale version of the track includes track sectors not included in the track being demoted from the first cache; combining the sectors from the track demoted from the first cache with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track; and writing the new version of the track to the second cache. 2. The computer program product of claim 1, wherein the operations further comprise: invalidating the stale version of the track in the second cache in response to writing the new version of the track to the second cache. 3. The method of claim 1, wherein the operations further comprise: determining ...

18-07-2013 publication date

Writing adjacent tracks to a stride, based on a comparison of a destaging of tracks to a defragmentation of the stride

Number: US20130185507A1
Author: Lokesh M. Gupta
Assignee: International Business Machines Corp

Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.
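A rough sketch of the selection rule summarized above (field names are illustrative, not from the patent): for each candidate stride, take the cheaper of destaging the selected tracks versus defragmenting the stride's compressed data, and write the tracks to the stride whose cheaper option costs least.

```python
def pick_stride(strides):
    # Choose the stride whose cheaper option (destage vs. defragment) is lowest.
    return min(strides, key=lambda s: min(s["destage_ops"], s["defrag_ops"]))

strides = [
    {"id": 0, "destage_ops": 6, "defrag_ops": 9},   # cheaper option costs 6
    {"id": 1, "destage_ops": 8, "defrag_ops": 3},   # cheaper option costs 3
]
chosen = pick_stride(strides)   # stride 1 wins: defragmenting it is cheapest
```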

01-08-2013 publication date

Computer system and storage control method

Number: US20130198457A1
Assignee: HITACHI LTD

The entirety or a part of free space of a second storage device included in a host computer is used as a cache memory region (external cache) outside of a storage apparatus. If Input/Output (I/O) in the host computer is Write, a Write request is transmitted from the host computer to a storage apparatus, the storage apparatus writes data associated with the Write request into a main cache that is a cache memory region included in this storage apparatus, and the storage apparatus writes the data in the main cache into a first storage device included in the storage apparatus. The storage apparatus writes the data in the main cache into an external cache included in the host computer. If the I/O in the host computer is Read, the host computer determines whether or not Read data as target data of the Read exists in the external cache. If a result of the determination is positive, the host computer reads the Read data from the external cache.

01-08-2013 publication date

Content addressable stores based on sibling groups

Number: US20130198475A1
Assignee: UpThere Inc

A content addressable storage (CAS) system is provided in which each storage unit is assigned to one of a plurality of sibling groups. Each sibling group is assigned the entire hash space. Within each sibling group, the hash space is partitioned into hash segments which are assigned to the individual storage units that belong to the sibling group. Chunk retrieval requests are submitted to all sibling groups. Chunk storage requests are submitted to a single sibling group. The sibling group to which a storage request is submitted depends on whether any sibling group already stores the chunk, and which sibling groups are considered full.
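The sibling-group layout above can be sketched as follows: every group covers the whole hash space, partitioned among its own units; retrieval requests fan out to all groups, while a chunk is stored in at most one group. The class structure is an illustrative assumption, and the "full group" bookkeeping is omitted for brevity.

```python
import hashlib

def chunk_hash(chunk):
    return hashlib.sha256(chunk).hexdigest()

class SiblingGroup:
    """One sibling group: the entire hash space, partitioned into segments
    owned by the group's individual storage units (illustrative sketch)."""
    def __init__(self, units):
        self.units = [dict() for _ in range(units)]

    def _unit_for(self, h):
        return self.units[int(h, 16) % len(self.units)]

    def get(self, h):
        return self._unit_for(h).get(h)

    def put(self, chunk):
        h = chunk_hash(chunk)
        self._unit_for(h)[h] = chunk
        return h

class CasStore:
    def __init__(self, groups):
        self.groups = groups

    def retrieve(self, h):
        for g in self.groups:            # retrieval requests go to all groups
            chunk = g.get(h)
            if chunk is not None:
                return chunk
        return None

    def store(self, chunk):
        h = chunk_hash(chunk)
        for g in self.groups:            # keep each chunk in at most one group
            if g.get(h) is not None:
                return h
        return self.groups[0].put(chunk)  # pick one group (fullness check omitted)

store = CasStore([SiblingGroup(4), SiblingGroup(4)])
h = store.store(b"hello")
```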

08-08-2013 publication date

VIRTUAL TAPE DEVICE AND CONTROL METHOD OF VIRTUAL TAPE DEVICE

Number: US20130205082A1
Assignee: FUJITSU LIMITED

A virtual tape device includes a storage unit, a cache determining unit, a selector, and a cache controller. The storage unit records logical volume information associated with an identifier of a logical volume, an updated time of the logical volume, information indicating whether the logical volume is allocated to a cache, an identifier of a physical volume storing data of the logical volume, and information indicating whether the physical volume is mounted in a physical tape drive. The cache determining unit determines, based on the logical volume information, whether the logical volume exists on the cache, when a request to store the logical volume on the cache is received and the cache does not have an available capacity. The selector selects the logical volume based on the determined result as an off-cache target logical volume. The selected logical volume is off-cached by the cache controller. 1. A virtual tape device comprising: a storage unit that records logical volume information associated with an identifier of a logical volume, an updated time of the logical volume, information indicating whether the logical volume is allocated to a cache or not, an identifier of a physical volume storing data of the logical volume, and information indicating whether the physical volume is mounted in a physical tape drive or not; a cache determining unit that determines, based on the logical volume information, whether the logical volume exists on the cache or not, when a request to store the logical volume on the cache is received and the cache does not have an available capacity, the logical volume being updated and stored in the physical volume mounted in the physical tape drive and being allocated to the cache; a selector that selects the logical volume based on the result of the determination made by the cache determining unit as an off-cache target logical volume to be off-cached from the cache; and a cache controller that off-caches the off-cache target logical ...

08-08-2013 publication date

VIRTUAL TAPE DEVICE AND CONTROL METHOD OF VIRTUAL TAPE DEVICE

Number: US20130205083A1
Assignee: FUJITSU LIMITED

A virtual tape device includes a memory to record logical volume information that includes an identifier of a logical volume, an identifier of a physical volume that stores data of the logical volume, and information that indicates whether the data of the logical volume is cached in a cache unit, in association with each other. The device also includes a determining unit that, when a copy command to copy data of the logical volume stored in a first physical volume to a second physical volume is received, determines whether a logical volume cached in the cache unit exists among the logical volumes, and a storage control unit that, when it is determined that the logical volume cached in the cache unit exists among the logical volumes, stores the data of the logical volume cached in the cache unit to the second physical volume without reference to an order indicated in the copy command. 1. A virtual tape device comprising: a memory to record logical volume information that is associated with an identifier of a logical volume, an identifier of a physical volume that stores data of the logical volume, and information that indicates whether the data of the logical volume is cached in a cache unit or not; a determining unit that, when a copy command to copy data of the logical volume stored in a first physical volume to a second physical volume is received, determines whether a logical volume cached in the cache unit exists among the logical volumes indicated in the copy command or not, based on the logical volume information; and a storage control unit that, when the determining unit determines that the logical volume cached in the cache unit exists among the logical volumes at receiving the copy command, stores the data of the logical volume cached in the cache unit among the logical volumes at receiving the copy command to the second physical volume, without reference to an order indicated in the copy command. 2.
The virtual tape device according to claim 1 , further comprising:a mounting control ...
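The ordering rule in this abstract — copy the logical volumes that are still cached first, regardless of the order given in the copy command — can be sketched as follows. The function name and data shapes are illustrative, not from the patent:

```python
def order_copies(copy_order, cached):
    """Return the order in which logical volumes are copied to the
    second physical volume: volumes still resident in the cache unit
    first (cache reads avoid tape mounts), then the remaining volumes
    in the order indicated by the copy command."""
    cached_first = [v for v in copy_order if v in cached]
    uncached = [v for v in copy_order if v not in cached]
    return cached_first + uncached

# Example: the command asks for v1, v2, v3 but only v2 is cached.
print(order_copies(["v1", "v2", "v3"], {"v2"}))  # ['v2', 'v1', 'v3']
```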

More
22-08-2013 publication date

Correlation filter

Number: US20130218901A1
Assignee: Apple Inc

In one embodiment, the correlation filter can use one of several data structures to track each migration unit and reject successive accesses within a period of time to each migration unit. In one embodiment, the correlation filter uses a space-efficient data structure, such as a hash-indexed correlation array, to store the address of referenced migration units and to filter accesses to a single migration unit that are correlated accesses resulting from multiple accesses to the same migration unit during a sequential I/O stream. In one embodiment, the correlation array contains a global timeout, which resets each element to a default value, clearing all stored migration unit address values from the correlation array. In one embodiment, each element of the correlation array can time out separately.
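The hash-indexed correlation array described above can be sketched in a few lines. The class name, slot count, and time window are assumptions for illustration; the mechanism — a per-slot (address, timestamp) pair that rejects a repeated access to the same migration unit inside the window — follows the abstract:

```python
import time

class CorrelationFilter:
    """Hash-indexed correlation array sketch: accesses to the same
    migration unit within `window` seconds are treated as correlated
    and filtered out; only the first access is admitted."""

    def __init__(self, size=1024, window=1.0):
        self.size = size
        self.window = window
        self.units = [None] * size   # last migration-unit address per slot
        self.stamps = [0.0] * size   # timestamp of that access

    def admit(self, unit_addr, now=None):
        now = time.monotonic() if now is None else now
        slot = hash(unit_addr) % self.size
        if self.units[slot] == unit_addr and now - self.stamps[slot] < self.window:
            return False             # correlated re-access: reject
        self.units[slot] = unit_addr # record (or replace on hash collision)
        self.stamps[slot] = now
        return True
```

A per-element timestamp gives the "each element can time out separately" behavior; a single global reset of `units`/`stamps` would implement the global timeout variant.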

More
29-08-2013 publication date

Data Migration between Memory Locations

Number: US20130227218A1
Assignee: Hewlett Packard Development Co LP

Migrating data may include determining to copy a first data block in a first memory location to a second memory location and determining to copy a second data block in the first memory location to the second memory location based on a migration policy.

More
05-09-2013 publication date

Method and Apparatus of Accessing Data of Virtual Machine

Number: US20130232303A1
Author: Xiao Fei Quan
Assignee: Alibaba Group Holding Ltd

A method and device for accessing virtual machine (VM) data are described. A computing device for accessing a virtual machine comprises an access request process module, a data transfer proxy module and a virtual disk. The access request process module receives a data access request sent by a VM and adds the data access request to a request array. The data transfer proxy module obtains the data access request from the request array, maps the obtained data access request to a corresponding virtual storage unit, and maps the virtual storage unit to a corresponding physical storage unit of a distributed storage system. A corresponding data access operation may be performed based on a type of the data access request.

More
05-09-2013 publication date

Communication management apparatus, communication management method, and computer program product

Number: US20130232314A1
Assignee: Toshiba Corp

According to an embodiment, a communication management apparatus mediates data between an information processing terminal having a temporary memory and an external memory device that is installed outside the information processing terminal. The apparatus includes a receiving unit configured to receive a write request issued by a device other than the information processing terminal for writing the data in the external memory device; a reading-writing unit configured to control reading of the data from the external memory device and control writing of the data in the external memory device; and a delete command issuing unit configured to, when the write request with respect to the external memory device is received, issue a delete command to the information processing terminal for deleting temporary data that is stored in the temporary memory.

More
19-09-2013 publication date

Optimizing signature computation and sampling for fast adaptive similarity detection based on algorithm-specific performance

Number: US20130243190A1
Author: Pulkit Misra, QING Yang
Assignee: Velobit Inc

A set of similarity detection algorithms and techniques for determining which signature calculation, sampling, and generation algorithms may be most beneficially applied to application-related data are described herein. These algorithms work well with SSD caching software to produce high-speed, high-accuracy, and low false-positive detections. Because the different algorithms may show different performance depending on data sets and applications, to achieve optimal performance, a calibration process may be applied to each application and associated data set to select the best combination of signature computation and sampling technique. The new algorithms are also very fast, with execution times an order of magnitude smaller than those of existing techniques. While some of the algorithms are presented using examples for the purpose of easy readability, these algorithms are very general and can be easily applied to a broad range of cases.

More
19-09-2013 publication date

Adaptive prestaging in a storage controller

Number: US20130246691A1
Assignee: International Business Machines Corp

In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount, may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Other features and aspects may be realized, depending upon the particular application.
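The idea of scaling the prestage trigger and prestage amount with the backing drive's speed can be sketched as below. The linear scaling rule, the reference bandwidth, and the function name are assumptions; the patent only says the values are modified as a function of drive speed:

```python
def prestage_params(base_trigger, base_amount, drive_mbps, ref_mbps=100.0):
    """Scale cache-prestage parameters with drive bandwidth: a faster
    backing drive justifies prestaging more data and triggering the
    prestage earlier, while a slow drive gets conservative values.
    Linear scaling against a reference speed is an illustrative choice."""
    scale = drive_mbps / ref_mbps
    trigger = max(1, int(base_trigger * scale))  # units remaining that start a prestage
    amount = max(1, int(base_amount * scale))    # units prestaged per operation
    return trigger, amount
```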

More
19-09-2013 publication date

Conditional write processing for a cache structure of a coupling facility

Number: US20130246713A1
Assignee: International Business Machines Corp

A method for managing a cache structure of a coupling facility includes receiving a conditional write command from a computing system and determining whether data associated with the conditional write command is part of a working set of data of the cache structure. If the data associated with the conditional write command is part of the working set of data of the cache structure the conditional write command is processed as an unconditional write command. If the data associated with the conditional write command is not part of the working set of data of the cache structure a conditional write failure notification is transmitted to the computing system.
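The decision rule in this abstract is small enough to sketch directly. The names and the dict/set representation are assumptions; the logic — promote a conditional write to an unconditional one only when the data belongs to the working set, otherwise fail it — is from the abstract:

```python
def handle_conditional_write(working_set, key, value, cache):
    """Process a conditional write against a coupling-facility cache
    structure: if the target is part of the working set, treat it as an
    unconditional write; otherwise report a conditional write failure."""
    if key in working_set:
        cache[key] = value
        return "written"
    return "conditional-write-failure"
```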

More
07-11-2013 publication date

METHOD AND SYSTEM FOR MANAGING POWER GRID DATA

Number: US20130297868A1
Assignee: BATTELLE MEMORIAL INSTITUTE

A system and method of managing time-series data for smart grids is disclosed. Data is collected from a plurality of sensors. An index is modified for a newly created block. A one disk operation per read or write is performed. The one disk operation per read includes accessing and looking up the index to locate the data without movement of an arm of the disk, and obtaining the data. The one disk operation per write includes searching the disk for free space, calculating an offset, modifying the index, and writing the data contiguously into a block of the disk the index points to.

1. A method of managing time-series data for smart grids, comprising:
a. collecting data from a plurality of sensors;
b. modifying an index for a newly created block; and
c. performing a one disk operation per read or write.
2. The method of further comprising adding a look-up capability to the index.
3. The method of wherein the index is stored in at least one of the following: main memory of a local machine, main memory from a remote machine, a solid-state storage device (SSD) from the local machine, and the SSD from the remote machine.
4. The method of wherein the performing a one disk operation per read comprises accessing and looking up the index to locate the data without movement of an arm of the disk, and obtaining the data.
5. The method of wherein the performing a one disk operation per write comprises searching the disk for free space, calculating an offset, modifying the index, and writing the data contiguously into a block of the disk the index points to.
6. The method of wherein the data is first written into a main memory buffer before being written into the disk.
7. The method of wherein the collecting data from a plurality of sensors further comprises organizing the data contiguously in the disk.
8. The method of wherein the data is reorganized contiguously in main memory before being written into the disk.
9. The method ...
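The one-disk-operation-per-access idea above can be sketched with an in-memory index over a log-structured store: a read is a single positioned read located by the index, and a write is a single contiguous append at the tail. The class and the byte-buffer "disk" are illustrative stand-ins:

```python
class AppendStore:
    """Sketch of one disk operation per read/write: the index maps a
    block key to (offset, length), so reads need no disk-resident
    lookup, and writes go contiguously into the next free block."""

    def __init__(self):
        self.disk = bytearray()  # stand-in for the disk
        self.index = {}          # key -> (offset, length), kept in memory

    def write(self, key, data):
        offset = len(self.disk)      # "free space search": append at tail
        self.disk += data            # one contiguous write
        self.index[key] = (offset, len(data))

    def read(self, key):
        offset, length = self.index[key]  # in-memory lookup, no arm movement
        return bytes(self.disk[offset:offset + length])
```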

More
07-11-2013 publication date

METHODS AND APPARATUS FOR CUT-THROUGH CACHE MANAGEMENT FOR A MIRRORED VIRTUAL VOLUME OF A VIRTUALIZED STORAGE SYSTEM

Number: US20130297870A1
Author: Young Howard
Assignee: NETAPP, INC.

Methods and apparatus for cut-through cache memory management in write command processing on a mirrored virtual volume of a virtualized storage system, the virtual volume comprising a plurality of physical storage devices coupled with the storage system. Features and aspects hereof within the storage system provide for receipt of a write command and associated write data from an attached host. Using a cut-through cache technique, the write data is stored in a cache memory and transmitted to a first of the plurality of storage devices as the write data is stored in the cache memory, thus eliminating one read-back of the write data for transfer to a first physical storage device. Following receipt of the write data and storage in the cache memory, the write data is transmitted from the cache memory to the other physical storage devices.

1. A method operable in a virtualized storage system, the method comprising:
receiving a write command from a host system directed to a virtual volume of the storage system, the storage system comprising a plurality of storage devices, the virtual volume comprising data on multiple storage devices;
detecting that a first storage device of the multiple storage devices is ready to receive write data associated with the write command;
receiving the write data from the host system responsive to detecting that the first storage device is ready to receive write data;
storing the write data in a cache memory;
transmitting the write data to the first storage device as the write data is stored in the cache memory; and
transmitting the write data from the cache memory to other storage devices of the multiple storage devices, responsive to receipt of the write data by the first storage device.
2. The method of further comprising:
detecting that the transmission of the write data to the first storage device was successful.
3. The method of wherein the step of transmitting the write data from the cache memory to the other storage devices further comprises: ...

More
14-11-2013 publication date

SYSTEMS AND METHODS FOR SECURE HOST RESOURCE MANAGEMENT

Number: US20130304986A1
Assignee:

Systems and methods are described herein to provide for secure host resource management on a computing device. Other embodiments include apparatus and system for management of one or more host device drivers from an isolated execution environment. Further embodiments include methods for querying and receiving event data from manageable resources on a host device. Further embodiments include data structures for the reporting of event data from one or more host device drivers to one or more capability modules.

1. A method, comprising:
querying at least one host device driver for event types supported by the at least one host device driver;
receiving the event types from the at least one device driver; and
caching the event types in a resource data record repository, where the resource data record repository is stored in an environment that is isolated from the host device.
2. The method of claim 1, wherein the at least one host device driver includes each host device driver on a host device.
3. The method of claim 2, further comprising:
receiving a request from a capability module for an event type;
determining which of the event types cached in the resource data record repository match the request; and
subscribing the capability module to the event types cached in the resource data record that matched the request.
4. The method of claim 3, further comprising receiving from the capability module at least one event threshold for the requested event type.
5. The method of claim 1, wherein the host device driver is a ring 3 software application.
6. The method of claim 1, wherein the host device driver is a ring 0 software application.
7. A machine-readable medium that is not a transitory propagating signal, the machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations comprising:
querying at least one host device driver for event types supported by the at least one host device driver; ...

More
28-11-2013 publication date

Virtual Machine Exclusive Caching

Number: US20130318301A1
Author: Han Chen, Hui Lei, Zhe Zhang
Assignee: International Business Machines Corp

Techniques, systems and an article of manufacture for caching in a virtualized computing environment. A method includes enforcing a host page cache on a host physical machine to store only base image data, and enforcing each of at least one guest page cache on a corresponding guest virtual machine to store only data generated by the guest virtual machine after the guest virtual machine is launched, wherein each guest virtual machine is implemented on the host physical machine.

More
05-12-2013 publication date

Methods and Systems for Retrieving and Caching Geofence Data

Number: US20130326137A1
Assignee: QUALCOMM INCORPORATED

Mobile device systems and methods for monitoring geofences cache a subset of geofences within a likely travel perimeter determined based on speed and direction of travel, available roads, current traffic, etc. A server may download to mobile devices subsets of geofences within a likely travel perimeter determined based on a threshold travel time possible from a current location given current travel speed, direction and roads. Mobile devices may receive a list of local geofences from a server, which may maintain or have access to a database containing all geofences. The mobile device may use the cached geofences in the normal manner, by comparing its location to the cached list of local geofences to detect matches. In an embodiment, the mobile device may calculate or receive from the server an update perimeter, which when crossed may prompt the mobile device to request an update to the geofences stored in cache.

1. A method for enabling a mobile computing device to monitor geofences, comprising:
determining a current location of the mobile computing device;
receiving a subset of geofences from a global database of geofences based upon the current location of the mobile computing device;
caching the subset of geofences in memory of the mobile computing device; and
comparing the current location to the geofences cached on the mobile computing device to determine if a geofence criterion is satisfied.
2. The method of claim 1, wherein the global database of geofences is located on a server, the method further comprising:
receiving the current location of the mobile computing device in the server;
selecting the subset of geofences from the global database of geofences based upon the current location of the mobile computing device;
transmitting the selected subset of geofences to the mobile computing device; and
receiving the transmitted subset of geofences in the mobile computing device,
wherein caching the subset of geofences in memory of the mobile computing device ...
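The two checks a device performs in this scheme — match the current location against the cached geofences, and detect crossing of the update perimeter — can be sketched with planar distances (real implementations would use geodesic distance). All names and the circular-fence representation are assumptions:

```python
def check_geofences(location, cached_fences, update_center, update_radius):
    """Return (matching fence names, needs_update). Each cached fence is
    (center_x, center_y, radius); crossing the update perimeter signals
    that a fresh subset of geofences should be requested from the server.
    Squared planar distances stand in for geodesic distance."""
    x, y = location
    hits = [name for name, (cx, cy, r) in cached_fences.items()
            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2]
    ux, uy = update_center
    needs_update = (x - ux) ** 2 + (y - uy) ** 2 > update_radius ** 2
    return hits, needs_update
```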

More
26-12-2013 publication date

Data Cache Method, Device, and System in a Multi-Node System

Number: US20130346693A1
Author: Zhang Xiaofeng
Assignee: Huawei Technologies Co., Ltd.

A data cache method, device, and system in a multi-node system are provided. The method includes: dividing a cache area of a cache medium into multiple sub-areas, where each sub-area is corresponding to a node in the system; dividing each of the sub-areas into a thread cache area and a global cache area; when a process reads a file, detecting a read frequency of the file; when the read frequency of the file is greater than a first threshold and the size of the file does not exceed a second threshold, caching the file in the thread cache area; or when the read frequency of the file is greater than the first threshold and the size of the file exceeds the second threshold, caching the file in the global cache area. Thus overheads of remote access of a system are reduced, and I/O performance of the system is improved.

1. A data cache method in a multi-node system, wherein the multi-node system comprises a cache medium and a disk array, and wherein the method comprises:
dividing a cache area in the cache medium into multiple sub-areas, wherein each sub-area is corresponding to a node in the multi-node system;
dividing each of the sub-areas into a thread cache area and a global cache area, wherein a mapping is established between the thread cache area and the disk array by adopting an associative mapping manner, and wherein a mapping is established between the global cache area and the disk array by adopting a set-associative mapping manner;
detecting a read frequency of a file when a process reads the file;
caching the file in the thread cache area when the read frequency of the file is greater than a first threshold and a size of the file does not exceed a second threshold; and
caching the file in the global cache area when the read frequency of the file is greater than the first threshold and the size of the file exceeds the second threshold.
2. The method according to claim 1, wherein the method further comprises:
dividing the thread cache area into multiple small areas; ...
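The placement rule above is a two-threshold decision that can be sketched directly. The function name and return values are illustrative; the rule itself mirrors the abstract: hot small files go to the per-node thread cache area, hot large files to the global cache area, and files below the frequency threshold are not cached:

```python
def place_in_cache(read_freq, size, freq_threshold, size_threshold):
    """Decide where a file is cached in the multi-node scheme:
    'thread' (per-node thread cache area), 'global' (global cache
    area), or None when the file is not read often enough to cache."""
    if read_freq <= freq_threshold:
        return None                   # not hot: leave on the disk array
    return "thread" if size <= size_threshold else "global"
```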

More
09-01-2014 publication date

Managing Data Writing to Memories

Number: US20140010014A1
Assignee: Apple Inc

Systems and processes may use a first memory, a second memory, and a memory controller. The second memory is at least as large as a block of the first memory. Data is received and stored in the second memory for subsequent writing to the first memory.

More
09-01-2014 publication date

Layered architecture for hybrid controller

Number: US20140013027A1
Assignee: SEAGATE TECHNOLOGY LLC

Approaches for implementing a controller for a hybrid memory that includes a main memory and a cache for the main memory are discussed. The controller comprises a hierarchy of abstraction layers, wherein each abstraction layer is configured to provide at least one component of a cache management structure. Each pair of abstraction layers utilizes processors communicating through an application programming interface (API). The controller is configured to receive incoming memory access requests from a host processor and to manage outgoing memory access requests routed to the cache using the plurality of abstraction layers.

More
09-01-2014 publication date

Systems, methods and apparatus for cache transfers

Number: US20140013059A1
Assignee: Fusion IO LLC

A virtual machine cache provides for maintaining a working set of the cache during a transfer between virtual machine hosts. In response to a virtual machine transfer, the previous host of the virtual machine is configured to retain cache data of the virtual machine, which may include both cache metadata and data that has been admitted into the cache. The cache data may be transferred to the destination host via a network (or other communication mechanism). The destination host populates a virtual machine cache with the transferred cache data to thereby reconstruct the working state of the cache.

More
16-01-2014 publication date

Storing data in persistent hybrid memory

Number: US20140019677A1
Assignee: Hewlett Packard Development Co LP

Storing data in persistent hybrid memory includes promoting a memory block from non-volatile memory to a cache based on a usage of said memory block according to a promotion policy, tracking modifications to the memory block while in the cache, and writing the memory block back into the non-volatile memory after the memory block is modified in the cache based on a writing policy that keeps a number of the memory blocks that are modified at or below a number threshold while maintaining the memory block in the cache.
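The three-part policy above — promote on usage, track modifications, write back while keeping the dirty count bounded and the block cached — can be sketched as below. The class, counters, and thresholds are illustrative assumptions:

```python
class HybridMemory:
    """Sketch of the persistent-hybrid-memory policy: blocks are
    promoted from non-volatile memory (NVM) to the cache once their
    usage count reaches `promote_at`; modified blocks are tracked as
    dirty, and a write-back keeps the dirty count at or below
    `max_dirty` while the block itself stays in the cache."""

    def __init__(self, promote_at=3, max_dirty=2):
        self.promote_at = promote_at
        self.max_dirty = max_dirty
        self.usage = {}        # block -> access count while in NVM
        self.cache = set()     # promoted blocks
        self.dirty = set()     # blocks modified in the cache
        self.writebacks = []   # blocks flushed back to NVM

    def access(self, block, write=False):
        if block not in self.cache:
            self.usage[block] = self.usage.get(block, 0) + 1
            if self.usage[block] >= self.promote_at:
                self.cache.add(block)            # promotion policy
        if write and block in self.cache:
            self.dirty.add(block)
            while len(self.dirty) > self.max_dirty:  # writing policy
                victim = self.dirty.pop()
                self.writebacks.append(victim)       # written back to NVM
                # note: victim remains in self.cache after write-back
```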

More
16-01-2014 publication date

SAVING LOG DATA USING A DISK SYSTEM AS PRIMARY CACHE AND A TAPE LIBRARY AS SECONDARY CACHE

Number: US20140019682A1

Various embodiments are provided for saving log data in a hierarchical storage management system using a disk system as a primary cache with a tape library as a secondary cache. The user data is stored in the primary cache and written into the secondary cache at a subsequent period of time. Blank tapes in the secondary cache are prepared for storing the user data and the log data based on priorities. At least one of the blank tapes is selected for copying the log data and the user data from the primary cache to the secondary cache based on priorities. The log data is stored in the primary cache. The selection of at least one of the blank tapes completely filled with the log data is delayed for writing additional amounts of the user data.

1. A system for saving a plurality of log data in a hierarchical storage management (HSM) system using a disk system as a primary cache with a tape library as a secondary cache, comprising:
at least one tape drive; and
at least one processor device, operable with the at least one tape drive, wherein the at least one processor:
stores user data in the primary cache, the user data being written from the primary cache into the secondary cache at a subsequent period of time,
prepares a plurality of blank tapes in the secondary cache for storing the user data and the plurality of log data based on a plurality of priorities, the plurality of blank tapes are unused until the user data is written to at least one tape media,
selects at least one of the plurality of blank tapes for copying the plurality of log data and the user data from the primary cache to the secondary cache based upon the plurality of priorities, and
stores the plurality of log data in the primary cache, wherein the plurality of log data is wrapped in the primary cache and a copy of the plurality of log data being copied into the plurality of blank tapes, the plurality of blank tapes being configured to appear blank to a user for storing the user data. ...

More
16-01-2014 publication date

Methods of cache preloading on a partition or a context switch

Number: US20140019689A1
Assignee: International Business Machines Corp

A scheme referred to as a “Region-based cache restoration prefetcher” (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the “cold” cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown.
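The region-grouping step of RECAP can be sketched as below: block addresses are collapsed into coarse regions by shifting away low-order bits, and the recorded region set is expanded back into block addresses to prefetch before the virtual machine next runs. The 64-block region size and the full-region expansion are simplifying assumptions (RECAP itself predicts useful blocks within a region and compresses the record):

```python
REGION_SHIFT = 6  # 64 cache blocks per region (assumed granularity)

def record_regions(accessed_blocks):
    """Group block addresses touched by a VM into coarse-grain regions;
    these regions are predicted to be useful on the VM's next run."""
    return {b >> REGION_SHIFT for b in accessed_blocks}

def prefetch_list(predicted_regions):
    """Expand predicted regions back into block addresses to prefetch
    when the VM is scheduled in again."""
    blocks = []
    for region in sorted(predicted_regions):
        base = region << REGION_SHIFT
        blocks.extend(range(base, base + (1 << REGION_SHIFT)))
    return blocks
```

Storing one region id instead of 64 block addresses is what makes the prefetcher bandwidth-efficient: spatial locality means a few regions cover most useful blocks.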

More
06-02-2014 publication date

System and Method for Simple Scale-Out Storage Clusters

Number: US20140040411A1
Assignee: NetApp Inc

Systems and associated methods for flexible scalability of storage systems. In one aspect, a storage controller may include an interface to a fabric adapted to permit each storage controller coupled to the fabric to directly access memory-mapped components of all other storage controllers coupled to the fabric. The CPU and other master device circuits within a storage controller may directly address memory and I/O devices directly coupled thereto within the same storage controller and may use RDMA features to directly address memory and I/O devices of other storage controllers through the fabric interface.

More
20-02-2014 publication date

Storage control device, storage device, storage system, storage control method, and program for the same

Number: US20140052910A1
Assignee: Fujitsu Ltd

A storage control device is configured to control a storage device that includes a first disk, which is in an active state, and a second disk, which is in a standby state. The storage control device includes a communication unit and a control unit. The communication unit transmits a read-out request or a write request to the storage device and receives a response to the read-out request or the write request from the storage device. The control unit controls the communication unit so that the communication unit transmits a rotation start command, which instructs a start of rotation of the second disk, to the storage device when the time taken to receive the response to the read-out request or the write request transmitted to the first disk, which is in the active state, is longer than a predetermined threshold.

More
27-02-2014 publication date

TRANSPARENT HOST-SIDE CACHING OF VIRTUAL DISKS LOCATED ON SHARED STORAGE

Number: US20140059292A1
Assignee:

Techniques for using a host-side cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, the hypervisor of a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The hypervisor can then process the I/O request by accessing a host-side cache that resides on one or more cache devices distinct from the shared storage device, where the accessing of the host-side cache is transparent to the VM.

1. A method for using a host-side cache to accelerate virtual machine (VM) I/O, the method comprising:
intercepting, by a hypervisor of a host system, an I/O request from a VM running on the host system, the I/O request being directed to a virtual disk residing on a shared storage device; and
processing, by the hypervisor, the I/O request by accessing a host-side cache that resides on one or more cache devices distinct from the shared storage device, the accessing of the host-side cache being transparent to the VM.
2. The method of wherein processing the I/O request by accessing the host-side cache comprises invoking a caching module that has been preconfigured for use with the VM or the virtual disk.
3. The method of wherein the caching module is a modular component of the hypervisor that is implemented by a third-party developer.
4. The method of wherein the host-side cache is spread across a plurality of cache devices, and wherein the host-side cache is presented as a single logical resource to the hypervisor by pooling the plurality of cache devices using a common file system.
5. The method of wherein the plurality of cache devices are heterogeneous devices.
6. The method of wherein the VM or the virtual disk is allocated a portion of the host-side cache when the VM is powered on, the allocated portion having a preconfigured minimum size and a preconfigured maximum size.
7. The method of wherein the allocated portion is freed when the VM ...

More
27-02-2014 publication date

METHOD FOR PROTECTING A GPT CACHED DISKS DATA INTEGRITY IN AN EXTERNAL OPERATING SYSTEM ENVIRONMENT

Number: US20140059293A1
Author: Bisht Pradeep
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An invention is provided for protecting the data integrity of a cached storage device in an alternate operating system (OS) environment. The invention includes replacing a globally unique identifiers partition table (GPT) for a cached disk with a modified globally unique identifiers partition table (MGPT). The MGPT renders cached partitions on the cached disk inaccessible when the MGPT is used by an OS to access the cached partitions, while un-cached partitions on the cached disk are still accessible when using the MGPT. In normal operation, the data on the cached disk is accessed using information based on the GPT, which can be stored on a caching disk, generally via caching software. In response to receiving a request to disable caching, the MGPT on the cached disk is replaced with the GPT, thus rendering all data on the formerly cached disk accessible in an alternate OS environment where appropriate caching software is not present.

1. A method for protecting data integrity of a disk in an alternate operating system (OS) environment, comprising:
replacing a globally unique identifiers partition table (GPT) for a cached disk with a modified globally unique identifiers partition table (MGPT), wherein the MGPT renders cached partitions on the cached disk inaccessible when the MGPT is used by an OS to access the cached partitions, and wherein un-cached partitions on the cached disk are accessible when the MGPT is used by the OS to access the un-cached partitions; and
accessing the data on the cached disk using information based on the GPT.
2. A method as recited in claim 1, wherein partition entries in the MGPT for cached partitions have begin and end locations different than those stored in corresponding entries in the GPT for the cached disk.
3. A method as recited in claim 2, wherein partition entries in the MGPT for un-cached partitions are the same as corresponding entries in the GPT for the cached disk.
4. A method as recited in claim 2, wherein the MGPT is stored ...
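The table substitution can be sketched with a simplified partition-entry model. The `(name, first_lba, last_lba)` tuples and the choice of `(0, 0)` as the invalid location are assumptions for illustration; the point, as in the claims, is that cached entries get begin/end locations different from the real GPT while un-cached entries are copied unchanged:

```python
def make_mgpt(gpt_entries, cached_names):
    """Build a modified partition table (MGPT) in which the begin/end
    LBAs of cached partitions are deliberately invalid, so an OS
    without the caching software cannot mount (and corrupt) them;
    un-cached partition entries are copied unchanged."""
    mgpt = []
    for name, first_lba, last_lba in gpt_entries:
        if name in cached_names:
            mgpt.append((name, 0, 0))            # inaccessible via MGPT
        else:
            mgpt.append((name, first_lba, last_lba))
    return mgpt
```

Disabling caching is then just the inverse substitution: write the preserved real GPT back over the MGPT.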

More
06-03-2014 publication date

Data analysis system

Number: US20140067920A1
Assignee: International Business Machines Corp

A data processing method includes obtaining a data access pattern of a client terminal with respect to a data storage unit, performing caching operations on the data storage unit according to a caching criterion to obtain and store cache data in the cache memory, and sending the cache data to an analyst server via the data transmission interface so that the analyst server can analyze the cache data and generate an analysis result.

More
06-03-2014 publication date

DATA PROCESSING APPARATUS, METHOD FOR PROCESSING DATA, AND COMPUTER READABLE RECORDING MEDIUM RECORDED WITH PROGRAM TO PERFORM THE METHOD

Number: US20140068156A1
Author: LEE Kwang-ho
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A data processing apparatus includes a first storage device which stores compressed data therein, a second storage device which accesses and temporarily stores the compressed data stored in the first storage device, a data decompressor which generates decompressed data by decompressing the compressed data and outputs the decompressed data to the second storage device so that the decompressed data is temporarily stored in the second storage device, and a controller which accesses the decompressed data temporarily stored in the second storage device. The data decompressor directly scatters the decompressed data into a page cache based on addresses of the page cache. Accordingly, the operating speed of the program and the data processing apparatus can be improved.

1. A data processing apparatus comprising:
a first storage device to store compressed data;
a second storage device to access the compressed data stored in the first storage device, and to temporarily store the compressed data;
a data decompressor to generate decompressed data by decompressing the compressed data and outputting the decompressed data to the second storage device which temporarily stores the decompressed data; and
a controller to access the decompressed data temporarily stored in the second storage device.
2. The data processing apparatus as claimed in claim 1, wherein the second storage device comprises:
an input buffer to temporarily store the compressed data; and
a page cache to temporarily store the decompressed data.
3. The data processing apparatus as claimed in claim 2, wherein the second storage device temporarily stores the compressed data input from the first storage device in the input buffer and then outputs the compressed data to the data decompressor, and
the data decompressor decompresses the compressed data and directly scatters the decompressed data into the page cache.
4. The data processing apparatus as claimed in claim 3, wherein the second storage device accesses the ...

Details
06-03-2014 publication date

PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS

Number: US20140068163A1
Assignee:

A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache. 1. A method , comprising:receiving, by a controller, a request to perform staging or destaging operations with respect to an area of a cache;determining whether one or more discard scans are being performed or queued for the area of the cache; andin response to determining that one or more discard scans are being performed or queued for the area of the cache, avoiding satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache.2. The method of claim 1 , the method further comprising:in response to determining that one or more discard scans are not being performed or queued for the area of the cache, satisfying the request to perform the staging or the destaging operations or the read hit with respect to the area of the cache.3. The method of claim 1 , wherein the cache is a flash cache and discard scans are performed asynchronously with respect to a request from a host to the controller to release space in the flash cache.4. The method of claim 1 , wherein the area of the cache corresponds to an extent claim 1 , a track claim 1 , a volume claim 1 , a logical subsystem or any other representation of storage.5. 
The method of claim 1 , wherein the cache is a flash cache claim 1 , wherein the controller maintains a plurality of logical subsystems claim 1 , wherein each logical subsystem stores a plurality of volumes claim 1 , wherein a logical storage group is a plurality of logical subsystems that is owned for input/ ...
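The mechanism the abstract describes — refusing to satisfy staging, destaging, or read-hit requests for a cache area while an asynchronous discard scan is active there — can be sketched as follows. This is an illustrative model only; the class and method names (`CacheController`, `queue_discard_scan`, etc.) are hypothetical, not from the patent.

```python
# Hypothetical sketch: a controller that defers staging/destaging
# requests for a cache area while a discard scan is running or queued
# for that area, and re-drives them once the scan completes.

class CacheController:
    def __init__(self):
        self.active_discard_scans = set()   # areas with a scan running or queued
        self.deferred_requests = []         # requests held back until scans finish

    def queue_discard_scan(self, area):
        self.active_discard_scans.add(area)

    def complete_discard_scan(self, area):
        self.active_discard_scans.discard(area)
        # Re-drive any requests that were held back for this area.
        redriven = [r for r in self.deferred_requests if r[0] == area]
        self.deferred_requests = [r for r in self.deferred_requests if r[0] != area]
        return [self._perform(a, op) for a, op in redriven]

    def request(self, area, op):
        """op is 'stage', 'destage', or 'read_hit'."""
        if area in self.active_discard_scans:
            self.deferred_requests.append((area, op))
            return None                     # avoid satisfying the request
        return self._perform(area, op)

    def _perform(self, area, op):
        return f"{op}:{area}"
```

A request against a clean area is satisfied immediately; one against a scanned area is held until `complete_discard_scan` runs.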

Details
06-03-2014 publication date

PROCESSOR, INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD

Number: US20140068179A1
Assignee:

A processor includes a cache memory that holds data from a main storage device. The processor includes a first control unit that controls acquisition of data, and that outputs an input/output request that requests the transfer of the target data. The processor includes a second control unit that controls the cache memory, that determines, when an instruction to transfer the target data and a response output by the first processor on the basis of the input/output request that has been output to the first processor is received, whether the destination of the response is the processor, and that outputs, to the first control unit when the second control unit determines that the destination of the response is the processor, the response and the target data with respect to the input/output request. 1. A processor comprising:a cache memory that holds data from a main storage device connected to a first processor;a first control unit that controls acquisition of data performed by a input/output device connected to the processor and that outputs, to the first processor connected to the processor when the input/output device requests a transfer of target data stored in the main storage device connected to the first processor, an input/output request that requests the transfer of the target data; anda second control unit that controls the cache memory, that determines, when an instruction to transfer the target data and a response output by the first processor on the basis of the input/output request that has been output to the first processor is received from the first processor, whether the destination of the response is the processor, and that outputs, to the first control unit when the second control unit determines that the destination of the response is the processor, the response and the target data with respect to the input/output request.2. The processor according to claim 1 , wherein claim 1 , when the second control unit determines that the destination of the ...

Details
06-03-2014 publication date

DATA ANALYSIS SYSTEM

Number: US20140068180A1
Assignee:

A data analysis system, particularly, a system capable of efficiently analyzing big data is provided. The data analysis system includes an analyst server, at least one data storage unit, a client terminal independent of the analyst server, and a caching device independent of the analyst server. The caching device includes a caching memory, a data transmission interface, and a controller for obtaining a data access pattern of the client terminal with respect to the at least one data storage unit, performing caching operations on the at least one data storage unit according to a caching criterion to obtain and store cache data in the caching memory, and sending the cache data to the analyst server via the data transmission interface, such that the analyst server analyzes the cache data to generate an analysis result, which may be used to request a change in the caching criterion. 1. A data analysis system , comprising:an analyst server;at least one data storage unit;a client terminal independent of the analyst server; anda caching device independent of the analyst server, the caching device further comprising a cache memory, a data transmission interface, and a controller in communication with the analyst server, the client terminal, and the storage unit, wherein the controller obtains a data access pattern of the client terminal with respect to the storage unit and performs caching operations on the storage unit according to a caching criterion to obtain and store cache data in the cache memory and send the cache data to the analyst server via the data transmission interface, thereby allowing the analyst server to analyze the cache data and generate an analysis result.2. The data analysis system of claim 1 , wherein the caching criterion is specified or changeable by the analyst server.3. The data analysis system of claim 2 , wherein the caching criterion relates to a given access frequency.4. The data analysis system of claim 2 , wherein the caching criterion ...

Details
06-03-2014 publication date

Systems, methods, and interfaces for adaptive cache persistence

Number: US20140068197A1
Assignee: Fusion IO LLC

A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration.
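The per-request persistence levels described above can be modeled with a small dispatcher. This is a minimal sketch under stated assumptions (dict-backed cache and primary store, names like `StorageModule` invented for illustration): "write-through" writes through to primary storage, "write-back" flushes lazily, and "ephemeral" data lives in the cache only and is never written back.

```python
# Minimal sketch (hypothetical names): servicing writes at different
# persistence levels; ephemeral data is cached but never reaches
# primary storage, matching the ephemeral cache mode in the abstract.

class StorageModule:
    def __init__(self):
        self.cache = {}        # addr -> (data, persistence level)
        self.primary = {}      # backing store

    def write(self, addr, data, persistence="write-through"):
        self.cache[addr] = (data, persistence)
        if persistence == "write-through":
            self.primary[addr] = data      # synchronous write to primary
        # "write-back": flushed lazily; "ephemeral": never flushed

    def flush(self):
        for addr, (data, mode) in self.cache.items():
            if mode == "write-back":
                self.primary[addr] = data  # ephemeral entries are skipped

    def evict(self, addr):
        # Disposable (ephemeral) data is simply dropped on eviction.
        self.cache.pop(addr, None)
```

The design point is that the persistence level travels with each I/O request rather than being a property of the whole cache.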

Details
13-03-2014 publication date

CACHE OPTIMIZATION

Number: US20140075109A1
Assignee: Amazon Technologies, Inc.

A system and method for management and processing of resource requests at cache server computing devices is provided. Cache server computing devices segment content into an initialization fragment for storage in memory and one or more remaining fragments for storage in a media having higher latency than the memory. Upon receipt of a request for the content, a cache server computing device transmits the initialization fragment from the memory, retrieves the one or more remaining fragments, and transmits the one or more remaining fragments without retaining the one or more remaining fragments in the memory for subsequent processing. 1. A computer-implemented method comprising:receiving, at a cache component, an object for storage;segmenting the object into a first fragment for storage in memory, a second fragment for storage in a first media having higher latency than the memory, and a third fragment for storage in a second media having higher latency than the first media, wherein the size of the first fragment is based on a latency associated with retrieval of the second fragment, and wherein the size of the second fragment is based on a latency associated with retrieval of the third fragment;receiving a request for the object at a cache component;causing transmission of the first fragment of the object from the memory;causing transmission of the second fragment without retaining the second fragment in memory for subsequent processing; andcausing transmission of the third fragment without retaining the third fragment in the memory for subsequent processing.2. 
A computer-implemented method comprising:receiving a request for an object at a cache component;causing transmission of a first fragment of the object from memory;causing transmission of a second fragment of the object without retaining the second fragment in memory for subsequent processing; andcausing transmission of a third fragment of the object without retaining the third fragment in memory for subsequent ...
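The claims tie each fragment's size to the retrieval latency of the next, slower tier: the in-memory fragment must last long enough to hide the fetch of the second fragment, and so on. A hedged sketch of that sizing rule (the function name and the bits-to-bytes conversion are my assumptions, not from the patent):

```python
# Hypothetical sizing sketch: the fragment kept on tier i must cover
# delivery for at least the retrieval latency of tier i+1, so its size
# is derived from that latency and the delivery rate.

def fragment_sizes(object_size, delivery_rate_bps, latencies_s):
    """Split object_size bytes across storage tiers.

    latencies_s[i] is the retrieval latency (seconds) of tier i+1,
    which bounds the minimum size of the fragment stored on tier i.
    """
    sizes = []
    remaining = object_size
    for latency in latencies_s:
        # Bytes that can be streamed while the next fragment is fetched.
        size = min(remaining, int(latency * delivery_rate_bps / 8))
        sizes.append(size)
        remaining -= size
    sizes.append(remaining)   # the slowest tier holds whatever is left
    return sizes
```

For a 1 MB object delivered at 8 Mbit/s with 50 ms and 200 ms tier latencies, this yields a 50 KB memory fragment and a 200 KB first-media fragment, with the rest on the slowest media.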

Details
13-03-2014 publication date

Replicating tracks from a first storage site to a second and third storage sites

Number: US20140075114A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage.

Details
20-03-2014 publication date

STORAGE APPARATUS AND METHOD FOR CONTROLLING INTERNAL PROCESS

Number: US20140082276A1
Assignee: FUJITSU LIMITED

According to an aspect of the present invention, provided is a storage apparatus including a plurality of solid state drives (SSDs) and a processor. The SSDs store data in a redundant manner. The processor controls a reading process of reading data from an SSD and a writing process of writing data into an SSD. The processor controls an internal process, which is performed during the writing process, to be performed in each of the SSDs when any one of the SSDs satisfies a predetermined condition. 1. A storage apparatus comprising: a plurality of solid state drives (SSDs) to store data in a redundant manner; and a processor to control a reading process of reading data from an SSD and a writing process of writing data into an SSD, and control an internal process, which is performed during the writing process, to be performed in each of the SSDs when any one of the SSDs satisfies a predetermined condition. 3. The storage apparatus according to claim 1, wherein the predetermined condition is that an amount of data which has been written into any one of the SSDs reaches or exceeds a start amount, the start amount being an estimated amount of data to be written by a time when an internal process is started. 4. The storage apparatus according to claim 1, wherein the predetermined condition is that a waiting time in any one of the SSDs exceeds a threshold, the waiting time being a time length from a time at which a write command is issued to an SSD to a time at which a response is made, the threshold being determined based on a waiting time in a state where no internal process is performed. 5. The storage apparatus according to claim 1, wherein the plurality of SSDs are arranged in a Redundant Array of Inexpensive Disks (RAID) configuration. 6. A method for controlling an internal process of a plurality of solid state drives (SSDs) configured to store data in a redundant manner, the method comprising: controlling, by a storage apparatus, a reading process of ...

Details
20-03-2014 publication date

EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS

Number: US20140082277A1

For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. A total number of I/O operations to be awoken at each of an iterated instance of the waking is limited. 1. A method for cache management by a processor device in a computing storage environment , the method comprising:for a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, initiating a process, separate from a process responsible for the data assembly into the complete data tracks, for waking a predetermined number of the waiting I/O operations, wherein a total number of I/O operations to be awoken at each of an iterated instance of the waking is limited.2. The method of claim 1 , further including performing the waking process for a first iteration subsequent to the data assembly process building at least one complete data track.3. The method of claim 2 , further including claim 2 , pursuant to the waking process claim 2 , removing claim 2 , by a first I/O waiter claim 2 , the at least one complete data track off of a free list.4. The method of claim 3 , further including claim 3 , pursuant to the waking process claim 3 , if additional complete data tracks are available on the free list claim 3 , waking at least a second I/O waiter to remove the additional complete data tracks off the free list.5. The method of claim 4 , further including iterating through at least one additional waking process corresponding to a predetermined wake up depth.6. The method of claim 1 , further including setting the predetermined number of waiting I/O operations to be awoken according to the waking process. This application is a Continuation of U.S. patent application Ser. No. 13/616,902, filed on Sep. 14, 2012.The present invention relates in ...
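The wake-up scheme above — a pass separate from track assembly that wakes at most a fixed number of waiting I/O operations per iteration — can be sketched as below. The structure and names (`TrackAssembler`, `wake_depth`) are hypothetical illustrations of the idea, not the patent's implementation.

```python
# Sketch: a separate wake-up pass limited to `wake_depth` waiters per
# iteration; each awoken waiter removes one completed track from the
# free list, avoiding a thundering herd on the list.

from collections import deque

class TrackAssembler:
    def __init__(self, wake_depth=2):
        self.free_list = deque()   # completed data tracks
        self.waiters = deque()     # I/O operations waiting for a track
        self.wake_depth = wake_depth

    def wait_for_track(self, io_id):
        self.waiters.append(io_id)

    def track_assembled(self, track):
        self.free_list.append(track)

    def wake_pass(self):
        """Wake waiters, but no more than wake_depth per iteration."""
        woken = []
        while self.waiters and self.free_list and len(woken) < self.wake_depth:
            io_id = self.waiters.popleft()
            woken.append((io_id, self.free_list.popleft()))
        return woken
```

Capping the per-pass wake count bounds the burst of work each iteration triggers, even when many tracks complete at once.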

Details
20-03-2014 publication date

Intercluster relationship management

Number: US20140082285A1
Author: Steven M. Ewing
Assignee: NetApp Inc

Data storage and management systems can be interconnected as clustered systems to distribute data and operational loading. Further, independent clustered storage systems can be associated to form peered clusters. As provided herein, methods and systems for creating and managing intercluster relationships between independent clustered storage systems, allowing the respective independent clustered storage systems to exchange data and distribute management operations between each other while mitigating administrator involvement. Cluster introduction information is provided on a network interface of one or more nodes in a cluster, and intercluster relationships are created between peer clusters. A relationship can be created by initiating contact with a peer using a logical interface, and respective peers retrieving the introduction information provided on the network interface. Respective peers have a role/profile associated with the provided introduction information, which is mapped to the peers, allowing pre-defined access to respective peers.

Details
20-03-2014 publication date

Deferred re-mru operations to reduce lock contention

Number: US20140082296A1
Assignee: International Business Machines Corp

Data operations, requiring a lock, are batched into a set of operations to be performed on a per-core basis. A global lock for the set of operations is periodically acquired, the set of operations is performed, and the global lock is freed so as to avoid excessive duty cycling of lock and unlock operations in the computing storage environment.
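Batching per-core and taking the global lock once per batch, as the abstract describes, can be sketched like this. The class below is a simplified illustration (an `OrderedDict` stands in for the LRU list; `lock_cycles` just counts lock acquisitions to show the reduced duty cycling).

```python
# Sketch of deferred re-MRU batching: accesses are queued per core
# without touching the global LRU lock; a periodic flush takes the
# lock once, applies the whole batch, and frees it.

import threading
from collections import OrderedDict

class BatchedLRU:
    def __init__(self, num_cores):
        self.lru = OrderedDict()                 # oldest -> newest
        self.global_lock = threading.Lock()
        self.batches = [[] for _ in range(num_cores)]
        self.lock_cycles = 0                     # times the global lock was taken

    def insert(self, key):
        with self.global_lock:
            self.lock_cycles += 1
            self.lru[key] = True

    def touch(self, core, key):
        # No global lock here: just record the access on this core's batch.
        self.batches[core].append(key)

    def flush(self):
        with self.global_lock:                   # one lock/unlock per batch
            self.lock_cycles += 1
            for batch in self.batches:
                for key in batch:
                    if key in self.lru:
                        self.lru.move_to_end(key)   # deferred re-MRU
                batch.clear()
```

Three inserts plus two touches cost four lock cycles instead of five; with realistic batch sizes the saving dominates.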

Details
07-01-2016 publication date

NVRAM CACHING AND LOGGING IN A STORAGE SYSTEM

Number: US20160004637A1
Author: Kimmel Jeffrey S.
Assignee:

In one embodiment, a node coupled to solid state drives (SSDs) of a plurality of storage arrays executes a storage input/output (I/O) stack having a plurality of layers. The node includes a non-volatile random access memory (NVRAM). A first portion of the NVRAM is configured as a write-back cache to store write data associated with a write request and a second portion of the NVRAM is configured as one or more non-volatile logs (NVLogs) to record metadata associated with the write request. The write data is passed from the write-back cache over a first path of the storage I/O stack for storage on a first storage array and the metadata is passed from the one or more NVLogs over a second path of the storage I/O stack for storage on a second storage array, wherein the first path is different from the second path. 1. A system comprising:a central processing unit (CPU) of a node of a cluster coupled to solid state drives (SSDs) of a plurality of storage arrays;a memory coupled to the CPU and configured to store a storage input/output (I/O) stack having a plurality of layers executable by the CPU; anda non-volatile random access memory (NVRAM) coupled to the CPU, a first portion of the NVRAM configured as a write-back cache to store write data associated with a write request and a second portion of the NVRAM configured as one or more non-volatile logs (NVLogs) to record metadata associated with the write request, the write data passed from the write-back cache over a first path of the storage I/O stack for storage on a first storage array and the metadata passed from the one or more NVLogs over a second path of the storage I/O stack for storage on a second storage array, wherein the first path is different from the second path.2. The system of wherein the write data is preserved in the write-back cache until successfully stored on the first storage array and the metadata is preserved in the one or more NVLogs until successfully stored on the second storage array.3. The ...

Details
04-01-2018 publication date

Apparatus and method for a non-power-of-2 size cache in a first level memory device to cache data present in a second level memory device

Number: US20180004433A1
Assignee: Intel Corp

Provided are an apparatus and method for a non-power-of-2 size cache in a first level memory device to cache data present in a second level memory device having a 2^n cache size. A request is directed to the second level memory device for a target address having n bits. A determination is made whether a target index, comprising m bits of the n bits of the target address, is within an index set of the first level memory device. A determination is made of a modified target index in the index set of the first level memory device having at least one index bit that differs from a corresponding at least one index bit in the target index. The request is processed with respect to data in a cache line at the modified target index in the first level memory device.
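One simple way to realize the "modified target index" idea is to flip the most-significant index bit when the m-bit index falls outside the valid (non-power-of-2) index set. This is a hedged sketch of that folding, assuming the set count lies between 2^(m-1) and 2^m; the function name is invented for illustration.

```python
# Hypothetical index-folding sketch: the first-level cache has
# num_sets < 2**m sets; an index outside [0, num_sets) is mapped into
# the set by flipping its most-significant index bit.

def modified_index(addr, num_sets, m):
    """Map the m-bit index field of addr into [0, num_sets)."""
    idx = addr & ((1 << m) - 1)        # target index: low m bits of the address
    if idx < num_sets:                 # already within the index set
        return idx
    return idx & ~(1 << (m - 1))       # differs from the target index in one bit
```

With m = 4 and 12 sets, index 13 (0b1101) folds to 5 (0b0101), so all 16 possible index values land on a valid set.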

Details
02-01-2020 publication date

ERROR CORRECTION DECODING AUGMENTED WITH ERROR TRACKING

Number: US20200004628A1
Assignee:

Enhanced error correction for data stored in storage devices are presented herein. A storage controller retrieves an initial encoded data segment stored on a storage media, computes information relating to errors resultant from decoding the initial encoded data segment, and stores the information in a cache. The storage controller retrieves subsequent encoded data segments stored on the storage media, augments a decoder using at least the information retrieved from the cache, and decodes the subsequent encoded data with the decoder to produce resultant data. 1. An apparatus , comprising:a storage media configured to store encoded data;a cache memory configured to store information relating to errors resultant from decoding one or more prior read operations directed to the encoded data; anda decoder configured to decode the encoded data retrieved from the storage media, the decoding augmented by the information relating to the errors.2. The apparatus of claim 1 , wherein the information relating to the errors comprises at least one error location index indicating decoding error location information relating to at least a portion of the encoded data read from the storage media; andwherein the decoding is augmented by the at least one error location index maintained in the cache memory.3. The apparatus of claim 2 , wherein the decoder is further configured to:augment decoding of the encoded data by at least initializing reliability parameters in a low-density parity-check (LDPC) scheme with the error location information indicated in the at least one error location index.4. The apparatus of claim 2 , wherein the decoder is further configured to:determine further error location information resultant from decoding the encoded data; andbased at least on the further error location information differing from the at least one error location index, update the at least one error location index with the further error location information.5. 
The apparatus of claim 4 , wherein ...

Details
01-01-2015 publication date

STORAGE DEVICE, ELECTRONIC APPARATUS, CONTROL METHOD OF STORAGE DEVICE, AND CONTROL PROGRAM FOR STORAGE DEVICE

Number: US20150006811A1
Author: Uehara Keiichi
Assignee: KABUSHIKI KAISHA TOSHIBA

According to one embodiment, a storage device includes: a first storage module configured to detect number of accesses to data to be stored, and configured to store data, which is larger in the detected number of accesses than a preset value, in a nonvolatile cache memory; and a second storage module configured to detect storable capacity of the nonvolatile cache memory, and configured to detect the number of accesses to each of data stored in the nonvolatile cache memory when the detected capacity is smaller than a preset capacity-value, move the data to a disk cache area provided in a hard disk and used for cache processing, and store the moved data in the disk cache area. 1. A storage device comprising:a first storage configured to detect number of accesses to data to be stored, and configured to store first data, which has a larger detected number of accesses than a first number, in a nonvolatile cache memory; anda second storage configured to detect a storable capacity of the nonvolatile cache memory, and configured to detect the number of accesses to second data stored in the nonvolatile cache memory when the detected capacity is smaller than a first capacity-value, move the second data to a disk cache area provided in a hard disk and used for cache processing, and store the second data in the disk cache area.2. The storage device according to claim 1 , wherein:the disk cache area comprises an outermost peripheral data track of the hard disk or a data track close to the outermost peripheral data track of the hard disk; anda bit density and a track density of the disk cache area are at low reference-values.3. The storage device according to claim 1 , wherein:a storable capacity of the disk cache area is detected; andcontinuous data is moved and stored at a first address in the hard disk when the storable capacity is smaller than a second value.4. 
The storage device according to claim 1 , whereinthird data, which has a smaller number of accesses than the first ...
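The two-tier placement above — frequently accessed data goes to the nonvolatile cache, and when its capacity runs low the least-accessed entries are demoted to a disk cache area on the HDD — can be sketched as follows. The class, thresholds, and demotion policy are hypothetical illustrations, not the patent's exact method.

```python
# Illustrative sketch (hypothetical names/thresholds): hot data is kept
# in the nonvolatile cache; when it exceeds capacity, the entries with
# the fewest accesses are moved to a disk cache area on the hard disk.

class TieredStore:
    def __init__(self, nv_capacity, access_threshold=2):
        self.nv_capacity = nv_capacity
        self.access_threshold = access_threshold
        self.nv_cache = {}        # key -> access count
        self.disk_cache = set()   # disk cache area on the HDD

    def store(self, key, accesses):
        if accesses > self.access_threshold:
            self.nv_cache[key] = accesses
            self._enforce_capacity()
        else:
            self.disk_cache.add(key)      # cold data goes straight to disk

    def _enforce_capacity(self):
        while len(self.nv_cache) > self.nv_capacity:
            # Demote the least-accessed entry to the disk cache area.
            coldest = min(self.nv_cache, key=self.nv_cache.get)
            del self.nv_cache[coldest]
            self.disk_cache.add(coldest)
```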

Details
01-01-2015 publication date

NON-VOLATILE HARD DISK DRIVE CACHE SYSTEM AND METHOD

Number: US20150006812A1
Author: Klein Dean A.
Assignee:

A non-volatile hard disk drive cache system is coupled between a processor and a hard disk drive. The cache system includes a control circuit, a non-volatile memory and a volatile memory. The control circuit causes a subset of the data stored in the hard disk drive to be written to the non-volatile memory. In response to a request to read data, a determination is made as to whether the requested read data are stored in the non-volatile memory. If so, the requested read data are provided from the non-volatile memory. Otherwise, the requested read data are provided from the hard disk drive. The volatile memory is used as a write buffer and to store disk access statistics, such as the disk drive locations that are most frequently read, which are used by the control circuit to determine which data to store in the non-volatile memory. 1. A method for providing data stored on a hard drive to a system processor, comprising: identifying data on a hard drive that is anticipated to be needed by a system processor; retrieving the identified data from a hard drive; storing the retrieved data in a non-volatile memory; and in response to receiving a request from the system processor to retrieve the identified data from the hard drive, providing the stored data from the non-volatile memory to the system processor. 2. The method of claim 1, wherein the data is a dynamic link library. 3. The method of claim 1, wherein the data is operating system files. 4. The method of claim 1, wherein the data is identified in response to an instruction from the system processor to tag the data for retrieval. 5. The method of claim 4, wherein an application being run by the system processor issues the instruction. 6. A cache system, comprising: a non-volatile memory operable to store data; a first interface circuit operable to request data from the non-volatile memory and to write data to the non-volatile memory; a second interface circuit operable to request data from a storage device and to write data to the storage device; a microprocessor operable to ...
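The read path claimed here — prefetch anticipated data (e.g. operating system files, DLLs) into the non-volatile memory, then serve reads from it on a hit and from the disk on a miss — can be sketched with dict-backed devices. The `HddCache` name and hit/miss counters are illustrative assumptions.

```python
# Minimal read-path sketch, assuming dict-backed devices: anticipated
# data is staged into the nonvolatile cache, and a read is served from
# it on a hit, falling back to the (slower) hard drive on a miss.

class HddCache:
    def __init__(self, hard_drive):
        self.hard_drive = hard_drive   # lba -> data
        self.nv_memory = {}            # nonvolatile cache contents
        self.hits = 0
        self.misses = 0

    def prefetch(self, lbas):
        """Stage data anticipated to be needed (e.g. boot files, DLLs)."""
        for lba in lbas:
            self.nv_memory[lba] = self.hard_drive[lba]

    def read(self, lba):
        if lba in self.nv_memory:
            self.hits += 1
            return self.nv_memory[lba]     # fast path: no disk access
        self.misses += 1
        return self.hard_drive[lba]        # slow path: disk read
```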

Details
01-01-2015 publication date

CACHING DATA BETWEEN A DATABASE SERVER AND A STORAGE SYSTEM

Number: US20150006813A1
Assignee:

Techniques are provided for using an intermediate cache between the shared cache of an application and the non-volatile storage of a storage system. The application may be any type of application that uses a storage system to persistently store data. The intermediate cache may be local to the machine upon which the application is executing, or may be implemented within the storage system. In one embodiment where the application is a database server, the database system includes both a DB server-side intermediate cache, and a storage-side intermediate cache. The caching policies used to populate the intermediate cache are intelligent, taking into account factors that may include which object an item belongs to, the item type of the item, a characteristic of the item, or the type of operation in which the item is involved. 1. A method comprising:at a storage system, responding to input/output (I/O) requests from one or more database servers by retrieving requested disk blocks from one or more storage devices within the storage system, the requested disk blocks storing data representative of database objects with respect to which the one or more database servers perform database operations; whether a given database object, for which the given disk block stores data, is associated with a particular designation;', 'whether the given disk block is of an index block type;', 'whether the given disk block is of a data block type;', 'whether the given disk block is of an undo block type;', 'whether the given disk block is encrypted;', 'whether the given disk block is a secondary copy of a mirrored item; or', 'whether the given disk block is involved in a table scan operation;, 'for a given disk block of the requested disk blocks, the storage system determining whether to cache the given disk block in an intermediate cache within the storage system, the determining being based at least partially upon one or more ofwhen a particular disk block is cached in the intermediate ...
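A caching policy driven by the factors the claim enumerates (object designation, block type, mirroring role, operation type) can be sketched as a single decision function. The rules below are hypothetical examples of how such factors might be weighed, not the actual policy of any database system.

```python
# Hypothetical policy sketch mirroring the factors listed in the claim:
# whether to place a disk block in the intermediate cache depends on
# its object's designation, its block type, and the operation involved.

def should_cache(block):
    """block: dict with keys drawn from the claim's factors (all optional)."""
    if block.get("object_designation") == "KEEP":
        return True                       # explicitly designated objects win
    if block.get("type") == "undo":
        return False                      # undo blocks are rarely re-read
    if block.get("mirror_secondary"):
        return False                      # the primary copy is cached instead
    if block.get("in_table_scan"):
        return False                      # large scans would flood the cache
    return block.get("type") in ("index", "data")
```

Ordering the checks expresses a priority among the factors; a real policy might score them instead of short-circuiting.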

Details
01-01-2015 publication date

BACKUP OF CACHED DIRTY DATA DURING POWER OUTAGES

Number: US20150006815A1
Assignee: LSI Corporation

Systems and methods presented herein provide for backing up cached dirty data during power outages. In one embodiment, a system includes a controller operable to process input/output requests from a host system, and a cache memory operable to cache dirty data pertaining to the input/output requests. The system also includes a nonvolatile memory operable to back up the dirty data during a power outage. The controller comprises a hardware register operable to map directly to the cache memory to track the dirty data. The controller is further operable to detect the power outage, and, based on the detected power outage, to direct the hardware register to perform a direct memory access (DMA) of the dirty data in the cache memory according to the mapping between the hardware register and the cache memory, and to write the dirty data to the nonvolatile memory. 1. A system , comprising:a controller operable to process input/output requests from a host system;a cache memory operable to cache dirty data pertaining to the input/output requests; anda nonvolatile memory operable to back up the dirty data during a power outage,wherein the controller comprises a hardware register operable to map directly to the cache memory to track the dirty data,wherein the controller is further operable to detect the power outage, and, based on the detected power outage, to direct the hardware register to perform a direct memory access of the dirty data in the cache memory according to the mapping between the hardware register and the cache memory, and to write the dirty data to the nonvolatile memory.2. 
The system of claim 1 , wherein:the controller is further operable to detect power being restored, and, based on the detected power restoration, to direct the hardware register to perform a direct memory access of the dirty data from the nonvolatile memory, and to direct the hardware register to write the dirty data to the cache memory until the dirty data can be written to long-term storage.3. ...

Details
08-01-2015 publication date

SYSTEMS AND METHODS FOR SIGNATURE COMPUTATION IN A CONTENT LOCALITY BASED CACHE

Number: US20150010143A1
Author: YANG Ken Qing
Assignee:

The present disclosure relates to methods and circuits for signature computation in a content locality cache. A method can include dividing a received block into shingles, where each shingle represents a subset of the received block. The method can include, for each shingle, determining an intermediate fingerprint by processing the shingle, and determining whether the intermediate fingerprint is more representative of the contents of the block than a previous fingerprint. If so, the method can include storing the intermediate fingerprint as a representative fingerprint. If not, the method can include keeping the previous fingerprint as the representative fingerprint. The method can further include determining whether there are more shingles to process. If so, the method can include processing the next shingle. If not, the method can include computing the signature of the contents of the block by adding the representative fingerprint to a sketch of the received block. 1. A method for computing a signature of contents of a block in a cache , the method comprising:dividing a received block into shingles, wherein each shingle represents a subset of the received block; determining an intermediate fingerprint by processing the shingle;', 'determining whether the intermediate fingerprint is more representative of the contents of the block than a previous fingerprint;', 'if the intermediate fingerprint is determined to be more representative of the contents of the block, storing the intermediate fingerprint as a representative fingerprint;', 'if the intermediate fingerprint is determined to be less representative of the contents of the block, keeping the previous fingerprint as the representative fingerprint;', 'determining whether there are more shingles to process;', 'if there are more shingles to process, processing the next shingle; and, 'for each shingle,'}if there are no more shingles to process, computing the signature of the contents of the block by adding the ...
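The shingle-and-fingerprint loop described above resembles min-hashing: cut the block into overlapping shingles, fingerprint each, and keep the most representative fingerprint as the signature. The sketch below uses Python's `hashlib` in place of the patent's (unspecified) fingerprint function and treats "most representative" as "numerically smallest", which are assumptions for illustration.

```python
# Sketch of the shingling idea: each overlapping shingle of the block
# is fingerprinted, and the smallest fingerprint seen (a min-hash-like
# choice of "most representative") becomes the block's signature.

import hashlib

def block_signature(block: bytes, shingle_size: int = 8) -> int:
    representative = None
    for i in range(len(block) - shingle_size + 1):
        shingle = block[i:i + shingle_size]
        # Intermediate fingerprint: first 8 bytes of the shingle's SHA-1.
        fp = int.from_bytes(hashlib.sha1(shingle).digest()[:8], "big")
        # Keep whichever fingerprint is judged most representative so far.
        if representative is None or fp < representative:
            representative = fp
    return representative
```

Because blocks that share most of their content share most of their shingles, near-duplicate blocks tend to produce the same signature, which is what lets a content-locality cache find them.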

Details
08-01-2015 publication date

Restoring temporal locality in global and local deduplication storage systems

Number: US20150012698A1
Assignee: Dell Products LP

Techniques and mechanisms described herein facilitate the restoration of temporal locality in global and local deduplication storage systems. According to various embodiments, when it is determined that cache memory in a storage system has reached a capacity threshold, each of a plurality of data dictionary entries stored in the cache memory may be associated with a respective merge identifier. Each data dictionary entry may correspond with a respective data chunk. Each data dictionary entry may indicate a storage location of the respective data chunk in the storage system. The respective merge identifier may indicate temporal locality information about the respective data chunk. The plurality of data dictionary entries may be stored to disk memory in the storage system. Each of the stored plurality of data dictionary entries may include the respective merge identifier.
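The mechanism above can be sketched as a dictionary that, on reaching a capacity threshold, flushes its entries to disk tagged with a shared merge identifier. This is a minimal illustration under assumptions: the class `DedupIndex` and its fields are hypothetical, and the patent's on-disk layout is not modeled.

```python
class DedupIndex:
    """In-memory data dictionary that flushes to disk in merge batches."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}        # fingerprint -> storage location of the chunk
        self.disk = []         # flushed (fingerprint, location, merge_id) entries
        self.next_merge_id = 0

    def insert(self, fingerprint, location):
        self.cache[fingerprint] = location
        if len(self.cache) >= self.capacity:   # capacity threshold reached
            self._merge_to_disk()

    def _merge_to_disk(self):
        mid = self.next_merge_id
        self.next_merge_id += 1
        # Entries flushed together share one merge identifier, preserving
        # the temporal locality of the chunks they describe.
        for fp, loc in self.cache.items():
            self.disk.append((fp, loc, mid))
        self.cache.clear()
```

Entries written in the same flush carry the same merge identifier, so a later reader can recover which chunks were ingested close together in time.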

Details
08-01-2015 publication date

System and method of versioning cache for a clustering topology

Number: US20150012699A1

Aspects of the disclosure pertain to a system and method for versioning cache for a clustered topology. In the clustered topology, a first controller mirrors write data from a cache of the first controller to a cache of the second controller. When communication between controllers of the topology is disrupted (e.g., when the second controller goes offline, while the first controller stays online), the first controller increments a cache version number stored in a disk data format of a logical disk, the logical disk being owned by the first controller and associated with the write data. The incremented cache version number provides an indication to the second controller that the data of the cache of the second controller is stale.
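The versioning idea can be illustrated with a small sketch: the owning controller bumps an on-disk version number when its peer drops out, and the returning peer compares the version it mirrored against the on-disk one. The class and function names are hypothetical; the patent's disk data format is not modeled.

```python
class OwningController:
    """First controller: owns the logical disk and its on-disk format."""

    def __init__(self):
        self.cache_version = 0  # stored in the disk data format of the logical disk

    def peer_went_offline(self):
        # Incrementing the version marks all cache data mirrored before
        # this point as stale.
        self.cache_version += 1

def mirror_is_stale(on_disk_version: int, mirrored_version: int) -> bool:
    """Second controller compares the version current when it mirrored the
    write data against the on-disk version to detect staleness."""
    return mirrored_version < on_disk_version
```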

Details
08-01-2015 publication date

MANAGING A CACHE IN A MULTI-NODE VIRTUAL TAPE CONTROLLER

Number: US20150012700A1
Assignee:

According to one embodiment, a system includes a virtual tape library having a cache, a virtual tape controller (VTC) coupled to the virtual tape library, and an interface for coupling multiple hosts to the VTC. The cache is shared by the multiple hosts, and a common view of a cache state, a virtual library state, and a number of write requests pending is provided to the hosts by the VTC. In another embodiment, a method includes receiving data from at least one host using a VTC, storing data received from all the hosts to a cache using the VTC, sending an alert to all the hosts when free space is low and entering into a warning state, sending another alert to all the hosts when free space is critically low and entering into a critical state while allowing previously mounted virtual drives to continue normally. 1. A system, comprising: a virtual tape library having a cache; at least one virtual tape controller coupled to the virtual tape library; and an interface for coupling at least two hosts to the at least one virtual tape controller, wherein the cache is shared by the at least two hosts, and wherein a common view of a cache state, a virtual library state, and a number of write requests pending is provided to the at least two hosts by the virtual tape controller. 2. The system as recited in claim 1, wherein the at least one virtual tape controller enters into a warning state and provides a first alert to the at least one host when a cache free space size is less than a first threshold, and wherein the at least one virtual tape controller enters into a critical state and provides a second alert to the at least one host when a cache free space size is less than a second threshold. 3. The system as recited in claim 2, wherein the at least one virtual tape controller allows previously mounted virtual drives to continue normal writing activity when in the critical state. 4. The system as recited in claim 2, wherein the at least one virtual tape controller throttles write ...
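The two-threshold state machine in the claims can be sketched directly: free space below the first threshold puts the controller into a warning state, below the second into a critical state. The function name and the string states are illustrative, not from the patent.

```python
def cache_state(free_space: int, warn_threshold: int, crit_threshold: int) -> str:
    """Map cache free space to the shared state visible to all hosts.
    Assumes crit_threshold < warn_threshold."""
    if free_space < crit_threshold:
        return "critical"   # second alert; only previously mounted drives continue
    if free_space < warn_threshold:
        return "warning"    # first alert to all hosts
    return "normal"
```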

Details
14-01-2016 publication date

STORAGE DEVICE AND COMPUTER SYSTEM

Number: US20160011809A1
Assignee:

When computers and virtual machines operating in the computers both attempt to allocate a cache regarding the data in a secondary storage device to respective primary storage devices, identical data is prevented from being stored independently in multiple computers or virtual machines. An integrated cache management function in the computer arbitrates which computer or virtual machine should cache the data of the secondary storage device, and when the computer or the virtual machine executes input/output of data of the secondary storage device, the computer inquires of the integrated cache management function, based on which the integrated cache management function retains the cache only in a single computer and instructs the other computers to delete the cache. Thus, it is possible to prevent identical data from being cached in a duplicated manner in multiple locations of the primary storage device, enabling efficient capacity usage of the primary storage device. 1. A storage system comprising a processor, a primary storage device, and a secondary storage device; wherein the storage system causes multiple virtual machines to be operated; an area of a portion of the primary storage device is used as a cache area for the multiple virtual machines to retain replicas of data in the secondary storage device; the respective multiple virtual machines are capable of accessing a same area of the secondary storage device; and the storage system causes only one replica of the data of the area in the secondary storage device to be retained in the cache area. 2. The storage system according to claim 1, wherein each of the multiple virtual machines has a cache partial region in the cache area as a non-shared area that is not accessed by other virtual machines; and the storage system stores a replica of the data of the area in the secondary storage device only in the cache partial region of one of the virtual machines. 3. The storage system according to claim 2, wherein each of ...
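The arbitration step can be sketched as a manager that tracks, per block, which virtual machine currently holds the single cached replica, and instructs the previous owner to delete its copy on a change of ownership. Class and method names here are hypothetical illustrations of the integrated cache management function.

```python
class VirtualMachine:
    def __init__(self, name: str):
        self.name = name
        self.cache = set()   # non-shared cache partial region of this VM

    def evict(self, block):
        self.cache.discard(block)

class IntegratedCacheManager:
    """Arbitrates which VM may hold the single cached replica of a
    secondary-storage block."""

    def __init__(self):
        self.owner = {}   # block id -> VM currently caching it

    def request_cache(self, vm: VirtualMachine, block):
        prev = self.owner.get(block)
        if prev is not None and prev is not vm:
            prev.evict(block)   # instruct the other VM to delete its copy
        self.owner[block] = vm
        vm.cache.add(block)
```

After any sequence of requests, each block appears in at most one VM's cache, which is the invariant the abstract describes.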

Details
14-01-2016 publication date

Memory System

Number: US20160011812A1
Author: Amaki Takehiko
Assignee:

According to one embodiment, a memory system includes a memory and a controller. The controller includes a data transfer unit and a speed control unit. The speed control unit controls a transfer speed of the data transfer unit based on a state of a data transfer destination during an operation of the memory. 1. A memory system comprising: a memory; and a memory controller that controls the memory, the memory controller comprising: a buffer capable of controlling a data transfer speed; an interface unit that transmits data transferred by the buffer to an outside of the memory controller; and a speed control unit that reduces the transfer speed of the buffer by reducing at least a power voltage of the buffer based on information at least either from the buffer or the interface unit, when the data transfer speed of the buffer is higher than a data transmission speed of the interface unit. 2. The memory system of claim 1, wherein the interface unit is a host interface; the buffer is a read buffer; and the speed control unit controls a transfer speed of the read buffer based on information from the host interface during a data read operation of the memory. 3. The memory system of claim 2, wherein the host interface comprises a read data buffer that transmits read data transferred from the read buffer to the outside; and the speed control unit, during the data read operation of the memory, reduces the transfer speed of the read buffer when a storage state of the read data buffer is a predetermined state and returns the data transfer speed of the read buffer when the storage state of the read data buffer deviates from the predetermined state. 4. The memory system of claim 3, wherein the data transfer speed of the read buffer is set in a plurality of stages; and the speed control unit, during the data read operation of the memory, reduces the data transfer speed of the read buffer from a maximum value at least by one stage when the storage state of the read data buffer is ...
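The staged speed control in claim 4 can be sketched as a pure function: step the read-buffer transfer speed down one stage when the host-side read data buffer is nearly full, and back up as it drains. Thresholds, stage counts, and the function name are illustrative assumptions; the hardware mechanism (lowering the buffer's supply voltage) is not modeled.

```python
def next_speed_stage(stage: int, max_stage: int, buffer_fill: float,
                     high_water: float, low_water: float) -> int:
    """Return the next transfer-speed stage (0 = slowest, max_stage = fastest)
    given the fill ratio of the downstream read data buffer."""
    if buffer_fill >= high_water and stage > 0:
        return stage - 1        # predetermined "nearly full" state: slow down
    if buffer_fill <= low_water and stage < max_stage:
        return stage + 1        # buffer has drained: restore speed
    return stage
```

The hysteresis gap between `high_water` and `low_water` keeps the speed from oscillating on every sample.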

Details
10-01-2019 publication date

APPARATUS AND METHOD FOR ENFORCING TIMING REQUIREMENTS FOR A MEMORY DEVICE

Number: US20190012096A1
Author: YU Jason K.
Assignee:

Provided are an apparatus and method for enforcing timing requirements for a memory device. An event command directed to a target addressable location comprising one of the addressable locations is received. A determination is made as to whether a time difference of a current time and a timestamp associated with a completed event directed to a threshold location including the target addressable location exceeds a time threshold. The received event command is executed against the target addressable location in response to determining that the time difference exceeds the time threshold. 1.-25. (canceled) 26. An apparatus comprising: an interface to connect with one or more memory devices; and a controller, coupled to the interface, the controller to: based at least, in part, on a connection of the one or more memory devices with the interface, receive a command A associated with a region of a memory device; store a time stamp associated with the command A; receive a command B associated with at least a portion of the region of the memory device; determine a time stamp associated with the command B; determine a time difference between the time stamp associated with the command B and the time stamp associated with the command A; permit execution of the command B to proceed in response to the time difference meeting or exceeding a time gap; and in response to the time gap not being met or exceeded, delay permission to commence execution of the command B until at least a second time difference meets or exceeds the time gap, wherein the second time difference is a difference between a current time stamp and the time stamp associated with the command A. 27. The apparatus of claim 26, wherein the time stamp associated with the command A comprises a completion or processing of the command A and the time stamp associated with the command B comprises a receipt of the command B. 28. The apparatus of claim 26, wherein the time gap is based at least, in part, on a type of ...
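The time-gap check in claim 26 can be sketched as a per-region table of completion timestamps: command B is allowed immediately if the gap has elapsed, otherwise the remaining delay is reported. The class name, region granularity, and time units are assumptions for illustration.

```python
class TimingEnforcer:
    """Enforce a minimum time gap between a completed command A and a
    later command B targeting the same region."""

    def __init__(self, time_gap: float):
        self.time_gap = time_gap
        self.completed_at = {}   # region -> timestamp of last completed command

    def record_completion(self, region, timestamp: float):
        self.completed_at[region] = timestamp   # timestamp of command A

    def delay_before(self, region, timestamp: float) -> float:
        """Return 0.0 if command B may execute now, else the remaining delay
        until the time gap is met."""
        last = self.completed_at.get(region)
        if last is None:
            return 0.0   # no prior command on this region: no constraint
        return max(0.0, self.time_gap - (timestamp - last))
```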

Details
10-01-2019 publication date

Buffer Management in a Data Storage Device

Number: US20190012114A1
Assignee: SEAGATE TECHNOLOGY LLC

Method and apparatus for managing data buffers in a data storage device. In some embodiments, a write manager circuit stores user data blocks in a write cache pending transfer to a non-volatile memory (NVM). The write manager circuit sets a write cache bit value in a forward map describing the NVM to a first value upon storage of the user data blocks in the write cache, and subsequently sets the write cache bit value to a second value upon transfer of the user data blocks to the NVM. A read manager circuit accesses the write cache bit value in response to a read command for the user data blocks. The read manager circuit searches the write cache for the user data blocks responsive to the first value, and retrieves the requested user data blocks from the NVM without searching the write cache responsive to the second value.
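The forward-map bit described above can be sketched as follows: writes set a per-LBA flag, the destage to NVM clears it, and the read path consults the flag to decide whether the write cache needs to be searched at all. The class names and the dict-backed stores are illustrative stand-ins for the device's real structures.

```python
class ForwardMapEntry:
    def __init__(self):
        self.in_write_cache = False   # the write cache bit
        self.nvm_location = None

class StorageDevice:
    def __init__(self):
        self.fmap = {}         # LBA -> ForwardMapEntry (forward map)
        self.write_cache = {}  # LBA -> pending user data blocks
        self.nvm = {}          # NVM location -> data

    def write(self, lba, data: bytes):
        entry = self.fmap.setdefault(lba, ForwardMapEntry())
        self.write_cache[lba] = data
        entry.in_write_cache = True    # first value: block pending in cache

    def destage(self, lba):
        entry = self.fmap[lba]
        entry.nvm_location = lba       # identity placement, for illustration
        self.nvm[lba] = self.write_cache.pop(lba)
        entry.in_write_cache = False   # second value: block now lives in NVM

    def read(self, lba) -> bytes:
        entry = self.fmap[lba]
        if entry.in_write_cache:
            return self.write_cache[lba]      # search the write cache
        return self.nvm[entry.nvm_location]   # skip the cache search entirely
```

The benefit is on the read path: when the bit holds the second value, the read manager goes straight to the NVM without paying for a cache lookup.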

Details
21-01-2016 publication date

RAID SYSTEM FOR PROCESSING I/O REQUESTS UTILIZING XOR COMMANDS

Number: US20160018995A1
Assignee:

A controller for maintaining data consistency without utilizing region locks is disclosed. The controller is connected to multiple physical disk drives, and the physical disk drives include a data portion and a parity data portion that corresponds to the data portion. The controller can receive a first input/output (I/O) command from a first computing device for writing write data to the data portion and a second I/O command from a second computing device for accessing data from the data portion. The controller allocates a first buffer for storing data associated with the first I/O command and allocates a second buffer for storing data associated with a logical operation. The controller initiates a logical operation that comprises an exclusive OR operation directed to the write data and the read data to obtain resultant exclusive OR data and copies the write data to the data portion.
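The exclusive OR operation at the heart of this scheme is standard RAID parity arithmetic, which the sketch below illustrates: parity is the byte-wise XOR of the data blocks, a lost block is recovered by XORing parity with the surviving blocks, and a partial-stripe write updates parity as old_parity XOR old_data XOR new_data. Only the XOR math is shown; the controller's buffer management is not modeled.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks (RAID-style parity arithmetic)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)
```

Because XOR is its own inverse, the same routine serves for computing parity, rebuilding a failed drive, and the read-modify-write parity update.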

Details
15-01-2015 publication date

Writing adjacent tracks to a stride, based on a comparison of a destaging of tracks to a defragmentation of the stride

Number: US20150019810A1
Author: Lokesh M. Gupta
Assignee: International Business Machines Corp

Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.
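One way to read the stride selection above is as a cost comparison per candidate stride: for each stride, take the cheaper of destaging its tracks versus defragmenting its compressed data, and write the tracks to the stride with the lowest such cost. The patent does not give the exact cost model, so `select_stride` and the cost callbacks below are hypothetical.

```python
def select_stride(strides, destage_cost, defrag_cost):
    """Pick the stride whose cheaper maintenance path (destage vs. defragment)
    needs the fewest operations. destage_cost/defrag_cost map a stride to an
    operation count; both are stand-in cost functions."""
    best, best_cost = None, None
    for stride in strides:
        cost = min(destage_cost(stride), defrag_cost(stride))
        if best_cost is None or cost < best_cost:
            best, best_cost = stride, cost
    return best
```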

Details