Total found: 21574. Displayed: 100.
10-11-2016 publication date

Data storage system with hashing module

Number: RU0000165821U1

The utility model relates to computer engineering, in particular to data storage systems. It can be applied to reduce the time needed to look up the addresses of data blocks held in the disk cache of a data storage system. To this end, the data storage system with a hashing module contains disk devices, a disk cache, a control processor, a system bus, host interfaces, disk-device interfaces, system memory storing the control tables as hash tables with collision chains, and a hashing module that looks up the address of a data block held in the disk cache in one of the collision chains of the hash tables. The technical result provided by this combination of features is faster lookup of the addresses of data blocks held in the disk cache, implemented by the hashing module. Lookup of block addresses in a hash-table collision chain is faster than in lists and tables because a collision chain holds fewer block addresses than a table or a doubly linked circular list. The utility model can be implemented on the basis of the prototype data storage system; that system must have a hashing module, and the control tables stored in its system memory must be organized as hash tables with collision chains.

RUSSIAN FEDERATION, Federal Service for Intellectual Property. (19) RU (11) 165 821 (13) U1. (51) IPC G06F 12/0893 (2016.01), G06F 12/0864 (2016.01). (12) Title page of the utility model description. (21)(22) Application: 2016122968/08, 09.06.2016. (24) Patent term start date: 09.06.2016. (72) Inventor(s): Sibiryakov Maksim Andreevich (RU). (73) Proprietor(s): Sibiryakov Maksim Andreevich (RU). (45) Published: 10.11.2016, Bulletin No. 31.
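The collision-chain lookup described above can be sketched in Python; the bucket count, the key/value layout, and the class shape are illustrative assumptions rather than details taken from the patent.

```python
class BlockAddressCache:
    """Maps logical block numbers to disk-cache addresses using a hash
    table with collision chains (separate chaining)."""

    def __init__(self, n_buckets=1024):
        # Each bucket holds a short collision chain of (block, cache_addr) pairs.
        self.n_buckets = n_buckets
        self.buckets = [[] for _ in range(n_buckets)]

    def insert(self, block, cache_addr):
        chain = self.buckets[hash(block) % self.n_buckets]
        for i, (b, _) in enumerate(chain):
            if b == block:
                chain[i] = (block, cache_addr)   # update existing entry
                return
        chain.append((block, cache_addr))

    def lookup(self, block):
        # Only the one chain selected by the hash is scanned, never the whole table.
        for b, addr in self.buckets[hash(block) % self.n_buckets]:
            if b == block:
                return addr
        return None
```

Because a lookup scans only the chain in one bucket, its cost scales with the chain length rather than with the total number of cached blocks, which is the speed-up the claims rely on.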

09-02-2012 publication date

Semiconductor storage device with volatile and nonvolatile memories

Number: US20120033496A1
Assignee: Individual

A semiconductor storage device includes a first memory area configured in a volatile semiconductor memory, second and third memory areas configured in a nonvolatile semiconductor memory, and a controller which executes following processing. The controller executes a first processing for storing a plurality of data by the first unit in the first memory area, a second processing for storing data outputted from the first memory area by a first management unit in the second memory area, and a third processing for storing data outputted from the first memory area by a second management unit in the third memory area.

16-02-2012 publication date

Scatter-Gather Intelligent Memory Architecture For Unstructured Streaming Data On Multiprocessor Systems

Number: US20120042121A1
Assignee: Individual

A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.

16-02-2012 publication date

Intelligent cache management

Number: US20120042123A1
Author: Curt Kolovson
Assignee: Curt Kolovson

An exemplary storage network, storage controller, and methods of operation are disclosed. In one embodiment, a method of managing cache memory in a storage controller comprises receiving, at the storage controller, a cache hint generated by an application executing on a remote processor, wherein the cache hint identifies a memory block managed by the storage controller, and managing a cache memory operation for data associated with the memory block in response to the cache hint received by the storage controller.

23-02-2012 publication date

Computer system, control apparatus, storage system and computer device

Number: US20120047502A1
Author: Akiyoshi Hashimoto
Assignee: HITACHI LTD

The computer system includes a server being configured to manage a first virtual machine to which a first part of a server resource included in the server is allocated and a second virtual machine to which a second part of the server resource is allocated. The computer system also includes a storage apparatus including a storage controller and a plurality of storage devices and being configured to manage a first virtual storage apparatus to which a first storage area on the plurality of storage devices is allocated and a second virtual storage apparatus to which a second storage area on the plurality of storage devices is allocated. The first virtual machine can access the first virtual storage apparatus but not the second virtual storage apparatus, and the second virtual machine can access the second virtual storage apparatus but not the first virtual storage apparatus.

01-03-2012 publication date

Method and apparatus for fuzzy stride prefetch

Number: US20120054449A1
Author: Shiliang Hu, Youfeng Wu
Assignee: Intel Corp

In one embodiment, the present invention includes a prefetching engine to detect when data access strides in a memory fall into a range, to compute a predicted next stride, to selectively prefetch a cache line using the predicted next stride, and to dynamically control prefetching. Other embodiments are also described and claimed.
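As a rough illustration of the "fuzzy" idea — strides that merely fall into a range around the last stride still count as a match — here is a sketch; the tolerance, the confidence threshold, and all names are invented for the example.

```python
class FuzzyStridePrefetcher:
    """Detects access strides that fall into a range, computes a predicted
    next stride, and selectively issues a prefetch address."""

    def __init__(self, tolerance=8, min_confidence=2):
        self.tolerance = tolerance          # strides within +/- tolerance match
        self.min_confidence = min_confidence
        self.last_addr = None
        self.last_stride = None
        self.confidence = 0

    def access(self, addr):
        """Record an access; return a predicted prefetch address or None."""
        prediction = None
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if (self.last_stride is not None
                    and abs(stride - self.last_stride) <= self.tolerance):
                self.confidence += 1        # stride fell into the expected range
            else:
                self.confidence = 0         # pattern broken: dynamic throttle
            self.last_stride = stride
            if self.confidence >= self.min_confidence:
                prediction = addr + stride  # prefetch using predicted next stride
        self.last_addr = addr
        return prediction
```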

29-03-2012 publication date

Method and apparatus for reducing processor cache pollution caused by aggressive prefetching

Number: US20120079205A1
Author: Patrick Conway
Assignee: Advanced Micro Devices Inc

A method and apparatus for controlling a first and second cache is provided. A cache entry is received in the first cache, and the entry is identified as having an untouched status. Thereafter, the status of the cache entry is updated to accessed in response to receiving a request for at least a portion of the cache entry, and the cache entry is subsequently cast out according to a preselected cache line replacement algorithm. The cast out cache entry is stored in the second cache according to the status of the cast out cache entry.
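A minimal sketch of the described policy: lines enter the first cache marked untouched, a request promotes them to accessed, and only accessed victims are stored in the second cache. FIFO replacement stands in for the unspecified "preselected cache line replacement algorithm", and the sizes are arbitrary.

```python
from collections import OrderedDict

class TwoLevelFilterCache:
    def __init__(self, l1_size=4):
        self.l1 = OrderedDict()   # line -> "untouched" or "accessed"
        self.l2 = set()           # second cache: only accessed victims land here
        self.l1_size = l1_size

    def fill(self, line):
        """Receive a cache entry in the first cache with untouched status."""
        if line in self.l1:
            return
        if len(self.l1) >= self.l1_size:
            victim, status = self.l1.popitem(last=False)   # FIFO cast-out
            if status == "accessed":
                self.l2.add(victim)   # untouched (prefetch-only) lines never pollute L2
        self.l1[line] = "untouched"

    def request(self, line):
        """A demand request updates the entry's status to accessed."""
        if line in self.l1:
            self.l1[line] = "accessed"
            return True
        return False
```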

05-04-2012 publication date

Disk control apparatus, disk control method, and storage medium storing disk control program

Number: US20120084503A1
Author: Yuichi Hagiwara
Assignee: Canon Inc

A disk control apparatus that is capable of performing a reliable mirroring control for both of an SSD and an HDD. The disk control apparatus that performs a mirroring control to the SSD and the HDD. An acquisition unit acquires the data rewriting number in the SSD. A derivation unit derives a data retention period of the SSD from the data rewriting number acquired. A comparison unit compares a predetermined threshold value with an increment between the data rewriting number acquired at a predetermined timing and the data rewriting number acquired after a retention period, which is derived from the data rewriting number at the predetermined timing, elapses. A setting unit sets so as to read data from the SSD by default and to read data from the HDD when the comparison unit determines that the increment is less than the threshold value.

05-04-2012 publication date

Circuit and method for determining memory access, cache controller, and electronic device

Number: US20120084513A1
Author: Kazuhiko Okada
Assignee: Fujitsu Semiconductor Ltd

A memory access determination circuit includes a counter that switches between a first reference value and a second reference value in accordance with a control signal to generate a count value based on the first reference value or the second reference value. A controller performs a cache determination based on an address that corresponds to the count value and outputs the control signal in accordance with the cache determination. A changing unit changes the second reference value in accordance with the cache determination.

12-04-2012 publication date

Method for managing and tuning data movement between caches in a multi-level storage controller cache

Number: US20120089782A1
Assignee: LSI Corp

A method for managing data movement in a multi-level cache system having a primary cache and a secondary cache. The method includes determining whether an unallocated space of the primary cache has reached a minimum threshold; selecting at least one outgoing data block from the primary cache when the primary cache reached the minimum threshold; initiating a de-stage process for de-staging the outgoing data block from the primary cache; and terminating the de-stage process when the unallocated space of the primary cache has reached an upper threshold. The de-stage process further includes determining whether a cache hit has occurred in the secondary cache before; storing the outgoing data block in the secondary cache when the cache hit has occurred in the secondary cache before; generating and storing metadata regarding the outgoing data block; and deleting the outgoing data block from the primary cache.
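The threshold-driven de-stage loop might be condensed as follows; the percentage thresholds, the data layout, and the hit-history test are simplifying assumptions made for the sketch.

```python
def destage(primary, secondary, hit_history, capacity, low_pct=10, high_pct=30):
    """De-stage blocks from `primary` (list of block ids, oldest first) until
    free space reaches `high_pct` percent of `capacity`. Only blocks that have
    previously produced a hit in the secondary cache are stored there, and
    metadata is recorded for each block actually kept."""
    metadata = {}

    def free_pct():
        return 100 * (capacity - len(primary)) / capacity

    if free_pct() > low_pct:        # minimum threshold not reached: no de-stage
        return metadata
    while primary and free_pct() < high_pct:
        block = primary.pop(0)                    # select an outgoing block
        if block in hit_history:                  # a secondary-cache hit occurred before
            secondary.append(block)
            metadata[block] = {"destaged": True}  # metadata about the outgoing block
    return metadata
```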

19-04-2012 publication date

Cache memory device, cache memory control method, program and integrated circuit

Number: US20120096213A1
Author: Kazuomi Kato
Assignee: Panasonic Corp

A cache memory device is provided that performs a line size determination process for determining a refill size in advance of the refill process performed at cache-miss time. In the line size determination process, the numbers of reads and writes of the management-target lines belonging to a set are acquired (S51); when the read counts completely match one another and the write counts completely match one another (S52: Yes), the refill size is determined to be large (S54). Otherwise (S52: No), the refill size is determined to be small (S55).
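Under a straightforward reading of steps S51–S55, the decision reduces to a uniformity test over the per-line read and write counters of a set; the function below is such a sketch, with invented names.

```python
def choose_refill_size(lines):
    """lines: list of (reads, writes) counters for the management-target
    lines of one cache set. Returns "large" when all read counts match and
    all write counts match (S52: Yes -> S54), otherwise "small" (S55)."""
    reads = [r for r, _ in lines]
    writes = [w for _, w in lines]
    uniform = len(set(reads)) == 1 and len(set(writes)) == 1
    return "large" if uniform else "small"
```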

19-04-2012 publication date

System and Method for the Synchronization of a File in a Cache

Number: US20120096228A1
Author: David Thomas, Scott Wells
Assignee: Individual

The present invention provides a system and method for bi-directional synchronization of a cache. One embodiment of the system of this invention includes a software program stored on a computer readable medium. The software program can be executed by a computer processor to receive a database asset from a database; store the database asset as a cached file in a cache; determine if the cached file has been modified; and if the cached file has been modified, communicate the cached file directly to the database. The software program can poll a cached file to determine if the cached file has changed. Thus, bi-directional synchronization can occur.

26-04-2012 publication date

Multiplexing Users and Enabling Virtualization on a Hybrid System

Number: US20120102138A1
Assignee: International Business Machines Corp

A method, hybrid server system, and computer program product, support multiple users in an out-of-core processing environment. At least one accelerator system in a plurality of accelerator systems is partitioned into a plurality of virtualized accelerator systems. A private client cache is configured on each virtualized accelerator system in the plurality of virtualized accelerator systems. The private client cache of each virtualized accelerator system stores data that is one of accessible by only the private client cache and accessible by other private client caches associated with a common data set. Each user in a plurality of users is assigned to a virtualized accelerator system from the plurality of virtualized accelerator systems.

10-05-2012 publication date

Hybrid Server with Heterogeneous Memory

Number: US20120117312A1
Assignee: International Business Machines Corp

A method, hybrid server system, and computer program product, for managing access to data stored on the hybrid server system. A memory system residing at a server is partitioned into a first set of memory managed by the server and a second set of memory managed by a set of accelerator systems. The set of accelerator systems are communicatively coupled to the server. The memory system comprises heterogeneous memory types. A data set stored within at least one of the first set of memory and the second set of memory that is associated with at least one accelerator system in the set of accelerator systems is identified. The data set is transformed from a first format to a second format, wherein the second format is a format required by the at least one accelerator system.

24-05-2012 publication date

Signal processing system, integrated circuit comprising buffer control logic and method therefor

Number: US20120131241A1
Assignee: FREESCALE SEMICONDUCTOR INC

A signal processing system comprising buffer control logic arranged to allocate a plurality of buffers for the storage of information fetched from at least one memory element. Upon receipt of fetched information to be buffered, the buffer control logic is arranged to categorise the information to be buffered according to at least one of: a first category associated with sequential flow and a second category associated with change of flow, and to prioritise respective buffers from the plurality of buffers storing information relating to the first category associated with sequential flow ahead of buffers storing information relating to the second category associated with change of flow when allocating a buffer for the storage of the fetched information to be buffered.

24-05-2012 publication date

Correlation-based instruction prefetching

Number: US20120131311A1
Author: Yuan C. Chou
Assignee: Oracle International Corp

The disclosed embodiments provide a system that facilitates prefetching an instruction cache line in a processor. During execution of the processor, the system performs a current instruction cache access which is directed to a current cache line. If the current instruction cache access causes a cache miss or is a first demand fetch for a previously prefetched cache line, the system determines whether the current instruction cache access is discontinuous with a preceding instruction cache access. If so, the system completes the current instruction cache access by performing a cache access to service the cache miss or the first demand fetch, and also prefetching a predicted cache line associated with a discontinuous instruction cache access which is predicted to follow the current instruction cache access.

31-05-2012 publication date

Method and apparatus for selectively performing explicit and implicit data line reads

Number: US20120136857A1
Author: Greggory D. Donley
Assignee: Advanced Micro Devices Inc

A method and apparatus are described for selectively performing explicit and implicit data line reads. When a data line request is received, a determination is made as to whether there are currently sufficient data resources to perform an implicit data line read. If there are not currently sufficient data resources to perform an implicit data line read, a time period (number of clock cycles) before sufficient data resources will become available to perform an implicit data line read is estimated. A determination is then made as to whether the estimated time period exceeds a threshold. An explicit tag request is generated if the estimated time period exceeds the threshold. If the estimated time period does not exceed the threshold, the generation of a tag request is delayed until sufficient data resources become available. An implicit tag request is then generated.
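The selection logic reduces to a small decision function; the return labels and the callable used for the cycle estimate are assumptions made for this sketch.

```python
def handle_data_line_request(resources_free, estimate_wait_cycles, threshold):
    """Decide how to issue the tag request for a data line read.

    resources_free: whether data resources for an implicit read exist now.
    estimate_wait_cycles: callable returning the estimated number of clock
    cycles until sufficient data resources become available (consulted only
    when resources are not currently free)."""
    if resources_free:
        return "implicit"                 # read the data along with the tag lookup
    wait = estimate_wait_cycles()
    if wait > threshold:
        return "explicit"                 # issue an explicit tag-only request now
    return "delay-then-implicit"          # wait for resources, then go implicit
```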

07-06-2012 publication date

Method and apparatus of route guidance

Number: US20120143504A1
Assignee: Google LLC

Systems and methods of route guidance on a user device are provided. In one aspect, a system and method transmit partitions of map data to a client device. Each map partition may contain road geometries, road names, road network topology, or any other information needed to provide turn-by-turn navigation or driving directions within the partition. Each map partition may be encoded with enough data to allow the partitions to be stitched together to form a larger map. Map partitions may be fetched along each route to be used in the event of a network outage or other loss of network connectivity. For example, if a user deviates from the original route and a network outage occurs, the map data may be assembled and a routing algorithm may be applied to the map data in order to direct the user back to the original route.

07-06-2012 publication date

Recommendation based caching of content items

Number: US20120144117A1
Assignee: Microsoft Corp

Content item recommendations are generated for users based on metadata associated with the content items and a history of content item usage associated with the users. Each content item recommendation identifies a user and a content item and includes a score that indicates how likely the user is to view the content item. Based on the content item recommendations, and constraints of one or more caches, the content items are selected for storage in one or more caches. The constraints may include users that are associated with each cache, the geographical location of each cache, the size of each cache, and/or costs associated with each cache such as bandwidth costs. The content items stored in a cache are recommended to users associated with the cache.

07-06-2012 publication date

Read-ahead processing in networked client-server architecture

Number: US20120144123A1
Assignee: International Business Machines Corp

Various embodiments for read-ahead processing in a networked client-server architecture by a processor device are provided. Read messages are grouped by a plurality of unique sequence identifications (IDs), where each of the sequence IDs corresponds to a specific read sequence, consisting of all read and read-ahead requests related to a specific storage segment that is being read sequentially by a thread of execution in a client application. The storage system uses the sequence id value in order to identify and filter read-ahead messages that are obsolete when received by the storage system, as the client application has already moved to read a different storage segment. Basically, a message is discarded when its sequence id value is less recent than the most recent value already seen by the storage system. The sequence IDs are used by the storage system to determine corresponding read-ahead data to be loaded into a read-ahead cache.
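The obsolescence filter amounts to comparing each message's sequence ID with the most recent one seen for that reader; this sketch assumes IDs increase monotonically per client, and all names are hypothetical.

```python
class ReadAheadFilter:
    """Discards read-ahead messages whose sequence ID is less recent than
    the most recent value already seen for that client (the client thread
    has moved on to read a different storage segment)."""

    def __init__(self):
        self.latest_seen = {}   # client id -> most recent sequence id

    def accept(self, client_id, seq_id):
        latest = self.latest_seen.get(client_id)
        if latest is not None and seq_id < latest:
            return False                       # obsolete read-ahead: discard
        self.latest_seen[client_id] = seq_id   # track the most recent sequence
        return True
```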

14-06-2012 publication date

Systems and methods for background destaging storage tracks

Number: US20120151148A1
Assignee: International Business Machines Corp

Systems and methods for background destaging storage tracks from cache when one or more hosts are idle are provided. One system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. Also provided are physical computer storage mediums including a computer program product for performing the above method.

14-06-2012 publication date

Cache Line Fetching and Fetch Ahead Control Using Post Modification Information

Number: US20120151150A1
Assignee: LSI Corp

A method is provided for performing cache line fetching and/or cache fetch ahead in a processing system including at least one processor core and at least one data cache operatively coupled with the processor. The method includes the steps of: retrieving post modification information from the processor core and a memory address corresponding thereto; and the processing system performing, as a function of the post modification information and the memory address retrieved from the processor core, cache line fetching and/or cache fetch ahead control in the processing system.

14-06-2012 publication date

Systems and methods for managing cache destage scan times

Number: US20120151151A1
Assignee: International Business Machines Corp

Systems and methods for managing destage scan times in a cache are provided. One system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. Physical computer storage mediums including a computer program product for performing the above method are also provided.

14-06-2012 publication date

Virtual storage system and control method thereof

Number: US20120151160A1
Assignee: Individual

A virtual storage system is equipped with a plurality of storage systems and a virtualization device for virtualizing the plurality of storage systems logically into a single storage resource provided to a host computer. When one of the storage systems receives a command from the host computer, in the event that the storage system itself is not in possession of a function corresponding to the command, the storage system retrieves a storage system in possession of a function corresponding to the command and transfers this command to the storage system in possession of the function corresponding to the command.

14-06-2012 publication date

System and method for maintaining a data redundancy scheme in a solid state memory in the event of a power loss

Number: US20120151253A1
Author: Robert L. Horn
Assignee: Western Digital Technologies Inc

Embodiments of the invention are directed to systems and methods for reducing an amount of backup power needed to provide power fail safe preservation of a data redundancy scheme such as RAID that is implemented in solid state storage devices where new write data is accumulated and written along with parity data. Because new write data cannot be guaranteed to arrive in integer multiples of stripe size, a full stripe's worth of new write data may not exist when power is lost. Various embodiments use truncated RAID stripes (fewer storage elements per stripe) to save cached write data when a power failure occurs. This approach allows the system to maintain RAID parity data protection in a power fail cache flush case even though a full stripe of write data may not exist, thereby reducing the amount of backup power needed to maintain parity protection in the event of power loss.

21-06-2012 publication date

System and method for handling IO to drives in a RAID system

Number: US20120159067A1
Assignee: LSI Corp

A system and method for handling IO to drives in a RAID system is described. In one embodiment, the method includes providing a multiple-disk system with a predefined strip size. An IO request with a logical block address is received for execution on the multiple-disk system. A plurality of sub-IO requests with a sub-strip size is generated, where the sub-strip size is smaller than the strip size. The generated sub-IO requests are executed on the multiple-disk system. In one embodiment, a cache line size substantially equal to the sub-strip size is assigned to process the IO request.
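Splitting one IO into sub-strip-sized sub-IOs could look like this; addresses are counted in blocks, and the boundary-alignment rule is an assumption of the sketch.

```python
def split_io(lba, length, sub_strip):
    """Split an IO request starting at logical block address `lba` and
    spanning `length` blocks into sub-IO requests of at most `sub_strip`
    blocks each, aligned to sub-strip boundaries."""
    sub_ios = []
    end = lba + length
    while lba < end:
        boundary = (lba // sub_strip + 1) * sub_strip   # next sub-strip boundary
        chunk_end = min(boundary, end)
        sub_ios.append((lba, chunk_end - lba))          # (start lba, block count)
        lba = chunk_end
    return sub_ios
```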

21-06-2012 publication date

Computer system management apparatus and management method

Number: US20120159112A1
Assignee: HITACHI LTD

The present invention makes it possible for different types of application programs to efficiently use a virtual volume created on the basis of a tiered pool. A configuration management part P30 determines, based on access information, to which of the storage tiers 211 the actual areas 212 allocated to virtual volumes 220 should be allocated. The configuration management part comprises a determination part P3020 for determining the type of application program that uses an actual area, and reallocation destination instruction parts P3021 and P3022 for determining the reallocation destinations of the actual areas in accordance with the determination result and instructing the storage apparatus as to these determinations.

28-06-2012 publication date

Data management in solid-state storage devices and tiered storage systems

Number: US20120166749A1
Assignee: International Business Machines Corp

A method for managing data in a data storage system having a solid-state storage device and alternative storage includes identifying data to be moved in the solid-state storage device for internal management of the solid-state storage; moving at least some of the identified data to the alternative storage instead of the solid-state storage; and maintaining metadata indicating the location of data in the solid-state storage device and the alternative storage.

05-07-2012 publication date

Apparatus and method for determining a cache line in an N-way set associative cache

Number: US20120173844A1
Assignee: LSI Corp

A method and apparatus for determining a cache line in an N-way set associative cache are disclosed. In one example embodiment, a key associated with a cache line is obtained. A main hash is generated using a main hash function on the key. An auxiliary hash is generated using an auxiliary hash function on the key. A bucket in a main hash table residing in an external memory is determined using the main hash. An entry in a bucket in an auxiliary hash table residing in an internal memory is determined using the determined bucket and the auxiliary hash. The cache line in the main hash table is determined using the determined entry in the auxiliary hash table.
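The two-hash arrangement — an auxiliary table in fast internal memory selecting the way before a single probe of the main table in external memory — might be sketched as below; the table sizes and both hash functions are invented for illustration.

```python
def main_hash(key, n_buckets=16):
    return key % n_buckets                      # main hash over the key

def aux_hash(key):
    return (key * 2654435761) & 0xFF            # auxiliary hash over the key

def make_tables(entries, n_buckets=16, n_ways=4):
    """Build the main hash table (external memory) and the auxiliary table
    of (bucket, aux-tag) -> way (internal memory) from (key, line) pairs."""
    main = [[None] * n_ways for _ in range(n_buckets)]
    aux = {}
    for key, line in entries:
        bucket = main_hash(key, n_buckets)
        for way in range(n_ways):
            if main[bucket][way] is None:
                main[bucket][way] = (key, line)
                aux[(bucket, aux_hash(key))] = way   # remember the entry's way
                break
    return main, aux

def lookup(main, aux, key, n_buckets=16):
    bucket = main_hash(key, n_buckets)
    way = aux.get((bucket, aux_hash(key)))      # internal-memory probe first
    if way is None:
        return None                             # miss decided without external access
    slot = main[bucket][way]                    # single external-memory probe
    return slot[1] if slot and slot[0] == key else None
```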

19-07-2012 publication date

Method and system for cache endurance management

Number: US20120185638A1
Assignee: Sandisk IL Ltd

A system and method for cache endurance management is disclosed. The method may include the steps of querying a storage device with a host to acquire information relevant to a predicted remaining lifetime of the storage device, determining a download policy modification for the host in view of the predicted remaining lifetime of the storage device and updating the download policy database of a download manager in accordance with the determined download policy modification.

09-08-2012 publication date

Selecting a virtual tape server in a storage system to provide data copy while minimizing system job load

Number: US20120203964A1
Assignee: International Business Machines Corp

In a storage system including plural source storage devices, a target storage device selects the source storage device from which to accept a data copy so as to minimize the load on the entire system. The system calculates first and second load values for the job loads being processed. System load values are derived from the job load value of the specific data and the respective load values of the first and second source storage devices. The system compares the system load values and selects the storage device that will provide the data copy so as to minimize the load on the entire system.

16-08-2012 publication date

Managing read requests from multiple requestors

Number: US20120210022A1
Author: Alexander B. Beaman
Assignee: Apple Computer Inc

Techniques are disclosed for managing data requests from multiple requestors. According to one implementation, when a new data request is received, a determination is made as to whether a companion relationship should be established between the new data request and an existing data request. Such a companion relationship may be appropriate under certain conditions. If a companion relationship is established between the new data request and an existing data request, then when data is returned for one request, it is used to satisfy the other request as well. This helps to reduce the number of data accesses that need to be made to a data storage, which in turn enables system efficiency to be improved.
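Companioning a new request to an in-flight one for the same block lets a single storage access satisfy both; in this sketch the "certain conditions" are reduced to same-block identity, and all names are hypothetical.

```python
class RequestManager:
    def __init__(self, storage):
        self.storage = storage   # block id -> data
        self.pending = {}        # block -> requestors waiting (first + companions)
        self.accesses = 0        # storage accesses actually issued

    def request(self, requestor, block):
        """Register a data request; later arrivals for the same block become
        companions of the in-flight request instead of new accesses."""
        if block in self.pending:
            self.pending[block].append(requestor)   # companion: piggyback
        else:
            self.pending[block] = [requestor]       # first requestor triggers fetch
        return len(self.pending[block])

    def complete(self, block):
        """Data returned for `block`: one access satisfies all waiters."""
        self.accesses += 1
        waiters = self.pending.pop(block, [])
        return [(r, self.storage[block]) for r in waiters]
```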

16-08-2012 publication date

Cascaded RAID controller

Number: US20120210059A1
Assignee: Ithaca Tech LLC

A cascaded RAID controller includes a master RAID 1 controller having a control level and M slave RAID 1 controllers, where M is an integer greater than or equal to 1. Each of the M+1 RAID 1 controllers is configured respectively to have three ports, including a primary port configured to communicate bi-directionally with computer hardware having a higher control level than that of the respective one of the M+1 RAID 1 controllers and including two secondary ports. The cascaded RAID controller is configured to provide connections to a total of M+2 memory devices, and configured to record the same information on each of the M+2 memory devices to create M+2 identical copies of the information. A process for initializing a cascaded RAID controller is also described.

16-08-2012 publication date

Shared cache for a tightly-coupled multiprocessor

Number: US20120210069A1
Assignee: Plurality Ltd

Computing apparatus (11) includes a plurality of processor cores (12) and a cache (10), which is shared by and accessible simultaneously to the plurality of the processor cores. The cache includes a shared memory (16), including multiple block frames of data imported from a level-two (L2) memory (14) in response to requests by the processor cores, and a shared tag table (18), which is separate from the shared memory and includes table entries that correspond to the block frames and contain respective information regarding the data contained in the block frames.

16-08-2012 publication date

Storage system and method for controlling the same

Number: US20120210329A1
Assignee: Individual

Optimum load distribution processing is selected and executed based on settings made by a user, in consideration of load changes caused by load distribution across a plurality of asymmetric cores, by using a controller that has a plurality of cores and is configured to: extract, for each LU, a pattern showing the relationship between the core holding the LU ownership and a candidate core as the LU ownership change destination, based on LU ownership management information; measure, for each LU, the usage of a plurality of resources; predict, for each LU based on the measurement results, a change in the usage of the plurality of resources and the overhead to be generated by the transfer processing itself; select, based on the respective prediction results, a pattern that matches the user's setting information; and transfer the LU ownership to the core belonging to the selected pattern.

23-08-2012 publication date

Secure management of keys in a key repository

Number: US20120213369A1
Assignee: International Business Machines Corp

A method for managing keys in a computer memory including receiving a request to store a first key to a first key repository, storing the first key to a second key repository in response to the request, and storing the first key from the second key repository to the first key repository within said computer memory based on a predetermined periodicity.
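
A rough sketch of the two-stage storage idea (class and method names are illustrative, not the patent's): store requests land in a second, staging repository, and a periodic task flushes staged keys into the first repository.

```python
class KeyManager:
    """Sketch: stage keys in a second repository, flush to the first periodically."""

    def __init__(self, period):
        self.period = period   # flush every `period` ticks
        self.staging = {}      # second key repository (staging)
        self.primary = {}      # first key repository
        self.ticks = 0

    def store(self, key_id, key):
        # A request to store to the first repository lands in staging first.
        self.staging[key_id] = key

    def tick(self):
        # Periodic task: move staged keys into the primary repository.
        self.ticks += 1
        if self.ticks % self.period == 0:
            self.primary.update(self.staging)
            self.staging.clear()
```

With `period=2`, a stored key becomes visible in the primary repository only after the second tick, illustrating the predetermined periodicity.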

23-08-2012 publication date

Recycling of cache content

Number: US20120215981A1
Assignee: International Business Machines Corp

A method of operating a storage system comprises detecting a cut in an external power supply, switching to a local power supply, preventing receipt of input/output commands, copying content of cache memory to a local storage device and marking the content of the cache memory that has been copied to the local storage device. When a resumption of the external power supply is detected, the method continues by charging the local power supply, copying the content of the local storage device to the cache memory, processing the content of the cache memory with respect to at least one storage volume and receiving input/output commands. When detecting a second cut in the external power supply, the system switches to the local power supply, prevents receipt of input/output commands, and copies to the local storage device only the content of the cache memory that is not marked as present.
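
A minimal sketch of the marking mechanism (names and structure are mine, not the patent's): after a first power cut, pages copied to local storage are marked, so a second cut copies only pages that changed since.

```python
class CacheRecycler:
    """Sketch of copy-and-mark cache preservation across power cuts."""

    def __init__(self, cache):
        self.cache = dict(cache)   # page id -> content
        self.local = {}            # local (battery-backed) storage device
        self.marked = set()        # pages already present in local storage

    def on_power_cut(self):
        # Copy only pages not marked as already present in local storage.
        copied = 0
        for page, data in self.cache.items():
            if page not in self.marked:
                self.local[page] = data
                self.marked.add(page)
                copied += 1
        return copied

    def on_power_resume(self):
        # Restore cache content from local storage; marks remain valid
        # until a page is modified again.
        self.cache.update(self.local)

    def write(self, page, data):
        # A modified page no longer matches its local copy, so unmark it.
        self.cache[page] = data
        self.marked.discard(page)
```

A first cut copies every page; after resumption and one write, a second cut copies only the single modified page.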

30-08-2012 publication date

Opportunistic block transmission with time constraints

Number: US20120221792A1
Assignee: Endeavors Technology Inc

A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.

30-08-2012 publication date

Storage apparatus and data processing method of the same

Number: US20120221809A1
Assignee: HITACHI LTD

A storage apparatus comprises a memory control unit, which transmits and receives data to and from the respective interface control units in accordance with access requests and also controls access to the memory, and a buffer which temporarily stores data smaller than 64 B. During access to the memory, if the data to be processed is 64 B, the memory control unit accesses the memory using that data; if the data is smaller than 64 B, the unit stores it in the buffer. Subsequently, if the address of new processing data is sequential to the address of the data stored in the buffer, the unit combines the new data with the buffered data and, on condition that the combined data amounts to 64 B, writes the combined data to the memory.
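
The combining rule can be sketched as follows (a toy model with invented names, not the patent's implementation): full 64 B writes go straight to memory, while smaller writes are buffered and merged with address-sequential data until a full line accumulates.

```python
LINE = 64  # memory access unit in bytes

class WriteCombiner:
    """Sketch: buffer sub-64 B writes and merge address-sequential data."""

    def __init__(self):
        self.buf_addr = None
        self.buf = b""
        self.memory = {}   # address -> 64 B line

    def write(self, addr, data):
        if len(data) == LINE:
            self.memory[addr] = data   # full line: write through
            return "written"
        if self.buf and addr == self.buf_addr + len(self.buf):
            self.buf += data           # sequential: combine with buffered data
        else:
            self.buf_addr, self.buf = addr, data
        if len(self.buf) == LINE:      # combined data reached a full line
            self.memory[self.buf_addr] = self.buf
            self.buf = b""
            return "written"
        return "buffered"
```

Two sequential 32 B writes thus produce a single 64 B memory access instead of two partial ones.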

06-09-2012 publication date

Systems and methods thereto for acceleration of web pages access using next page optimization, caching and pre-fetching techniques

Number: US20120226766A1
Assignee: Limelight Networks Inc

A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests.

06-09-2012 publication date

File server apparatus, management method of storage system, and program

Number: US20120226869A1
Assignee: Hitachi Solutions Ltd

When a storage capacity of a file server is expanded using an online storage service, elimination of an upper-limit constraint of the file size as a constraint of the online storage service and reduction in the communication cost are realized. A kernel module including logical volumes on the online storage service divides a file into block files at a fixed length and stores and manages the block files to prevent the upper-limit constraint of the file size. When a READ/WRITE request is generated for a mounted file system, only necessary block files are downloaded and used from the online storage service based on an offset value and size information to optimize the communication and realize the communication cost reduction.
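
The offset-and-size lookup can be sketched in a few lines (block size and function name are illustrative): a READ/WRITE request only touches the block files its byte range overlaps, so only those need to be downloaded.

```python
BLOCK_SIZE = 4  # fixed block-file length; tiny here for illustration

def blocks_needed(offset, size):
    """Which fixed-length block files a READ/WRITE of (offset, size) touches."""
    first = offset // BLOCK_SIZE
    last = (offset + size - 1) // BLOCK_SIZE
    return list(range(first, last + 1))
```

A 4-byte read at offset 3 spans two block files, while a 1-byte read at offset 8 needs exactly one.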

06-09-2012 publication date

Method, apparatus, and system for speculative execution event counter checkpointing and restoring

Number: US20120227045A1
Assignee: Intel Corp

An apparatus, method, and system are described herein for providing programmable control of performance/event counters. An event counter is programmable to track different events, as well as to be checkpointed when speculative code regions are encountered. So when a speculative code region is aborted, the event counter is able to be restored to its pre-speculation value. Moreover, the difference between the cumulative event count of committed and uncommitted execution and the count for committed execution alone represents the event contribution of uncommitted execution. From information on the uncommitted execution, hardware/software may be tuned to enhance future execution and avoid wasted execution cycles.

20-09-2012 publication date

Hybrid system architecture for random access memory

Number: US20120239856A1
Author: Byungcheol Cho
Assignee: Taejin Infotech Co Ltd

Embodiments of the present invention provide a hybrid system architecture for random access memory (RAM) such as Phase-Change RAM (PRAM), Magnetoresistive RAM (MRAM), and/or Ferroelectric RAM (FRAM). Specifically, embodiments of this invention provide a hybrid RAID controller coupled to a system control board. Coupled to the hybrid RAID controller are a DDR RAID controller, a RAM RAID controller, and an HDD/Flash RAID controller. A DDR RAID control block is coupled to the DDR RAID controller and includes (among other things) a set of DDR memory disks. Further, a RAM control block is coupled to the RAM RAID controller and includes a set of RAM SSDs. Still yet, an HDD RAID control block is coupled to the HDD/Flash RAID controller and includes a set of HDD/Flash SSD units.

20-09-2012 publication date

Locating host data records on a physical stacked volume

Number: US20120239877A1
Author: Jonathan W. Peake
Assignee: International Business Machines Corp

According to one embodiment, a method for accessing host data records stored on a VTS system includes receiving a mount request to access at least one host data record on a VTS system, determining a number of host compressed data records per physical block on a sequential access storage medium, determining a PBID that corresponds to the requested at least one host data record, accessing a physical block on the sequential access storage medium corresponding to the PBID, and outputting the physical block without outputting an entire logical volume that the physical block is stored to. In another embodiment, a VTS system includes random access storage, sequential access storage, support for at least one virtual volume, a storage manager having logic for determining a PBID that corresponds to a SLBID, and logic for performing the above described method. Other methods are also described.

20-09-2012 publication date

Flash storage device with read disturb mitigation

Number: US20120239990A1
Assignee: Stec Inc

A method for managing a flash storage device includes initiating a read request and reading requested data from a first storage block of a plurality of storage blocks in the flash storage device based on the read request. The method further includes incrementing a read count for the first storage block and moving the data in the first storage block to an available storage block of the plurality of storage blocks when the read count reaches a first threshold value.
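
A minimal sketch of the read-count threshold (data layout and names are mine, not the patent's): each read increments the block's counter, and at the threshold the data is relocated to an available block to avoid read-disturb errors.

```python
class ReadDisturbManager:
    """Sketch: move a block's data once its read count hits a threshold."""

    def __init__(self, blocks, threshold):
        self.blocks = blocks           # block index -> data (None = free)
        self.reads = [0] * len(blocks)
        self.threshold = threshold

    def read(self, idx):
        data = self.blocks[idx]
        self.reads[idx] += 1
        if self.reads[idx] >= self.threshold:
            free = self.blocks.index(None)  # first available block
            self.blocks[free] = data        # relocate heavily-read data
            self.blocks[idx] = None         # old block can be reclaimed
            self.reads[free] = 0
            return free, data
        return idx, data
```

With a threshold of 2, the second read of a block triggers relocation of its data to the free block.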

27-09-2012 publication date

Communication device, communication method, and computer-readable recording medium storing program

Number: US20120246402A1
Author: Shunsuke Akimoto
Assignee: NEC Corp

A communication device reducing the processing time to install data on a disc storage medium onto multiple servers is provided. A protocol serializer 10 of a communication device 5 serializes read requests received from servers A1 to A2 for target data stored on a disc storage medium K in a processing order. A cache controller 11 determines whether the target data corresponding to the read requests are present in a cache memory 4 in the order of serialized read requests and, if present, receives the target data from the cache memory 4 via a memory controller 12. If not present, the cache controller 11 acquires the target data from the disc storage medium K via a DVD/CD controller 13. Then, the protocol serializer 10 sends the target data acquired by the cache controller 11 to the server of the transmission source of the read request corresponding to the target data.

04-10-2012 publication date

Extending Cache for an External Storage System into Individual Servers

Number: US20120254509A1
Assignee: International Business Machines Corp

Mechanisms are provided for extending cache for an external storage system into individual servers. Certain servers may have cards with cache in the form of dynamic random access memory (DRAM) and non-volatile storage, such as flash memory or solid-state drives (SSDs), which may be viewed as actual extensions of the external storage system. In this way, the storage system is distributed across the storage area network (SAN) into various servers. Several new semantics are used in communication between the cards and the storage system to keep the read caches coherent.

04-10-2012 publication date

Method for giving read commands and reading data, and controller and storage system using the same

Number: US20120254522A1
Author: Chih-Kang Yeh
Assignee: Phison Electronics Corp

A method for giving a read command to a flash memory chip to read data to be accessed by a host system is provided. The method includes receiving a host read command; determining whether the received host read command follows a last host read command; if yes, giving a cache read command to read data from the flash memory chip; and if no, giving a general read command and the cache read command to read data from the flash memory chip. Accordingly, the method can effectively reduce time needed for executing the host read commands by using the cache read command to combine the host read commands which access continuous physical addresses and pre-read data stored in a next physical address.
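
The command-selection rule can be sketched as a small decision function (names and the two-command model are illustrative assumptions, not the patent's interface): a host read continuing the previous one needs only a cache read command, while a non-sequential read needs a general read first.

```python
def plan_read_commands(prev_addr, addr):
    """Sketch: choose flash commands for a host read at physical address `addr`.

    A read that follows the last host read can use the page the chip is
    already pre-reading, so a cache-read command alone suffices; otherwise
    a general read must first load the new page, then a cache read follows.
    """
    if prev_addr is not None and addr == prev_addr + 1:
        return ["cache_read"]
    return ["general_read", "cache_read"]
```

Continuous physical addresses are thus served with one command each, which is where the time saving comes from.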

04-10-2012 publication date

Cache memory allocation process based on TCP/IP network and/or storage area network array parameters

Number: US20120254533A1
Assignee: LSI Corp

An apparatus comprising a controller, one or more host devices and one or more storage devices. The controller may be configured to store and/or retrieve data in response to one or more input/output requests. The one or more host devices may be configured to present the input/output requests. The one or more storage devices may be configured to store and/or retrieve the data. The controller may include a cache memory configured to store the input/output requests. The cache memory may be configured as a memory allocation table to store and/or retrieve a compressed version of a portion of the data in response to one or more network parameters. The compressed version may be retrieved from the memory allocation table instead of the storage devices based on the input/output requests to improve overall storage throughput.

25-10-2012 publication date

Efficient data prefetching in the presence of load hits

Number: US20120272004A1
Assignee: Via Technologies Inc

A memory subsystem in a microprocessor includes a first-level cache, a second-level cache, and a prefetch cache configured to speculatively prefetch cache lines from a memory external to the microprocessor. The second-level cache and the prefetch cache are configured to allow the same cache line to be simultaneously present in both. If a request by the first-level cache for a cache line hits in both the second-level cache and in the prefetch cache, the prefetch cache invalidates its copy of the cache line and the second-level cache provides the cache line to the first-level cache.

08-11-2012 publication date

Method and apparatus for saving power by efficiently disabling ways for a set-associative cache

Number: US20120284462A1
Assignee: Individual

A method and apparatus for disabling ways of a cache memory in response to history based usage patterns is herein described. Way predicting logic is to keep track of cache accesses to the ways and determine if an access to some ways are to be disabled to save power, based upon way power signals having a logical state representing a predicted miss to the way. One or more counters associated with the ways count accesses, wherein a power signal is set to the logical state representing a predicted miss when one of said one or more counters reaches a saturation value. Control logic adjusts said one or more counters associated with the ways according to the accesses.

22-11-2012 publication date

Optimized flash based cache memory

Number: US20120297113A1
Assignee: International Business Machines Corp

Embodiments of the invention relate to throttling accesses to a flash memory device. The flash memory device is part of a storage system that includes the flash memory device and a second memory device. The throttling is performed by logic that is external to the flash memory device and includes calculating a throttling factor responsive to an estimated remaining lifespan of the flash memory device. It is determined whether the throttling factor exceeds a threshold. Data is written to the flash memory device in response to determining that the throttling factor does not exceed the threshold. Data is written to the second memory device in response to determining that the throttling factor exceeds the threshold.

22-11-2012 publication date

Dynamic hierarchical memory cache awareness within a storage system

Number: US20120297142A1
Assignee: International Business Machines Corp

Described is a system and computer program product for implementing dynamic hierarchical memory cache (HMC) awareness within a storage system. Specifically, when performing dynamic read operations within a storage system, a data module evaluates a data prefetch policy according to a strategy of determining if data exists in a hierarchical memory cache and thereafter amending the data prefetch policy, if warranted. The system then uses the data prefetch policy to perform a read operation from the storage device to minimize future data retrievals from the storage device. Further, in a distributed storage environment that include multiple storage nodes cooperating to satisfy data retrieval requests, dynamic hierarchical memory cache awareness can be implemented for every storage node without degrading the overall performance of the distributed storage environment.

22-11-2012 publication date

Information system and data transfer method of information system

Number: US20120297157A1
Assignee: Individual

Availability of an information system including a storage apparatus and a host computer is improved. A host system includes a first storage apparatus provided with a first volume for storing data, and a second storage apparatus for storing the data sent from the first storage apparatus. In case of a failure occurring in the first storage apparatus, the host sends the data to be sent to the first storage apparatus to the second storage apparatus.

29-11-2012 publication date

Populating strides of tracks to demote from a first cache to a second cache

Number: US20120303875A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for populating strides of tracks to demote from a first cache to a second cache. A first cache maintains modified and unmodified tracks from a storage system subject to Input/Output (I/O) requests. A determination is made to demote tracks from the first cache. A determination is made as to whether there are enough tracks ready to demote to form a stride, wherein tracks are written to a second cache in strides defined for a Redundant Array of Independent Disks (RAID) configuration. A stride is populated with tracks ready to demote in response to determining that there are enough tracks ready to demote to form the stride. The stride of tracks to demote from the first cache is promoted to the second cache. The tracks in the second cache that are modified are destaged to the storage system.

29-11-2012 publication date

Implementing storage adapter performance optimization with hardware chains to select performance path

Number: US20120303886A1
Assignee: International Business Machines Corp

A method and controller for implementing storage adapter performance optimization with a predefined chain of hardware operations configured to implement a particular performance path minimizing hardware and firmware interactions, and a design structure on which the subject controller circuit resides are provided. The controller includes a plurality of hardware engines; and a data store configured to store a plurality of control blocks selectively arranged in one of a plurality of predefined chains. Each predefined chain defines a sequence of operations. Each control block is designed to control a hardware operation in one of the plurality of hardware engines. A resource handle structure is configured to select a predefined chain based upon a particular characteristic of the system. Each predefined chain is configured to implement a particular performance path to maximize performance.

29-11-2012 publication date

Intelligent caching

Number: US20120303896A1
Assignee: International Business Machines Corp

Intelligent caching includes defining a cache policy for a data source, selecting parameters of data in the data source to monitor, the parameters forming a portion of the cache policy, and monitoring the data source for an event based on the cache policy. Upon an occurrence of an event, the intelligent caching also includes retrieving target data subject to the cache policy from a first location and moving the target data to a second location.

29-11-2012 publication date

Managing track discard requests to include in discard track messages

Number: US20120303899A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for managing track discard requests to include in discard track messages. A backup copy of a track in a cache is maintained in the cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. In response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent indicating the tracks indicated in the queued predetermined number of track discard requests to the cache backup device instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message. In response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode.

29-11-2012 publication date

Implementing storage adapter performance optimization with enhanced resource pool allocation

Number: US20120303922A1
Assignee: International Business Machines Corp

A method and controller for implementing storage adapter performance optimization with enhanced resource pool allocation, and a design structure on which the subject controller circuit resides are provided. The controller includes a plurality of hardware engines; a processor, and a plurality of resource pools. A plurality of work queues is associated with the resource pools. The processor initializes a list of types, and the associated amount of pages for each allocate type. The hardware engines maintain a count of allocate types, specifying a type on each allocation and deallocation, and performing allocation from the resource pools for deadlock avoidance.

06-12-2012 publication date

Pre-Caching Resources Based on A Cache Manifest

Number: US20120311020A1
Assignee: Research in Motion Ltd

A method executed on a first electronic device for accessing an application server on a second electronic device includes receiving a cache manifest for an application, the cache manifest identifying a resource item that can be pre-cached on the first electronic device, pre-caching the resource item as a cached resource item in a cache memory of the first electronic device prior to launching an application client on the first electronic device. The method further includes, upon launching the application client on the first electronic device, retrieving data from the application server, wherein the data includes content and a reference to the resource item, obtaining, from the cache memory, the cached resource item that corresponds to the resource item, and displaying an output based upon the content and the cached resource item.

06-12-2012 publication date

Storage system comprising microprocessor load distribution function

Number: US20120311204A1
Assignee: HITACHI LTD

Among a plurality of microprocessors 12, 32, when the load on a microprocessor 12 which performs I/O task processing of received I/O requests is equal to or greater than a first load, that microprocessor assigns at least an I/O task portion of the I/O task processing to another microprocessor 12 or 32, and the other microprocessor executes at least the I/O task portion. The I/O task portion is a task processing portion comprising cache control processing, which secures a cache area (one area in cache memory 20) for the storage of data.

06-12-2012 publication date

Storage apparatus and storage apparatus management method

Number: US20120311602A1
Assignee: HITACHI LTD

The overall processing function of a storage apparatus is improved by suitably migrating ownership. The storage apparatus comprises a plurality of microprocessors; a plurality of storage areas formed in a drive group configured from a plurality of physical drives; and a management unit which manages, as the microprocessors which possess ownership to the storage areas, the microprocessors which handle data I/Os to/from one or more storage areas among the plurality of storage areas, wherein the management unit detects variations in the processing loads of the plurality of microprocessors, selects a migration-source microprocessor which migrates the ownership and a migration-destination microprocessor which is the ownership migration destination on the basis of variations in the processing load, and determines whether to migrate the ownership on the basis of information on a usage status of resources of each of the storage areas to which the migration-source microprocessor possesses ownership.

13-12-2012 publication date

Semiconductor memory device and method of driving semiconductor memory device

Number: US20120314513A1
Author: Yoshiyuki Kurokawa
Assignee: Semiconductor Energy Laboratory Co Ltd

A semiconductor memory device includes a memory portion that includes i (i is a natural number) sets each including j (j is a natural number of 2 or larger) arrays each including k (k is a natural number of 2 or larger) lines to each of which a first bit column of an address is assigned in advance; a comparison circuit; and a control circuit. The i×j lines to each of which a first bit column of an objective address is assigned in advance are searched more than once and less than or equal to j times with the use of the control circuit and a cache hit signal or a cache miss signal output from the selection circuit. In such a manner, the line storing the objective data is specified.

13-12-2012 publication date

Cache prefetching from non-uniform memories

Number: US20120317364A1
Author: Gabriel H. Loh
Assignee: Advanced Micro Devices Inc

An apparatus is disclosed for performing cache prefetching from non-uniform memories. The apparatus includes a processor configured to access multiple system memories with different respective performance characteristics. Each memory stores a respective subset of system memory data. The apparatus includes caching logic configured to determine a portion of the system memory to prefetch into the data cache. The caching logic determines the portion to prefetch based on one or more of the respective performance characteristics of the system memory that stores the portion of data.

20-12-2012 publication date

List based prefetch

Number: US20120324142A1
Assignee: International Business Machines Corp

A list prefetch engine improves a performance of a parallel computing system. The list prefetch engine receives a current cache miss address. The list prefetch engine evaluates whether the current cache miss address is valid. If the current cache miss address is valid, the list prefetch engine compares the current cache miss address and a list address. A list address represents an address in a list. A list describes an arbitrary sequence of prior cache miss addresses. The prefetch engine prefetches data according to the list, if there is a match between the current cache miss address and the list address.
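
A minimal sketch of the matching step (the list representation and prefetch depth are my assumptions): when a valid miss address matches an address in the recorded list, the engine prefetches the addresses the list says came next.

```python
class ListPrefetcher:
    """Sketch: prefetch along a recorded list of prior cache miss addresses."""

    def __init__(self, miss_list, depth=2):
        self.miss_list = miss_list   # arbitrary sequence of prior miss addresses
        self.depth = depth           # how far ahead to prefetch along the list

    def on_miss(self, addr):
        if addr is None:             # invalid miss address: do nothing
            return []
        try:
            pos = self.miss_list.index(addr)
        except ValueError:
            return []                # no match between miss and list address
        # Prefetch the next `depth` addresses the list predicts.
        return self.miss_list[pos + 1: pos + 1 + self.depth]
```

A miss on an address present in the list yields the subsequent list entries as prefetch targets; unknown or invalid addresses yield nothing.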

27-12-2012 publication date

Information processing method for determining weight of each feature in subjective hierarchical clustering

Number: US20120330957A1
Assignee: International Business Machines Corp

An information processing apparatus determines a weight of each physical feature for hierarchical clustering by acquiring training data of multiple pieces of content in triplets, with label information indicating the pair specified by a user as having the highest degree of similarity among the three contents of the triplet, and by executing hierarchical clustering using a feature vector of each piece of content of the training data and the weight of each feature to determine the hierarchical structure of the training data. The information processing apparatus updates the weight of each feature so as to increase the degree of agreement between the pair that is merged first into the same cluster among the three contents of a triplet in the determined hierarchical structure and the pair indicated by the label information corresponding to that triplet.

03-01-2013 publication date

Browser Storage Management

Number: US20130007371A1
Assignee: Individual

Browser storage management techniques are described. In one or more implementations, inputs are received at a computing device that specify maximum aggregate sizes of application and database caches, respectively, of browser storage to be used to locally store data at the computing device. For example, the inputs may be provided using a policy, by an administrator of the computing device, and so on. The maximum aggregate sizes of the application and database caches, respectively, of browser storage at the computing device are then set to the sizes specified by the inputs.

17-01-2013 publication date

Method and system for ensuring cache coherence of metadata in clustered file systems

Number: US20130019067A1
Assignee: VMware LLC

Metadata of a shared file in a clustered file system is changed in a way that ensures cache coherence amongst servers that can simultaneously access the shared file. Before a server changes the metadata of the shared file, it waits until no other server is attempting to access the shared file, and all I/O operations to the shared file are blocked. After writing the metadata changes to the shared file, local caches of the other servers are updated, as needed, and I/O operations to the shared file are unblocked.

24-01-2013 publication date

Method and apparatus for high speed cache flushing in a non-volatile memory

Number: US20130024623A1
Author: Robert Alan Reid
Assignee: Cadence Design Systems Inc

An invention is provided for performing flush cache in a non-volatile memory. The invention includes maintaining a plurality of free memory blocks within a non-volatile memory. When a flush cache command is issued, a flush cache map is examined to obtain a memory address of a memory block in the plurality of free memory blocks within the non-volatile memory. The flush cache map includes a plurality of entries, each entry indicating a memory block of the plurality of free memory blocks. Then, a cache block is written to a memory block at the obtained memory address within the non-volatile memory. In this manner, when a flush cache command is received, the flush cache map allows cache blocks to be written to free memory blocks in the non-volatile memory without requiring a non-volatile memory search for free blocks or requiring erasing of memory blocks storing old data.
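
A rough sketch of the flush-cache map (the data structures are illustrative): because the map already lists pre-erased free blocks, each cache block is written to the next mapped address with no free-block search and no erase on the flush path.

```python
class FlushCacheMap:
    """Sketch: pre-built map of free block addresses for fast cache flush."""

    def __init__(self, free_addresses):
        self.free = list(free_addresses)   # erased, ready-to-write blocks
        self.flash = {}                    # address -> written cache block

    def flush(self, cache_blocks):
        # Write each cache block straight to a known-free address:
        # no search for free space, no erase on the write path.
        written = []
        for block in cache_blocks:
            addr = self.free.pop(0)        # next entry in the flush cache map
            self.flash[addr] = block
            written.append(addr)
        return written
```

Flushing two cache blocks consumes the first two mapped addresses, leaving the rest of the map for the next flush.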

07-02-2013 publication date

Method and apparatus to move page between tiers

Number: US20130036250A1
Author: Shinichi Hayashi
Assignee: HITACHI LTD

The thin provisioning storage system maintains migration history between the first and the second group which the unallocated pages of virtual volume would be allocated from, and updates on writes against storage areas of virtual volume having migration history. Before the storage controller determines to migrate data allocated in the first group to the second group, the storage controller checks the migration history and if the data stored in the first group has been previously migrated from the second group and is still maintained in the second group, the storage controller would change the allocation between the virtual volume and the first group to the second group for the data subject to migration and not perform the data migration.

21-02-2013 publication date

Storage apparatus, control apparatus, and data copying method

Number: US20130046946A1
Assignee: Fujitsu Ltd

A determining unit selects one storage device each from storage devices of an external storage apparatus and storage devices of a storage apparatus to which the determining unit belongs. At this point, based on a copy request, the determining unit preferentially selects, within each of the external storage apparatus and the storage apparatus, a storage device including a larger number of logical volumes (LVs) which belong to copy unexecuted LV pairs compared to other storage devices therein. Further, the determining unit determines, as a copy execution target, a copy unexecuted LV pair in which a LV provided in one of the selected two storage devices is a copy source and a LV provided in the other storage device is a copy destination. A copy unit copies data stored in the copy source LV, which belongs to the determined LV pair, to the copy destination LV of the LV pair.

14-03-2013 publication date

Caching for a file system

Number: US20130067168A1
Assignee: Microsoft Corp

Aspects of the subject matter described herein relate to caching data for a file system. In aspects, in response to requests from applications and storage and cache conditions, cache components may adjust throughput of writes from cache to the storage, adjust priority of I/O requests in a disk queue, adjust cache available for dirty data, and/or throttle writes from the applications.

14-03-2013 publication date

Information processing method, information processing system, information processing apparatus, and program

Number: US20130067177A1
Assignee: Sony Corp

An information processing method includes: grouping temporally consecutive data into a plurality of groups based on a reference defined in advance and storing the grouped data; reading, in response to an access request from an external apparatus, target data to be a target of the request from a first group including the target data and outputting the read target data to the external apparatus; and reading, in response to the reading of the target data, at least part of data from a second group different from the first group as read-ahead target data.

21-03-2013 publication date

Data storage architecture extension system and method

Number: US20130073747A1
Author: Kevin Mark Klughart
Assignee: Individual

A data storage architecture extension (DAX) system and method that permits multiple disk drive storage elements to be logically daisy-chained to allow a single host bus adapter (HBA) to view the storage elements as one logical disk drive is disclosed. The system/method may be broadly described as comprising a pass-thru disk drive controller (PTDDC) further comprising a HBA port, a disk drive interface port, pass-thru input port, and a pass-thru output port. The PTDDC intercepts and translates the HBA port input to the requirements of an individual disk drive connected to the drive interface port. Each PTDDC may be daisy-chained to other PTDDCs to permit a plethora of disk drives to be associated with a given HBA, with the first PTDDC providing a presentation interface to the HBA integrating all disk drive storage connected to the PTDDCs. The system/method also permits RAID configuration of disk drives using one or more PTDDCs.

28-03-2013 publication date

Storage caching/tiering acceleration through staggered asymmetric caching

Number: US20130080696A1
Author: Luca Bert
Assignee: LSI Corp

A multi-tiered system of data storage includes a plurality of data storage solutions. The data storage solutions are organized such that each progressively faster, more expensive solution serves as a cache for the previous solution, and each solution includes a dedicated data block to store individual data sets, newly written in a plurality of write operations, for later migration to slower data storage solutions in a single write operation.
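A minimal sketch of that write-staging idea, assuming two tiers and a fixed staging-block capacity (both invented here): many small writes are absorbed individually by the faster tier, then migrated to the slower tier below in a single batched write.

```python
class Tier:
    """One storage tier with a dedicated staging block. Names and the
    capacity model are illustrative assumptions, not from the patent."""

    def __init__(self, name, stage_capacity, lower=None):
        self.name = name
        self.stage = []                 # dedicated block for new writes
        self.stage_capacity = stage_capacity
        self.lower = lower              # next, slower tier (or None)
        self.store = []                 # data settled in this tier
        self.batch_writes = 0           # batched writes landed in store

    def write(self, item):
        self.stage.append(item)
        if len(self.stage) >= self.stage_capacity:
            self.flush()

    def flush(self):
        if not self.stage:
            return
        if self.lower is not None:
            # One write operation migrates the whole staged batch down.
            self.lower.store.extend(self.stage)
            self.lower.batch_writes += 1
        else:
            self.store.extend(self.stage)
            self.batch_writes += 1
        self.stage = []

hdd = Tier("hdd", stage_capacity=4)
ssd = Tier("ssd", stage_capacity=4, lower=hdd)
for i in range(8):      # eight small writes become two batched migrations
    ssd.write(i)
```

Eight individual writes to the fast tier reach the slow tier as only two write operations, which is the asymmetry the abstract describes.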

04-04-2013 publication date

Storage system comprising nonvolatile semiconductor storage media

Number: US20130086304A1
Assignee: HITACHI LTD

Logical-physical translation information comprises information denoting the corresponding relationships between multiple logical pages and multiple logical chunks forming a logical address space of a nonvolatile semiconductor storage medium, and information denoting the corresponding relationships between the multiple logical chunks and multiple physical storage areas. Each logical page is a logical storage area conforming to a logical address range. Each logical chunk is allocated to two or more logical pages of multiple logical pages. Two or more physical storage areas of multiple physical storage areas are allocated to each logical chunk. A controller adjusts the number of physical storage areas to be allocated to each logical chunk.

18-04-2013 publication date

STORAGE DEVICE AND REBUILD PROCESS METHOD FOR STORAGE DEVICE

Number: US20130097375A1
Author: Iida Takashi
Assignee: NEC Corporation

A storage device includes a plurality of magnetic disk devices each having a write cache, a processor unit that redundantly stores data, a rebuild execution control unit that performs a rebuild process, a write cache control unit that, at the time of the rebuild process, enables a write cache of a storage device that stores rebuilt data, and a rebuild progress management unit that is configured using a nonvolatile memory and manages progress information of the rebuild process. In the case where power discontinuity occurs during the rebuild process and then power is restored, the rebuild execution control unit calculates an address that is before the address of the last written rebuilt data by an amount corresponding to the capacity of the write cache, based on the progress information of the rebuild process managed by the progress management unit, and resumes the rebuild process from that calculated address. 1. A storage device comprising: a storage unit including a plurality of memory devices each having a write cache; a first control unit that redundantly stores data in the plurality of memory devices; a second control unit that performs a rebuild process of rebuilding the data; a write cache control unit that, at the time of the rebuild process, enables a write cache of a memory device that stores rebuilt data; and a progress management unit that is configured using a nonvolatile memory and manages, as progress information of the rebuild process, an address of rebuilt data for which rebuilding is completed and which is written in the write cache, wherein, in a case where power discontinuity occurs during the rebuild process and then power is restored, the second control unit calculates an address that is before the address of the last written rebuilt data by an amount corresponding to the capacity of the write cache, based on the progress information of the rebuild process managed by the progress management unit, and resumes the rebuild process from that calculated address. 2.
...
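The resume-address calculation in the claim is simple arithmetic: rebuilt data still sitting in a drive's write cache at power loss may never have reached the media, so the rebuild resumes from an address backed off by the cache capacity. A sketch, with addresses and capacity in arbitrary units and the clamp at zero assumed for the start of the volume:

```python
def rebuild_resume_address(last_written_addr, write_cache_capacity):
    """Back off from the last recorded rebuilt address by the write cache
    capacity, since that much data may have been lost from the cache;
    never back off past the start of the volume."""
    return max(0, last_written_addr - write_cache_capacity)
```

For example, with the last rebuilt address at 1000 and a 64-unit write cache, the rebuild resumes at address 936.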

18-04-2013 publication date

Memory-based apparatus and method

Number: US20130097387A1
Assignee: Leland Stanford Junior University

Aspects of various embodiments are directed to memory circuits, such as cache memory circuits. In accordance with one or more embodiments, cache-access to data blocks in memory is controlled as follows. In response to a cache miss for a data block having an associated address on a memory access path, data is fetched for storage in the cache (and serving the request), while one or more additional lookups are executed to identify candidate locations to store data. An existing set of data is moved from a target location in the cache to one of the candidate locations, and the address of the one of the candidate locations is associated with the existing set of data. Data in this candidate location may, for example, thus be evicted. The fetched data is stored in the target location and the address of the target location is associated with the fetched data.

18-04-2013 publication date

Data prefetching method for distributed hash table dht storage system, node, and system

Number: US20130097402A1
Authors: Deping Yang, Dong Bao
Assignee: Huawei Technologies Co Ltd

Embodiments of the present disclosure provide a data prefetching method, a node, and a system. The method includes: a first storage node receives a read request sent by a client, determines a to-be-prefetched data block and a second storage node where the to-be-prefetched data block resides according to a read data block and a set to-be-prefetched data block threshold, and sends a prefetching request to the second storage node, the prefetching request includes identification information of the to-be-prefetched data block, and the identification information is used to identify the to-be-prefetched data block; and the second storage node reads the to-be-prefetched data block from a disk according to the prefetching request, and stores the to-be-prefetched data block in a local buffer, so that the client reads the to-be-prefetched data block from the local buffer of the second storage node.
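The two-node protocol can be sketched like this; the modulo placement rule, the node and buffer classes, and the fixed prefetch threshold are illustrative assumptions standing in for the patent's actual DHT.

```python
class StorageNode:
    """Toy DHT storage node: `disk` maps block id -> data; `buffer` is
    the local read buffer that prefetched blocks are staged into."""

    def __init__(self, node_id, disk):
        self.node_id = node_id
        self.disk = disk
        self.buffer = {}

    def prefetch(self, block_id):
        # Second-node side: read the block from disk into the local buffer.
        if block_id in self.disk:
            self.buffer[block_id] = self.disk[block_id]

def owner(block_id, nodes):
    # Hypothetical placement rule standing in for the DHT lookup.
    return nodes[block_id % len(nodes)]

def handle_read(block_id, nodes, prefetch_threshold=2):
    """First-node side: serve the read, then ask the owners of the next
    `prefetch_threshold` blocks to stage them for the client."""
    data = owner(block_id, nodes).disk[block_id]
    for nxt in range(block_id + 1, block_id + 1 + prefetch_threshold):
        owner(nxt, nodes).prefetch(nxt)
    return data

# Ten blocks spread over three nodes by the placement rule above.
nodes = [StorageNode(i, {b: f"blk{b}" for b in range(10) if b % 3 == i})
         for i in range(3)]
data = handle_read(4, nodes)
```

After the read of block 4, blocks 5 and 6 are staged in the buffers of their owning nodes, so a sequential client read hits memory instead of disk.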

25-04-2013 publication date

METHOD AND APPARATUS FOR IMPLEMENTING PROTECTION OF REDUNDANT ARRAY OF INDEPENDENT DISKS IN FILE SYSTEM

Number: US20130103902A1
Authors: WEI Mingchang, ZHANG Wei
Assignee: Huawei Technologies Co., Ltd.

Embodiments of the present invention disclose a method and an apparatus for implementing protection of RAID in a file system, and are applied in the field of communications technologies. In the embodiments of the present invention, after receiving a file operation request, the file system needs to determine the type of the file to be operated as requested by the file operation request, and perform file operations in a hard disk drive of the file system directly according to a file operation method corresponding to the determined file type, that is, a RAID data protection method. Therefore, corresponding file operations may be performed in a proper operation method according to each file type, and data of an important file type is primarily protected, thereby improving reliability of data storage. 1. In a file system of one or more computers, a method for implementing protection of a redundant array of independent disks (RAID), comprising: receiving a file operation request; determining a type of a file to be operated as requested by the file operation request, wherein the type of the file comprises at least one of file metadata and file data; selecting a file operation method according to the determined file type, wherein the file operation method is a RAID data protection method; and performing file operations on one or more hard disk drives according to the selected file operation method. 2. The method according to claim 1, wherein the file operation method selected according to the determined file type is a multi-mirroring redundant algorithm if the determined file type is the file metadata, and the file metadata is backed up with multiple copies and the multiple copies are stored in at least two hard disks according to the multi-mirroring redundant algorithm. 35. The method according to claim 1, wherein the file operation method selected according to the determined file type is a data protection method of RAID if the type of the file is the file ...

02-05-2013 publication date

Capacitor save energy verification

Number: US20130111110A1
Author: Ronald H. Sartore
Assignee: Agiga Tech Inc

A memory subsystem includes a volatile memory, a nonvolatile memory, and a controller including logic to interface the volatile memory to an external system. The volatile memory is addressable for reading and writing by the external system. The memory subsystem includes a power controller with logic to detect when power from the external system to at least one of the volatile and nonvolatile memories and to the controller fails. When external system power fails, backup power is provided to at least one of the volatile and nonvolatile memories and to the controller for long enough to enable the controller to back up data from the volatile memory to the nonvolatile memory.

02-05-2013 publication date

Dynamically adjusted threshold for population of secondary cache

Number: US20130111133A1
Assignee: International Business Machines Corp

The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
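The feedback rule reduces to a few comparisons; this sketch uses unit increments and invented metric values purely for illustration.

```python
def adjust_threshold(threshold, hits_recently_inserted, hits_recently_evicted):
    """Feedback rule from the abstract: if freshly inserted data is
    getting more hits than freshly evicted data, the cache is admitting
    good data, so lower the bar; otherwise raise it. Unit steps and the
    floor at zero are assumptions for the sketch."""
    if hits_recently_inserted > hits_recently_evicted:
        return max(0, threshold - 1)
    if hits_recently_inserted < hits_recently_evicted:
        return threshold + 1
    return threshold

def admit(heat, threshold):
    """Candidate data enters the secondary cache only when its heat
    metric reaches the current threshold."""
    return heat >= threshold
```

So a cache whose recent insertions outperform its recent evictions gradually admits more candidates, and vice versa.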

16-05-2013 publication date

PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP

Number: US20130124803A1

A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage. 1.-19. (canceled) 20. A method, comprising: maintaining a point-in-time copy relationship associating tracks in a source storage with tracks in a target storage, wherein the target storage stores the tracks in the source storage as of a point-in-time; receiving a write request including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship, wherein the point-in-time source track was in the source storage at the point-in-time the copy relationship was established; storing the updated source track in a first cache device; sending a prefetch request to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device; generating a read request to read the source track in the source storage following the sending of the prefetch request; and copying the read source track to a corresponding target track in the target storage. 21.
The method of claim 20, further comprising: destaging the updated source track to the source track in the source volume in response to ...

23-05-2013 publication date

Storage system, storage apparatus and method of controlling storage system

Number: US20130132673A1
Assignee: HITACHI LTD

A storage system enables a core storage apparatus to execute processing requiring securing of data consistency, while providing high write performance to a host computer. A storage system includes an edge storage apparatus 20 configured to communicate with a host computer 10 and including a cache memory 25, and a core storage apparatus 30 that communicates with the edge storage apparatus 20 and performs I/O processing on a storage device 39. When receiving a write request from the host computer 10, the edge storage apparatus 20 processes the write request by writeback. When about to execute storage function control processing on condition that data consistency be secured, such as pair split processing of a local copy function, the core storage apparatus 30 requests the edge storage apparatus 20 to perform forced destage of dirty data in the cache memory 25 and then executes the storage function control processing after the completion of the forced destage.

23-05-2013 publication date

OPTIMIZING DATA CACHE WHEN APPLYING USER-BASED SECURITY

Number: US20130132677A1

A secure caching system and caching method include receiving a user request for data, the request containing a security context, and searching a cache for the requested data based on the user request and the received security context. If the requested data is found in the cache, the cached data is returned in response to the user request. If the requested data is not found in the cache, the requested data is obtained from a data source, the obtained data is stored in the cache and associated with the security context, and the requested data is returned in response to the user request. The search for the requested data can include searching for a security list that has the security context as a key, the security list including an address in the cache of the requested data. 1. A secure caching method comprising: receiving a user request for data and a security context comprising a profile of at least one user and corresponding at least one dimension of data; searching a cache for the requested data based on the user request and the received security context; if the requested data is found in the cache, returning the cached data in response to the user request, and if the requested data is not found in the cache, obtaining the requested data from a data source, storing the obtained data in the cache and associating the obtained data with the security context, and returning the requested data in response to the user request. 2. The method according to claim 1, wherein the cached data is stored with a corresponding list that associates the cached data with the security context and wherein, if the requested data is not found in the cache, a new list is generated for associating the obtained data stored in the cache with the security context by storing references to the obtained data stored in the cache. 3.
The method according to claim 2, wherein the new list is stored in a list storage unit in which the number of stored lists increases incrementally based on received ...
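A minimal sketch of context-keyed caching, assuming a per-context list keyed by the security context; the class name, the callback standing in for the data source, and the context strings are invented for the example.

```python
class SecureCache:
    """Cache lookups keyed by security context: the same query issued
    under different contexts yields separate entries, so one user never
    sees data cached under another's permissions."""

    def __init__(self, data_source):
        self.data_source = data_source   # backend fetch: (query, ctx) -> data
        self.lists = {}                  # security context -> {query: data}
        self.fetches = 0                 # trips to the data source

    def get(self, query, security_context):
        per_context = self.lists.setdefault(security_context, {})
        if query not in per_context:     # miss: fetch, store, associate
            self.fetches += 1
            per_context[query] = self.data_source(query, security_context)
        return per_context[query]

cache = SecureCache(lambda q, ctx: f"{q}-visible-to-{ctx}")
a1 = cache.get("sales", "analyst")
a2 = cache.get("sales", "analyst")   # served from cache, no new fetch
b1 = cache.get("sales", "admin")     # different context, separate entry
```

The second analyst request is a cache hit, while the admin request with the same query misses and is fetched under its own context.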

23-05-2013 publication date

Method and apparatus for allocating erasure coded data to disk storage

Number: US20130132800A1
Assignee: SimpliVity Corp

Allocation process that allows erasure coded data to be stored on any of a plurality of disk drives, in a pool of drives, so that the allocation is not tied to a fixed group of drives. Still further, the encoded data can be generated by any of multiple different erasure coding algorithms, where again storage of the encoded data is not restricted to a single group of drives based on the erasure algorithm being utilized to encode the data. In another embodiment, the encoded data can be “stacked” (aligned) on select drives to reduce the number of head seeks required to access the data. As a result of these improvements, the system can dynamically determine which one of multiple erasure coding algorithms to utilize for a given incoming data block, without being tied to one particular algorithm and one particular group of storage devices as in the prior art.
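The pool-wide allocation idea can be sketched as below; any of several erasure codes (k data + m parity fragments) could feed it, and the least-loaded preference and drive names are assumptions invented for the example.

```python
def allocate_fragments(drive_pool, k, m, in_use=()):
    """Pick k+m distinct drives for one erasure-coded block from the
    whole pool, rather than from a fixed RAID-style group. Preferring
    the least-loaded drives (ties broken by name) is an illustrative
    policy, not mandated by the abstract."""
    need = k + m
    if need > len(drive_pool):
        raise ValueError("pool too small for the chosen erasure code")
    load = {d: 0 for d in drive_pool}
    for d in in_use:                      # fragments already placed
        load[d] = load.get(d, 0) + 1
    return sorted(drive_pool, key=lambda d: (load[d], d))[:need]

pool = ["d0", "d1", "d2", "d3", "d4", "d5"]
placement = allocate_fragments(pool, k=3, m=2, in_use=["d0", "d0", "d1"])
```

Because the drive set is chosen per block, a 3+2 code and, say, a 4+1 code can coexist in the same pool without dedicated drive groups.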

23-05-2013 publication date

Optimizing distributed data analytics for shared storage

Number: US20130132967A1
Assignee: NetApp Inc

Methods, systems, and computer executable instructions for performing distributed data analytics are provided. In one exemplary embodiment, a method of performing a distributed data analytics job includes collecting application-specific information in a processing node assigned to perform a task to identify data necessary to perform the task. The method also includes requesting a chunk of the necessary data from a storage server based on location information indicating one or more locations of the data chunk and prioritizing the request relative to other data requests associated with the job. The method also includes receiving the data chunk from the storage server in response to the request and storing the data chunk in a memory cache of the processing node which uses a same file system as the storage server.

30-05-2013 publication date

Systems, methods, and devices for running multiple cache processes in parallel

Number: US20130138865A1
Assignee: SEAGATE TECHNOLOGY LLC

Certain embodiments of the present disclosure related to systems, methods, and devices for increasing data access speeds. In certain embodiments, a method includes running multiple cache retrieval processes in parallel, in response to a read command. In certain embodiments, a method includes initiating a first cache retrieval process and a second cache retrieval process to run in parallel, in response to a single read command.

06-06-2013 publication date

Management system and management method of storage system that performs control based on required performance assigned to virtual volume

Number: US20130145092A1
Assignee: HITACHI LTD

A storage system manages a pool to which multiple VVOLs (virtual logical volumes conforming to thin provisioning) are associated, assigns a real area (RA) from any tier in an available tier pattern associated with a write-destination VVOL to a write-destination virtual area (VA), and carries out a reassignment process for migrating data inside this RA to an RA of a different tier than the tier having this RA based on the access status of the RA assigned to the VA. A management system assumes that a specified tier has been removed from the available tier pattern of a target VVOL, predicts the performance of the target VVOL and all the other VVOLs associated with the pool to which the target VVOL is associated, determines whether or not there is a VVOL for which the predicted performance is lower than a required performance, and when such a VVOL does not exist, instructs the storage system to remove the specified tier from the available tier pattern of the target VVOL.

06-06-2013 publication date

Information Processing Apparatus and Driver

Number: US20130145094A1
Author: Kurashige Takehiko
Assignee:

According to one embodiment, an information processing apparatus includes a memory that comprises a buffer area, a first external storage, a second external storage and a driver. The driver is configured to control the first and second external storages in units of predetermined blocks. The driver comprises a cache reservation module configured to (i) reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage and (ii) manage the cache area. The cache area operates as a primary cache for the second external storage and a cache for the first external storage. Part or the entire first external storage is used as a secondary cache for the second external storage. The buffer area is used to transfer data between the driver and a host system that requests data reads/writes. 1. An information processing apparatus comprising:a memory comprising a buffer area;a first external storage separate from the memory;a second external storage separate from the memory; anda driver configured to control the first and second external storages in units of predetermined blocks,wherein the driver comprises a cache reservation module configured to reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage, and the cache reservation module is configured to manage the cache area in units of the predetermined blocks, using the cache area, secured on the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and using part or the entire first external storage as a secondary cache for the second external storage, the buffer area being reserved in order to transfer data between the driver and a host system that requests for data writing and data reading.2. 
A driver stored in a ...

06-06-2013 publication date

Method and system for integrating the functions of a cache system with a storage tiering system

Number: US20130145095A1
Assignee: LSI Corp

A tiered data storage system having a cache employs a tiering management subsystem to analyze data access patterns over time, and a cache management subsystem to monitor individual input/output operations and replicate data in the cache. The tiering management subsystem determines a distribution of data between tiers and determines what data should be cached while the cache management subsystem moves data into the cache. The tiered data storage system may analyze individual input/output operations to determine if data should be consolidated from multiple regions in one or more data storage tiers into a single region.

13-06-2013 publication date

Fast startup hybrid memory module

Number: US20130148457A1
Assignee: Sanmina SCI Corp

A memory device is provided comprising: a volatile memory device, a non-volatile memory device, a memory control circuit coupled to the volatile memory device and the non-volatile memory device, and a backup power source. The backup power source may be arranged to temporarily power the volatile memory device and the memory control circuit upon a loss of power from the external power source. Additionally, a switch may serve to selectively couple: (a) a host memory bus to either the volatile memory device or the non-volatile memory device; and (b) the volatile memory device to the non-volatile memory device. Upon reestablishment of power by an external power source after a power loss event, the memory control circuit is configured to restore data from the non-volatile memory device to the volatile memory device before a host system, to which the memory device is coupled, completes boot-up.

13-06-2013 publication date

Method and apparatus for caching

Number: US20130150015A1
Assignee: Telefonaktiebolaget LM Ericsson AB

A method and caching server for enabling caching of a portion of a media file in a User Equipment (UE) in a mobile telecommunications network. The caching server selects the media file and determines a size of the portion to be cached in the UE. The size may be determined depending on radio network conditions for the UE and/or characteristics of the media file. The caching server sends an instruction to the UE to cache the determined size of the portion of the media file in the UE.

13-06-2013 publication date

Information Processing Apparatus and Driver

Number: US20130151775A1
Author: Kurashige Takehiko
Assignee:

According to one embodiment, an information processing apparatus includes a memory that includes a buffer area, a first external storage, a second external storage, and a driver. The driver controls the first and second external storages and comprises a cache reservation module configured to reserve a cache area in the memory. The cache area is logically between the buffer area and the first external storage and between the buffer area and the second external storage. The driver is configured to use the cache area, secured on the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and uses part or the entire first external storage as a secondary cache for the second external storage. The buffer area is reserved in order to transfer data between the driver and a host system that requests data writing and data reading. 1. An information processing apparatus comprising: a memory comprising a buffer area; a first external storage separate from the memory; a second external storage separate from the memory; and a driver configured to control the first and second external storages, wherein the driver comprises a cache reservation module configured to reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage, the driver being configured to use the cache area, secured on the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and using part or the entire first external storage as a secondary cache for the second external storage, the buffer area being reserved in order to transfer data between the driver and a host system that requests data writing and data reading. 2. A driver stored in a non-transitory computer readable medium which operates in an information processing apparatus comprising a memory ...

13-06-2013 publication date

Cache Implementing Multiple Replacement Policies

Number: US20130151781A1
Assignee: Apple Inc.

In an embodiment, a cache stores tags for cache blocks stored in the cache. Each tag may include an indication identifying which of two or more replacement policies supported by the cache is in use for the corresponding cache block, and a replacement record indicating the status of the corresponding cache block in the replacement policy. Requests may include a replacement attribute that identifies the desired replacement policy for the cache block accessed by the request. If the request is a miss in the cache, a cache block storage location may be allocated to store the corresponding cache block. The tag associated with the cache block storage location may be updated to include the indication of the desired replacement policy, and the cache may manage the block in accordance with the policy. For example, in an embodiment, the cache may support both an LRR and an LRU policy. 1. A system comprising:one or more requestors configured to generate requests that each include an address and a replacement policy attribute identifying a selected replacement policy; anda set associative cache configured to support a least recently replaced (LRR) replacement policy and a variation of a least recently used (LRU) replacement policy for cache blocks in a given set, wherein the set associative cache is configured to selectively modify replacement data corresponding to the set accessed by a request responsive to the replacement policy attribute associated with the request, and wherein the LRR replacement policy causes selection of a selected cache block in the set for replacement, wherein the selected cache block is the cache block that has been stored in the cache longer than the other cache blocks in the set.2. The system as recited in wherein the one or more requestors comprise physical components coupled to the cache.3. The system as recited in wherein the one or more requestors comprise logical requestors executing on one or more processors that are coupled to the cache.4. 
The ...
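One way to sketch per-request replacement policies in a single cache set, assuming `LRU`/`LRR` attributes and a logical clock for the replacement records (these details are invented here; the patent's tags and records are more elaborate):

```python
import itertools

class CacheSet:
    """One set of a set-associative cache. Each block's tag records the
    replacement policy requested when it was filled plus its insertion
    and last-use times; a victim is chosen by the policy named in the
    incoming request."""

    def __init__(self, ways):
        self.ways = ways
        self.clock = itertools.count()   # logical time for the records
        self.blocks = {}                 # tag -> {policy, inserted, used}

    def access(self, tag, policy):
        now = next(self.clock)
        if tag in self.blocks:                    # hit: refresh LRU record
            self.blocks[tag]["used"] = now
            return "hit"
        if len(self.blocks) >= self.ways:         # miss in a full set
            if policy == "LRR":                   # least recently replaced:
                key = "inserted"                  #   evict oldest *fill*
            else:                                 # LRU:
                key = "used"                      #   evict oldest *use*
            victim = min(self.blocks, key=lambda t: self.blocks[t][key])
            del self.blocks[victim]
        self.blocks[tag] = {"policy": policy, "inserted": now, "used": now}
        return "miss"

s = CacheSet(ways=2)
s.access("A", "LRU")          # miss, fills A
s.access("B", "LRU")          # miss, fills B
s.access("A", "LRU")          # hit: A is now the most recently used
last = s.access("C", "LRR")   # miss: LRR evicts A, the oldest insertion
```

Note the divergence the abstract implies: under LRU the recent hit would have protected A and evicted B, but the LRR attribute on the final request evicts by insertion age instead.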

20-06-2013 publication date

Optimized execution of interleaved write operations in solid state drives

Number: US20130159626A1
Authors: Oren Golov, Shachar Katz
Assignee: Apple Inc

A method for data storage includes receiving a plurality of data items for storage in a memory, including at least first data items that are associated with a first data source and second data items that are associated with a second data source, such that the first and second data items are interleaved with one another over time. The first data items are de-interleaved from the second data items, by identifying a respective data source with which each received data item is associated. The de-interleaved first data items and the de-interleaved second data items are stored in the memory.
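De-interleaving by data source is essentially a keyed partition of the incoming stream; a sketch with invented (source_id, data) tuples:

```python
def deinterleave(items):
    """Split a time-interleaved stream of (source_id, data) items into
    per-source runs, preserving arrival order within each source, so
    each source's data can be stored contiguously."""
    by_source = {}
    for source_id, data in items:
        by_source.setdefault(source_id, []).append(data)
    return by_source

stream = [("s1", "a"), ("s2", "x"), ("s1", "b"), ("s2", "y"), ("s1", "c")]
runs = deinterleave(stream)
```

Storing each run contiguously rather than in arrival order is what lets the drive place one source's data together.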

27-06-2013 publication date

Management of Storage System Access Based on Client Performance and Cluster Health

Number: US20130166727A1
Assignee: SolidFire Inc

In one embodiment, a method includes determining a previous client performance value in terms of a performance metric for a volume in a storage system. The previous client performance value is related to previous access for a client to the volume. Also, the storage system is storing data for a plurality of volumes where data for each of the plurality of volumes is striped substantially evenly across drives of the storage system. The method applies criteria to the previous performance value to determine a target performance value. Performance of the client with respect to access to the volume is regulated in terms of the performance metric based on the target performance value.

27-06-2013 publication date

DESTAGING OF WRITE AHEAD DATA SET TRACKS

Number: US20130166837A1

Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). 1. A method for efficient destaging of a write ahead data set (WADS) track in a volume by a processor device in a computing storage environment, comprising: preventing destage of a plurality of tracks in cache selected for writing to a storage device; and for a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). 2. The method of claim 1, further including prestaging data to the plurality of tracks such that the stride Z includes complete tracks, enabling subsequent destage of complete WADS tracks. 3. The method of claim 1, further including incrementing the at least one temporal bit. 4. The method of claim 1, further including taking a track access to the WADS track and completing a write operation on the WADS track. 5. The method of claim 1, further including ending a track access to the WADS track upon a completion of a write operation and adding the WADS track to a wise order writing (WOW) list. 6. The method of claim 5, further including checking the WOW list and examining a left neighbor and a right ...

27-06-2013 publication date

VIRTUAL COMPUTER SYSTEM, VIRTUAL COMPUTER CONTROL METHOD, VIRTUAL COMPUTER CONTROL PROGRAM, RECORDING MEDIUM, AND INTEGRATED CIRCUIT

Number: US20130166848A1
Assignee:

A virtual machine system comprises: a processor for executing a secure operating system and a normal operating system; and a cache memory. The cache memory stores data in a manner that allows for identification of whether the data has been read from a secure storage area of an external main memory. The cache memory writes back data to the main memory in a manner that reduces the number of times data is intermittently written back to the secure storage area, which occurs when the processor is executing the normal operating system.

1. A virtual machine system including: a processor having a first mode and a second mode; a first operating system executed by the processor in the first mode; and a second operating system executed by the processor in the second mode, the virtual machine system comprising: a write control unit configured to permit writing of data into a predetermined secure storage area in an external main memory only when the processor is in the first mode; and a cache memory having a plurality of ways for storing data read by the processor from the main memory, the cache memory including: a data storage unit configured to, when the processor has read data from the main memory, store the data into any of the plurality of ways that is ready to newly store data, in a manner that allows for identification of whether the data has been read from the secure storage area; and a write-back unit configured to identify whether data has been read from the secure storage area, and write back data stored in at least one of the ways to the main memory with use of a predetermined algorithm according to a result of the identification, the writing back being performed in a manner that reduces the number of times data stored in each of the ways is intermittently written back to the secure storage area, the writing back to the secure storage area occurring when the processor executing the second operating system accesses the main memory, and being for causing the at least one
...
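The write-back unit's behavior can be illustrated with a small victim-selection sketch: while the normal OS runs, prefer evicting ways holding non-secure data so that secure lines are not intermittently written back. All names here (`select_victim`, the `'secure'` and `'lru'` fields) are assumptions; the patent specifies no code.

```python
# Illustrative victim selection for the write-back unit described above.
# Function and field names (select_victim, 'secure', 'lru') are assumed.

def select_victim(ways, mode):
    """Pick the cache way to evict.

    ways: list of dicts like {'secure': bool, 'lru': int}, where 'secure'
          marks data read from the secure storage area and a smaller 'lru'
          value means least recently used.
    mode: 'secure' or 'normal', the processor's current execution mode.
    """
    candidates = ways
    if mode == 'normal':
        # While the normal OS runs, prefer non-secure ways so secure lines
        # are not intermittently written back to the secure storage area.
        non_secure = [w for w in ways if not w['secure']]
        if non_secure:
            candidates = non_secure
    # Fall back to plain LRU within the chosen candidate set.
    return min(candidates, key=lambda w: w['lru'])

ways = [{'secure': True, 'lru': 0}, {'secure': False, 'lru': 5}]
victim = select_victim(ways, 'normal')
```

Even though the secure way is older by LRU, in normal mode the non-secure way is chosen, which matches the claimed goal of reducing write-backs to the secure area.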

11-07-2013 publication date

DISK ARRAY APPARATUS

Number: US20130179595A1
Assignee: Hitachi, Ltd.

A disk array apparatus using SAS can transfer data without lowering transfer efficiency even when the rates of the physical links connected to the controller and the storage devices differ. A plurality of HDDs are connected to the controller through an expander, and data are transferred from the controller to the expander and then to the HDDs. The controller and the expander transfer a set of data over a plurality of HDD-side physical links: the controller-side physical link integrates and multiplexes the transfer data, and the plurality of HDD-side physical links separate the data and transfer it in parallel.

1. A storage system comprising: a storage controller; an expander device coupled to the storage controller; and a plurality of storage devices, each of the plurality of storage devices coupled to the expander device via each of a plurality of physical links, wherein the expander device is configured to: recognize a transmission rate of each of the plurality of physical links; store routing information including addresses of the plurality of storage devices; and send information including the transmission rate of each of the plurality of physical links to the storage controller according to a command from the storage controller.
2. A storage system according to claim 1, wherein the expander device collects the information from the plurality of storage devices.
3. A storage system according to claim 2, wherein the information includes whether or not the storage device is connected with any one of a plurality of ports of the expander device.
4. A storage system according to claim 1, wherein the command is based on SMP (Serial Management Protocol).
5. A storage system according to claim 1, wherein the storage controller executes a discovery according to the information sent from the expander device.
6. A storage system according to claim 1, wherein the routing information includes an expander address which ...
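The rate-matching idea in the abstract, one fast controller-side link carrying interleaved data for several slower HDD-side links, can be sketched as frame interleaving. Interleaving is my reading of "integrates and multiplexes"; the function names and frame format are illustrative, and a real expander would tag frames with their destination rather than rely on position.

```python
# Hypothetical sketch of rate matching: frames bound for several slow
# drive-side links are interleaved on one fast controller-side link,
# then separated again at the expander. Names are illustrative.
from itertools import zip_longest

def multiplex(streams):
    """Controller side: interleave frames from several drive-bound streams
    onto a single fast physical link."""
    link = []
    for frames in zip_longest(*streams):
        link.extend(f for f in frames if f is not None)
    return link

def demultiplex(link, n):
    """Expander side: separate the interleaved frames back onto n
    drive-side physical links by round-robin position."""
    return [link[i::n] for i in range(n)]

streams = [["a0", "a1"], ["b0", "b1"]]   # data for two slow drive links
link = multiplex(streams)                # one fast controller-side link
```

The round trip only holds here because the per-link streams have equal length; that simplification is what the destination tagging of a real implementation would remove.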
