Total found: 1501. Displayed: 195.
25-09-2013 publication date

Object caching for mobile data communication with mobility management

Number: GB0002500373A

Method and system are provided for object caching with mobility management for mobile data communication. The method may include: intercepting and snooping data communications at a base station between a user equipment and a content server 501 without terminating communications; implementing object caching at the base station using snooped data communications 502; implementing object caching at an object cache server in the network 503, wherein the object cache server proxies communications to the content server from the user equipment; and maintaining synchrony between an object cache at the base station and an object cache at the object cache server 504. An object cache may be maintained at each base station in the mobile network which is consistent with an object cache at the object cache server. The object cache at each base station may be of fixed size, with the object cache at the object cache server of a size equal to the sum of all the base station object caches, and the object ...
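To make the two-tier arrangement concrete, here is a minimal Python sketch of a fixed-size base-station cache kept in sync with a network-side object cache sized to the sum of the base-station caches. The class names, the LRU policy, and the capacities are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of the synchronized two-tier object cache; names, LRU policy,
# and capacities are assumptions.
from collections import OrderedDict

class BaseStationCache:
    """Fixed-size LRU object cache held at one base station."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.objects = OrderedDict()              # URL -> cached object

    def put(self, url, obj):
        self.objects[url] = obj
        self.objects.move_to_end(url)
        while len(self.objects) > self.capacity:
            self.objects.popitem(last=False)      # evict least recently used

class ObjectCacheServer:
    """Network-side cache sized to the sum of all base-station caches."""
    def __init__(self, stations):
        self.stations = stations
        self.capacity = sum(s.capacity for s in stations)
        self.objects = OrderedDict()

    def observe(self, station, url, obj):
        """Mirror an object snooped at a base station so both tiers stay in sync."""
        station.put(url, obj)
        self.objects[url] = obj
        self.objects.move_to_end(url)
        while len(self.objects) > self.capacity:
            self.objects.popitem(last=False)

stations = [BaseStationCache(2), BaseStationCache(2)]
server = ObjectCacheServer(stations)
server.observe(stations[0], "http://cdn.example/a", b"payload-a")
assert "http://cdn.example/a" in server.objects
assert "http://cdn.example/a" in stations[0].objects
```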

15-10-2014 publication date

Object caching for mobile data communication with mobility management

Number: GB0201415284D0

11-01-2023 publication date

Circuitry and method

Number: GB0002608734A

Circuitry comprises a set of data handling nodes comprising: two or more master nodes each having respective storage circuitry to hold copies of data items from a main memory, each copy of a data item being associated with indicator information to indicate a coherency state of the respective copy, the indicator information being configured to indicate at least whether that copy has been updated more recently than the data item held by the main memory; a home node to serialise data access operations and to control coherency amongst data items held by the set of data handling nodes so that data written to a memory address is consistent with data read from that memory address in response to a subsequent access request; and one or more slave nodes including the main memory; in which: a requesting node of the set of data handling nodes is configured to communicate a conditional request to a target node of the set of data handling nodes in respect of a copy of a given data item at a given memory ...

15-12-1997 publication date

MULTIPROCESSOR SYSTEM WITH HIERARCHICAL CACHE ARRANGEMENT

Number: AT0000160454T

23-09-2003 publication date

SYSTEM FOR ACCESSING DISTRIBUTED DATA CACHE CHANNEL AT EACH NETWORK NODE TO PASS REQUESTS AND DATA

Number: CA0002136727C
Assignee: PITTS, WILLIAM M., PITTS WILLIAM M

Network Distributed Caches ("NDCs") (50) permit accessing a named dataset stored at an NDC server terminator site (22) in response to a request submitted to an NDC client terminator site (24) by a client workstation (42). In accessing the dataset, the NDCs (50) form an NDC data conduit (62) that provides an active virtual circuit ("AVC") from the NDC client site (24) through intermediate NDC sites (26B, 26A) to the NDC server site (22). Through the AVC provided by the conduit (62), the NDC sites (22, 26A and 26B) project an image of the requested portion of the named dataset into the NDC client site (24). The NDCs (50) maintain absolute consistency between the source dataset and its projections at all NDC client terminator sites (24, 204B and 206) at which client workstations access the dataset. Channels (116) in each NDC (50) accumulate profiling data from the requests to access the dataset for which they have been claimed. The NDCs (50) use the profile data stored in channels (116 ...

14-07-2017 publication date

Data stream storage management method and system suitable for big data environments

Number: CN0104050100B

16-04-2015 publication date

SYSTEM AND METHOD FOR MANAGING CACHE COHERENCE IN A NETWORK OF PROCESSORS PROVIDED WITH CACHE MEMORIES

Number: US2015106571A1

A cache coherence management system includes: a set of directories distributed between nodes of a network for interconnecting processors including cache memories, each directory including a correspondence table between cache lines and information fields on the cache lines; and a mechanism updating the directories by adding, modifying, or deleting cache lines in the correspondence tables. In each correspondence table and for each cache line identified, at least one field is provided for indicating a possible blocking of a transaction relative to the cache line considered, when the blocking occurs in the node associated with the correspondence table considered. The system further includes a mechanism detecting fields indicating a transaction blocking and restarting each transaction detected as blocked from the node in which it is indicated as blocked.
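A rough model of the directory mechanism described here: each correspondence-table entry carries a field marking a blocked transaction, and a scan pass restarts whatever is marked. All names and the restart callback are assumptions.

```python
# Illustrative model of the per-line "blocked transaction" field and the
# detect-and-restart scan; names are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DirectoryEntry:
    sharers: set = field(default_factory=set)   # nodes holding the cache line
    blocked_txn: Optional[int] = None           # id of a transaction stalled here

class NodeDirectory:
    def __init__(self):
        self.table = {}                          # cache-line address -> entry

    def mark_blocked(self, line, txn_id):
        self.table.setdefault(line, DirectoryEntry()).blocked_txn = txn_id

    def restart_blocked(self, restart):
        """Scan the correspondence table and restart every blocked transaction."""
        for line, entry in self.table.items():
            if entry.blocked_txn is not None:
                restart(entry.blocked_txn, line)
                entry.blocked_txn = None

d = NodeDirectory()
d.mark_blocked(line=0x80, txn_id=7)
d.restart_blocked(lambda txn, line: print(f"restart txn {txn} on line {hex(line)}"))
```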

01-06-2017 publication date

ANONYMIZED NETWORK ADDRESSING IN CONTENT DELIVERY NETWORKS

Number: US20170153980A1

Systems, methods, apparatuses, and software for a content delivery network that caches content for delivery to end user devices are presented. In one example, a content delivery network (CDN) is presented having a plurality of cache nodes that cache content for delivery to end user devices. The CDN includes an anonymization node configured to establish anonymized network addresses for transfer of content to cache nodes from one or more origin servers that store the content before caching by the CDN. The anonymization node is configured to provide indications of relationships between the anonymized network addresses and the cache nodes to a routing node of the CDN. The routing node is configured to route the content transferred by the one or more origin servers responsive to content requests of the cache nodes based on the indications of the relationships between the anonymized network addresses and the cache nodes.

23-05-2017 publication date

Cache resource manager

Number: US0009658959B2
Assignee: PernixData, Inc., PERNIXDATA INC

A resource manager directs cache operating states of virtual machines based on cache resource latency and by distinguishing between latencies in flash memory and latencies in network communications and by distinguishing between executing read commands and executing different types of write commands. As a result, the resource manager can downgrade the cache operating state of the virtual machines differently based on the type of latency being experienced and the type of command being performed. The resource manager can upgrade and/or reset the cache operating state of the virtual machines, when appropriate, and can give priority to some virtual machines over other virtual machines when operating in a downgraded cache operating state.
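The downgrade policy can be pictured roughly as below; the operating states, the threshold, and the rule that network write latency steps write-back down to write-through are invented for illustration only.

```python
# Invented illustration of a latency-type-aware downgrade policy; states and
# transitions are assumptions, not the patent's rules.
from enum import Enum

class CacheState(Enum):
    DISABLED = 0
    READ_ONLY = 1       # cache serves reads only
    WRITE_THROUGH = 2   # writes go to both the cache and the backing store
    WRITE_BACK = 3      # full caching

def next_state(state, latency_source, op, latency_ms):
    """Downgrade differently depending on where latency appears and the op type."""
    if latency_ms < 5:
        return state                       # healthy: keep the current state
    if latency_source == "network" and op == "write":
        # network congestion on writes: stop absorbing writes locally
        return CacheState.WRITE_THROUGH if state is CacheState.WRITE_BACK else state
    if latency_source == "flash":
        return CacheState(max(state.value - 1, 0))   # step down one level
    return state

print(next_state(CacheState.WRITE_BACK, "network", "write", 12))  # WRITE_THROUGH
print(next_state(CacheState.WRITE_BACK, "flash", "read", 12))     # WRITE_THROUGH
```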

05-06-2001 publication date

Data-processing system with CC-NUMA (cache-coherent, non-uniform memory access) architecture and remote cache incorporated in local memory

Number: US0006243794B1

A data-processing system with cc-NUMA architecture including a plurality of nodes each constituted by at least one processor intercommunicating with a DRAM-technology local memory using a local bus, the nodes intercommunicating using remote interface bridges and at least one intercommunication ring. The at least one processor has access to a system memory space defined by memory addresses. Each node includes a unit for configuring the local memory, for uniquely mapping a first portion of the system memory space, which is different for each node, onto a portion of the local memory and for mapping the portion of the system memory space which as a whole is uniquely mapped onto a portion of the local memory of all the other nodes onto the remaining portion of the local memory, and an SRAM-technology memory for storing labels, each associated with a block of data stored in the remaining portion of local memory and each comprising an index identifying the block and bits indicating a coherence state ...

05-09-2017 publication date

System and method for improving virtual media redirection speed in baseboard management controller (BMC)

Number: US0009756143B2

Certain aspects of the disclosure relates to a system and method of performing virtual media redirection. The system includes a baseboard management controller (BMC) connected to a host computing device through a communication interface, and a client computing device communicatively connected to the BMC through a network. In operation, the BMC emulates a virtual media for a media device, and establishes a virtual media connection to the client computing device through the network. Then the BMC stores the data from the media device in a host cache at the BMC and in a client cache at the client computing device by sectors. When the BMC receives a request from the host computing device through the communication interface to retrieve sectors from the media device, the BMC redirects the sectors being requested to the host computing device depending on where the requested sectors are stored.

22-09-2016 publication date

CACHE AND NON-CACHE USAGE IN A DISTRIBUTED STORAGE SYSTEM

Number: US20160274806A1

According to one configuration, upon receiving data, a respective node in a distributed storage system produces metadata based on the received data. The generated metadata indicates whether or not to bypass storage of the received data in the cache storage resource and store the received data in the non-cache storage resource of the repository. Data storage control logic uses the metadata to control how the received data is stored. A state of the metadata can indicate to prevent storage of the received data in a corresponding cache resource associated with the respective storage node. Thus, the generated metadata can provide guidance to corresponding data storage control logic whether to store the received data in a cache storage resource or non-cache storage resource.
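A minimal sketch of metadata-guided placement, assuming a simple size heuristic as the bypass signal (the abstract leaves the metadata derivation open):

```python
# Minimal sketch of metadata-guided placement; the size heuristic standing in
# for the metadata derivation is an assumption.
def make_metadata(data: bytes) -> dict:
    # assume large sequential writes gain little from the cache tier
    return {"bypass_cache": len(data) > 64 * 1024}

def store(key: str, data: bytes, cache: dict, repository: dict) -> None:
    meta = make_metadata(data)
    repository[key] = data        # the non-cache storage resource always gets the data
    if not meta["bypass_cache"]:
        cache[key] = data         # cache only when the metadata allows it

cache, repo = {}, {}
store("big-object", b"x" * 128 * 1024, cache, repo)
store("small-object", b"hot row", cache, repo)
assert "big-object" not in cache and "small-object" in cache
```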

17-12-2019 publication date

Systems and methods for reconstructing cache loss

Number: US0010509724B2

Implementations of this disclosure are directed to systems, methods, and media for assessing the status of data stored in distributed, cached databases. The approach includes retrieving, from a data cache, variables that include a cache loss indicator and a non-null value. The variables are analyzed to determine a state of the cache loss indicator. If the cache loss indicator indicates an intentional cache loss state, the cache loss indicator is removed and the non-null value is provided to an application. Otherwise, a cache restore process is initiated.
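The assessment flow reads naturally as a small function; the tuple layout, the sentinel value, and the restore hook below are assumptions:

```python
# Sketch of the cache-loss assessment flow; tuple layout, sentinel, and
# restore hook are assumptions for illustration.
INTENTIONAL_LOSS = "intentional"

def read_with_loss_check(cache, key, restore):
    indicator, value = cache.get(key, (None, None))
    if indicator == INTENTIONAL_LOSS:
        cache[key] = (None, value)    # remove the cache loss indicator
        return value                  # provide the non-null value to the application
    if indicator is not None:         # any other loss state: rebuild the cache
        return restore(key)
    return value if value is not None else restore(key)

cache = {"k": (INTENTIONAL_LOSS, 42)}
print(read_with_loss_check(cache, "k", restore=lambda k: 0))   # -> 42
print(cache["k"])                                              # -> (None, 42)
```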

06-02-2018 publication date

Cache memory bypass in a multi-core processor (MCP)

Number: US0009886389B2

This invention describes an apparatus, computer architecture, memory structure, memory control, and cache memory operation method for a multi-core processor. A logic core bypasses immediate cache memory units with low yield or deadly performance. The core mounts (multiple) cache unit(s) that might already be in use by other logic cores. Selected cache memory units serve multiple logic cores with the same contents. The shared cache memory unit(s) serve all the mounting cores with cache search, hit, miss, and write-back functions. The method recovers a logic core whose cache memory block is not operational by sharing cache memory blocks that might already engage other logic cores. The method is used to improve reliability and performance of the remaining system.

16-03-2021 publication date

Anonymized network addressing in content delivery networks

Number: US0010949349B2
Assignee: Fastly, Inc., FASTLY INC

Systems, methods, apparatuses, and software for a content delivery network that caches content for delivery to end user devices are presented. In one example, a content delivery network (CDN) is presented having a plurality of cache nodes that cache content for delivery to end user devices. The CDN includes an anonymization node configured to establish anonymized network addresses for transfer of content to cache nodes from one or more origin servers that store the content before caching by the CDN. The anonymization node is configured to provide indications of relationships between the anonymized network addresses and the cache nodes to a routing node of the CDN. The routing node is configured to route the content transferred by the one or more origin servers responsive to content requests of the cache nodes based on the indications of the relationships between the anonymized network addresses and the cache nodes.

04-11-2021 publication date

PROVIDING PROCESS DATA TO A DATA RECORDER

Number: US20210342461A1
Author: Richard S. Teal

A kernel driver on an endpoint uses a process cache to provide a stream of events associated with processes on the endpoint to a data recorder. The process cache can usefully provide related information about processes such as a name, type or path for the process to the data recorder through the kernel driver. Where a tamper protection cache or similarly secured repository is available, this secure information may also be provided to the data recorder for use in threat detection, forensic analysis and so forth.

19-04-2018 publication date

COMPUTER CLUSTER SYSTEM

Number: US20180107599A1
Author: Ke-Chun CHUANG

A method for data transmission within a server that includes a processor, a main memory, a southbridge, a chipset, and a buffer, the chipset including a baseboard management controller (BMC), the method including: obtaining memory information about a segment of peripheral memory allocated for a peripheral controller included in the chipset; transmitting a notifying command to the BMC indicating a data size of to-be-transmitted data associated with a booting operation of the server; transmitting at least a part of the to-be-transmitted data to the segment, according to the memory information; and transmitting a standby command to the BMC indicating that the part of the to-be-transmitted data has been stored in the segment.

07-12-2017 publication date

Caching Framework for Big-Data Engines in the Cloud

Number: US20170351620A1

The present invention is generally directed to a caching framework that provides a common abstraction across one or more big data engines, comprising a cache filesystem including a cache filesystem interface used by applications to access cloud storage through a cache subsystem, the cache filesystem interface in communication with a big data engine extension and a cache manager; the big data engine extension, providing cluster information to the cache filesystem and working with the cache filesystem interface to determine which nodes cache which part of a file; and a cache manager for maintaining metadata about the cache, the metadata comprising the status of blocks for each file. The invention may provide a common abstraction across big data engines that does not require changes to the infrastructure setup or user workloads, allows sharing of cached data, caches only the parts of files that are required, and can process columnar formats.

09-11-2017 publication date

I/O BLENDER COUNTERMEASURES

Number: US20170322882A1
Assignee: Dell Products L.P.

A cache storage method includes providing a storage cache cluster, comprising a plurality of cache storage elements, for caching I/O operations from a plurality of virtual machines associated with a corresponding plurality of virtual hard disks mapped to a logical storage area network volume or LUN. Responsive to a cache flush signal, flush write back operations are performed to flush modified cache blocks to achieve or preserve coherency. The flush write back operations may include accessing current time data indicative of a current time, determining a current time window in accordance with the current time, determining a duration of the current time window, and identifying a current cache storage element corresponding to the current time window. For a duration of the current time window, only those write back blocks stored in the current cache storage element are flushed. In addition, the applicable write back blocks are flushed in accordance with logical block address information associated ...

06-07-2017 publication date

INTELLIGENT SLICE CACHING

Number: US20170192898A1

Systems and methods for intelligent slice caching in a dispersed storage network. The methods include determining a minimum slice access rate for encoded data slices to be stored, determining a least access rate of a least accessed encoded data slice stored, determining an estimated access rate for an encoded data slice and determining whether to store the encoded data slice in small fast memory as a cached encoded data slice based on the minimum slice access rate, the least access rate, and the estimated access rate. The method further includes facilitating storage of the encoded data slice in small fast memory. The method may also include updating the minimum slice access rate and transferring an encoded data slice stored in small fast memory to large slow memory when an actual access rate is less than the minimum slice access rate or is less than the least access rate.
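The three rates combine into a simple admission and demotion test; the example below is a hedged sketch with made-up rate values:

```python
# Hedged sketch of the three-rate admission and demotion tests; rate values
# and function names are invented.
def should_cache(estimated_rate, min_slice_rate, least_access_rate):
    """Admit a slice to small fast memory only if it is expected to out-earn
    both the configured floor and the coldest slice already cached."""
    return estimated_rate >= min_slice_rate and estimated_rate > least_access_rate

def should_demote(actual_rate, min_slice_rate, least_access_rate):
    """Transfer a cached slice back to large slow memory when it underperforms."""
    return actual_rate < min_slice_rate or actual_rate < least_access_rate

print(should_cache(12.0, min_slice_rate=5.0, least_access_rate=8.0))   # True
print(should_demote(3.0, min_slice_rate=5.0, least_access_rate=8.0))   # True
```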

17-03-2020 publication date

Memory access optimization for an I/O adapter in a processor complex

Number: US0010592451B2

An aspect includes memory access optimization for an I/O adapter in a processor complex. A memory block distance between the I/O adapter and a memory block location in the processor complex is determined, and one or more memory movement type criteria between the I/O adapter and the memory block location are determined based on the memory block distance. A memory movement operation type is selected based on a memory movement process parameter and the one or more memory movement type criteria. A memory movement process is initiated between the I/O adapter and the memory block location using the memory movement operation type.

18-10-2022 publication date

Gateway fabric ports

Number: US0011477050B2
Assignee: Graphcore Limited

A gateway for interfacing a host with a subsystem for acting as a work accelerator to the host. The gateway enables the transfer of batches of data to the subsystem at precompiled data exchange synchronisation points. The gateway acts to route data between accelerators which are connected in a scaled system of multiple gateways and accelerators using a global address space set up at compile time of an application to run on the computer system.

08-11-2022 publication date

Controller with caching and non-caching modes

Number: US0011494224B2

An apparatus includes a CPU core, a first cache subsystem coupled to the CPU core, and a second memory coupled to the cache subsystem. The first cache subsystem includes a configuration register, a first memory, and a controller. The controller is configured to: receive a request directed to an address in the second memory and, in response to the configuration register having a first value, operate in a non-caching mode. In the non-caching mode, the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory. In response to the configuration register having a second value, the controller is configured to operate in a caching mode. In the caching mode the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.
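A toy model of the register-driven mode switch; the register encoding and the dict-backed memories are simplifications:

```python
# Toy model of the caching/non-caching mode switch; register encoding and
# dict-backed memories are simplifications.
NON_CACHING, CACHING = 0, 1

class CacheController:
    def __init__(self):
        self.config_register = CACHING
        self.first_memory = {}                   # the cache

    def read(self, addr, second_memory):
        if self.config_register == NON_CACHING:
            return second_memory[addr]           # pass through without caching
        if addr not in self.first_memory:        # caching mode: fill on miss
            self.first_memory[addr] = second_memory[addr]
        return self.first_memory[addr]

ctrl = CacheController()
mem = {0x10: 99}
ctrl.config_register = NON_CACHING
assert ctrl.read(0x10, mem) == 99 and 0x10 not in ctrl.first_memory
ctrl.config_register = CACHING
assert ctrl.read(0x10, mem) == 99 and 0x10 in ctrl.first_memory
```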

28-03-2023 publication date

Data through gateway

Number: US0011615038B2
Assignee: Graphcore Limited

A gateway for use in a computing system to interface a host with the subsystem for acting as a work accelerator to the host, the gateway having: an accelerator interface for connection to the subsystem to enable transfer of batches of data between the subsystem and the gateway; a data connection interface for connection to external storage for exchanging data between the gateway and storage; a gateway interface for connection to at least one second gateway; a memory interface connected to a local memory associated with the gateway; and a streaming engine for controlling the streaming of batches of data into and out of the gateway in response to pre-compiled data exchange synchronisation points attained by the subsystem, wherein the streaming of batches of data is selectively via at least one of the accelerator interface, data connection interface, gateway interface and memory interface.

17-10-2023 publication date

Real-time message delivery and update service in a proxy server network

Number: US0011792295B2
Assignee: Akamai Technologies, Inc.

This patent document describes technology for providing real-time messaging and entity update services in a distributed proxy server network, such as a CDN. Uses include distributing real-time notifications about updates to data stored in and delivered by the network, with both high efficiency and locality of latency. The technology can be integrated into conventional caching proxy servers providing HTTP services, thereby leveraging their existing footprint in the Internet, their existing overlay network topologies and architectures, and their integration with existing traffic management components.

13-06-2024 publication date

Storage System and Method for Accessing Same

Number: US20240193084A1
Author: Sehat Sutardja

A data access system including a processor and a storage system including a main memory and a cache module. The cache module includes a FLC controller and a cache. The cache is configured as a FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. The processor generates, in response to data required by the processor not being in the levels of cache, a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address. The virtual address corresponds to a physical location within the FLC or the main memory. The cache module causes, in response to the virtual address not corresponding to the physical location within the FLC, the data required by the processor to be retrieved from the main memory.

27-06-2014 publication date

METHOD FOR OPTIMIZING CACHE MEMORY MANAGEMENT AND CORRESPONDING HARDWARE SYSTEM

Number: RU2012154297A

... 1. A cache memory management method implemented in a user receiver device, characterized in that it comprises: receiving (401) a request to add data to the cache memory; staged exclusion of adding data to the cache memory as the cache memory fill level rises, the staged exclusion of adding being determined, for each successive stage of the cache memory fill level, according to rules for excluding the addition of data to the cache memory that are increasingly restrictive. 2. The method according to claim 1, characterized in that the rules for excluding the addition of data to the cache memory are increasingly restrictive as a function of at least one of the data source and the data type. 3. The method according to either of claims 1 and 2, characterized in that it further comprises the step of: excluding (313) the addition of data to the cache memory if the cache memory fill level is higher than a first cache memory fill level stage (304), which is lower than the maximum cache memory fill level stage (305). 4. The method ...
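A toy sketch of the staged admission rules from these claims, with invented fill-level thresholds and rule predicates (the claims name no concrete stages):

```python
# Toy staged admission policy; thresholds and rule predicates are invented.
STAGES = [
    (0.70, lambda source, dtype: True),                       # below 70%: admit all
    (0.85, lambda source, dtype: source == "subscribed"),     # trusted sources only
    (0.95, lambda source, dtype: source == "subscribed" and dtype == "metadata"),
    (1.00, lambda source, dtype: False),                      # nearly full: admit none
]

def may_add(fill_level, source, dtype):
    """Apply the rule of the first stage whose threshold exceeds the fill level."""
    for threshold, rule in STAGES:
        if fill_level < threshold:
            return rule(source, dtype)
    return False

print(may_add(0.60, "adhoc", "content"))         # True
print(may_add(0.80, "adhoc", "content"))         # False: a stricter stage applies
print(may_add(0.90, "subscribed", "metadata"))   # True
```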

05-10-2011 publication date

Accelerator coherency port for multi-processor memory coherency

Number: GB0002479267A

A method for implementing multi processor memory coherency includes: a Level-2 (L2) cache of a first cluster receives a control signal of the first cluster for reading first data; the L2 cache of the first cluster reads the first data in a Level-1 (L1) cache of a second cluster through an Accelerator Coherency Port (ACP) of the L1 cache of the second cluster, if the first data is currently maintained by the second cluster, where the L2 cache of the first cluster is connected to the ACP of the L1 cache of the second cluster; and the L2 cache of the first cluster provides the first data read to the first cluster for processing. The invention is suitable for implementing memory coherency between clusters in the ARM Cortex (RTM) A9 architecture.

03-05-2023 publication date

A high-performance computing system

Number: GB0002612514A
Author: ROMAIN DOLBEAU [FR]

A high-performance computing system (HPC) comprises: - at least one computational group of at least one core (C1, C2, C3, C4, CD), each computational group being associated with a computational memory (CPM), arranged to form a computational resource (CR) being utilized for performing computations; - a concierge module (CC, CCD) comprising at least one concierge group of at least one core associated with a concierge memory (CCM) arranged to form a reserved support resource (SR) being utilized for performing support functions to said computational resource (CR); wherein the computational resource (CR) is coupled to the concierge module (CC, CCD) through a cache coherent interconnection (CCL) to maintain uniformity of shared resource data that are stored in the computational memory (CPM) and concierge memory (CCM) so that the high-performance computing system is functionally transparent to software code that runs on the computational group, and wherein the cores in the computation and concierge ...

15-02-2003 publication date

MULTIPROCESSOR SYSTEM WITH A VERY LARGE NUMBER OF MICROPROCESSORS

Number: AT0000232318T

05-06-2018 publication date

METHOD OF OPTIMIZATION OF CACHE MEMORY MANAGEMENT AND CORRESPONDING APPARATUS

Number: CA0002797435C
Assignee: THOMSON LICENSING

In order to optimize cache memory management, the invention proposes a method and corresponding apparatus that comprise the application of different cache memory management policies according to data origin, and possibly to data type, and the use of increasing levels of exclusion from adding data to the cache, the exclusion levels becoming increasingly restrictive as the cache memory fill level increases. Among other benefits, the method and device keep important information in cache memory and reduce the time spent swapping information into and out of cache memory.

02-10-2018 publication date

Coherence protocol tables

Number: CN0108614783A

10-12-2019 publication date

Efficient incremental backup and restoration of file system hierarchies with cloud object storage

Number: US0010503771B2

Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes. For example, by transforming a ZFS file system disk-based storage into ZFS cloud-based storage, the ZFS file system gains the elastic nature of cloud storage.

17-12-2019 publication date

Mutual exclusion in a non-coherent memory hierarchy

Number: US0010509740B2

Methods and systems for mutual exclusion in a non-coherent memory hierarchy may include a non-coherent memory system with a shared system memory. Multiple processors and a memory connect interface may be configured to provide an interface for the processors to the shared memory. The memory connect interface may include an arbiter for atomic memory operations from the processors. In response to an atomic memory operation, the arbiter may perform an atomic memory operation procedure including setting a busy flag for an address of the atomic memory operation, blocking subsequent memory operations from any of the processors to the address while the busy flag is set, issuing the atomic memory operation to the shared memory, and in response to an acknowledgement of the atomic memory operation from the shared memory, clearing the busy flag and allowing subsequent memory operations from the processors for the address to proceed to the shared memory.
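A single-threaded model of the busy-flag arbitration; the request/acknowledge split and the queue are modelling assumptions:

```python
# Single-threaded model of busy-flag arbitration for atomic memory operations;
# the request/acknowledge split and queue are assumptions.
from collections import deque

class AtomicArbiter:
    def __init__(self, shared_memory):
        self.mem = shared_memory
        self.busy = set()          # addresses with an atomic operation in flight
        self.pending = deque()     # operations blocked behind a busy flag

    def request(self, addr, value):
        """Issue an atomic add, or block it if the address is busy."""
        if addr in self.busy:
            self.pending.append((addr, value))
        else:
            self.busy.add(addr)                              # set the busy flag
            self.mem[addr] = self.mem.get(addr, 0) + value   # issue to shared memory

    def acknowledge(self, addr):
        """Shared memory acknowledged the atomic: clear the flag, replay one waiter."""
        self.busy.discard(addr)
        for i, (a, v) in enumerate(self.pending):
            if a == addr:
                del self.pending[i]
                self.request(a, v)
                break

mem = {}
arb = AtomicArbiter(mem)
arb.request(0x40, 1)      # issued immediately; busy flag set for 0x40
arb.request(0x40, 2)      # blocked: the address is still busy
arb.acknowledge(0x40)     # first op acked; the blocked op is replayed
arb.acknowledge(0x40)
assert mem[0x40] == 3
```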

01-09-2022 publication date

SYSTEMS AND METHODS FOR MANAGING DIGITAL RIGHTS

Number: US20220278851A1
Author: Ross Gilson

Systems and methods are described for managing digital rights. A transaction may be generated and may comprise an identifier and a decryption key. The decryption key may be configured to decrypt at least a portion of an encrypted content asset accessible by one or more user devices. The transaction may be caused to be stored in a distributed database.

05-05-2022 publication date

RESOLVING CACHE SLOT LOCKING CONFLICTS FOR REMOTE REPLICATION

Number: US20220138105A1
Assignee: EMC IP Holding Company LLC

Cache slots on a storage system may be shared between entities processing write operations for logical storage unit (LSU) tracks and entities performing remote replication for write operations for the LSU tracks. If a new write operation is received on a first storage system (S1) for a track of an LSU (R1) when the cache slot mapped to the R1 track is locked by a process currently transmitting data of the cache slot to a second storage system (S2), a new cache slot may be allocated to the R1 track, the data of the original cache slot copied to the new cache slot, and the new write operation for the R1 track initiated on S1 using the new cache slot; while the data of the original cache slot is independently, and perhaps concurrently, transmitted to S2 to be replicated in R2, the LSU on S2 that is paired with R1.

16-02-2023 publication date

Streaming Network Monitoring Caching Infrastructure

Number: US20230048726A1

Systems and methods of network telemetry caching and distribution are provided. The system can receive network telemetry data and store it as a plurality of data nodes. The system can maintain a node pointer map and a node pointer queue. If the system receives an update to a data node having a corresponding node pointer not already present in the node pointer map, the system can add the node pointer to the node pointer queue and to the node pointer map with a count of zero. If the node pointer is already present in the node pointer map, the system can increment the node count for the node pointer in the node pointer map and not add the node pointer to the node pointer queue. The system can transmit data values and node counts to the client device for each node pointer in the node pointer queue.
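The node-pointer bookkeeping translates almost directly into code; value storage and the flush format below are simplified assumptions:

```python
# Direct sketch of the node-pointer map and queue bookkeeping; value storage
# and the flush format are simplified assumptions.
class TelemetryCache:
    def __init__(self):
        self.values = {}       # node pointer -> latest data value
        self.node_map = {}     # node pointer -> update count since last flush
        self.node_queue = []   # order in which pointers first changed

    def update(self, pointer, value):
        self.values[pointer] = value
        if pointer not in self.node_map:
            self.node_map[pointer] = 0         # first update: count starts at zero
            self.node_queue.append(pointer)    # pointer enters the queue exactly once
        else:
            self.node_map[pointer] += 1        # repeat update: bump the count only

    def flush(self):
        """Send each queued pointer's value and count to the client, then reset."""
        batch = [(p, self.values[p], self.node_map[p]) for p in self.node_queue]
        self.node_map.clear()
        self.node_queue.clear()
        return batch

c = TelemetryCache()
c.update("/interfaces/eth0/rx", 100)
c.update("/interfaces/eth0/rx", 150)
print(c.flush())   # [('/interfaces/eth0/rx', 150, 1)]
```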

06-06-2023 publication date

Data prefetching method and apparatus

Number: US0011669453B2
Author: Tao Liu
Assignee: HUAWEI TECHNOLOGIES CO., LTD.

This application discloses a data prefetching method, including: receiving, by a home node, a write request sent by a first cache node after the first cache node processes received data; determining, by the home node, whether a second cache node needs to perform a data prefetching operation on the to-be-written data; and, when determining that the second cache node needs to perform a data prefetching operation on the to-be-written data, sending, by the home node, the to-be-written data to the second cache node. Embodiments of this application help improve the accuracy and certainty of a data prefetching time point, and reduce the data prefetching delay.

10-10-2016 publication date

METHOD FOR OPTIMIZING CACHE MEMORY MANAGEMENT AND CORRESPONDING HARDWARE SYSTEM

Number: RU2599537C2
Assignee: THOMSON LICENSING (FR)

The invention relates to computer engineering. The technical result is the optimization of cache memory management. A cache memory management method implemented in a receiver device, in which the addition of data to the cache memory is excluded in stages as the cache memory fill level rises, each successive stage representing a threshold value of the cache memory fill level and each successive stage containing a more restrictive rule for excluding the addition of data to the cache memory as the cache memory fill level rises. 2 independent and 12 dependent claims, 4 drawings.

14-11-2022 publication date

Device for finding the minimum placement intensity value in multiprocessor hypercube systems with directed information transfer

Number: RU2783489C1

The invention relates to the field of computer engineering. The technical result is the detection of the minimum placement intensity value in multiprocessor hypercube systems by the criterion of minimizing the intensity of interaction between processes and data. The technical result is achieved in that a minimum-value search block is introduced into a known device containing a first shift register, a second shift register, a permutation generation block, a read-only memory block, a best-variant storage block, a switch, an arithmetic logic unit, an arc selection decoder, a reversible cell counter, a random-access memory block, a first comparison element, a count-start trigger, a mode trigger, an arc counter, an arc blocking decoder, an arc number register, a minimum weight register, an electronic graph model, a group of OR elements, a group of AND elements, and a first delay element; the minimum-value search block contains a group of adders, a first group of vertex number registers, a second group of registers ...

04-01-2023 publication date

A data processing apparatus and method for handling stalled data

Number: GB0002608430A

There is provided a data processing apparatus and method. The data processing apparatus comprises a plurality of processing elements connected via a network arranged on a single chip to form a spatial architecture. Each processing element comprises processing circuitry to perform processing operations and memory control circuitry to perform data transfer operations and to issue data transfer requests for requested data to the network. The memory control circuitry is configured to monitor the network to retrieve the requested data from the network. Each processing element is further provided with local storage circuitry comprising a plurality of local storage sectors to store data associated with the processing operations, and auxiliary memory control circuitry to monitor the network to detect stalled data. The auxiliary memory control circuitry is configured to transfer the stalled data from the network to an auxiliary storage buffer dynamically selected from amongst the plurality of local ...

02-08-2018 publication date

Multi-level collaborative shared memory device and access method in a GPDSP

Number: CN0104699631B

16-08-2002 publication date

COHERENCE CONTROLLER FOR A MULTIPROCESSOR UNIT, MODULE, AND MULTIPROCESSOR UNIT WITH A MULTI-MODULE ARCHITECTURE INTEGRATING SUCH A CONTROLLER

Number: FR0002820850A1

The large-scale symmetric multi-module multiprocessor server comprises N identical multiprocessor modules 50, 51, 52, 53. Module 50 comprises a plurality of multiprocessors 60, 61, 62, 63 equipped with a cache memory, and at least one main memory, connected to a coherence controller 64 comprising an external port 99 connected to at least one of the multiprocessor modules 51, 52, 53 external to module 50, and a cache filtering directory 84 SF/ED intended to ensure coherence between the mass memory and the cache memories of the modules, the cache filtering directory 84 comprising a local presence vector 86 keeping track of the memory lines or blocks copied into the cache memories of module 50, and an extension 88 keeping track of the coordinates of the memory lines or blocks copied from the local module 50 to an external module 51, 52, 53.

01-09-2020 publication date

Managing replica caching in a distributed storage system

Number: US0010764389B2
Assignee: Intel Corporation

Technologies for managing replica caching in a distributed storage system include a storage manager device. The storage manager device is configured to receive a data write request to store replicas of data. Additionally, the storage manager device is configured to designate one of the replicas as a primary replica, select a first storage node to store the primary replica of the data in a cache storage and at least a second storage node to store a non-primary replica of the data in a non-cache storage. The storage manager device is further configured to include a hint in a first replication request to the first storage node that the data is to be stored in the cache storage of the first storage node as the primary replica. Further, the storage manager device is configured to transmit replication requests to the respective storage nodes. Other embodiments are described and claimed.

24-03-2022 publication date

MECHANISM TO EFFICIENTLY RINSE MEMORY-SIDE CACHE OF DIRTY DATA

Number: US20220091991A1

A method includes, in response to each write request of a plurality of write requests received at a memory-side cache device coupled with a memory device, writing payload data specified by the write request to the memory-side cache device, and when a first bandwidth availability condition is satisfied, performing a cache write-through by writing the payload data to the memory device, and recording an indication that the payload data written to the memory-side cache device matches the payload data written to the memory device.

1. A method, comprising: in response to each write request of a plurality of write requests received at a memory-side cache device coupled with a memory device, writing payload data specified by the write request to the memory-side cache device, and, when a first bandwidth availability condition is satisfied, performing a cache write-through by: writing the payload data to the memory device, and recording an indication that the payload data written to the memory-side cache device matches the payload data written to the memory device.
2. The method of claim 1, wherein for each write request of the plurality of write requests: writing the payload data specified by the write request comprises storing the data in an entry of the memory-side cache device; and recording the indication comprises deasserting a dirty bit in a tag associated with the entry.
3. The method of claim 1, further comprising: receiving the plurality of write requests at the memory-side cache according to a write sequence, wherein writing the payload data to the memory device is performed in an order corresponding to the write sequence.
4. The method of claim 1, further comprising: in response to each read request ..., updating the backing data to match the cached data; and recording an indication that the cached data in the memory-side cache matches the backing data in the memory device.
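A compact model of the bandwidth-gated write-through with the dirty-bit bookkeeping from claims 1 and 2; the bandwidth probe is a stand-in:

```python
# Compact model of bandwidth-gated write-through with dirty-bit bookkeeping;
# the bandwidth probe is a stand-in assumption.
class MemorySideCache:
    def __init__(self, memory, bandwidth_free):
        self.memory = memory                   # the backing memory device
        self.entries = {}                      # addr -> (payload, dirty_bit)
        self.bandwidth_free = bandwidth_free   # callable: spare bandwidth available?

    def write(self, addr, payload):
        if self.bandwidth_free():
            self.memory[addr] = payload             # write-through to the memory device
            self.entries[addr] = (payload, False)   # deassert dirty bit: copies match
        else:
            self.entries[addr] = (payload, True)    # cache only; line remains dirty

mem = {}
cache = MemorySideCache(mem, bandwidth_free=lambda: True)
cache.write(0x100, b"row")
assert mem[0x100] == b"row" and cache.entries[0x100][1] is False
```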

08-11-2018 publication date

ASYNCHRONOUS DATA STORE OPERATIONS

Number: US20180322055A1

A processing system, a server, and methods by which the server provides data values. The server maintains a cache of objects and executes an asynchronous computation to determine the value of an object. In response to a request for the object before the asynchronous computation has determined its value, a value of the object is returned from the cache. In response to a request for the object after the asynchronous computation has determined its value, the value determined by the asynchronous computation is returned.

06-10-2020 publication date

Method and apparatus for performing error handling operations using error signals

Number: US0010795755B2
Assignee: INTEL CORPORATION, INTEL CORP

Provided are a method and apparatus for performing error handling operations using error signals. A first error signal is asserted on an error pin on a bus to signal to a host memory controller that error handling operations are being performed by a memory module controller in response to detecting an error. Error handling operations are performed to return the bus to an initial state in response to detecting the error. A second error signal is asserted on the error pin on the bus to signal that error handling operations have completed and the bus is returned to the initial state.

15-08-2017 publication date

Scale-out non-uniform memory access

Number: US0009734063B2

A computing system that uses a Scale-Out NUMA (“soNUMA”) architecture, programming model, and/or communication protocol provides for low-latency, distributed in-memory processing. Using soNUMA, a programming model is layered directly on top of a NUMA memory fabric via a stateless messaging protocol. To facilitate interactions between the application, OS, and the fabric, soNUMA uses a remote memory controller—an architecturally-exposed hardware block integrated into the node's local coherence hierarchy.

22-09-2020 publication date

Server side data cache system

Number: US0010785322B2
Assignee: PayPal, Inc., PAYPAL INC

In an example embodiment, a system and method to store and retrieve application data from a database are provided. In an example embodiment, location data comprising a database identifier is received. A location of a database is derived based on the database identifier, the database being one of a plurality of databases, each database of the plurality of databases comprising application data, and application data is requested from the database based on the derived location.

10-02-2015 publication date

Fault tolerance of multi-processor system with distributed cache

Number: US8954790B2

A semiconductor chip is described having different instances of cache agent logic circuitry for respective cache slices of a distributed cache. The semiconductor chip further includes hash engine logic circuitry comprising: hash logic circuitry to determine, based on an address, that a particular one of the cache slices is to receive a request having the address, and, a first input to receive notice of a failure event for the particular cache slice. The semiconductor chip also includes first circuitry to assign the address to another cache slice of the cache slices in response to the notice.

30-01-2018 publication date

Cache management in a multi-threaded environment

Number: US0009880943B2
Assignee: Facebook, Inc., FACEBOOK INC

Disclosed here are methods, systems, paradigms and structures for deleting shared resources from a cache in a multi-threaded system. The shared resources can be used by a plurality of requests belonging to multiple threads executing in the system. When requests, such as requests for executing script code, and work items, such as work items for deleting a shared resource, are created, a global sequence number is assigned to each of them. The sequence number indicates the order in which the requests and work items are created. A particular work item can be executed to delete the shared resource if there are no requests having a sequence number lesser than that of the particular work item executing in the system. However, if there is at least one request with a sequence number lesser than that of the particular work item executing, the work item is ignored until the request completes executing.
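The sequence-number gate can be sketched as follows; the container choices and method names are assumptions:

```python
# Sketch of the global-sequence gate for deleting shared resources; containers
# and method names are assumptions, the rule follows the abstract.
import itertools

class SharedResourceGC:
    def __init__(self):
        self.seq = itertools.count(1)     # global sequence number generator
        self.active = {}                  # request id -> its sequence number

    def start_request(self):
        n = next(self.seq)
        self.active[n] = n
        return n

    def finish_request(self, n):
        del self.active[n]

    def make_delete_item(self, resource):
        return (next(self.seq), resource)  # work items are numbered like requests

    def try_delete(self, item, resources):
        seq_no, resource = item
        if any(r < seq_no for r in self.active.values()):
            return False                   # an older request may still use it: skip
        resources.discard(resource)        # no older request remains: safe to free
        return True

gc = SharedResourceGC()
req = gc.start_request()
item = gc.make_delete_item("compiled-script")
pool = {"compiled-script"}
assert gc.try_delete(item, pool) is False   # older request still executing
gc.finish_request(req)
assert gc.try_delete(item, pool) is True and not pool
```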

30-09-2021 publication date

MANAGEMENT OF DISTRIBUTED SHARED MEMORY

Number: US20210303477A1

Examples described herein relate to a network interface device. In some examples, the network interface device includes a device interface; input/output circuitry to receive Ethernet compliant packets and output Ethernet compliant packets; circuitry to monitor a particular page for a rate of data copying among nodes within a group of two or more nodes; and circuitry to perform one or more actions based, at least in part, on the rate of data copying among the nodes within the group of two or more nodes to attempt to reduce a number of copy operations of the data among the nodes within the group of two or more nodes, wherein the group of two or more nodes are part of a distributed shared memory (DSM).

25-08-2020 publication date

Systems and methods for implementing coherent memory in a multiprocessor system

Number: US0010754777B2
Assignee: SAMSUNG ELECTRONICS CO., LTD.

Data units are stored in private caches in nodes of a multiprocessor system, each node containing at least one processor (CPU), at least one cache private to the node and at least one cache location buffer (CLB) private to the node. In each CLB location information values are stored, each location information value indicating a location associated with a respective data unit, wherein each location information value stored in a given CLB indicates the location to be either a location within the private cache disposed in the same node as the given CLB, to be a location in one of the other nodes, or to be a location in a main memory. Coherence of values of the data units is maintained using a cache coherence protocol. The location information values stored in the CLBs are updated by the cache coherence protocol in accordance with movements of their respective data units.

04-02-2021 publication date

SYSTEM AND METHOD FOR CACHING DATA IN PERSISTENT MEMORY OF A NON-VOLATILE MEMORY EXPRESS STORAGE ARRAY ENCLOSURE

Number: US20210034258A1

A method, computer program product, and computing system for receiving, via a storage processor of a storage system, a write request for writing a data portion to a storage array enclosure of non-volatile memory express (NVMe) drives communicatively coupled to the storage processor, where the write request may be received from a host. The data portion may be written to a persistent memory write cache within the storage array enclosure.

1. A computer-implemented method comprising: receiving, via a storage processor of a storage system, a write request for writing a data portion to a storage array enclosure of non-volatile memory express (NVMe) drives communicatively coupled to the storage processor, wherein the write request is received from a host; and writing the data portion to a persistent memory write cache within the storage array enclosure, wherein writing the data portion to the persistent memory write cache includes: providing the write request to a first storage controller within the storage array enclosure, wherein the first storage controller is communicatively coupled to a first persistent memory device of the persistent memory write cache; multicasting, via the first storage controller, the write request to at least a second storage controller, wherein the second storage controller is communicatively coupled to a second persistent memory device of the persistent memory write cache; writing, via the first storage controller, the data portion to the first persistent memory device of the persistent memory write cache; and writing, via the second storage controller, the data portion to the second persistent memory device of the persistent memory write cache.
2. The computer-implemented method of claim 1, wherein the storage system includes a plurality of storage processors configured to receive a plurality of write requests.
3. The computer-implemented method of claim 2, further comprising: accessing, via a first storage processor, write cache data ...

20-07-2021 publication date

Cached volumes at storage gateways

Number: US0011068395B2
Assignee: Amazon Technologies, Inc., AMAZON TECH INC

Methods and apparatus for supporting cached volumes at storage gateways are disclosed. A storage gateway appliance is configured to cache at least a portion of a storage object of a remote storage service at local storage devices. In response to a client's write request, directed to at least a portion of a data chunk of the storage object, the appliance stores a data modification indicated in the write request at a storage device, and asynchronously uploads the modification to the storage service. In response to a client's read request, directed to a different portion of the data chunk, the appliance downloads the requested data from the storage service to the storage device, and provides the requested data to the client.

08-09-2021 publication date

ADDRESS CACHING IN SWITCHES

Number: EP3329378B1
Author: SEREBRIN, Benjamin C.
Assignee: Google LLC

31-08-2022 publication date

Data Processors

Number: GB0002604153A

In a data processing system in which varying numbers of channels for accessing a memory can be configured, the communications channel to use for an access to the memory is determined by mapping 451 a memory address associated with the memory access to an intermediate address within an intermediate address space, selecting 452, based on the number of channels configured for use to access the memory, a mapping operation to use to determine from the intermediate address which channel to use for the memory access, and using 452 the selected mapping operation to determine from the intermediate address which channel to use for the memory access.
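A sketch of the two-step channel selection; both mapping functions are illustrative assumptions (the abstract leaves them configurable):

```python
# Sketch of two-step channel selection; both mapping functions are assumptions.
def to_intermediate(addr: int) -> int:
    return addr >> 6                 # e.g. one intermediate unit per 64-byte line

def select_channel(addr: int, num_channels: int) -> int:
    ia = to_intermediate(addr)       # map into the intermediate address space
    if num_channels & (num_channels - 1) == 0:
        return ia & (num_channels - 1)    # power-of-two count: cheap bit mask
    return ia % num_channels              # otherwise fall back to a modulo mapping

for n in (2, 3, 4):                  # per configured channel count
    print(n, [select_channel(a, n) for a in range(0, 512, 64)])
```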

21-04-2005 publication date

Caching process data of a slow network in a fast network environment

Number: AU2003261261A1

18-08-2020 publication date

Apparatus and system for controlling messaging in a multi-slot link layer microchip

Number: CN0106681938B

02-08-2018 publication date

THERMAL AND RELIABILITY BASED CACHE SLICE MIGRATION

Number: WO2018140228A1

A multi-core processing chip where the last-level cache is implemented by multiple last-level caches (a.k.a. cache slices) that are physically and logically distributed. The various processors of the chip decide which last-level cache is to hold a given data block by applying a temperature or reliability dependent hash function to the physical address. While the system is running, a last-level cache that is overheating, or is being overused, is no longer used by changing the hash function. Before accesses to the overheating cache are prevented, the contents of that cache are migrated to other last-level caches per the changed hash function. When a core processor associated with a last-level cache is shut down, or processes/threads are removed from that core, or when the core is overheating, use of the associated last-level cache can be prevented by changing the hash function and the contents migrated to other caches.
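A sketch of rehoming a slice's contents under a changed hash before fencing it off; the hash function and data layout are assumptions:

```python
# Sketch of slice retirement: drop an overheating slice from the hash and
# rehome its blocks first. The hash and data layout are assumptions.
def slice_for(addr, live_slices):
    return live_slices[hash(addr) % len(live_slices)]   # hash across usable slices

def retire_slice(hot, live_slices, slice_contents):
    """Migrate a retiring slice's contents per the changed hash, then drop it."""
    survivors = [s for s in live_slices if s != hot]
    for addr, data in slice_contents[hot].items():
        target = slice_for(addr, survivors)             # new home per changed hash
        slice_contents[target][addr] = data
    slice_contents[hot].clear()                         # slice is now safe to fence off
    return survivors

contents = {0: {0x1000: b"a"}, 1: {}, 2: {}}
live = retire_slice(0, [0, 1, 2], contents)
assert not contents[0] and len(live) == 2
```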

22-03-2018 publication date

EFFICIENT DUAL-OBJECTIVE CACHE

Number: US20180081959A1
Assignee: Oracle International Corp

Techniques are described herein for effectively managing usage of a shared object cache in a container database management system (DBMS). The shared object cache maintains shared objects belonging to a set of pluggable databases (PDBs) hosted by the container DBMS. In an embodiment, a shared object metadata extension structure (SOMEX) is maintained for each PDB. The SOMEX stores metadata for each shared object of the PDB that resides in the shared object cache. In an embodiment, a share of the shared object cache is maintained for shared objects from each PDB in the set of PDBs based on entries in the SOMEX for the PDB.

16-07-2019 publication date

Short-circuiting normal grace-period computations in the presence of expedited grace periods

Number: US0010353748B2

A technique for short-circuiting normal read-copy update (RCU) grace period computations in the presence of expedited RCU grace periods. The technique may include determining during normal RCU grace period processing whether at least one expedited RCU grace period elapsed during a normal RCU grace period. If so, the normal RCU grace period is ended. If not, the normal RCU grace period processing is continued. Expedited RCU grace periods may be implemented by expedited RCU grace period processing that periodically awakens a kernel thread that implements the normal RCU grace period processing. The expedited RCU grace period processing may conditionally throttle wakeups to the kernel thread based on CPU utilization.

13-08-2019 publication date

Multicopy atomic store operation in a data processing system

Number: US0010379856B2

A data processing system implementing a weak memory model includes a plurality of processing units coupled to an interconnect fabric. In response to execution of a multicopy atomic store instruction, an initiating processing unit broadcasts a store request on the interconnect fabric to obtain coherence ownership of a target cache line. The initiating processing unit posts a kill request to at least one of the plurality of processing units to request invalidation of a copy of the target cache line. In response to successful posting of the kill request, the initiating processing unit broadcasts a store complete request on the interconnect fabric to enforce completion of the invalidation of the copy of the target cache line. In response to the store complete request receiving a coherence response indicating success, the initiating processing unit permits an update to the target cache line requested by the multicopy atomic store instruction to be atomically visible.

22-08-2017 publication date

Migrating workloads across host computing systems based on remote cache content usage characteristics

Number: US0009740402B2
Assignee: VMware, Inc., VMWARE INC, VMWARE, INC.

Techniques for migrating workloads across host computing systems in a virtual computing environment are described. In one embodiment, a workload executing on a first host computing system that accesses contents cached in a cache of a second host computing system via a remote memory channel for a predetermined number of times is identified. Further, migration of the identified workload to the second host computing system is recommended, thereby allowing the identified workload to access the contents from the second host computing system after migration in accordance with the recommendation.

04-02-2020 publication date

File system hierarchy mirroring across cloud data stores

Number: US0010552469B2

Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes. For example, by transforming a ZFS file system disk-based storage into ZFS cloud-based storage, the ZFS file system gains the elastic nature of cloud storage.

29-03-2012 publication date

METHOD AND APPARATUS FOR IMPLEMENTING MULTI-PROCESSOR MEMORY COHERENCY

Number: US20120079209A1
Assignee: Huawei Technologies Co., Ltd.

A method and an apparatus for implementing multi-processor memory coherency are disclosed. The method includes: a Level-2 (L2) cache of a first cluster receives a control signal of the first cluster for reading first data; the L2 cache of the first cluster reads the first data in a Level-1 (L1) cache of a second cluster through an Accelerator Coherency Port (ACP) of the L1 cache of the second cluster if the first data is currently maintained by the second cluster, where the L2 cache of the first cluster is connected to the ACP of the L1 cache of the second cluster; and the L2 cache of the first cluster provides the first data read to the first cluster for processing. The technical solution under the present invention implements memory coherency between clusters in the ARM Cortex-A9 architecture.

Подробнее
07-08-2018 дата публикации

Control system and method for cache coherency

Номер: US0010044829B2

Control systems and methods for cache coherency are provided. One control method includes transmitting, by a first electrical device, a link-connect request to a second electrical device when the first electrical device is coupled to the second electrical device by a cache coherency (CC) interface; establishing a link between the first electrical device and the second electrical device according to the link-connect request over the CC interface; and operating a first operating system of the first electrical device by a second processing unit of the second electrical device after the link is established.

Подробнее
06-09-2022 дата публикации

Guaranteed file system hierarchy data integrity in cloud object stores

Номер: US0011436195B2

Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes.

Подробнее
12-09-2023 дата публикации

Consistent file system semantics with cloud object storage

Номер: US0011755535B2
Принадлежит: Oracle International Corporation

Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes. For example, by transforming a ZFS file system disk-based storage into ZFS cloud-based storage, the ZFS file system gains the elastic nature of cloud storage.

Подробнее
29-11-2023 дата публикации

CACHE MANAGEMENT METHOD AND DEVICE

Номер: EP4024213B1
Автор: SONG, Chang
Принадлежит: Huawei Technologies Co., Ltd.

Подробнее
28-08-1997 дата публикации

Multiprocessor central processing unit

Номер: DE0019606629A1
Принадлежит:

The invention concerns central processing units having two or more groups, each comprising a processor, a memory and a coupler, the processor and memory being interconnected by precisely one coupler and the couplers being interconnected. By interleaving addresses, a disjoint memory area, distributed uniformly over the address space, is assigned to each group. Each coupler itself carries out accesses to the memory area associated with its group, and routes other accesses over the connection to the relevant coupler. The central processing units are provided with interfaces for constructing multiple systems.
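
The interleaving idea can be sketched as follows, assuming interleaving on low-order address bits; the group count and granularity are illustrative.

    # Address interleaving: each group owns a disjoint slice of the address
    # space, distributed uniformly at a fixed granularity.
    NUM_GROUPS = 4
    INTERLEAVE_BITS = 6          # 64-byte granularity (illustrative)

    def owning_group(address):
        return (address >> INTERLEAVE_BITS) % NUM_GROUPS

    def access(coupler_group, address):
        target = owning_group(address)
        if target == coupler_group:
            return "local memory access"           # coupler serves it itself
        return f"forward to coupler {target}"      # over the coupler interconnect

    print(access(0, 0x0040))   # forward to coupler 1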

Подробнее
24-11-2011 дата публикации

METHOD OF OPTIMIZATION OF CACHE MEMORY MANAGEMENT AND CORRESPONDING APPARATUS

Номер: CA0002797435A1
Принадлежит:

In order to optimize cache memory management, the invention proposes a method and corresponding apparatus that comprises application of different cache memory management policies according to data origin and possibly to data type, and the use of increasing levels of exclusion from adding data to cache, the exclusion levels being increasingly restrictive with regard to adding data to cache as the cache memory fill level increases. The method and device allow, among other things, keeping important information in cache memory and reducing the time spent swapping information into and out of cache memory.
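
A minimal sketch of the staged admission idea follows; the fill-level stages and per-stage rules are hypothetical, since the concrete thresholds and rules are left to the implementation.

    # Stagewise exclusion: the fuller the cache, the more restrictive the
    # rule for admitting new data (stages and rules are illustrative).
    def may_add_to_cache(fill_ratio, origin_preferred, type_preferred):
        if fill_ratio < 0.50:
            return True                                  # stage 0: admit all
        if fill_ratio < 0.75:
            return origin_preferred or type_preferred    # stage 1
        if fill_ratio < 0.90:
            return origin_preferred and type_preferred   # stage 2: stricter
        return False                                     # stage 3: admit none

    print(may_add_to_cache(0.80, origin_preferred=True, type_preferred=False))
    # False: at this fill level both criteria are required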

Подробнее
16-03-2009 дата публикации

System including a fine-grained memory and a less-fine-grained memory

Номер: TW0200912643A
Принадлежит:

A data processing system includes one or more nodes, each node including a memory sub-system. The sub-system includes a fine-grained memory and a less-fine-grained (e.g., page-based) memory. The fine-grained memory optionally serves as a cache and/or as a write buffer for the page-based memory. Software executing on the system uses a node address space which enables access to the page-based memories of all nodes. Each node optionally provides ACID memory properties for at least a portion of the space. In at least a portion of the space, memory elements are mapped to locations in the page-based memory. In various embodiments, some of the elements are compressed, the compressed elements are packed into pages, the pages are written into available locations in the page-based memory, and a map maintains an association between those elements and the locations.
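
The compress-pack-map flow for elements destined for the page-based memory might look like the following sketch; the compression scheme, page size, and map layout are all illustrative.

    import zlib

    PAGE_SIZE = 4096
    page_store, element_map = [], {}            # page-based memory and the map

    def write_elements(elements):               # elements: {key: bytes}
        page, offsets = b"", {}
        for key, data in elements.items():
            blob = zlib.compress(data)          # compress the element
            offsets[key] = (len(page), len(blob))
            page += blob                        # pack into the page
        page_store.append(page.ljust(PAGE_SIZE, b"\0"))   # write to a free page
        for key, (off, length) in offsets.items():
            element_map[key] = (len(page_store) - 1, off, length)  # element -> location

    write_elements({"a": b"hello" * 100})
    page_no, off, length = element_map["a"]
    print(zlib.decompress(page_store[page_no][off:off + length]) == b"hello" * 100)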

Подробнее
02-02-2017 дата публикации

ADDRESS CACHING IN SWITCHES

Номер: WO2017019216A1
Автор: SEREBRIN, Benjamin C.
Принадлежит:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for storing an address in a memory of a switch. One of the systems includes a switch that receives packets from and delivers packets to devices connected to a bus without any components on the bus between the switch and each of the devices, a memory integrated into the switch to store a mapping of virtual addresses to physical addresses, and a storage medium integrated into the switch storing instructions executable by the switch to cause the switch to perform operations including receiving a response to an address translation request for a device connected to the switch by the bus, the response including a mapping of a virtual address to a physical address, and storing, in the memory, the mapping of the virtual address to the physical address in response to receiving the response.
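
A minimal sketch of the switch-side behavior, with a plain dict standing in for the memory integrated into the switch:

    # Switch-integrated address-translation cache (names illustrative).
    translation_cache = {}              # (device, virtual_addr) -> physical_addr

    def on_translation_response(device, virtual_addr, physical_addr):
        translation_cache[(device, virtual_addr)] = physical_addr   # store mapping

    def translate(device, virtual_addr):
        return translation_cache.get((device, virtual_addr))  # None -> miss

    on_translation_response("dev0", 0x1000, 0x7F000)
    print(hex(translate("dev0", 0x1000)))    # 0x7f000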

Подробнее
06-10-2020 дата публикации

Cache system for live broadcast streaming

Номер: US0010798205B2
Принадлежит: Facebook, Inc., FACEBOOK INC

Several embodiments include a cache system in a media distribution network. The cache system can coalesce content requests that specify the same URL. The cache system can select one or more representative content requests from the coalesced content requests. The cache system can send one or more lookup requests corresponding to the representative content requests while delaying further processing of the coalesced content requests other than the representative content requests. The cache system can receive a content object associated with the URL in response to sending the lookup requests. The cache system can respond to a delayed content request after the content object is cached by sending the cached content object to a requesting device.
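
The coalescing step can be sketched as below; the two-phase batch structure and the lookup callback are illustrative, whereas the real system operates asynchronously across cache tiers.

    from collections import defaultdict

    # Coalesce requests that specify the same URL: one representative lookup
    # per URL, then respond to all of the delayed (coalesced) requests.
    def process(requests, do_lookup):
        coalesced = defaultdict(list)           # url -> waiting responders
        for url, respond in requests:
            coalesced[url].append(respond)
        for url, responders in coalesced.items():
            content = do_lookup(url)            # single representative lookup
            for respond in responders:
                respond(content)                # serve the cached object

    got = []
    process([("seg1.ts", got.append), ("seg1.ts", got.append)],
            do_lookup=lambda url: b"segment-bytes")
    print(len(got))    # 2 responses from 1 lookup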

Подробнее
15-09-2016 дата публикации

STORAGE SYSTEM, NODE APPARATUS, CACHE CONTROL METHOD AND PROGRAM

Номер: US20160266844A1
Принадлежит: NEC Corporation

Each node includes a cache to store data of the storage shared by the plurality of nodes. Time information indicating when a process accessing the data migrates from one node to another node is recorded. After migration of the process to the other node, the one node selectively invalidates data held in its cache whose time of last access by the process on the one node is older than the time of migration of the process from the one node to the other node.
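
A minimal sketch of the selective invalidation rule (timestamps and structures are illustrative):

    # Drop cached entries that the process last touched before it migrated away.
    def invalidate_stale(cache, last_access, migration_time):
        # cache: {key: value}; last_access: {key: timestamp of last access}
        for key in [k for k, t in last_access.items() if t < migration_time]:
            cache.pop(key, None)        # older than the migration: invalidate

    cache = {"x": 1, "y": 2}
    last_access = {"x": 100, "y": 300}
    invalidate_stale(cache, last_access, migration_time=200)
    print(cache)    # {'y': 2}: 'x' was last accessed before the migration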

Подробнее
12-07-2018 дата публикации

GUARANTEED FILE SYSTEM HIERARCHY DATA INTEGRITY IN CLOUD OBJECT STORES

Номер: US20180196818A1
Принадлежит: Oracle International Corporation

Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes. For example, by transforming a ZFS file system disk-based storage into ZFS cloud-based storage, the ZFS file system gains the elastic nature of cloud storage.

Подробнее
16-11-2017 дата публикации

DETERMINISTIC MULTIFACTOR CACHE REPLACEMENT

Номер: US20170329720A1
Принадлежит:

Some embodiments modify caching server operation to evict cached content based on a deterministic and multifactor modeling of the cached content. The modeling produces eviction scores for the cached items. The eviction scores are derived from two or more factors of age, size, cost, and content type. The eviction scores determine what content is to be evicted based on the two or more factors included in the eviction score derivation. The eviction scores modify caching server eviction operation for specific traffic or content patterns. The eviction scores further modify caching server eviction operation for granular control over an item's lifetime on cache.
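
One way such a deterministic multifactor score could be formed is sketched below; the weights and the exact formula are illustrative, since the abstract only requires that two or more of age, size, cost, and content type enter the derivation.

    # Deterministic multifactor eviction score: higher score = evict sooner.
    TYPE_WEIGHT = {"video": 0.5, "image": 1.0, "html": 1.5}    # illustrative

    def eviction_score(age_s, size_bytes, refetch_cost, content_type):
        weight = TYPE_WEIGHT.get(content_type, 1.0)
        return (age_s * size_bytes) / (refetch_cost * weight)

    scores = {
        "a.mp4":  eviction_score(3600, 50_000_000, 10.0, "video"),
        "b.html": eviction_score(60, 20_000, 1.0, "html"),
    }
    print(max(scores, key=scores.get))    # 'a.mp4' is evicted first here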

Подробнее
28-05-2020 дата публикации

Dynamic Caching and Eviction

Номер: US20200167281A1
Принадлежит: Verizon Digital Media Services Inc.

Dynamic caching policies and/or dynamic purging policies are provided for modifying the entry and eviction of content to the cache (e.g., storage and/or memory) of a caching server based on the current and past cache performance and/or demand. The caching server may modify or replace a configured policy when cache performance is below one or more thresholds. Modifying the caching policy may change caching behavior of the caching server by changing the conditions that control the content that is entered into cache or the content that is deferred and not entered into cache after a request. This may include assigning different probabilities for entering the same content into cache based on different caching policies. Modifying the purging policy may change eviction behavior of the caching server by changing the conditions that control the cached content that is selected and removed from cache.
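
A sketch of performance-driven policy switching with probabilistic admission; the hit-ratio trigger, thresholds, and probabilities are all invented for illustration.

    import random

    # Switch caching policy when performance drops; policies assign different
    # probabilities for entering the same content into cache.
    ADMIT_PROBABILITY = {"default": 1.0, "conservative": 0.25}

    def choose_policy(hit_ratio):
        return "default" if hit_ratio >= 0.80 else "conservative"

    def admit(content_key, hit_ratio, rng=random.random):
        policy = choose_policy(hit_ratio)
        return rng() < ADMIT_PROBABILITY[policy]

    random.seed(0)
    print(admit("/live/seg9.ts", hit_ratio=0.60))
    # False here: under the conservative policy admission happens with P=0.25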

Подробнее
29-08-2023 дата публикации

High performance interconnect

Номер: US0011741030B2
Принадлежит: Intel Corporation

A physical layer (PHY) is coupled to a serial, differential link that is to include a number of lanes. The PHY includes a transmitter and a receiver to be coupled to each lane of the number of lanes. The transmitter coupled to each lane is configured to embed a clock with data to be transmitted over the lane, and the PHY periodically issues a blocking link state (BLS) request to cause an agent to enter a BLS to hold off link layer flit transmission for a duration. The PHY utilizes the serial, differential link during the duration for a PHY associated task selected from a group including an in-band reset, an entry into low power state, and an entry into partial width state.

Подробнее
23-06-2022 дата публикации

SYSTEM, APPARATUS AND METHOD FOR PROVIDING A PLACEHOLDER STATE IN A CACHE MEMORY

Номер: US20220197803A1
Принадлежит:

In one embodiment, a system includes an input/output (I/O) domain and a compute domain. The I/O domain includes an I/O agent and an I/O domain caching agent. The compute domain includes a compute domain caching agent and a compute domain cache hierarchy. The I/O agent issues an ownership request to the compute domain caching agent to obtain ownership of a cache line in the compute domain cache hierarchy. In response to the ownership request, the compute domain caching agent places the cache line in the compute domain cache hierarchy in a placeholder state. The placeholder state reserves the cache line for performance of a write operation by the I/O agent. The compute domain caching agent writes data received from the I/O agent to the cache line in the compute domain cache hierarchy and transitions the state of the cache line out of the placeholder state.

Подробнее
02-02-2018 дата публикации

USE OF CACHE MEMORY AND ANOTHER TYPE OF MEMORY IN A DISTRIBUTED STORAGE SYSTEM

Номер: RU2643642C2
Принадлежит: INTEL CORPORATION (US)

The group of inventions relates to distributed storage systems and can be used for storing data in a cache memory or in a memory of another type. The technical result is improved utilization of cache memory resources and of resources of other memory types. The system comprises a plurality of interconnected nodes that collectively manage data storage, including a first node having a processor executing logic, the logic being configured to: receive data from a resource in communication with the plurality of interconnected nodes, the received data being intended for storage in a data storage repository that includes a non-volatile cache memory resource and a non-volatile non-cache memory resource; create first metadata based on the received data; and forward the first metadata to data storage control logic to cause the control logic to store the received data in the repository based on the information included ...

Подробнее
15-03-2022 дата публикации

HIGH-DENSITY COMPUTE NODE

Номер: RU209333U1

The utility model relates to the field of computing technology and can be used in building high-performance computing systems. The high-density compute node contains a 4U-high chassis, blade servers, communication devices and power supply units. The chassis is divided into front and rear sections by information and power backplanes; the blade servers are installed in the front part of the chassis, while the communication devices and power supply units are installed in the rear part. The blade servers and communication devices connect to the backplanes from opposite sides. Each communication device has three groups of ports. Through the first groups of ports the communication devices are interconnected in a fully connected topology. The second groups of ports are the external outputs of the high-density compute node, intended for combining such nodes into a single high-performance computing system of unlimited performance. Up to two corresponding blade servers are connected to the third group of ports of each communication device. The blade servers and communication devices are equipped with a contact liquid cooling system. The technical result is an increase in the efficiency of the high-density compute node. 5 figures.

Подробнее
11-05-2011 дата публикации

Method and apparatus for implementing multi-processor memory coherency

Номер: GB0201105414D0
Автор:
Принадлежит:

Подробнее
19-07-2012 дата публикации

Computer architectures using shared storage

Номер: US20120185725A1
Принадлежит: Boeing Co

A method includes providing a persistent common view of a virtual shared storage system. The virtual shared storage system includes a first shared storage system and a second shared storage system, and the persistent common view includes information associated with data and instructions stored at the first shared storage system and the second shared storage system. The method includes automatically updating the persistent common view to include third information associated with other data and other instructions stored at a third shared storage system in response to adding the third shared storage system to the virtual shared storage system.

Подробнее
20-09-2012 дата публикации

Resource sharing to reduce implementation costs in a multicore processor

Номер: US20120239883A1
Принадлежит: Individual

A processor may include several processor cores, each including a respective higher-level cache; a lower-level cache including several tag units each including several controllers, where each controller corresponds to a respective cache bank configured to store data, and where the controllers are concurrently operable to access their respective cache banks; and an interconnect network configured to convey data between the cores and the lower-level cache. The controllers in a given tag unit may share access to a resource that may include one or more of an interconnect egress port coupled to the interconnect network, an interconnect ingress port coupled to the interconnect network, a test controller, or a data storage structure.

Подробнее
06-12-2012 дата публикации

Multiprocessor and image processing system using the same

Номер: US20120311266A1
Автор: Hirokazu Takata
Принадлежит: Renesas Electronics Corp

A multiprocessor is provided that can easily share data and buffer data being transferred. Each of a plurality of shared local memories is connected to two processors of a plurality of processor units, and the processor units and the shared local memories are connected in a ring. Consequently, data can easily be shared and buffered during transfer.

Подробнее
09-05-2013 дата публикации

Managing Chip Multi-Processors Through Virtual Domains

Номер: US20130117521A1
Принадлежит: Hewlett Packard Development Co LP

A chip multi-processor (CMP) with virtual domain management. The CMP has a plurality of tiles each including a core and a cache, a mapping storage, a plurality of memory controllers, a communication bus interconnecting the tiles and the memory controllers, and machine-executable instructions. The tiles and memory controllers are responsive to the instructions to group the tiles into a plurality of virtual domains, each virtual domain associated with at least one memory controller, and to store a mapping unique to each virtual domain in the mapping storage.

Подробнее
01-08-2013 дата публикации

METHOD OF OPTIMIZATION OF CACHE MEMORY MANAGEMENT AND CORRESPONDING APPARATUS

Номер: US20130198314A1
Принадлежит: THOMSON LICENSING

In order to optimize cache memory management, the invention proposes a method and corresponding apparatus that comprises application of different cache memory management policies according to data origin and possibly to data type, and the use of increasing levels of exclusion from adding data to cache, the exclusion levels being increasingly restrictive with regard to adding data to cache as the cache memory fill level increases. The method and device allow, among other things, keeping important information in cache memory and reducing time spent swapping information into and out of cache memory.

1. A method of cache memory management implemented in a user receiver device, comprising the following steps: reception of a request for adding of data to said cache memory; stagewise exclusion of adding of data to said cache memory as cache memory fill level increases, said stagewise exclusion of adding being determined, for each successive stage of cache memory fill level, according to rules of exclusion of adding of data to said cache memory that are increasingly restrictive.

2. The method according to claim 1, further comprising a step of: exclusion of adding of said data to said cache memory if cache memory fill level is higher than a first stage of cache memory fill level that is lower than a maximum stage of cache memory fill level.

3. The method according to claim 2, further comprising steps of: maintaining a list of preferred service offering providers in said user receiver device, and determination if a service provider from which said data to add originates is in said list; and, if it is determined that said service offering provider is not in said list and cache memory fill level is under a second stage of cache memory fill level that is lower than said first stage of cache memory fill level, and if it is determined that a type of said data is in a list of preferred data types, said list of preferred data types being ...

Подробнее
19-09-2013 дата публикации

INFORMATION PROCESSING SYSTEM AND DATA-STORAGE CONTROL METHOD

Номер: US20130246690A1
Автор: Haneda Terumasa
Принадлежит: FUJITSU LIMITED

In an information processing system, a processor requests a first transfer control circuit to transfer data to a first memory. In response to the request from the processor, the first transfer control circuit sends the data to a second transfer control circuit. The second transfer control circuit stores in a second memory the data received from the first transfer control circuit, and also stores the data in the first memory through the first transfer control circuit.

1. An information processing system comprising: a processor; a first memory; a second memory; a first transfer control circuit connected to the processor and the first memory; and a second transfer control circuit connected to the first transfer control circuit and the second memory; wherein: the first transfer control circuit sends data to the second transfer control circuit when the first transfer control circuit receives from the processor a request for transfer of the data and the data is addressed to the first memory; and when the second transfer control circuit receives the data sent from the first transfer control circuit, the second transfer control circuit stores the received data in the second memory, and also stores the received data in the first memory through the first transfer control circuit.

2. The information processing system according to claim 1, wherein: when the first transfer control circuit receives from the processor the data addressed to the first memory, the first transfer control circuit sends to the second transfer control circuit a write-request packet containing the data and designating the first memory as a destination of the data; when the second transfer control circuit receives the write-request packet from the first transfer control circuit, the second transfer control circuit writes in the second memory the data contained in the write-request packet, and transfers the write-request packet to the first transfer control circuit; and when the first ...

Подробнее
26-09-2013 дата публикации

System and Method for Conditionally Sending a Request for Data to a Home Node

Номер: US20130254484A1
Автор: Garg Gaurav, Hass David T.
Принадлежит: NetLogic Microsystems, Inc.

A system, method, and computer program product are provided for conditionally sending a request for data to a home node. In operation, a first request for data is sent to a first cache of a node. Additionally, if the data does not exist in the first cache, a second request for the data is sent to a second cache of the node. Furthermore, a third request for the data is conditionally sent to a home node.

1. A method of querying a plurality of nodes for data requested by a processor, comprising: determining that the requested data does not exist in a first cache; determining that the requested data does not exist in a second cache; sending a request for the requested data to a node determined to be a home node for a particular memory address associated with the requested data.

2. The method of claim 1, further comprising determining that the request can be satisfied by the home node.

3. The method of claim 2, further comprising supplying the requested data to the processor.

4. The method of claim 1, further comprising determining that the request cannot be satisfied by the home node.

5. The method of claim 4, further comprising sending a snoop request to an additional node.

6. The method of claim 5, further comprising determining that the additional node potentially includes a copy of the requested data.

7. The method of claim 5, further comprising updating a cache state of the additional node based on the snoop request.

8. The method of claim 5, further comprising receiving a response to the snoop request at the home node.

9. The method of claim 8, further comprising supplying the requested data to the processor based on the response received from the snoop request.

10. The method of claim 4, further comprising sending a snoop request to all local caches in the home node.

11. A system capable of querying a plurality of nodes for requested data, comprising: a home node associated with a particular memory address associated with the requested data ...
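
A minimal sketch of the lookup order described above, with plain dicts as cache tiers and the snoop path omitted:

    # Check the first cache, then the second, then conditionally send the
    # request to the home node for the address (ownership map illustrative).
    def read(addr, first_cache, second_cache, home_nodes):
        for cache in (first_cache, second_cache):
            if addr in cache:
                return cache[addr]                  # hit: no home-node request
        home = home_nodes[addr % len(home_nodes)]   # node owning this address
        return home.get(addr)                       # conditional third request

    l1, l2 = {}, {0x10: "warm"}
    home_nodes = [{0x20: "from-home"}, {}]
    print(read(0x10, l1, l2, home_nodes))   # 'warm' (second-cache hit)
    print(read(0x20, l1, l2, home_nodes))   # 'from-home'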

Подробнее
26-09-2013 дата публикации

ACCESS REQUESTS WITH CACHE INTENTIONS

Номер: US20130254492A1
Принадлежит: MICROSOFT CORPORATION

A lease system is described herein that allows clients to request a lease to a remote file, wherein the lease permits access to the file across multiple applications using multiple handles without extra round trips to a server. When multiple applications on the same client (or multiple components of the same application) request access to the same file, the client specifies the same lease identifier to the server for each open request or may handle the request from the cache based on the existing lease. Because the server identifies the client's cache at the client level rather than the individual file request level, the client receives fewer break notifications and is able to cache remote files in more circumstances. Thus, by providing the ability to cache data in more circumstances common with modern applications, the lease system reduces bandwidth, improves server scalability, and provides faster access to data.

1-20. (canceled)

21. A computer-readable storage medium comprising instructions for controlling a computer system to cache information at a client related to remote files stored at a server, wherein the instructions, when executed at a client, cause at least one processor to perform actions comprising: receiving access information from a software application indicating a remote file to open in an open request; determining whether a lease exists on the client for the indicated remote file, wherein the lease provides cache coherency information associated with the remote file; if no lease exists, sending a lease request to open the file and request a lease that identifies one or more cache intentions of the client based on the access information received from the application; and receiving a lease response indicating whether the requested lease was granted.

22. The computer-readable storage medium of claim 21, wherein the open request comprises a Uniform Naming Convention (UNC) path to the remote file accessible over a network.

23. The computer-readable storage medium of ...

Подробнее
03-10-2013 дата публикации

OPTIMIZED RING PROTOCOLS AND TECHNIQUES

Номер: US20130262781A1
Принадлежит:

Methods and apparatus relating to optimized ring protocols and techniques are described. In one embodiment, a first agent generates a request to write to a cache line of a cache over a first ring of a computing platform. A second agent that receives the write request forwards it to a third agent over the first ring of the computing platform. In turn, a third agent (e.g., a home agent) receives data corresponding to the write request over a second, different ring of the computing platform and writes the data to the cache. Other embodiments are also disclosed.

1. An apparatus comprising: a first agent to generate a request to write to a cache line of a cache over a first ring of a computing platform; a second agent, coupled to the first agent, to receive the write request and forward the write request to a third agent over the first ring of the computing platform; and a third agent, coupled to the cache and the second agent, to write the cache line to the cache in response to the write request, wherein the third agent is to receive data, corresponding to the write request, over a second ring of the computing platform.

The present disclosure generally relates to the field of electronics. More particularly, some embodiments relate to optimized ring protocols and techniques.

High Performance Computing (HPC) platforms may be frequency constrained, e.g., due to the need for accommodating a large number of cores and, for example, an equal number of VPUs (Vector Processing Units) to meet the performance requirements within a fixed power budget. Due to the large number of cores, some processors within some platforms are designed to operate at less than 2 GHz. This is a significantly lower frequency compared to current and next generations of server processor cores (e.g., at 3.2+ GHz). Lower frequency adds more pressure on the ring (formed by the cores and processing units), as the ring throughput is generally proportional to the frequency. Thus, HPC platforms may require a large ...

Подробнее
26-12-2013 дата публикации

Scalable cloud storage architecture

Номер: US20130346557A1
Принадлежит: International Business Machines Corp

A virtual storage module operable to run in a virtual machine monitor may include a wait-queue operable to store incoming block-level data requests from one or more virtual machines. In-memory metadata may store information associated with data stored in local persistent storage that is local to a host computer hosting the virtual machines. The data stored in local persistent storage replicates a subset of data in one or more virtual disks provided to the virtual machines. The virtual disks are mapped to remote storage accessible via a network connecting the virtual machines and the remote storage. A cache handling logic may be operable to handle the block-level data requests by obtaining the information in the in-memory metadata and making I/O requests to the local persistent storage or the remote storage or combination of the local persistent storage and the remote storage to service the block-level data requests.

Подробнее
02-01-2014 дата публикации

Cache Collaboration in Tiled Processor Systems

Номер: US20140006713A1
Принадлежит: Intel Corp

The present invention may provide a computer system including a plurality of tiles divided into multiple virtual domains. Each tile may include a router to communicate with others of said tiles, a private cache to store data, and a spill table to record pointers for data evicted from the private cache to a remote host, wherein the remote host and the respective tile are provided in the same virtual domain. The spill tables may allow for faster retrieval of previously evicted data because the home registry does not need to be referenced if requested data is listed in the spill table. Therefore, embodiments of the present invention may provide a distance-aware cache collaboration architecture without incurring extraneous overhead expenses.
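
A small sketch of the spill-table lookup path; the structures are plain dicts and the home-registry stub stands in for the global directory lookup the spill table lets a tile skip.

    # On a private-cache miss, consult the spill table before the home registry.
    class HomeRegistry:
        def lookup(self, key):
            return None            # stand-in for the slower global lookup

    def lookup(key, private_cache, spill_table, remote_hosts, home_registry):
        if key in private_cache:
            return private_cache[key]
        if key in spill_table:                      # evicted to a domain host
            return remote_hosts[spill_table[key]][key]
        return home_registry.lookup(key)            # fall back to the registry

    remote_hosts = {"tile7": {"blk42": b"data"}}
    print(lookup("blk42", {}, {"blk42": "tile7"}, remote_hosts, HomeRegistry()))
    # b'data': retrieved without referencing the home registry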

Подробнее
09-01-2014 дата публикации

Ensuring causality of transactional storage accesses interacting with non-transactional storage accesses

Номер: US20140013060A1
Принадлежит: International Business Machines Corp

A data processing system implements a weak consistency memory model for a distributed shared memory system. The data processing system concurrently executes, on a plurality of processor cores, one or more transactional memory instructions within a memory transaction and one or more non-transactional memory instructions. The one or more non-transactional memory instructions include a non-transactional store instruction. The data processing system commits the memory transaction to the distributed shared memory system only in response to enforcement of causality of the non-transactional store instruction with respect to the memory transaction.

Подробнее
06-03-2014 дата публикации

TRANSACTIONAL MEMORY PROXY

Номер: US20140068201A1
Автор: Fromm Eric
Принадлежит: Silicon Graphics International Corp.

Processors in a compute node offload transactional memory accesses addressing shared memory to a transactional memory agent. The transactional memory agent typically resides near the processors in a particular compute node. The transactional memory agent acts as a proxy for those processors. A first benefit of the invention includes decoupling the processor from the direct effects of remote system failures. Other benefits of the invention include freeing the processor from having to be aware of transactional memory semantics, and allowing the processor to address a memory space larger than the processor's native hardware addressing capabilities. The invention also enables computer system transactional capabilities to scale well beyond the transactional capabilities of those found in computer systems today.

1. A method for optimizing processor performance in a compute node for transactional memory accesses, the method comprising: allocating a transactional memory agent to one or more processors in a compute node; performing global memory transactions for a processor in a compute node by the allocated transactional memory agent; copying data from one or more globally shared memory addresses identified by a transactional request for data into a private region of memory associated with a requesting processor during shared memory read operations by the transactional memory agent; and sending data from the private region of memory associated with the requesting processor by the transactional memory agent targeting globally shared memory during a transactional memory write operation.

2. The method of claim 1, wherein the transactional memory agent is configured to handle one memory transaction with the global memory at a time.

3. The method of claim 2, wherein the global memory is a globally addressable distributed memory system.

4. The method of claim 1, wherein the memory associated with the requesting processor is a private memory space exclusive to the processor.

5. The ...

Подробнее
27-03-2014 дата публикации

DATA STORAGE DEVICE

Номер: US20140089605A1
Принадлежит: GOOGLE INC.

A data storage device may include an interface that is arranged and configured to interface with a host, a command bus, multiple memory devices that are operably coupled to the command bus and a controller that is operably coupled to the interface and to the command bus. The controller may be arranged and configured to receive a read metadata command for a specified one of the memory devices from the host using the interface, read metadata from the specified memory device and communicate the metadata to the host using the interface.

1. A data storage device comprising: an interface that is arranged and configured to interface with a host; a command bus; multiple memory devices that are operably coupled to the command bus; and a controller that is operably coupled to the interface and to the command bus, wherein the controller is arranged and configured to: receive a read metadata command for a specified one of the memory devices from the host using the interface; read metadata from the specified memory device; and communicate the metadata to the host using the interface.

This application is a continuation of application Ser. No. 12/756,007, filed Apr. 7, 2010, and titled “Data Storage Device With Metadata Command,” now U.S. Pat. No. ______, which claims the benefit of U.S. Provisional Application No. 61/167,709, filed Apr. 8, 2009, and titled “Data Storage Device”, U.S. Provisional Application No. 61/187,835, filed Jun. 17, 2009, and titled “Partitioning and Striping in a Flash Memory Data Storage Device,” U.S. Provisional Application No. 61/304,469, filed Feb. 14, 2010, and titled “Data Storage Device,” U.S. Provisional Patent Application No. 61/304,468, filed Feb. 14, 2010, and titled “Data Storage Device,” and U.S. Provisional Patent Application No. 61/304,475, filed Feb. 14, 2010, and titled “Data Storage Device,” all of which are hereby incorporated by reference in their entirety.

This description relates to a data storage device and managing multiple memory ...

Подробнее
10-04-2014 дата публикации

Writing memory blocks using codewords

Номер: US20140101366A1
Принадлежит: Microsoft Corp

A generator matrix is provided to generate codewords from messages of write operations. Rather than generate a codeword using the entire generator matrix, some number of bits of the codeword are determined to be, or designated as, stuck bits. One or more submatrices of the generator matrix are determined based on the columns of the generator matrix that correspond to the stuck bits. The submatrices are used to generate the codeword from the message, and only the bits of the codeword that are not the stuck bits are written to a memory block. By designating one or more bits as stuck bits, the operating life of the bits is increased. Some of the submatrices of the generator matrix may be pre-computed for different stuck bit combinations. The pre-computed submatrices may be used to generate the codewords, thereby increasing the performance of write operations.
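
A toy GF(2) sketch of encoding and skipping stuck bits on write is shown below; the generator matrix, message, and stuck positions are illustrative, and the patent's submatrix construction, which makes the codeword agree with the stuck values, is omitted for brevity.

    import numpy as np

    # Toy GF(2) encoding: c = m @ G (mod 2); bits at stuck positions are not
    # written back to the memory block.
    G = np.array([[1, 0, 0, 1, 1],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1]])       # illustrative generator matrix

    def encode_and_write(message, stuck_positions):
        codeword = np.mod(message @ G, 2)
        return {i: int(bit) for i, bit in enumerate(codeword)
                if i not in stuck_positions}    # write only non-stuck bits

    print(encode_and_write(np.array([1, 0, 1]), stuck_positions={3}))
    # {0: 1, 1: 0, 2: 1, 4: 0}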

Подробнее
05-01-2017 дата публикации

METHODS FOR HOST-SIDE CACHING AND APPLICATION CONSISTENT WRITEBACK RESTORE AND DEVICES THEREOF

Номер: US20170004082A1
Автор: Basu Sourav, Sehgal Priya
Принадлежит:

A method, non-transitory computer readable medium, and device that assist with file-based host-side caching and application-consistent write back include receiving a write operation on a file from a client computing device. When the write operation is received, it is determined whether the file is present in the cache. An acknowledgement is sent back to the client computing device indicating the acceptance of the write operation when the file for which the write operation has been received is determined to be present within the cache. The write-back operation is completed for data present in the cache of the storage management computing device to one of the plurality of servers upon sending the acknowledgement.

1. A method for file-based host-side caching and application consistent write back, the method comprising: receiving, by a storage management computing device, a write operation on a file from a client computing device; determining, by the storage management computing device, when the file for which the write operation has been received is present within a cache of the storage management computing device; sending, by the storage management computing device, an acknowledgement indicating the acceptance of the write operation back to the client computing device when the file for which the write operation has been received is determined to be present within the cache; and completing, by the storage management computing device, a write-back operation for data present in the cache of the storage management computing device to one of the plurality of servers upon sending the acknowledgement.

2. The method as set forth in claim 1, wherein the determining further comprises obtaining and caching, by the storage management computing device, the file to the cache of the storage management computing device when the file for which the write operation has been received is not determined to be present within the cache.

3. The method as set ...

Подробнее
05-01-2017 дата публикации

System, method and mechanism to efficiently coordinate cache sharing between cluster nodes operating on the same regions of a file or the file system blocks shared among multiple files

Номер: US20170004083A1
Принадлежит: Veritas Technologies LLC

Various systems, methods and apparatuses for coordinating the sharing of cache data between cluster nodes operating on the same data objects. One embodiment involves a first node in a cluster receiving a request for a data object, querying a global lock manager to determine if a second node in the cluster is the lock owner of the data object, receiving an indication identifying the second node as the lock owner and indicating that the data object is available in the second node's local cache, requesting the data object from the second node, and then receiving the data object from the second node's local cache. Other embodiments include determining whether the lock is a shared lock or an exclusive lock, and either pulling the data object from the local node of the second cache or receiving the data object that is pushed from the second node, as appropriate.
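
A simplified sketch of the peer-cache path; the lock manager and node caches are plain objects, and the push-versus-pull distinction based on lock type is reduced here to a pull.

    # On a miss, ask the global lock manager who holds the lock; if the owner
    # caches the object locally, fetch it from that peer instead of disk.
    class GlobalLockManager:
        def __init__(self):
            self.owners = {}      # object_id -> (node_name, cached_locally)
        def lock_owner(self, object_id):
            return self.owners.get(object_id)

    def read_object(object_id, local_cache, glm, nodes, read_from_disk):
        if object_id in local_cache:
            return local_cache[object_id]
        owner = glm.lock_owner(object_id)
        if owner and owner[1]:                       # peer holds it in cache
            return nodes[owner[0]][object_id]        # fetch from the peer node
        return read_from_disk(object_id)             # fall back to storage

    glm = GlobalLockManager()
    glm.owners["obj1"] = ("node2", True)
    nodes = {"node2": {"obj1": b"cached-bytes"}}
    print(read_object("obj1", {}, glm, nodes, lambda o: b"disk-bytes"))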

Подробнее
07-01-2021 дата публикации

HARDWARE/SOFTWARE CO-OPTIMIZATION TO IMPROVE PERFORMANCE AND ENERGY FOR INTER-VM COMMUNICATION FOR NFVS AND OTHER PRODUCER-CONSUMER WORKLOADS

Номер: US20210004328A1
Принадлежит:

Methods and apparatus implementing Hardware/Software co-optimization to improve performance and energy for inter-VM communication for NFVs and other producer-consumer workloads. The apparatus include multi-core processors with multi-level cache hierarchies including an L1 and L2 cache for each core and a shared last-level cache (LLC). One or more machine-level instructions are provided for proactively demoting cachelines from lower cache levels to higher cache levels, including demoting cachelines from L1/L2 caches to an LLC. Techniques are also provided for implementing hardware/software co-optimization in multi-socket NUMA architecture systems, wherein cachelines may be selectively demoted and pushed to an LLC in a remote socket. In addition, techniques are disclosed for implementing early snooping in multi-socket systems to reduce latency when accessing cachelines on remote sockets.

1. A processor, configured to be implemented in a computer system, comprising: a plurality of cores, each having at least one associated cache occupying a respective level in a cache hierarchy; a last level cache (LLC), communicatively coupled to the plurality of cores; and a memory controller, communicatively coupled to the plurality of cores, configured to support access to external system memory when the processor is installed in the computer system; wherein each of the caches associated with a core, and the LLC, includes a plurality of cacheline slots for storing cacheline data, and wherein the processor is further configured to support a machine instruction that when executed causes the processor to demote a cacheline from a lower-level cache to a higher-level cache.

The present application claims priority to U.S. patent application Ser. No. 14/583,389, entitled “HARDWARE/SOFTWARE CO-OPTIMIZATION TO IMPROVE PERFORMANCE AND ENERGY FOR INTER-VM COMMUNICATION FOR NFVS AND OTHER PRODUCER-CONSUMER WORKLOADS,” and filed on Dec. 26, 2014, the entirety of which is incorporated by reference ...

Подробнее
02-01-2020 дата публикации

TRANSFER TRACK FORMAT INFORMATION FOR TRACKS IN CACHE AT A PRIMARY STORAGE SYSTEM TO A SECONDARY STORAGE SYSTEM TO WHICH TRACKS ARE MIRRORED TO USE AFTER A FAILOVER OR FAILBACK

Номер: US20200004649A1
Принадлежит:

Provided are a computer program product, system, and method to transfer track format information for tracks in cache at a primary storage system to a secondary storage system to which tracks are mirrored to use after a failover or failback. In response to a failover from the primary storage system to the secondary storage system, the primary storage system adds a track identifier of the track and track format information indicating a layout of data in the track, indicated in track metadata for the track in the primary storage, to a cache transfer list. The primary storage system transfers the cache transfer list to the secondary storage system to use the track format information in the cache transfer list for a track staged into the secondary cache having a track identifier in the cache transfer list.

1-23. (canceled)

24. A computer program product for performing a failover from a primary storage system having a primary cache and a primary storage to a secondary storage system having a secondary cache and a secondary storage, the computer program product comprising a computer readable storage medium having computer readable program code executed in the primary storage system to perform operations, the operations comprising: in response to failover from the primary storage system to the secondary storage system, for each track in the primary cache, adding a track identifier of the track and track format information indicating metadata of the track, including a layout of data in the track, to a cache transfer list; and transferring the cache transfer list to the secondary storage system to cause the secondary storage system to use the track format information transferred with the cache transfer list for a track staged into the secondary cache from the secondary storage having a track identifier in the cache transfer list after the failover.

25. The computer program product of claim 24, wherein prior to the failover while the primary storage system comprises an active ...

Подробнее
02-01-2020 дата публикации

PROVIDING DATA VALUES USING ASYNCHRONOUS OPERATIONS AND QUERYING A PLURALITY OF SERVERS

Номер: US20200004681A1
Автор: IYENGAR Arun
Принадлежит:

A processing system server, computer program product, and methods for performing asynchronous data store operations. The server includes a processor which maintains a cache of objects in memory of the server. The processor executes an asynchronous computation to determine the value of an object. In response to receiving a request for the object occurring before the asynchronous computation has determined the value of the object, a value of the object is returned from the cache. In response to receiving a request for the object occurring after the asynchronous computation has determined the value of the object, a value of the object determined by the asynchronous computation is returned. The asynchronous computation may comprise at least one future, such as a ListenableFuture, or a process or thread. The asynchronous computation may receive different values from at least two servers and may determine the value of an object based on time stamps. The asynchronous computation may determine the value of the object by querying at least one additional server, and the asynchronous computation may receive different values from at least two servers of a plurality of servers.

1. A computer program product for a processing system comprised of a first server to provide data values, the computer program product comprising a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code including computer instructions, wherein a processor of the first server, responsive to executing the computer instructions, performs operations in the processing system comprising: maintaining a cache of objects communicatively coupled with the first server; executing an asynchronous computation to determine the value of a first object; returning a value of the first object from the cache of objects, in response to a request for the first object occurring before the asynchronous computation has determined the value of the first ...
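
A compact asyncio sketch of that behavior, using an in-memory dict as the cache; the patent discusses futures such as Java's ListenableFuture, and a coroutine task plays that role here.

    import asyncio

    cache = {"obj": "cached-value"}       # served before the computation ends

    async def compute_value(key):
        await asyncio.sleep(0.05)         # stand-in for querying other servers
        return "computed-value"

    async def get(key, task):
        if task.done():
            return task.result()          # after: value from the computation
        return cache[key]                 # before: value from the cache

    async def main():
        task = asyncio.create_task(compute_value("obj"))
        print(await get("obj", task))     # 'cached-value' (still computing)
        await task
        print(await get("obj", task))     # 'computed-value'

    asyncio.run(main())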

Подробнее
02-01-2020 дата публикации

PROACTIVE DATA PREFETCH WITH APPLIED QUALITY OF SERVICE

Номер: US20200004685A1
Принадлежит:

Examples described herein relate to prefetching content from a remote memory device to a memory tier local to a higher level cache or memory. An application or device can indicate a time availability for data to be available in a higher level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device in a data path from the remote memory device to the memory tier local to the higher level cache. Memory access bandwidth, egress bandwidth, and memory space in any intermediary network device can be allocated for prefetch of content. In some examples, proactive prefetch can occur for content expected to be prefetched but not requested to be prefetched.

1. A network interface comprising: a memory; an interface to a communications medium; and a prefetcher communicatively coupled to the interface and to receive a command to perform a prefetch of content from a remote memory with associated information, wherein the associated information comprises a time limit and the prefetcher is to determine whether a resource allocation is available to complete at least a portion of the prefetch within the time limit based on the command and associated information.

2. The network interface of claim 1, wherein the associated information includes one or more of: (1) base virtual address to be fetched from remote memory, (2) amount of content to be fetched from remote memory, (3) the remote memory storing a region to be fetched, (4) priority of prefetch, (5) indication if resources in an end-to-end path are to be reserved for a response, or (6) a length of time of validity of the prefetch and unit of time.

3. The network interface of claim 1, wherein the prefetcher is to cause copying of content from the remote memory to one or more memory tiers including level 1 cache, level 2 cache, last level cache, local memory, persistent memory, or memory of ...

Подробнее
04-01-2018 дата публикации

Translation Cache for Firewall Configuration

Номер: US20180007000A1
Принадлежит:

Some embodiments provide a method for distributing firewall configuration in a datacenter comprising multiple host machines. The method retrieves a rule in the firewall configuration for distribution to the host machines. The firewall rule is associated with a minimum required version number. The method identifies a high-level construct in the firewall rule. The method queries a translation cache for the identified high-level construct. The translation cache stores previous translation results for different high-level constructs. Each stored translation result is associated with a version number. When the translation cache has a stored previous translation result for the identified high-level construct that is associated with a version number that is equal to or newer than the minimum required version number, the method uses the previous translation result stored in the cache to translate the identified high-level construct to a low-level construct. 1. A method for distributing firewall configuration in a datacenter comprising a plurality of host machines , the method comprising:retrieving a rule in the firewall configuration for distribution to the plurality of host machines, the firewall rule associated with a minimum required version number;identifying a high-level construct in the firewall rule;querying a translation cache for the identified high-level construct, the translation cache storing previous translation results for different high-level constructs, each stored translation result associated with a version number; andwhen the translation cache has a stored previous translation result for the identified high-level construct that is associated with a version number that is equal to or newer than the minimum required version number, using the previous translation result stored in the cache to translate the identified high-level construct to a low-level construct.2. The method of claim 1 , wherein the high-level construct is a container that represents a ...
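
A minimal sketch of the version-checked cache lookup; the structures and the fallback translator are illustrative.

    # Reuse a cached translation only if its version meets the rule's minimum
    # required version; otherwise re-translate and cache the fresh result.
    translation_cache = {}    # construct -> (version, low_level_result)

    def translate(construct, min_version, current_version, full_translate):
        hit = translation_cache.get(construct)
        if hit and hit[0] >= min_version:
            return hit[1]                          # cached result is new enough
        result = full_translate(construct)         # e.g. container -> IP set
        translation_cache[construct] = (current_version, result)
        return result

    ips = translate("web-tier", min_version=7, current_version=9,
                    full_translate=lambda c: {"10.0.0.5", "10.0.0.6"})
    print(sorted(ips))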

Подробнее
12-01-2017 дата публикации

Power Management of Cache Duplicate Tags

Номер: US20170010655A1
Принадлежит:

A method and apparatus for power management of cache duplicate tags is disclosed. An IC includes a cache, a coherence circuit, and a duplicate tags memory that may store duplicates of the tags stored in the cache. The cache includes a number of ways that are separately and independently power controllable. The duplicate tags memory may be similarly organized, with portions that are power controllable separately and independently of others. The coherence circuit is also power controllable, and may be placed into a sleep mode when idle. The IC also includes a power management circuit. During operation, the cache may change power states and provide a corresponding indication to the power management circuit. Responsive to the indication, the power management circuit may awaken the coherence circuit if in a sleep state. The coherence circuit may then power manage the duplicate tags in accordance with the change in power state.

1. An integrated circuit comprising: a power management circuit; at least one cache memory having a plurality of ways that are separately and independently power managed; and a coherence circuit coupled to the cache via an interconnect network, wherein the coherence circuit is configured to maintain coherency of the at least one cache memory, wherein the coherence circuit is coupled to a duplicate tags memory configured to store duplicates of tags stored in the at least one cache memory, wherein: the cache memory is configured to indicate a power state of the plurality of ways to the power management circuit; the power management circuit is configured to transmit the power state to the coherence circuit; the coherence circuit is configured to power manage the duplicate tags to correspond to the power state; and during a time that the coherence circuit is powered down, the power management circuit is configured to detect a change in the power state of the cache memory and to wake the coherence circuit to power manage the duplicate tags.

2. The ...

Подробнее
12-01-2017 дата публикации

Previewing File Information Over a Network

Номер: US20170010824A1
Автор: Hrishikesh A. Vidwans
Принадлежит: Symantec Corp

An example embodiment of the present invention provides a process for opening and reading a file over a network, including a WAN. In the example embodiment, an edge file gateway appliance (or server) receives a request from an application such as a file manager to open a file cached with the edge file gateway appliance at one point on a network and stored on a file server connected to a central server (CS) appliance (or server) at another point on the network. The edge file gateway appliance then forwards the request to open the file to the central server appliance, along with any offsets and lengths stored from any previous requests to read the file. The central server appliance responds by sending any file data described in the stored offsets and lengths to the edge file gateway appliance. Then when the edge file gateway appliance receives a read request from an application, the edge file gateway appliance stores the offset and length for the request, if a predefined storage limit is not exceeded, and attempts to satisfy the request from cached file data. The edge file gateway appliance fetches the entire file from the CS appliance if a predefined file limit is exceeded.

Подробнее
09-01-2020 дата публикации

Performance Manager and Method Performed Thereby for Managing the Performance of a Logical Server of a Data Center

Number: US20200012508A1
Assignee: Telefonaktiebolaget LM Ericsson AB

A performance manager (400, 500) and a method (200) performed thereby are provided, for managing the performance of a logical server of a data center. The data center comprises at least one memory pool in which a memory block has been allocated to the logical server. The method (200) comprises determining (230) performance characteristics associated with a first portion of the memory block, comprised in a first memory unit of the at least one memory pool; and identifying (240) a second portion of the memory block, comprised in a second memory unit of the at least one memory pool, to which data of the first portion of the memory block may be migrated to apply performance characteristics associated with the second portion. The method (200) further comprises initiating migration (250) of the data to the second portion of the memory block.

More
09-01-2020 publication date

DATA THROUGH GATEWAY

Number: US20200012609A1
Assignee: Graphcore Limited

A gateway for use in a computing system to interface a host with the subsystem for acting as a work accelerator to the host, the gateway having: an accelerator interface for connection to the subsystem to enable transfer of batches of data between the subsystem and the gateway; a data connection interface for connection to external storage for exchanging data between the gateway and storage; a gateway interface for connection to at least one second gateway; a memory interface connected to a local memory associated with the gateway; and a streaming engine for controlling the streaming of batches of data into and out of the gateway in response to pre-compiled data exchange synchronisation points attained by the subsystem, wherein the streaming of batches of data is selectively via at least one of the accelerator interface, data connection interface, gateway interface and memory interface. 1. A gateway for use in a computing system to interface a host with the subsystem for acting as a work accelerator to the host, the gateway having: an accelerator interface for connection to the subsystem to enable transfer of batches of data between the subsystem and the gateway; a data connection interface for connection to external storage for exchanging data between the gateway and storage; a gateway interface for connection to at least one second gateway; a memory interface connected to a local memory associated with the gateway; and a streaming engine for controlling the streaming of batches of data into and out of the gateway in response to pre-compiled data exchange synchronisation points attained by the subsystem, wherein the streaming of batches of data is selectively via at least one of the accelerator interface, data connection interface, gateway interface and memory interface. 2. A gateway as claimed in claim 1, wherein one or more of the batches of data are transferred from the local memory to the subsystem in response to a pre-compiled data exchange synchronisation point ...

More
15-01-2015 publication date

Methods and Systems for Caching Content at Multiple Levels

Number: US20150019678A1
Assignee:

A cache includes an object cache layer and a byte cache layer, each configured to store information to storage devices included in the cache appliance. An application proxy layer may also be included. In addition, the object cache layer may be configured to identify content that should not be cached by the byte cache layer, which itself may be configured to compress contents of the object cache layer. In some cases the contents of the byte cache layer may be stored as objects within the object cache. 1. A cache, comprising: an object cache layer and a byte cache layer, each configured to store information to storage devices included in the cache, the object cache layer and the byte cache layer being configured to communicate with one another through a socketpair and wherein contents of the byte cache layer are stored as objects within the object cache; and an application proxy layer configured to identify content that should not be cached by one or more of the object cache layer and the byte cache layer and to pass content not cacheable at the object cache layer to the byte cache layer. 2.-4. (canceled) 5. The cache of claim 1, wherein the application proxy layer is configured to determine whether the content is compressible or not compressible and the byte cache layer is configured to compress contents of the object cache layer. 6. The cache of claim 5, wherein the object cache layer enables or disables compression at the byte cache layer based on whether the content is known to be compressible or not compressible. 7.-8. (canceled) 9. A method, comprising receiving content from a content source at a cache having an object cache layer and a byte cache layer, the object cache layer and the byte cache layer being configured to communicate with one another through a socketpair, and caching said content first at the object cache layer of the cache and next at the byte cache layer of the cache so as to eliminate repeated strings present within the ...
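A rough model of the layering can clarify how object and byte caching cooperate. In the sketch below the byte layer deduplicates fixed-size chunks by hash as a stand-in for repeated-string elimination, and byte-cache "recipes" are stored per object; all names and the chunking scheme are assumptions.

```python
# Two-layer cache sketch: an object layer in front of a byte layer. The byte
# layer keeps one copy of each distinct chunk; names are invented for the demo.
import hashlib

CHUNK = 64

class ByteCache:
    def __init__(self):
        self.chunks = {}                        # sha1 -> chunk bytes

    def store(self, data):
        # Split into chunks, keep one copy of each distinct chunk, and return
        # the list of chunk fingerprints (the "recipe") for reassembly.
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha1(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)
            recipe.append(digest)
        return recipe

    def load(self, recipe):
        return b"".join(self.chunks[d] for d in recipe)

class ObjectCache:
    def __init__(self, byte_cache):
        self.byte_cache = byte_cache
        self.objects = {}                       # object key -> stored form

    def put(self, key, data, cacheable_at_byte_layer=True):
        # The proxy layer can mark content (e.g. already-compressed media) to
        # bypass byte caching; such objects are stored verbatim instead.
        if cacheable_at_byte_layer:
            self.objects[key] = ("recipe", self.byte_cache.store(data))
        else:
            self.objects[key] = ("raw", data)

    def get(self, key):
        kind, payload = self.objects[key]
        return self.byte_cache.load(payload) if kind == "recipe" else payload

bc = ByteCache()
oc = ObjectCache(bc)
oc.put("/index.html", b"hello world " * 20)     # repeated strings stored once
assert oc.get("/index.html") == b"hello world " * 20
```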

More
17-01-2019 publication date

Predicting page migration granularity for heterogeneous memory systems

Number: US20190018705A1
Assignee: Advanced Micro Devices Inc

Systems, apparatuses, and methods for predicting page migration granularities for phases of an application executing on a non-uniform memory access (NUMA) system architecture are disclosed herein. A system with a plurality of processing units and memory devices executes a software application. The system identifies a plurality of phases of the application based on one or more characteristics (e.g., memory access pattern) of the application. The system predicts which page migration granularity will maximize performance for each phase of the application. The system performs a page migration at a first page migration granularity during a first phase of the application based on a first prediction. The system performs a page migration at a second page migration granularity during a second phase of the application based on a second prediction, wherein the second page migration granularity is different from the first page migration granularity.
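A toy predictor makes the idea concrete. The heuristic below (sequential phases get a large migration granularity, irregular phases a single page) and all thresholds are invented for illustration; the patent text does not specify them.

```python
# Toy per-phase predictor of page-migration granularity, in the spirit of the
# text above: the classifier and the thresholds are illustrative assumptions.

PAGE = 4096

def classify_phase(addresses):
    # Fraction of accesses that land within one page of the previous access.
    seq = sum(1 for a, b in zip(addresses, addresses[1:])
              if 0 <= b - a <= PAGE)
    return "sequential" if seq / max(len(addresses) - 1, 1) > 0.5 else "random"

def predict_granularity(addresses):
    # Streaming phases amortize migration cost over big contiguous moves;
    # irregular phases migrate single pages to avoid wasted bandwidth.
    return 64 * PAGE if classify_phase(addresses) == "sequential" else PAGE

phase1 = list(range(0, 100 * PAGE, PAGE))                   # streaming pattern
phase2 = [i * 977 * PAGE % (1 << 30) for i in range(100)]   # scattered pattern
print(predict_granularity(phase1))  # 262144 -> migrate 64 pages at a time
print(predict_granularity(phase2))  # 4096   -> migrate one page at a time
```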

More
28-01-2016 publication date

SAS expander based persistent connections

Number: US20160026591A1
Author: Marc Timothy Jones

Embodiments of the present invention provide for creating and using persistent connections in SAS networks. A persistent connection may be a connection that persists for longer than the usual SAS connection. More specifically, it is a connection that is not subject to periodic tear downs by SAS devices according to existing SAS protocols (such as, by using CLOSE or BREAK primitives). Instead, persistent connections may be removable by a link reset. Persistent connections may be used in situations in which the overhead associated with the usual tear down and re-establishment of connections in a SAS network may be considered too high in comparison with its intended benefits. Persistent connections may also be used to provide virtual direct attachment between two different SAS connected devices or between a SAS connected device and an expander.

More
24-01-2019 publication date

Systems and methods for managing digital rights

Number: US20190028278A1
Author: Ross Gilson
Assignee: COMCAST CABLE COMMUNICATIONS LLC

Systems and methods are described for managing digital rights. Methods may comprise causing an encrypted content asset to be stored at a storage location. The encrypted content asset at the storage location may be accessible by one or more user devices. A transaction may be generated and may comprise an identifier and a decryption key, wherein the decryption key is configured to decrypt at least a portion of the encrypted content asset. The transaction may be caused to be stored in a distributed database, wherein the distributed database is accessible by the one or more user devices using at least the identifier.

More
23-01-2020 publication date

NETWORK INTERFACE DEVICE AND HOST PROCESSING DEVICE

Number: US20200028930A1
Assignee: Solarflare Communications, Inc.

A network interface device has an input configured to receive data from a network. The data is for one of a plurality of different applications. The applications may be supported by a host system. The network interface device is configured to determine into which of a plurality of available caches in the host the data is to be injected. The network interface device then injects the received data into the determined cache. 1. A network interface device comprising: an input configured to receive data from a network, said data being for one of a plurality of different applications; and at least one processor configured to determine which of a plurality of available different caches in a host system said data is to be injected, and cause said data to be injected to the determined cache in said host system. 2. The network interface device as claimed in claim 1, wherein at least two of said caches are associated with different CPU cores. 3. The network interface device as claimed in claim 1, wherein at least two of said caches are associated with different physical dies. 4. The network interface device as claimed in claim 1, wherein said plurality of caches are arranged according to a topology, said topology defining at least one or more of: relationships between said caches; inclusiveness; association; and a respective size of a cache. 5. The network interface device as claimed in claim 4, wherein said topology is defined by a directed acyclic graph structure. 6. The network interface device as claimed in claim 1, wherein said at least one processor is configured to determine which of said plurality of caches in a host system is to be injected in dependence on cache information provided by an application thread of said application. 7. The network interface device as claimed in claim 6, wherein said at least one processor is configured to use mapping information and said cache information to determine a cache line where data is to be injected. 8. The network ...

More
29-01-2015 publication date

OBJECT CACHING FOR MOBILE DATA COMMUNICATION WITH MOBILITY MANAGEMENT

Number: US20150032974A1
Assignee:

Method and system are provided for object caching with mobility management for mobile data communication. The method may include: intercepting and snooping data communications at a base station between a user equipment and a content server without terminating communications; implementing object caching at the base station using snooped data communications; implementing object caching at an object cache server in the network, wherein the object cache server proxies communications to the content server from the user equipment; and maintaining synchrony between an object cache at the base station and an object cache at the object cache server. 1. A method for object caching with mobility management for mobile data communication , comprising:intercepting and snooping data communications at a base station between a user equipment and a content server without terminating communications;implementing object caching at the base station using snooped data communications;implementing object caching at an object cache server in the network, wherein the object cache server proxies communications to the content server from the user equipment; andmaintaining synchrony between an object cache at the base station and an object cache at the object cache server.2. The method as claimed in claim 1 , further comprising:providing a data response to the user equipment from the base station providing a cached object, wherein the data response mimics a response from the object cache server.3. The method as claimed in claim 2 , wherein providing a data response comprises creating a sequence of bytes.4. The method as claimed in claim 1 , further comprising:providing a notification to the object cache server if a cache hit is made at the base station for a data communication.5. The method as claimed in claim 1 , further comprising:in response to a cache hit at the base station for a data communication, serving the cached object to the user equipment in data packets; andsnooping, by the base ...
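The edge/server pairing can be modeled compactly. The sketch below abstracts away the actual snooping of un-terminated TCP flows and the mimicked responses; it only shows the bookkeeping that keeps the two caches in sync, with invented class names.

```python
# Minimal model of the base-station / object-cache-server pairing described
# above; the message flow and class names are illustrative assumptions.

class ObjectCacheServer:
    def __init__(self):
        self.cache = {}

    def proxy_fetch(self, url, origin):
        self.cache[url] = origin[url]          # proxies the content server
        return self.cache[url]

    def note_base_station_hit(self, url):
        # Keeps the server-side cache state consistent with the edge copy.
        assert url in self.cache, "caches out of sync"

class BaseStation:
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def handle_request(self, url, origin):
        if url in self.cache:                  # edge hit: serve locally and
            self.server.note_base_station_hit(url)   # notify the cache server
            return self.cache[url]
        body = self.server.proxy_fetch(url, origin)
        self.cache[url] = body                 # "snooped" copy of the response
        return body

origin = {"http://example.com/a": b"payload"}
srv = ObjectCacheServer()
bs = BaseStation(srv)
bs.handle_request("http://example.com/a", origin)   # miss: fills both caches
bs.handle_request("http://example.com/a", origin)   # hit: served at the edge
```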

More
04-02-2016 publication date

CACHE MOBILITY

Number: US20160034193A1
Assignee:

A method and system of selecting and migrating relevant data from among data associated with a workload of a virtual machine and stored in source storage cache memory in a dynamic computing environment is described. The method includes selecting one or more policies, the one or more policies including a size policy defining a default maximum size for the relevant data. The method also includes selecting the relevant data from among the data based on the one or more policies in a default mode, and migrating the relevant data from the source storage cache memory to target storage cache memory. 1. A system to select and migrate relevant data from among data associated with a workload of a virtual machine and stored in a source storage cache memory in a dynamic computing environment , the system comprising:a source virtual machine monitor, executed on a source node, configured to select one or more policies, the one or more policies including a size policy defining a default maximum size for the relevant data, and select the relevant data from among the data based on the one or more policies in a default mode; anda target storage cache memory, implemented on a target node, configured to receive and store the relevant data from the source node.2. The system according to claim 1 , wherein the source virtual machine monitor selects the one or more policies based on a type of the workload.3. The system according to claim 1 , wherein the source virtual machine monitor selects claim 1 , as the one or more policies claim 1 , a time policy that defines a default maximum time within which the workload accessed the data selected as the relevant data or a frequency policy that defines a default minimum frequency of access of the data selected as the relevant data.4. The system according to claim 1 , wherein the source virtual machine monitor is further configured to override at least one of the one or more policies in a customization mode.5. The system according to claim 4 , ...

More
04-02-2016 publication date

Processor with Messaging Network Technology

Number: US20160036696A1
Assignee: BROADCOM CORPORATION

An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner. 1. A processor , comprising:a plurality of processor cores;a cache;a main memory;a memory bridge; anda data switch interconnect ring configured to pass data among the plurality of processor cores, the cache, and the memory bridge to enable access to the main memory.2. The processor of claim 1 , further comprising:a messaging network ring;an interface switch interconnect ring coupled to the messaging network ring; anda plurality of communication ports coupled to the messaging network ring via the interface switch interconnect ring, the messaging network ring configured to transfer information between the plurality of processor cores and at least one of the plurality of communication ports,wherein the data switch interconnect ring is directly coupled to each of the plurality of processor cores,wherein the messaging network ring is directly coupled to each of the plurality of processor cores, andwherein the data switch interconnect ring, the messaging network ring, and the interface switch interconnect ring are separate and distinct from each other.3. The processor of claim 1 , further comprising:a plurality of data caches, each data cache of the plurality of data caches being associated with a respective ...

More
01-02-2018 publication date

TRENDING TOPIC DRIVEN CACHE EVICTION MANAGEMENT

Number: US20180034931A1
Assignee:

A content serving data processing system is configured for trending topic cache eviction management. The system includes a computing system communicatively coupled to different sources of content objects over a computer communications network. The system also includes a cache storing different cached content objects retrieved from the different content sources. The system yet further includes a cache eviction module. The module includes program code enabled to manage cache eviction of the content objects in the cache by marking selected ones of the content objects as invalid in accordance with a specified cache eviction strategy, detect a trending topic amongst the retrieved content objects, and override the marking of one of the selected ones of the content objects as invalid and keeping the one of the selected ones of the content objects in the cache when the one of the selected ones of the content objects relates to the trending topic. 1. A method for trending topic cache eviction management , the method comprising:receiving in memory of a server different requests for content objects from different client computers communicatively coupled to the server over a computer communications network;retrieving in the memory of the server from different sources of content objects from over the computer communications network, content objects requested in the different requests and forwarding the retrieved content objects to corresponding requesting ones of the different client computers;caching the retrieved content objects in a cache coupled to the server; and,managing cache eviction of the content objects in the cache by marking selected ones of the content objects as invalid in accordance with a specified cache eviction strategy, detecting a trending topic amongst the retrieved content objects, and overriding the marking of one of the selected ones of the content objects as invalid and keeping the one of the selected ones of the content objects in the cache when the ...
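As a sketch, the override amounts to one extra predicate in the eviction pass. Below, the base strategy is plain age-based expiry and the trend detector is a simple request counter; both are stand-ins chosen for brevity, not the patented mechanism.

```python
# Sketch of the trending-topic eviction override; the trend detector and all
# thresholds are invented for illustration.
from collections import Counter
import time

class TrendAwareCache:
    def __init__(self, max_age=300.0, trend_threshold=3):
        self.entries = {}                 # key -> (content, topic, last_access)
        self.topic_hits = Counter()
        self.max_age = max_age
        self.trend_threshold = trend_threshold

    def put(self, key, content, topic):
        self.entries[key] = (content, topic, time.time())
        self.topic_hits[topic] += 1

    def evict_pass(self):
        now = time.time()
        for key, (content, topic, last) in list(self.entries.items()):
            expired = now - last > self.max_age     # base eviction strategy
            trending = self.topic_hits[topic] >= self.trend_threshold
            if expired and not trending:            # override for hot topics
                del self.entries[key]

cache = TrendAwareCache(max_age=0.0)          # expire immediately for the demo
cache.put("story1", b"...", topic="eclipse")
cache.put("story2", b"...", topic="eclipse")
cache.put("story3", b"...", topic="eclipse")  # "eclipse" now counts as trending
cache.evict_pass()                            # entries survive despite expiry
assert len(cache.entries) == 3
```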

More
31-01-2019 publication date

TRANSFER TRACK FORMAT INFORMATION FOR TRACKS IN CACHE AT A PRIMARY STORAGE SYSTEM TO A SECONDARY STORAGE SYSTEM TO WHICH TRACKS ARE MIRRORED TO USE AFTER A FAILOVER OR FAILBACK

Number: US20190034302A1
Assignee:

Provided are a computer program product, system, and method to transfer track format information for tracks in cache at a primary storage system to a secondary storage system to which tracks are mirrored to use after a failover or failback. In response to a failover from the primary storage system to the secondary storage system, the primary storage system adds a track identifier of the track and track format information indicating a layout of data in the track, indicated in track metadata for the track in the primary storage, to a cache transfer list. The primary storage system transfers the cache transfer list to the secondary storage system to use the track format information in the cache transfer list for a track staged into the secondary cache having a track identifier in the cache transfer list. 1. A computer program product for performing a failover from a primary storage system having a primary cache and a primary storage to a secondary storage system having a secondary cache and a secondary storage , the computer program product comprising a computer readable storage medium having computer readable program code executed in the primary and the secondary storage systems to perform operations , the operations comprising:mirroring data from the primary storage system to the secondary storage system;initiating a failover from the primary storage system to the secondary storage system;in response to the failover, for each track indicated in a cache list of tracks in the primary cache, adding, by the primary storage system, a track identifier of the track and track format information indicating a layout of data in the track, indicated in track metadata for the track in the primary storage, to a cache transfer list;transferring, by the primary storage system, the cache transfer list to the secondary storage system; andusing, by the secondary storage system after the failover, the track format information transferred with the cache transfer list for a track staged ...
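The hand-off can be pictured as shipping (track id, format) pairs. The sketch below invents the field names and the fallback parser; it shows why a secondary that receives the transfer list can stage tracks without re-deriving their layout.

```python
# Sketch of the failover hand-off described above: the primary walks its cache
# list and ships (track_id, track_format) pairs; the secondary consults them
# when staging. Field names are assumptions, not the product's metadata layout.

def build_cache_transfer_list(primary_cache_list, track_metadata):
    # track_metadata maps track_id -> format info describing the data layout.
    return [(tid, track_metadata[tid]) for tid in primary_cache_list]

def parse_track_layout(raw_track):
    return {"records": len(raw_track)}            # stand-in for real parsing

class SecondaryCache:
    def __init__(self, transfer_list):
        self.known_formats = dict(transfer_list)
        self.cache = {}

    def stage(self, track_id, raw_track):
        fmt = self.known_formats.get(track_id)
        if fmt is None:
            fmt = parse_track_layout(raw_track)   # slow path: rediscover layout
        self.cache[track_id] = (fmt, raw_track)

xfer = build_cache_transfer_list(["t1", "t2"], {"t1": {"recsz": 512},
                                                "t2": {"recsz": 4096}})
sec = SecondaryCache(xfer)
sec.stage("t1", b"...")     # uses transferred format info, no re-parse needed
```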

More
30-01-2020 publication date

DISTRIBUTED STORAGE SYSTEM AND DISTRIBUTED STORAGE CONTROL METHOD

Number: US20200034263A1
Assignee:

A distributed storage system, which receives a write request from a client, includes a plurality of computers which receive power supply from a plurality of power supply units. A first computer, among the plurality of computers, which is a computer that receives the write request from the client, is configured to: cache updated data which is at least a part of data accompanying the write request; select n second computers which are n computers (n is a natural number) among computers each receiving power from a power supply unit different from a power supply unit of the first computer as transfer destinations of the updated data; and transfer the updated data to the selected n second computers, respectively. At least one of the n second computers, when caching the updated data from the first computer, is configured to return a result to the first computer. 1. A distributed storage system which receives a write request from a client, comprising: a plurality of computers which receive power from a plurality of power supply units, wherein a first computer, among the plurality of computers, which is a computer that receives the write request from the client, is configured to (A) cache updated data which is at least a part of data accompanying the write request; (B) select n second computers, which are n computers (n is a natural number) among computers each receiving power from a power supply unit different from a power supply unit of the first computer, as transfer destinations of the updated data; and (C) transfer the updated data to the selected n second computers, respectively, and wherein at least one of the n second computers is configured to (D) when caching the updated data from the first computer, return a result to the first computer. 2. The distributed storage system according to claim 1, wherein the first computer is configured to: (E) transfer old data of the updated data to a parity second computer which is a second computer storing a parity ...

More
30-01-2020 publication date

TWO LEVEL COMPUTE MEMOING FOR LARGE SCALE ENTITY RESOLUTION

Number: US20200034293A1
Assignee:

One embodiment provides for a method that includes performing, by a processor, active learning of large scale entity resolution using a distributed compute memoing cache to eliminate redundant computation. Link feature vector tables are determined for intermediate results of the active learning of large scale entity resolution. The link feature vector tables are managed by a two-level cache hierarchy. 1. A method comprising:performing, by a processor, active learning of large scale entity resolution using a distributed compute memoing cache to eliminate redundant computation;determining link feature vector tables for intermediate results of the active learning of the large scale entity resolution; andmanaging the link feature vector tables by a two-level cache hierarchy.2. The method of claim 1 , wherein determining the link feature vector tables comprises one of pre-computing the link feature vector tables using a union of all blocking functions or computing the link feature vector tables dynamically upon a change of matching functions claim 1 , and the two-level cache hierarchy comprises distributed memory cache and distributed disk cache.3. The method of claim 2 , wherein the distributed memory cache manages the link feature vector tables based on frequency and storage usage.4. The method of claim 2 , wherein the distributed disk cache manages the link feature vector tables based on frequency claim 2 , storage usage claim 2 , processing bandwidth and coverage.5. The method of claim 2 , wherein pre-computing the link feature vector tables comprises populating memory caches of the distributed memory cache claim 2 , and upon a determination that the memory caches are full claim 2 , caching the link vector tables into at least one disk cache of the distributed disk cache.6. The method of claim 2 , further comprising:updating caches of the two-level cache hierarchy upon a determination that the matching functions are changed and the link feature vectors are no longer ...
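A single-machine analogue of the two-level hierarchy is easy to sketch: an in-memory dict in front of an on-disk pickle store, with a least-frequently-used spill rule. The distributed aspects, the eviction rule, and all names are assumptions made for the demo.

```python
# Two-level memoing sketch: an in-memory cache in front of a disk cache,
# standing in for the distributed memory/disk caches described above.
import os, pickle, tempfile
from collections import Counter

class TwoLevelMemoCache:
    def __init__(self, mem_capacity=2, disk_dir=None):
        self.mem = {}
        self.freq = Counter()
        self.mem_capacity = mem_capacity
        self.disk_dir = disk_dir or tempfile.mkdtemp()

    def _disk_path(self, key):
        return os.path.join(self.disk_dir, f"{key}.pkl")

    def put(self, key, table):
        if len(self.mem) >= self.mem_capacity:       # memory full: spill the
            victim = min(self.mem, key=self.freq.__getitem__)  # coldest table
            with open(self._disk_path(victim), "wb") as f:
                pickle.dump(self.mem.pop(victim), f)
        self.mem[key] = table

    def get(self, key, compute):
        self.freq[key] += 1
        if key in self.mem:
            return self.mem[key]
        path = self._disk_path(key)
        if os.path.exists(path):                     # promote from disk cache
            with open(path, "rb") as f:
                table = pickle.load(f)
        else:
            table = compute(key)                     # true miss: recompute
        self.put(key, table)
        return table

cache = TwoLevelMemoCache()
fv = cache.get("blockA", compute=lambda k: [[0.1, 0.9], [0.4, 0.6]])
```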

More
11-02-2016 publication date

CLIENT-SIDE DEDUPLICATION WITH LOCAL CHUNK CACHING

Number: US20160041777A1
Assignee: DELL PRODUCTS L.P.

Techniques and mechanisms described herein facilitate the transmission of a data stream from a client device to a networked storage system. According to various embodiments, a fingerprint for a data chunk may be identified by applying a hash function to the data chunk via a processor. The data chunk may be determined by parsing a data stream at the client device. A determination may be made as to whether the data chunk is stored in a chunk file repository at the client device. A block map update request message including information for updating a block map may be transmitted to a networked storage system via a network. The block map may identify a designated memory location at which the chunk is stored at the networked storage system. 1. A method comprising:at a client device comprising a processor and memory, identifying a fingerprint for a data chunk by applying a hash function to the data chunk via a processor, the data chunk determined by parsing a data stream at the client device; anddetermining whether the data chunk is stored in a chunk file repository at the client device; andtransmitting a block map update request message to a networked storage system via a network, the block map update request message including information for updating a block map at the networked storage system, the block map identifying a designated memory location at which the chunk is stored at the networked storage system.2. The method recited in claim 1 , the method further comprising:when it is determined that the data chunk is not stored in the local chunk cache, determining whether the data chunk is stored at the networked storage system by transmitting the fingerprint to the networked storage system via the network.3. The method recited in claim 2 , the method further comprising:when it is determined that the data chunk is not stored at the networked storage system, transmitting the data chunk to the networked storage system for storage.4. The method recited in claim 2 , wherein ...
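The client-side flow can be condensed to a few steps. Fixed-size chunking and SHA-256 are assumptions (the text only says the stream is parsed and hashed), and `RemoteStub` stands in for the networked storage system's three calls.

```python
# Client-side dedup sketch for the flow above: fingerprint each chunk, consult
# the local chunk repository first, upload only truly new chunks, and always
# send a block-map update. All names are invented for the demo.
import hashlib, io

CHUNK_SIZE = 4096

class RemoteStub:
    def __init__(self):
        self.chunks = {}              # fingerprint -> chunk bytes
        self.block_map = {}           # file offset -> fingerprint
    def has(self, fp): return fp in self.chunks
    def store(self, fp, chunk): self.chunks[fp] = chunk
    def update_block_map(self, file_offset, fingerprint):
        self.block_map[file_offset] = fingerprint

def backup(stream, local_repo, remote):
    offset = 0
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in local_repo:      # local chunk cache miss
            if not remote.has(fp):    # ask the server only on a local miss
                remote.store(fp, chunk)
            local_repo.add(fp)
        # Tell the server where this chunk sits in the file's block map.
        remote.update_block_map(offset, fp)
        offset += len(chunk)

remote = RemoteStub()
backup(io.BytesIO(b"ab" * 8192), set(), remote)
print(len(remote.chunks))             # 1: four identical chunks stored once
```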

More
11-02-2016 publication date

MOVING DATA BETWEEN CACHES IN A HETEROGENEOUS PROCESSOR SYSTEM

Number: US20160041909A1
Assignee: Advanced Micro Devices, Inc.

Apparatus, computer readable medium, integrated circuit, and method of moving a plurality of data items to a first cache or a second cache are presented. The method includes receiving an indication that the first cache requested the plurality of data items. The method includes storing information indicating that the first cache requested the plurality of data items. The information may include an address for each of the plurality of data items. The method includes determining based at least on the stored information to move the plurality of data items to the second cache. The method includes moving the plurality of data items to the second cache. The method may include determining a time interval between receiving the indication that the first cache requested the plurality of data items and moving the plurality of data items to the second cache. A scratch pad memory is disclosed. 1. A method of moving a plurality of data items to a first cache or a second cache , the method comprising:receiving an indication that the first cache requested the plurality of data items;storing information indicating that the first cache requested the plurality of data items, wherein the information includes an address for each of the plurality of data items;determining, based at least on the stored information, to move the plurality of data items to the second cache; andmoving the plurality of data items to the second cache.2. The method of claim 1 , further comprising:determining to move the plurality of data items to the first cache based on receiving an indication that the first cache requested at least one item of the plurality of items in the second cache; andmoving the plurality of data items to the first cache.3. The method of claim 2 , further comprising:determining a time interval between receiving the indication that the first cache requested the plurality of data items and moving the plurality of data items to the second cache; anddetermining, based on the time interval, to ...

More
09-02-2017 publication date

MEMCACHED SYSTEMS HAVING LOCAL CACHES

Number: US20170039145A1
Author: Wu Gansha, WU Xiangbin
Assignee:

Apparatuses, methods and storage medium associated with a memcached system are disclosed herewith. In embodiments, a server apparatus may include memory; one or more processors; a network interface card to support remote memory direct access of the memory, by a client device, for a value of a key using an address that is a morph address of a physical address of a storage location of the memory having the value; and server side memcached logic operated by the one or more processors. Other embodiments may be described and/or claimed. 1. A server apparatus of a memcached system, comprising: memory; one or more processors coupled with the memory; a network interface card coupled to the memory to support remote memory direct access of the memory, by a client device, for a value of a key using an address that is a morph address of a physical address of a storage location of the memory having the value; and server side memcached logic operated by the one or more processors. 2. The server apparatus of claim 1, wherein the network interface is to support unmorph of the morph address to recover the physical address. 3. The server apparatus of claim 2, wherein the morph address is generated through encryption of an intermediate address generated from the physical address; and wherein the network interface is to unmorph the morph address to recover the physical address through decryption of the morph address to recover the intermediate address. 4. The server apparatus of claim 3, wherein the intermediate address is generated from concatenation of the physical address with a random number and a cyclic redundancy check value; and wherein the network interface is to recover the physical address through computation of a cyclic redundancy check value to compare against the cyclic redundancy check portion of the intermediate address, and unmask a portion of the intermediate address to recover the physical address on successful cyclic redundancy check comparison. 5. The server ...

More
24-02-2022 publication date

ADDRESSING SCHEME FOR LOCAL MEMORY ORGANIZATION

Number: US20220058126A1
Assignee:

A memory tile, in a local memory, may be considered to be a unit of memory structure that carries multiple memory elements, wherein each memory element is a one-dimensional memory structure. Multiple memory tiles make up a memory segment. By structuring the memory tiles, and a mapping matrix to the memory tiles, within a memory segment, non-blocking, concurrent write and read accesses to the local memory for multiple requestors may be achieved with relatively high throughput. The accesses may be either row-major or column-major for a two-dimensional memory array. 1. A method of memory access, the method comprising: establishing an addressing scheme for a memory segment, the addressing scheme defining a plurality of memory tiles, each memory tile among the plurality of memory tiles designated as belonging to a memory bank among a plurality of memory banks and a memory sub-bank among a plurality of memory sub-banks; a plurality of memory entries, each memory entry among the plurality of memory entries extending across the plurality of memory tiles; each memory tile among the plurality of memory tiles having a plurality of memory lines that are associated with a respective memory entry of the plurality of memory entries; and each memory line among the plurality of memory lines having a plurality of memory elements, wherein each memory element is a one-dimensional memory structure; selecting, using the addressing scheme, a memory element among the plurality of memory elements in a first memory line among the plurality of memory lines, in a first entry of the plurality of memory entries, of a first memory tile in a first memory bank and a first memory sub-bank, thereby establishing a first selected memory element; selecting, using the addressing scheme, a memory element among the plurality of memory elements in a second memory line among the plurality of memory lines, in the first entry, of a second memory tile in a second memory bank, thereby establishing ...
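One way to read the claim is as a field decomposition of a flat element address. The widths and bit ordering below are invented; they are chosen so that successive lines rotate across banks, which is the property that lets row-major and column-major requestors proceed without blocking.

```python
# Toy decomposition of a flat element address into the tile coordinates the
# claim defines (bank, sub-bank, tile, line, element); all field widths and
# their ordering are assumptions made to keep the arithmetic concrete.

BANKS, SUBBANKS, TILES_PER = 4, 2, 8      # tiles grouped into banks/sub-banks
LINES_PER_TILE, ELEMS_PER_LINE = 16, 32

def decode(addr):
    elem = addr % ELEMS_PER_LINE; addr //= ELEMS_PER_LINE
    bank = addr % BANKS;          addr //= BANKS
    sub  = addr % SUBBANKS;       addr //= SUBBANKS
    tile = addr % TILES_PER;      addr //= TILES_PER
    line = addr % LINES_PER_TILE
    return bank, sub, tile, line, elem

# Consecutive elements share a line; successive lines rotate across banks, so
# a column-major walk (stride of one full line) touches a different bank each
# step while a row-major walk stays within one line -- two such requestors can
# then proceed concurrently without contending for the same bank.
print(decode(0))                  # (0, 0, 0, 0, 0): bank 0, first line
print(decode(ELEMS_PER_LINE))     # (1, 0, 0, 0, 0): next line -> next bank
```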

More
12-02-2015 publication date

MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE

Number: US20150046649A1
Assignee:

Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. 1.-12. (canceled) 13. A computer program product for managing data in a first cache, a second cache, and a storage device, the computer program product consisting of a non-transitory computer readable storage medium having computer readable program code embodied therein that executes to perform operations, the operations comprising: migrating an extent from the storage device to the second cache; and indicating that second cache caching is disabled for the migrated extent in the second cache, wherein tracks in the extent in the first cache cannot be demoted to the second cache while the second cache caching for the extent including the tracks is disabled. 14. The computer program product of claim 13, wherein tracks in the extent in the second cache cannot be demoted to the storage device while the second cache caching for the extent is disabled. 15. The computer program product of claim 13, wherein the operations further comprise: determining whether activity with respect to the extent in the storage device exceeds a threshold level of activity, wherein the extent is migrated from the storage device to the second cache in response to ...

More
12-02-2015 publication date

Flexible Configuration Hardware Streaming Unit

Number: US20150046650A1
Assignee: ORACLE INTERNATIONAL CORPORATION

A processor having a streaming unit is disclosed. In one embodiment, a processor includes a streaming unit configured to load one or more input data streams from a memory coupled to the processor. The streaming unit includes an internal network having a plurality of queues configured to store streams of data. The streaming unit further includes a plurality of operations circuits configured to perform operations on the streams of data. The streaming unit is software programmable to operatively couple two or more of the plurality of operations circuits together via one or more of the plurality of queues. The operations circuits may perform operations on multiple streams of data, resulting in corresponding output streams of data. 1. A processor comprising:a streaming unit configured to load one or more input data streams, wherein the streaming unit includes an internal network having a plurality of queues configured to store streams of data and a plurality of operations circuits configured to perform operations on the streams of data, wherein the streaming unit is programmable to operatively couple two or more of the plurality of operations circuits together via one or more of the plurality of queues.2. The processor as recited in claim 1 , wherein each queue of a first subset of the plurality of queues is internal to a corresponding one of the plurality of operations circuits claim 1 , wherein at least one of the plurality of operations circuits includes a first input queue claim 1 , a second input queue claim 1 , and an output queue claim 1 , the plurality of queues including the first and second input queues and the output queue.3. The processor as recited in claim 2 , wherein the at least one of the plurality of operations circuits is configured to receive a first data stream in its first input queue claim 2 , a second data stream in its second input queue claim 2 , and is configured to produce an output data stream by performing an operation on the first and ...

More
12-02-2015 publication date

Controlling a dynamically instantiated cache

Number: US20150046654A1
Assignee: NetApp Inc

A change in workload characteristics detected at one tier of a multi-tiered cache is communicated to another tier of the multi-tiered cache. Multiple caching elements exist at different tiers, and at least one tier includes a cache element that is dynamically resizable. The communicated change in workload characteristics causes the receiving tier to adjust at least one aspect of cache performance in the multi-tiered cache. In one aspect, at least one dynamically resizable element in the multi-tiered cache is resized responsive to the change in workload characteristics.

More
07-02-2019 publication date

SYSTEMS, METHODS, AND APPARATUSES UTILIZING CPU STORAGE WITH A MEMORY REFERENCE

Number: US20190042448A1
Assignee:

Implementations of using tiles for caching are detailed. In some implementations, an instruction execution circuitry executes one or more instructions, a register state cache coupled to the instruction execution circuitry holds thread register state in a plurality of registers, and backing storage pointer storage stores a backing storage pointer, wherein the backing storage pointer is to reference a state backing storage area in external memory to store the thread register state stored in the register state cache. 1. An apparatus comprising: instruction execution circuitry to execute one or more instructions; a register state cache coupled to the instruction execution circuitry, the register state cache to hold thread register state in a plurality of registers; and backing storage pointer storage to store a backing storage pointer, wherein the backing storage pointer is to reference a state backing storage area in external memory to store the thread register state stored in the register state cache. 2. The apparatus of claim 1, wherein the thread register state includes data and state information for the data. 3. The apparatus of claim 2, wherein the state information for the data is one of invalid, valid, and dirty. 4. The apparatus of claim 1, wherein the plurality of registers of the register state cache comprises two-dimensional registers. 5. The apparatus of claim 1, wherein the backing storage pointer and a size of the state backing storage area is set by an execution of an instruction to configure the register state cache by the instruction execution circuitry. 6. The apparatus of claim 5, wherein the execution of an instruction to configure the register state cache by the instruction execution circuitry comprises retrieving a configuration from memory and configuring usage of the register state cache, and the size of the state backing storage area is a field in the configuration. 7. The apparatus of claim 5, wherein the execution of an ...

More
07-02-2019 publication date

Techniques for command arbitation in symmetric multiprocessor systems

Number: US20190042486A1
Assignee: International Business Machines Corp

A technique for operating a data processing system includes determining, by an arbiter of a processing unit of the data processing system, whether an over-commit has occurred. In response to determining that the over-commit has occurred, the arbiter selects a broadcast command to be dropped based on a number of hops traversed through the data processing system by the broadcast command.

More
06-02-2020 publication date

USING STORAGE CLASS MEMORY AS A PERSISTENT OPERATING SYSTEM FILE/BLOCK CACHE

Number: US20200042447A1
Assignee: EMC IP Holding Company LLC

A host server in a server cluster has a memory allocator that creates a dedicated host application data cache in storage class memory. A background routine destages host application data from the dedicated cache in accordance with a destaging plan. For example, a newly written extent may be destaged based on aging. All extents may be flushed from the dedicated cache following host server reboot. All extents associated with a particular production volume may be flushed from the dedicated cache in response to a sync message from a storage array. 1. An apparatus comprising:a computing device comprising a processor that runs a host application, a non-volatile cache that is directly accessed by the processor, a memory allocator that allocates a portion of the non-volatile cache as a dedicated host application data cache, and a destaging program stored on non-transitory memory that destages host application data from the host application data cache in accordance with a destaging plan, wherein the computing device is responsive to a dirty page message from another computing device to refrain from accessing corresponding host application data.2. The apparatus of wherein the destaging program sends a dirty page message to another computing device in response to a newly written host data extent being written into the dedicated host application data cache.3. The apparatus of wherein the destaging program is responsive to a newly written host data extent being written into the dedicated host application data cache to start an aging timer associated with that newly written host data extent.4. The apparatus of wherein the destaging program destages the newly written host data extent from the dedicated host application data cache when the associated aging timer expires.5. The apparatus of wherein the destaging program destages all extents of host application data from the dedicated host application data cache in response to reboot of the computing device.6. The apparatus of ...
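The destaging plan reduces to three triggers. The sketch below models them with an explicit aging scan; the interval, the backend interface, and the method names are assumptions made for illustration.

```python
# Destaging-plan sketch for the SCM host cache above: age-based destage,
# flush-all on reboot, and flush-volume on a sync message from the array.
import time

class HostAppDataCache:
    def __init__(self, backend, age_limit=30.0):
        self.backend = backend                 # stands in for the storage array
        self.extents = {}                      # (volume, lba) -> (data, written_at)
        self.age_limit = age_limit

    def write(self, volume, lba, data):
        self.extents[(volume, lba)] = (data, time.time())

    def destage_aged(self):                    # trigger 1: aging of a new extent
        now = time.time()
        for key, (data, t) in list(self.extents.items()):
            if now - t >= self.age_limit:
                self.backend.write(key, data)
                del self.extents[key]

    def flush_all(self):                       # trigger 2: host server reboot
        for key, (data, _) in list(self.extents.items()):
            self.backend.write(key, data)
        self.extents.clear()

    def flush_volume(self, volume):            # trigger 3: sync message for one
        for key in [k for k in self.extents if k[0] == volume]:  # production volume
            self.backend.write(key, self.extents.pop(key)[0])
```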

More
06-02-2020 publication date

Using storage class memory as a persistent operating system file/block cache

Number: US20200042448A1
Assignee: EMC IP Holding Co LLC

A host server in a server cluster has a memory allocator that creates a dedicated host application data cache in storage class memory. A background routine destages host application data from the dedicated cache in accordance with a destaging plan. For example, a newly written extent may be destaged based on aging. All extents may be flushed from the dedicated cache following host server reboot. All extents associated with a particular production volume may be flushed from the dedicated cache in response to a sync message from a storage array.

More
18-02-2021 publication date

MEMORY MODULE DATA OBJECT PROCESSING SYSTEMS AND METHODS

Number: US20210049110A1
Author: Murphy Richard C.
Assignee:

The present disclosure provides methods, apparatus, and systems for implementing and operating a memory module, for example, in a computing system that includes a network interface, which may be coupled to a network to enable communication with a client device, and host processing circuitry, which may be coupled to the network interface via a system bus and programmed to perform first data processing operations based on user inputs received from the client device. The memory module may be coupled to the system bus and include memory devices and a memory controller coupled to the memory devices via an internal bus. The memory controller may include memory processing circuitry programmed to perform a second data processing operation that facilitates performance of the first data processing operations by the host processing circuitry based on context of the data block indicated by the metadata. 1. A computing system comprising one or more host computing devices, wherein the one or more host computing devices comprise: a network interface communicatively coupled to a system bus, wherein the network interface is configured to communicatively couple the one or more host computing devices to a first client computing device via a communication network to enable the one or more host computing devices to provide the first client computing device a first virtual machine; a memory sub-system communicatively coupled to the system bus, wherein the memory sub-system is configured to store a plurality of data objects, and a first data object of the plurality of data objects comprises a first copy of first virtual machine data and first tag metadata that indicates that the first data object is associated with the first virtual machine; host processing circuitry communicatively coupled to the system bus, wherein the host processing circuitry is configured to process the first data object to provide the first virtual machine; and a service processor communicatively coupled to the memory sub- ...

More
19-02-2015 publication date

CENTRALIZED MEMORY ALLOCATION WITH WRITE POINTER DRIFT CORRECTION

Number: US20150052316A1
Assignee:

A system for writing data includes a memory, at least one memory controller and control logic. The memory stores data units. The memory controller receives a write request associated with a data unit and stores the data unit in the memory. The memory controller also transmits a reply that includes an address where the data unit is stored. The control logic receives the reply and determines whether the address in the reply differs from an address included in replies associated with other memory controllers by a threshold amount. When this occurs, the control logic performs a corrective action to bring an address associated with the memory controller back within a defined range. 1. A system for writing data, comprising: a memory configured to store data units; at least one memory controller configured to receive a first write request associated with a data unit, store the data unit in the memory, and transmit a first reply including a first address where the data unit is stored; and control logic configured to receive the first reply, and determine whether the first address differs from an address included in at least one other reply by at least a first value. 2. The system of claim 1, wherein the control logic is further configured to: initiate a corrective action when the first address differs from the address included in the at least one other reply by at least the first value. 3. The system of claim 2, wherein the control logic is further configured to: suspend the corrective action when a second address included in a second reply differs from an address included in at least one other reply by less than a second value. 4. The system of claim 1, wherein the memory comprises a plurality of memory devices and the at least one memory controller comprises a plurality of memory controllers, the system further comprising: a request ... configured to generate the first write request, and transmit the first write request to a first one of the memory controllers; and ...
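The correction loop can be sketched as a monitor over reply addresses. The watermarks and the corrective action (suspending a controller from dispatch until it falls back within range) are illustrative choices, not the claimed values.

```python
# Drift-correction sketch: replies carry the address each controller wrote,
# and the control logic steers traffic away from a controller whose write
# pointer runs ahead of the others. Thresholds are invented for the demo.

HIGH_WATER = 64      # start corrective action beyond this many slots of drift
LOW_WATER = 8        # resume normal operation once back within this range

class DriftMonitor:
    def __init__(self, num_controllers):
        self.last_addr = [0] * num_controllers
        self.suspended = set()                # controllers under correction

    def on_reply(self, controller, addr):
        self.last_addr[controller] = addr
        drift = addr - min(self.last_addr)    # distance ahead of the laggard
        if drift >= HIGH_WATER:
            self.suspended.add(controller)    # corrective action: skip it when
        elif drift <= LOW_WATER:              # dispatching new write requests
            self.suspended.discard(controller)

    def pick_controller(self):
        candidates = [c for c in range(len(self.last_addr))
                      if c not in self.suspended]
        return min(candidates, key=lambda c: self.last_addr[c])

mon = DriftMonitor(4)
mon.on_reply(0, 100)           # controller 0 is far ahead of the others
print(mon.pick_controller())   # -> a lagging controller, letting 0 fall back
```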

More
19-02-2015 publication date

Systems, devices, memory controllers, and methods for memory initialization

Number: US20150052317A1
Author: Terry M. Grunzke
Assignee: Micron Technology Inc

Systems, devices, memory controllers, and methods for initializing memory are described. Initializing memory can include configuring memory devices in parallel. The memory devices can receive a shared enable signal. A unique volume address can be assigned to each of the memory devices.

More
16-02-2017 publication date

High-Speed WAN to Wireless LAN Gateway

Number: US20170048778A1
Author: Evans Gregory Morgan
Assignee:

A gateway interconnecting a high speed Wide Area Network (WAN) and a lower speed Wireless Local Area Network (WLAN) is provided. The high speed WAN is preferably connected to the gateway via a Fiber-to-the Home (FTTH) connection and associated FTTH modem. In general, the gateway includes an adaptable cross-layer offload engine operating to manage bandwidth between the high speed WAN and the lower speed WLAN. As data enters the gateway from the WAN at the high speed data rate of the WAN, the offload engine stores the data in a non-secure data cache. A rule check engine performs a stateless or stateful inspection of the data in the non-secure data cache. Thereafter, the data is moved from the non-secure data cache to a secure data cache and thereafter transmitted to an appropriate user device in the WLAN at the lower data rate of the WLAN. 1. A residential gateway within a customer premises interconnecting a Wide Area Network (WAN) external to the customer premises to a lower speed Wireless Local Area Network (WLAN) within the customer premises , the residential gateway comprising:an adaptable cross-layer offload engine;a data cache associated with the offload engine;a network interface communicatively coupling the offload engine to the WAN and providing a first data rate; anda wireless interface associated with the offload engine and adapted to communicate with a plurality of user devices within the WLAN, the interface providing a second data rate that is less than the first data rate of the network interface; receive incoming data from the WAN via the network interface at the first data rate;', 'store the incoming data in the data cache; and', 'transmit the incoming data from the data cache to a corresponding one of the plurality of user devices in the WLAN via the wireless interface at the second data rate;, 'wherein the offload engine is adapted tofurther wherein the gateway further comprises:a rule check engine adapted to inspect the incoming data from the WAN ...

More
03-03-2022 publication date

METHOD FOR EXECUTING ATOMIC MEMORY OPERATIONS WHEN CONTESTED

Number: US20220066936A1
Assignee:

Described are methods and a system for atomic memory operations with contended cache lines. A processing system includes at least two cores, each core having a local cache, and a lower level cache in communication with each local cache. One local cache is configured to request a cache line to execute an atomic memory operation (AMO) instruction, receive the cache line via the lower level cache, receive a probe downgrade due to another local cache requesting the cache line prior to execution of the AMO, and send the AMO instruction to the lower level cache for remote execution in response to the probe downgrade. 1. A processing system comprising: at least two cores, each core having a local cache; a lower level cache in communication with each local cache; and one local cache configured to: request a cache line to execute an atomic memory operation (AMO) instruction; receive the cache line via the lower level cache; receive a probe downgrade due to another local cache requesting the cache line prior to execution of the AMO; and send the AMO instruction to the lower level cache for remote execution in response to the probe downgrade. 2. The processing system of claim 1, wherein the request is for the cache line in an event of a cache miss at the one local cache. 3. The processing system of claim 1, wherein the request is for a cache coherence state upgrade in an event of a cache hit at the one local cache. 4. The processing system of claim 1, wherein the lower level cache is configured to: determine an availability of the cache line based on a variety of factors. 5. The processing system of claim 4, wherein the variety of factors includes at least a Least Recently Used (LRU) algorithm, latency, input from other caches or memory structures associated with the cache line, inclusive cache presence bits, matching a transaction in flight or buffered from another cache, whether the lower level cache has the cache line at all, ...
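Behaviorally, the policy is a single branch at execution time. The model below is synchronous and ignores real coherence states; it only shows the local-versus-remote decision after a probe downgrade, under invented class names.

```python
# Behavioral sketch of the contested-AMO policy above: if a probe downgrade
# takes the line before the AMO runs, the operation is shipped to the lower
# level cache instead of being retried locally.

class LowerLevelCache:
    def __init__(self):
        self.mem = {}
    def fetch(self, addr):
        return self.mem.get(addr, 0)
    def execute_amo(self, addr, op):           # remote execution at the LLC
        old = self.mem.get(addr, 0)
        self.mem[addr] = op(old)
        return old

class LocalCache:
    def __init__(self, llc):
        self.llc = llc
        self.lines = {}                         # addr -> value held writable

    def request_line(self, addr):               # cache miss: fetch for the AMO
        self.lines[addr] = self.llc.fetch(addr)

    def probe_downgrade(self, addr):            # another core wants the line
        if addr in self.lines:
            self.llc.mem[addr] = self.lines.pop(addr)

    def amo(self, addr, op):
        if addr in self.lines:                  # line survived: run locally
            old = self.lines[addr]
            self.lines[addr] = op(old)
            return old
        return self.llc.execute_amo(addr, op)   # contested: remote AMO

llc = LowerLevelCache()
core = LocalCache(llc)
core.request_line(0x40)
core.probe_downgrade(0x40)        # line stolen before the AMO could execute
core.amo(0x40, lambda v: v + 1)   # runs remotely at the lower level cache
```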

More
14-02-2019 publication date

SELECTIVE PAGE TRACKING FOR PROCESS CONTROLLER REDUNDANCY

Number: US20190050342A1
Author: DRAYTON GARY
Assignee:

A redundant process controller includes a primary and secondary process controller each with memory management unit (MMU) hardware and associated writeable memory including a tracked region having MMU pages for a control database. The primary and secondary process controller each have an associated MMU tracker algorithm including an exception handler and process control algorithm. At a beginning of a first control algorithm cycle the primary MMU tracker algorithm sets all of the primary MMU pages to read-only. The MMU tracker algorithm generates a page fault exception responsive to sensing a first of the primary MMU pages being written. During or upon an end of a control algorithm cycle, the primary process controller transfers process data associated with only the first primary MMU page to the secondary process controller, wherein the process data is stored in a secondary MMU page in the control database in the secondary tracked region. 1. A method of maintaining process control data redundancy, comprising: providing a fault-tolerant industrial process control system including processing equipment and field devices including a redundant process controller comprising a primary process controller comprising a primary processor having memory management unit (MMU) hardware and an associated primary writeable memory ...; at a beginning of a first control algorithm cycle setting all of said primary MMU pages to read-only; generating a page fault exception responsive to sensing at least a first of said primary MMU pages being written to; during or upon an end of first control algorithm cycle, said primary process controller transferring process data associated with only said first primary MMU page to said secondary process controller, wherein said process data is stored in one of said secondary MMU pages in said control database in said secondary tracked region; and for a new control algorithm cycle repeating said setting, sensing, tracking and said transferring.
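In software terms the cycle is: clear the dirty set (re-protect pages), record pages on first write (the fault handler), then transfer only dirty pages. The sketch models the MMU protections with an explicit write() call; names and page counts are invented.

```python
# Software model of the per-cycle MMU tracking loop described above; a real
# controller would use MMU page protections and a page-fault handler instead
# of the explicit write() call used here.

PAGE_SIZE = 4096

class TrackedRegion:
    def __init__(self, num_pages):
        self.pages = [bytearray(PAGE_SIZE) for _ in range(num_pages)]
        self.dirty = set()

    def begin_cycle(self):
        self.dirty.clear()            # ~ set every MMU page back to read-only

    def write(self, page, offset, data):
        self.dirty.add(page)          # ~ page-fault handler notes first write
        self.pages[page][offset:offset + len(data)] = data

    def transfer_dirty(self, secondary):
        for page in self.dirty:       # ship only written pages to the secondary
            secondary.pages[page][:] = self.pages[page]

primary, secondary = TrackedRegion(8), TrackedRegion(8)
primary.begin_cycle()
primary.write(3, 0, b"PV101=42")      # control algorithm updates one page
primary.transfer_dirty(secondary)     # only page 3 crosses to the secondary
```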

More
25-02-2021 publication date

Hybrid Memory Systems with Cache Management

Number: US20210056029A1
Author: Sharovar Igor
Assignee: Truememorytechnology, LLC

In a general aspect, a hybrid memory system with cache management is disclosed. In some aspects, a memory access request is transmitted by operation of a host memory controller to a memory module via a memory interface. Whether to execute the memory access request is determined by operation of the memory module according to one or more specifications of the memory interface. In response to determining the memory access request cannot be executed according to the one or more specifications of the memory interface, the host memory controller is notified by the memory module and halted. Respective actions are performed by operation of the memory module based on the memory access request and a type of the memory module. 1. A communication method comprising: transmitting, by operation of a host memory controller, a memory access request to a memory module via a memory interface; determining, by operation of the memory module, whether to execute the memory access request according to one or more specifications of the memory interface; in response to determining the memory access request cannot be executed according to the one or more specifications of the memory interface, notifying, by operation of the memory module, the host memory controller and halting the host memory controller; and performing, by operation of the memory module, respective actions based on the memory access request and a type of the memory module. 2. The method of claim 1, comprising: in response to determining the memory access request can be executed according to one or more specifications of the memory interface, completing the memory access request. 3. The method of claim 1, wherein the memory interface comprises a dual data rate (DDR) memory interface, the memory module comprises a dual in-line memory module (DIMM), notifying the host memory controller comprising: transmitting a signal to the host memory controller, by operation of the DIMM to inform the host memory controller the ...

22-02-2018 publication date

SEMICONDUCTOR DEVICE AND ELECTRONIC DEVICE

Number: US20180052784A1
Assignee:

A semiconductor device includes a first memory controller configured to output a first control signal to first and second external memories through a first memory interface, a second memory controller configured to output a second control signal to the second external memory through a second memory interface, an inter-device interface for communicating with another semiconductor device, terminals configured to output the second control signal that has passed through the second memory interface, and a first selector configured to select between the second memory interface and the inter-device interface in accordance with an operation mode of the semiconductor device and to couple the selected interface to the terminals.

1.-13. (canceled)
14. A semiconductor device comprising: a first and a second semiconductor chip, each of the first and second semiconductor chips being configured to be coupled to respective first and second external memories, and each of the first and second semiconductor chips including: (a) a first memory interface configured to be coupled to a corresponding first external memory; (b) a second memory interface configured to be coupled to a corresponding second external memory; (c) a first memory controller coupled to the first memory interface such that a first control signal sent from the first memory controller passes through the first memory interface before reaching the first and second external memories; (d) a second memory controller coupled to the second memory interface such that a second control signal sent from the second memory controller passes through the second memory interface before reaching the second external memory; (e) an inter-device interface being configured to establish communication between the first semiconductor chip and the second semiconductor chip; (f) terminals configured to output the second control signal that has passed through the second memory interface; and (g) a first selector configured to select between the second ...

13-02-2020 publication date

PROGRAMMABLE CACHE COHERENT NODE CONTROLLER

Number: US20200050547A1
Assignee:

A computer system includes a first group of CPU modules operatively coupled to at least one first Programmable ASIC Node Controller configured to execute transactions directly or through a first interconnect switch to at least one second Programmable ASIC Node Controller connected to a second group of CPU modules running a single instance of an operating system.

1.-16. (canceled)
17. A computer system comprising a first group of CPU modules operatively coupled to at least one first Programmable ASIC Node Controller configured to execute transactions directly or through a first interconnect switch to at least one second Programmable ASIC Node Controller connected to a second group of CPU modules running a single instance of an operating system.
18. The computer system according to claim 17, further comprising a Programmable ASIC Node Controller routing mechanism to perform direct and indirect connection to other Programmable ASIC Node Controllers within the computer system.
19. The computer system according to claim 18, wherein the routing mechanism is a Programmable ASIC Node Controller internal programmable crossbar switch.
20. The computer system according to claim 17, wherein the at least one first Programmable ASIC Node Controller is operatively coupled to at least one second Programmable Node Controller in a torus topology.
21. The computer system according to claim 17, wherein the at least one first Programmable ASIC Node Controller is operatively coupled to at least one second Programmable Node Controller in a Dragonfly topology.
22. The computer system according to claim 17, wherein the Programmable ASIC Node Controllers are operatively interconnected through an Ethernet switch.
23. The computer system according to claim 17, wherein the Programmable ASIC Node Controllers are operatively interconnected through an Omnipath switch.
24. The computer system according to claim 17, wherein the Programmable ASIC Node Controllers are operatively interconnected through ...

01-03-2018 publication date

Short-Circuiting Normal Grace-Period Computations In The Presence Of Expedited Grace Periods

Number: US20180060086A1
Author: Paul E. McKenney
Assignee: International Business Machines Corp

A technique for short-circuiting normal read-copy update (RCU) grace period computations in the presence of expedited RCU grace periods. The technique may include determining during normal RCU grace period processing whether at least one expedited RCU grace period elapsed during a normal RCU grace period. If so, the normal RCU grace period is ended. If not, the normal RCU grace period processing is continued. Expedited RCU grace periods may be implemented by expedited RCU grace period processing that periodically awakens a kernel thread that implements the normal RCU grace period processing. The expedited RCU grace period processing may conditionally throttle wakeups to the kernel thread based on CPU utilization.
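
A minimal sketch of the short-circuit check as a single-threaded Python model; the counter name and the structure are assumptions for illustration, not the Linux RCU implementation:

class RcuState:
    def __init__(self):
        self.expedited_seq = 0    # bumped by each expedited grace period

    def expedited_grace_period(self):
        self.expedited_seq += 1   # all CPUs forced through a quiescent state

    def normal_grace_period(self, wait_for_cpu_list):
        snap = self.expedited_seq
        for wait_for_cpu in wait_for_cpu_list:
            if self.expedited_seq != snap:
                # an expedited grace period elapsed while we were waiting,
                # so every reader predating this normal grace period is done
                return "short-circuited"
            wait_for_cpu()        # otherwise keep waiting, CPU by CPU
        return "completed normally"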

02-03-2017 publication date

WORKLOAD MANAGEMENT IN A GLOBAL RECYCLE QUEUE INFRASTRUCTURE

Number: US20170060753A1
Assignee:

Presented herein are methods, non-transitory computer readable media, and devices for integrating a workload management scheme for a file system buffer cache with a global recycle queue infrastructure. Methods for allocating a certain portion of the buffer cache without physically partitioning the buffer resources are disclosed which include: identifying a workload from a plurality of workloads; allocating the buffer cache in the data storage network for usage by the identified workload; tagging a buffer from within the buffer cache with a workload identifier and tracking each buffer; determining if the workload is exceeding its allocated buffer cache and, upon determining that the workload is exceeding its allocated percentage of the buffer cache, making the workload's excess buffers available to scavenge; and determining if the workload is exceeding a soft-limit on the allowable usage of the buffer cache and, upon determining that the workload is exceeding its soft-limit, degrading the prioritization of subsequent buffers, preventing the workload from thrashing out buffers of other workloads.

1. A method for integrating a workload management scheme for a buffer cache in a data storage system with a recycle queue infrastructure, the method comprising: identifying, by a storage server, a workload from a plurality of workloads; allocating at least a portion of the buffer cache in the data storage system to the identified workload; establishing a soft-limit on allowable usage above which additional usage is degraded; determining if the identified workload is exceeding its allocated buffer cache; and if the identified workload is exceeding its allocated buffer cache, by an excess, making at least a portion of the excess amount available for scavenging.
2. The method of claim 1, further comprising storing the workload identifier within data of each buffer.
3. The method of claim 1, further comprising tagging a buffer from within the buffer cache with a workload identifier and ...
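
A minimal sketch of the tagging and soft-limit behavior, with invented fields and limits (the patent does not specify these structures):

class BufferCache:
    def __init__(self, soft_limits):
        self.soft_limits = soft_limits           # workload id -> buffer budget
        self.usage = {w: 0 for w in soft_limits}
        self.scavengeable = []                   # buffers offered for reuse first

    def insert(self, workload, buf):
        buf["workload"] = workload               # tag buffer with its workload id
        self.usage[workload] += 1
        if self.usage[workload] > self.soft_limits[workload]:
            buf["priority"] = 0                  # degraded: scavenged before others
            self.scavengeable.append(buf)
        else:
            buf["priority"] = 1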

02-03-2017 publication date

SYSTEM AND METHOD FOR SHARING DATA SECURELY

Number: US20170063544A1
Assignee:

Embodiments of systems and methods disclosed herein provide simple and effective methods for secure processes to share selected data with other processes and other memory locations, either secure or not, in a safe and secure manner. More specifically, in certain embodiments, systems and methods are disclosed that enable a secure data cache system to use one or more virtual machines to securely generate encryption keys based on information from multiple independent sources. In some embodiments, systems and methods are disclosed that provide protection from replay attacks by selectively changing the generated encryption keys.

1. A method of providing secure operation of a device that is being managed by one or more external services comprising: the device receiving a first token from a first external service; the device receiving a second token from a second external service; generating a first intermediate token derived from the first token and a first key relating to the first external service; generating a second intermediate token derived from the second token and a second key relating to the second external service; generating a third intermediate token derived from the first intermediate token and the second key; generating a fourth intermediate token derived from the second intermediate token and the first key; combining the third intermediate token and fourth intermediate token to generate a first encryption key; and using the generated first encryption key to symmetrically encrypt and decrypt data used by the device.
2. The method of claim 1, wherein the data used by the device is encrypted using the generated first encryption key when it is evicted from a secure data cache and subsequently decrypted using the generated first encryption key when it is reloaded into the secure data cache from an external memory.
3. The method of claim 1, further comprising combining a counter with one or both of the first token or second token when generating the first intermediate ...
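
A minimal sketch of the four-intermediate-token derivation; HMAC-SHA256 stands in for the unspecified derivation function, and the XOR combination step is an assumption:

import hashlib
import hmac

def derive(token: bytes, key: bytes) -> bytes:
    return hmac.new(key, token, hashlib.sha256).digest()

def make_encryption_key(token1, token2, key1, key2):
    t1 = derive(token1, key1)   # first intermediate token (first service)
    t2 = derive(token2, key2)   # second intermediate token (second service)
    t3 = derive(t1, key2)       # third: first intermediate under second key
    t4 = derive(t2, key1)       # fourth: second intermediate under first key
    # combine both halves so neither service alone can reproduce the key
    return bytes(a ^ b for a, b in zip(t3, t4))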

02-03-2017 publication date

COMMUNICATION METHOD, COMMUNICATION DEVICE, AND RECORDING MEDIUM

Number: US20170063955A1
Author: TSUBOUCHI MASATOSHI
Assignee:

A mobile information device includes a first obtaining unit, a second obtaining unit, a reproduction unit and an issuing unit. The first obtaining unit obtains data by using a first line. The second obtaining unit obtains data by using a second line. The reproduction unit conducts streaming reproduction of the data obtained by one selected from the first obtaining unit and the second obtaining unit. The issuing unit divides unobtained data in the neighborhood of a reproduction position for streaming reproduction of the obtained data and issues a task of executing obtainment of data to one selected from the first obtaining unit and the second obtaining unit, for each of the positions at which the data is divided.

1. A communication method implemented by a communication device, the communication method comprising: obtaining data by using one selected from a first line and a second line; conducting streaming reproduction of the obtained data; and dividing unobtained data in the neighborhood of a reproduction position for streaming reproduction of the obtained data and issuing a task of executing obtainment of data to one selected from the first line and the second line, for each of the positions at which the data is divided.
2. The communication method according to claim 1, further comprising: issuing the task of executing obtainment of head data of the data to the first line and the second line, when the obtainment of the data is started.
3. The communication method according to claim 2, further comprising: determining one of the first and the second lines to be a main line and another line to be a sub line; and issuing a part of an unexecuted part of a task that is currently being executed by the main line to the sub line as a new task, on a basis of a communication state of the main line.
4. The communication method according to claim 3, wherein the communication state of the main line is detected on a basis of a degree of progress of a process performed on each of tasks currently ...

02-03-2017 publication date

DATA CACHING IN A COLLABORATIVE FILE SHARING SYSTEM

Number: US20170064027A1
Author: Grenader Denis
Assignee: BOX, INC.

A system and method for facilitating cache alignment in a cross-enterprise file collaboration system. The example method includes maintaining a plurality of messages in a cache, each message associated with a message offset; determining a message batch size; receiving a first request for a message characterized by a first offset; responding to the first request at least in part by sending an amount of data equal to the batch size starting at the first offset; receiving a second request for a second message characterized by a second offset; and if the second offset plus the data batch size spans across a boundary determined by the first offset plus the data batch size, then responding to the second request by sending an amount of data equal to the first offset plus the data batch size minus the second offset. In a more specific embodiment, the first and second requests are received from different committers.

1. A method for maintaining alignment in a cache, the cache used for distributing log information in a cross-enterprise file collaboration system, the method comprising the following performed by one or more server computers: maintaining a plurality of messages in a cache, each message associated with a topic partition and a message offset; determining a message batch size; receiving a first request for a message characterized by a first offset; responding to the first request at least in part by sending an amount of data equal to the batch size starting at the first offset; receiving a second request for a second message characterized by a second offset; wherein if the second offset plus the data batch size spans across a boundary determined by the first offset plus the data batch size then responding to the second request by sending an amount of data equal to the first offset plus the data batch size minus the second offset.
2. The method of claim 1, wherein each message in the cache is associated with a topic partition provided by a message broker.
3. The method ...
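
The boundary arithmetic in the claim reduces to a small clamp. A minimal sketch with illustrative names:

def batch_length(first_offset, second_offset, batch_size):
    boundary = first_offset + batch_size
    if second_offset < boundary < second_offset + batch_size:
        # the second request spans the boundary set by the first request:
        # send only (first_offset + batch_size - second_offset) messages
        return boundary - second_offset
    return batch_size

# batch_size 100: a reader at offset 0 fixes a boundary at 100, so a reader
# at offset 60 receives 40 messages and is realigned at offset 100
assert batch_length(0, 60, 100) == 40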

08-03-2018 publication date

METHOD FOR DETERMINING DATA IN CACHE MEMORY OF CLOUD STORAGE ARCHITECTURE AND CLOUD STORAGE SYSTEM USING THE SAME

Number: US20180067858A1
Assignee: ProphetStor Data Services, Inc.

A method for determining data in cache memory of a cloud storage architecture and a cloud storage system using the method are disclosed. The method includes the steps of: A. recording transactions from cache memory of a cloud storage during a period of time in the past, wherein each transaction comprises a time of recording, or a time of recording and cached data accessed during the period of time in the past; B. assigning a specific time in the future; C. calculating a time-associated confidence for every cached data from the transactions based on a reference time; D. ranking the time-associated confidences; and E. providing the cached data with higher time-associated confidence in the cache memory, and removing the cached data in the cache memory with lower time-associated confidence when the cache memory is full before the specific time in the future.

1. A method for determining data in cache memory of a cloud storage system, comprising the steps of: A. recording transactions from cache memory of a cloud storage system during a period of time in the past, wherein each transaction comprises a time of recording, or a time of recording and cached data accessed during the period of time in the past; B. assigning a specific time in the future; C. calculating a time-associated confidence for every cached data from the transactions based on a reference time; D. ranking the time-associated confidences; and E. providing the cached data with higher time-associated confidence in the cache memory, and removing the cached data in the cache memory with lower time-associated confidence when the cache memory is full before the specific time in the future.
2. The method according to claim 1, wherein the specific time is a specific minute in an hour, a specific hour in a day, a specific day in a week, a specific day in a month, a specific day in a season, a specific day in a year, a specific week in a month, a ...
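
A minimal sketch of one plausible time-associated confidence, counting past accesses near the target time; the scoring function itself is an assumption, since the abstract leaves it open:

from collections import defaultdict

def time_confidence(transactions, target_hour, window=1):
    # transactions: (hour_of_access, item_id) pairs recorded in the past
    hits = defaultdict(int)
    for hour, item in transactions:
        if abs(hour - target_hour) <= window:   # no midnight wrap, for brevity
            hits[item] += 1
    return hits

def plan_cache(transactions, target_hour, capacity):
    scores = time_confidence(transactions, target_hour)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:capacity]   # keep high-confidence items, evict the rest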

27-02-2020 publication date

Methods for providing data values using asynchronous operations and querying a plurality of servers

Number: US20200065245A1
Author: Arun Iyengar
Assignee: International Business Machines Corp

A processing system server and methods for performing asynchronous data store operations. The server includes a processor which maintains a cache of objects in communication with the server. The processor executes an asynchronous computation to determine the value of a first object. In response to a request for the first object occurring before the asynchronous computation has determined the value of the first object, a value of the first object is returned from the cache. In response to a request for the first object occurring after the asynchronous computation has determined the value of the first object, a value of the first object determined by the asynchronous computation is returned. The asynchronous computation may comprise at least one future, such as a ListenableFuture, or at least one process or thread. Execution of an asynchronous computation may occur with a frequency correlated with how frequently the object changes or how important it is to have a current value of the object. The asynchronous computation may receive different values from at least two servers and may determine the value of an object based on time stamps.
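
A minimal sketch of the serve-stale-then-adopt pattern using Python futures in place of the ListenableFutures the abstract mentions; class and method names are illustrative:

from concurrent.futures import ThreadPoolExecutor

class AsyncCache:
    def __init__(self, compute):
        self.compute = compute    # slow function that produces a fresh value
        self.cache = {}
        self.pending = {}
        self.pool = ThreadPoolExecutor(max_workers=4)

    def refresh(self, key):
        # kick off the asynchronous computation without blocking callers
        self.pending[key] = self.pool.submit(self.compute, key)

    def get(self, key):
        future = self.pending.get(key)
        if future is not None and future.done():
            self.cache[key] = future.result()   # adopt the async result
            del self.pending[key]
        return self.cache.get(key)              # else the cached value (or None)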

09-03-2017 publication date

METHOD AND APPARATUS FOR ADAPTIVE CACHE MANAGEMENT

Number: US20170070516A1
Assignee:

An apparatus and a method for processing data are provided. The method is performed by a terminal. The method includes identifying a plurality of inspection types for a packet; determining at least one inspection type from the plurality of inspection types for the packet based on a predetermined criterion; and processing the determined at least one inspection type for the packet.

1. A method for processing data by a terminal, the method comprising: identifying a plurality of inspection types for a packet; determining at least one inspection type from the plurality of inspection types for the packet based on a predetermined criterion; and processing the determined at least one inspection type for the packet.
2. The method of claim 1, wherein the predetermined criterion comprises at least one of a network type for transmitting or receiving the packet, a type of an application being executed in the terminal, and a configuration of the executed application or information on the packet.
3. The method of claim 1, wherein, if the at least one inspection type is determined based on the network type for transmitting or receiving the packet, determining the at least one inspection type further comprises: determining to process Internet Protocol version 4 (IPv4) address inspection for the packet, if the packet is transmitted or received using a Wi-Fi network; and determining to process Internet Protocol version 6 (IPv6) address inspection for the packet, if the packet is transmitted or received using a long-term evolution (LTE) network.
4. The method of claim 1, wherein determining the at least one inspection type comprises: determining to process security inspection for the packet, if the packet is transmitted or received using a public network.
5. The method of claim 1, wherein determining the at least one inspection type comprises: determining, if at least one packet is transmitted or received through an application being ...
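
A minimal sketch of the network-type rule in claims 3 and 4; the rule table is invented for illustration:

def select_inspections(network_type, public_network=False):
    inspections = []
    if network_type == "wifi":
        inspections.append("ipv4_address")   # Wi-Fi: IPv4 address inspection
    elif network_type == "lte":
        inspections.append("ipv6_address")   # LTE: IPv6 address inspection
    if public_network:
        inspections.append("security")       # public network: security inspection
    return inspections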

09-03-2017 publication date

SYSTEM AND METHOD FOR IMPROVING VIRTUAL MEDIA REDIRECTION SPEED IN BASEBOARD MANAGEMENT CONTROLLER (BMC)

Number: US20170070590A1
Assignee:

Certain aspects of the disclosure relate to a system and method of performing virtual media redirection. The system includes a baseboard management controller (BMC) connected to a host computing device through a communication interface, and a client computing device communicatively connected to the BMC through a network. In operation, the BMC emulates a virtual media for a media device, and establishes a virtual media connection to the client computing device through the network. Then the BMC stores the data from the media device in a host cache at the BMC and in a client cache at the client computing device by sectors. When the BMC receives a request from the host computing device through the communication interface to retrieve sectors from the media device, the BMC redirects the sectors being requested to the host computing device depending on where the requested sectors are stored.

1. A system, comprising: a host computing device; a baseboard management controller (BMC) communicatively connected to the host computing device through a communication interface, the BMC having a processor and a non-volatile memory, wherein the non-volatile memory stores a virtual media redirection module; and a client computing device communicatively connected to the BMC through a network, the client computing device having a memory; the virtual media redirection module, when executed at the processor, being configured to: emulate a virtual media for a media device, wherein the media device stores data of one or more media files; establish a virtual media connection to the client computing device through the network, wherein a client cache is created at the memory of the client computing device; store the data from the media device in a host cache by sectors; instruct the client computing device to store the data from the media device in the client cache by sectors; receive a request from the host computing device through the communication interface to retrieve one or more sectors from the media device; and perform a local virtual media redirection when one ...

07-03-2019 publication date

Redundant, fault tolerant, distributed remote procedure call cache in a storage system

Number: US20190073282A1
Assignee: Pure Storage Inc

A method of operating a remote procedure call cache in a storage cluster is provided. The method includes receiving a remote procedure call at a first storage node having solid-state memory and writing information, relating to the remote procedure call, to a remote procedure call cache of the first storage node. The method includes mirroring the remote procedure call cache of the first storage node in a mirrored remote procedure call cache of a second storage node. A plurality of storage nodes and a storage cluster are also provided.
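
A minimal sketch of the mirroring idea, with hypothetical classes (the patent's storage nodes are solid-state hardware, not Python objects): a retried RPC is answered from either cache copy instead of being re-executed.

class StorageNode:
    def __init__(self):
        self.rpc_cache = {}
        self.mirror = None       # partner node holding the mirrored cache

    def handle_rpc(self, rpc_id, operation):
        if rpc_id in self.rpc_cache:
            return self.rpc_cache[rpc_id]            # duplicate: replay result
        result = operation()
        self.rpc_cache[rpc_id] = result
        if self.mirror is not None:
            self.mirror.rpc_cache[rpc_id] = result   # mirror before acknowledging
        return result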

15-03-2018 publication date

Selective application of interleave based on type of data to be stored in memory

Number: US20180074961A1
Assignee: Intel Corp

Technology for an apparatus is described. The apparatus can include a plurality of cache memories and a cache controller. The cache controller can allocate a cache entry to store data across the plurality of cache memories. The cache entry can include a value in a metadata field indicating an interleave policy. The cache controller can selectively assign the interleave policy to be applied based on a type of data stored in the plurality of cache memories.

15-03-2018 publication date

POWER AWARE HASH FUNCTION FOR CACHE MEMORY MAPPING

Number: US20180074964A1
Assignee:

A multi-core processing chip where the last-level cache functionality is implemented by multiple last-level caches (a.k.a. cache slices) that are physically and logically distributed. The hash function used by the processors on the chip is changed according to which of the last-level caches are active (e.g., 'on') and which are in a lower power consumption mode (e.g., 'off'). Thus, a first hash function is used to distribute accesses (i.e., reads and writes of data blocks) to all of the last-level caches when, for example, all of the last-level caches are 'on.' A second hash function is used to distribute accesses to the appropriate subset of the last-level caches when, for example, some of the last-level caches are 'off.' The chip controls the power consumption by turning on and off cache slices based on power states, and consequently dynamically switches among at least two hash functions.

1. An integrated circuit, comprising: a plurality of last-level caches that can be placed in at least a first high power consumption mode and a first low power consumption mode; a plurality of processor cores to access data in the plurality of last-level caches according to a first hashing function that maps processor access addresses to respective ones of the plurality of last-level caches based at least in part on all of the last-level caches being in the first high power consumption mode, the plurality of processor cores to access data in the plurality of last-level caches according to a second hashing function that maps processor access addresses to a subset of the plurality of last-level caches based at least in part on at least one of the last-level caches being in the first low power consumption mode; and an interconnect network to receive hashed access addresses from the plurality of processor cores and to couple each of the plurality of processor cores to a respective one of the plurality of last-level caches specified by the hashed access addresses generated by a respective ...
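
A minimal sketch of switching the address-to-slice hash with the power state; the modulo-over-active-slices hash is an illustrative choice, and a real design would also have to migrate or flush lines when the mapping changes:

class SliceMapper:
    def __init__(self, num_slices):
        self.active = set(range(num_slices))   # all cache slices start 'on'

    def power_off(self, slice_id):
        self.active.discard(slice_id)          # later accesses avoid this slice

    def power_on(self, slice_id):
        self.active.add(slice_id)

    def slice_for(self, address):
        # the effective hash function changes with the active set: a new
        # divisor redistributes addresses over the remaining slices
        slices = sorted(self.active)
        return slices[address % len(slices)]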

16-03-2017 publication date

METHOD AND APPARATUS OF ACCESSING DATA OF VIRTUAL MACHINE

Number: US20170075718A1
Author: Quan Xiao Fei
Assignee:

Methods and devices for accessing virtual machine (VM) data are described. A computing device for accessing virtual machine data comprises an access request process module, a data transfer proxy module and a virtual disk. The access request process module receives a data access request sent by a VM and adds the data access request to a request array. The data transfer proxy module obtains the data access request from the request array, maps the obtained data access request to a corresponding virtual storage unit, and maps the virtual storage unit to a corresponding physical storage unit of a distributed storage system. A corresponding data access operation may be performed based on a type of the data access request.

1.-20. (canceled)
21. A system comprising: one or more processors; memory; an access request process module stored in the memory and executable by the one or more processors to: receive a data access request from a virtual machine, and add the data access request into a request array in an event that data associated with the data access request does not exist in a cache storage; and a data transfer proxy module stored in the memory and executable by the one or more processors to: obtain the data access request from the request array, and map the data access request to a virtual storage unit that corresponds to a physical storage unit of a storage system.
22. The system of claim 21, wherein the data transfer proxy module further performs a data access operation on the data associated with the data access request based at least in part on a request type that is included in the data access request.
23. The system of claim 21, wherein the data access request includes a request type, an initial address of data storage, and a data size.
24. The system of claim 21, further comprising a cache module stored in the memory and executable by the one or more processors to store the data associated with the data access request in a cache slot of ...

16-03-2017 publication date

Input/output processing

Number: US20170075846A1
Author: Michael R. Krause

The present disclosure provides an electronic device that includes a lower device configured to process local input/output communications between the electronic device and a host, wherein the lower device is stateless. The electronic device also includes a memory comprising a data flow identifier used to associate a data flow resource of the host with a data flow resource corresponding to the lower device. A data packet sent from the lower device to the host includes the data flow identifier.

18-03-2021 publication date

SHARED MEMORY

Number: US20210081312A1
Assignee:

Examples described herein include a network interface controller comprising a memory interface and a network interface, the network interface controller configurable to provide access to local memory and remote memory to a requester, wherein the network interface controller is configured with an amount of memory of different memory access speeds for allocation to one or more requesters. In some examples, the network interface controller is to grant or deny a memory allocation request from a requester based on a configuration of an amount of memory for different memory access speeds for allocation to the requester. In some examples, the network interface controller is to grant or deny a memory access request from a requester based on a configuration of memory allocated to the requester. In some examples, the network interface controller is to regulate quality of service of memory access requests from requesters.

1. An apparatus comprising: a network interface controller comprising a memory interface and a network interface, the network interface controller configurable to provide access to local memory and remote memory to a requester, wherein the network interface controller is configured with an amount of memory of different memory access speeds for allocation to one or more requesters.
2. The apparatus of claim 1, wherein the network interface controller is to grant or deny a memory allocation request from a requester based on a configuration of an amount of memory for different memory access speeds for allocation to the requester.
3. The apparatus of claim 1, wherein the network interface controller is to grant or deny a memory access request from a requester based on a configuration of memory allocated to the requester.
4. The apparatus of claim 1, wherein the network interface controller is to regulate quality of service of memory access requests from requesters.
5. The apparatus of claim 1, wherein the local memory comprises a lower latency memory technology ...
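
A minimal sketch of the grant/deny decision against per-requester quotas for each memory tier; the policy shape and the quota table are assumptions:

class NicAllocator:
    def __init__(self, quotas):
        # quotas: requester -> {"local": bytes, "remote": bytes}
        self.quotas = quotas
        self.used = {r: {"local": 0, "remote": 0} for r in quotas}

    def allocate(self, requester, tier, size):
        if requester not in self.quotas:
            return False                       # deny: no configuration at all
        if self.used[requester][tier] + size > self.quotas[requester][tier]:
            return False                       # deny: exceeds configured amount
        self.used[requester][tier] += size
        return True                            # grant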

18-03-2021 publication date

METHOD AND APPARATUS FOR PERFORMING PIPELINE-BASED ACCESSING MANAGEMENT IN A STORAGE SERVER

Number: US20210081321A1
Assignee:

A method for performing pipeline-based accessing management in a storage server and an associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the fixed size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.

1. A method for performing pipeline-based accessing management in a storage server, the method being applied to the storage server, the method comprising: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server, wherein the storage server comprises the host device and the storage device layer, the storage device layer comprises at least one storage device that is coupled to the host device, the host device is arranged to control operations of the storage server, and said at least one storage device is arranged to store information for the storage server; during processing the object write command with the pipeline architecture, utilizing the host device to allocate a buffer from a buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing ...

22-03-2018 publication date

OPTIMIZING REMOTE DIRECT MEMORY ACCESS (RDMA) WITH CACHE ALIGNED OPERATIONS

Number: US20180081852A1
Assignee:

A system for optimizing remote direct memory accesses (RDMA) is provided. The system includes a first computing device and a second computing device disposed in signal communication with the first computing device. The first and second computing devices are respectively configured to exchange RDMA credentials during a setup of a communication link between the first and second computing devices. The exchanged RDMA credentials include cache line size information of the first computing device by which a cache aligned RDMA write operation is executable on a cache of the first computing device in accordance with the cache line size information by the second computing device.

1. A system for optimizing remote direct memory accesses (RDMA), the system comprising: a first computing device; and a second computing device, RDMA credentials being exchangeable between the first and second computing devices during a first and second computing device communication link setup, and the exchanged RDMA credentials comprising cache line size information of the first computing device by which a write operation is executable by the second computing device.
2. The system according to claim 1, wherein: the second computing device is configured to issue a link request to the first computing device, and the first computing device is configured to issue a link response to the second computing device in response to the link request, the link response comprising a first indication that an align write option is unsupported by the first computing device or a second indication that the align write option is supported by the first computing device for a predefined cache size.
3. The system according to claim 1, wherein the write operation is adjustable.
4. The system according to claim 1, wherein the write operation comprises an addition of a trailing pad to write operation data.
5. The system according to claim 1, wherein the write operation comprises a definition of a start address.
6. The system ...
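
The trailing pad of claim 4 is a one-line computation once the peer's cache line size is known from the credential exchange. A minimal sketch:

def cache_aligned_write(payload: bytes, peer_line_size: int) -> bytes:
    remainder = len(payload) % peer_line_size
    if remainder:
        # trailing pad so the write ends exactly on a cache line boundary
        payload += b"\x00" * (peer_line_size - remainder)
    return payload

assert len(cache_aligned_write(b"x" * 200, 128)) == 256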

25-03-2021 publication date

TECHNIQUES TO CONTROL AN INSERTION RATIO FOR A CACHE

Number: US20210089216A1
Assignee:

Examples may include techniques to control an insertion ratio or rate for a cache. Examples include comparing cache miss ratios for different time intervals or windows for a cache to determine whether to adjust a cache insertion ratio that is based on a ratio of cache misses to cache insertions.

1. An apparatus comprising: circuitry to: determine, during a first time interval, a first cache use characteristic for an exact match cache (EMC); determine, during a second time interval, a second cache use characteristic for the EMC; and determine whether to adjust an operational use of the EMC during a subsequent time interval based on the first cache use characteristic and the second cache use characteristic.
2. The apparatus of claim 1, wherein the first and second cache use characteristics to determine whether to adjust the operational use of the EMC during a subsequent time interval comprises cache insertion data.
3. The apparatus of claim 2, the cache insertion data comprises a cache insertion ratio based on a ratio of cache misses to cache insertions for the EMC, wherein the circuitry to adjust the operational use of the EMC includes an adjustment to the cache insertion ratio to a value of 0.
4. The apparatus of claim 1, wherein the first and second cache use characteristics to determine whether to adjust the operational use of the EMC during a subsequent time interval comprises a ratio of cache miss data.
5. The apparatus of claim 1, wherein the first and second cache use characteristics to determine whether to adjust the operational use of the EMC during a subsequent time interval comprises a ratio of cache misses to cache insertions.
6. The apparatus of claim 1, comprising the circuitry included in a classifier for a virtual switch, data stored to the EMC to be used by the classifier to match packet headers for packets to be processed by the virtual switch.
7. The apparatus of claim 1, comprising: the first cache use characteristic ...
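
A minimal sketch of one plausible adjustment policy built on the ratios the claims describe; the doubling/halving rule is invented, and "ratio" here is the number of misses per insertion, so raising it inserts less often:

def adjust_insertion_ratio(misses_1, lookups_1, misses_2, lookups_2, ratio):
    r1 = misses_1 / max(lookups_1, 1)   # miss ratio, first time window
    r2 = misses_2 / max(lookups_2, 1)   # miss ratio, second time window
    if r2 > r1:
        return ratio * 2                # cache not helping: insert less often
    if r2 < r1:
        return max(ratio // 2, 1)       # improving: insert more often again
    return ratio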

31-03-2016 publication date

Memory write management in a computer system

Number: US20160092118A1
Assignee: Intel Corp

In accordance with the present description, an apparatus for use with a source issuing write operations to a target, wherein the apparatus includes an I/O port, and logic of the target configured to detect a flag issued by the source in association with the issuance of a first plurality of write operations. In response to detection of the flag, the logic of the target ensures that the first plurality of write operations are completed in a memory prior to completion of any of the write operations of a second plurality of write operations. Also described is an apparatus of the source which includes an I/O port, and logic of the source configured to issue the first plurality of write operations and to issue a write fence flag in association with the issuance of the first plurality of write operations. Other aspects are described herein.

31-03-2016 publication date

DATABASE APPARATUS, DATABASE MANAGEMENT METHOD PERFORMED IN DATABASE APPARATUS AND STORING THE SAME

Number: US20160092460A1
Assignee:

A database apparatus may include a database unit configured to store first and second data groups being classified based on a data attribute, a first caching unit associated with the first data group and including a first cache architecture, and a second caching unit associated with the second data group and including a second cache architecture.

1. A database apparatus comprising: a database unit configured to store first and second data groups classified based on a data attribute; a first caching unit associated with the first data group, and including a first cache architecture; and a second caching unit associated with the second data group, and including a second cache architecture.
2. The database apparatus of claim 1, wherein an average data size of the first data group is equal to or less than an average data size of the second data group, and an average transaction frequency of the first data group is equal to or less than an average transaction frequency of the second data group.
3. The database apparatus of claim 1, further comprising: a control unit configured to analyze a database query received from a user and to provide the analyzed database query to the database unit, the first caching unit or the second caching unit.
4. The database apparatus of claim 3, wherein the control unit is to analyze a query predicate of the database query and to provide the analyzed database query to the first caching unit when a corresponding table is associated with the first data group.
5. The database apparatus of claim 4, wherein the first caching unit is to interpret the database query and to search a cache mapping table to determine effectiveness of a query result for the database query when the interpreted database query is searched in the cache mapping table.
6. The database apparatus of claim 5, wherein the first caching unit is to provide a cache key based on a query identifier and the query predicate of the database query to search the cache key in the cache mapping ...

21-03-2019 publication date

APPLYING MULTIPLE HASH FUNCTIONS TO GENERATE MULTIPLE MASKED KEYS IN A SECURE SLICE IMPLEMENTATION

Number: US20190087109A1
Author: Resch Jason K.
Assignee:

Methods and apparatus for efficiently storing and accessing secure data are disclosed. The method of storing includes encrypting data utilizing an encryption key to produce encrypted data, performing deterministic functions on the encrypted data to produce deterministic function values, masking the encryption key utilizing the deterministic function values to produce masked keys and combining the encrypted data and the masked keys to produce a secure package. The method of accessing includes de-combining a secure package to reproduce encrypted data and masked keys, selecting a deterministic function, performing the selected deterministic function on the reproduced encrypted data to reproduce a deterministic function value, de-masking a corresponding masked key utilizing the reproduced deterministic function value to reproduce an encryption key, and decrypting the reproduced encrypted data utilizing the reproduced encryption key to reproduce data.

1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the DSN including a plurality of storage units, the method comprising: encrypting data utilizing an encryption key to produce encrypted data; performing a plurality of deterministic functions on the encrypted data to produce a plurality of deterministic function values; masking the encryption key utilizing the plurality of deterministic function values to produce a plurality of masked keys; combining the encrypted data and the plurality of masked keys to produce a secure package; and dispersed storage error encoding the secure package.
2. The method of claim 1, wherein each of the plurality of deterministic function values includes a first number of bits that is substantially the same as a second number of bits of the encryption key.
3. The method of claim 1, wherein masking the encryption key utilizing the plurality of deterministic function values includes performing an exclusive OR function on ...
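
A minimal sketch of the secure-package construction; an XOR keystream stands in for the cipher and two SHA-2 digests stand in for the patent's unspecified deterministic functions, so this shows the masking structure only:

import hashlib
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def secure_package(data: bytes):
    key = os.urandom(32)
    ciphertext = xor(data.ljust(32, b"\x00"), key)   # toy 32-byte 'encryption'
    digests = [hashlib.sha256(ciphertext).digest(),
               hashlib.sha3_256(ciphertext).digest()]
    masked_keys = [xor(key, d) for d in digests]     # mask key with each digest
    return ciphertext, masked_keys                   # the combined secure package

def recover(ciphertext: bytes, masked_keys):
    # any one deterministic function suffices to de-mask the key
    digest = hashlib.sha256(ciphertext).digest()
    key = xor(masked_keys[0], digest)
    return xor(ciphertext, key)                      # zero-padded plaintext back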

02-04-2015 publication date

NETWORK STORAGE SYSTEM AND METHOD FOR FILE CACHING

Number: US20150095442A1
Assignee:

A network storage system and a method for file caching are provided. The network storage system includes a first electronic apparatus and a server. The first electronic apparatus has a first storage space. The server has a network storage space larger than the first storage space. When the first electronic apparatus sends an access request to the server for accessing a first file within the network storage space, the server broadcasts a cache list in response to the access request. The cache list includes the first file and a plurality of neighboring files neighboring the first file. After receiving the cache list, the first electronic apparatus accesses the first file according to the cache list, and caches at least one of the neighboring files according to a first cache space size of the first storage space.

1. A network storage system, comprising: a first electronic apparatus, having a first storage space; and a server, connected with the first electronic apparatus and having a network storage space larger than the first storage space, wherein when the first electronic apparatus sends an access request to the server for accessing a first file within the network storage space, the server broadcasts a cache list in response to the access request, wherein the cache list comprises the first file and a plurality of neighboring files neighboring the first file, and after receiving the cache list, the first electronic apparatus accesses the first file according to the cache list and accesses at least one of the plurality of neighboring files according to a first cache space size of the first storage space.
2. The network storage system according to claim 1, wherein the plurality of neighboring files and the first file belong to a same directory, a same path and/or a same folder.
3. The network storage system according to claim 1, wherein a modification time difference between each of the neighboring files and the first file is less than a predetermined threshold.
4. The ...
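
A minimal sketch of building the cache list from same-folder neighbors and filling the client cache up to its space budget; the selection rule and the size table are illustrative:

import os

def cache_list(all_files, requested):
    folder = os.path.dirname(requested)
    neighbors = [f for f in all_files
                 if os.path.dirname(f) == folder and f != requested]
    return [requested] + neighbors          # first file, then its neighbors

def client_cache(listing, sizes, cache_space):
    cached, used = [], 0
    for path in listing:
        if used + sizes[path] <= cache_space:
            cached.append(path)             # cache as many as space allows
            used += sizes[path]
    return cached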

30-03-2017 publication date

OPTIMIZING REMOTE DIRECT MEMORY ACCESS (RDMA) WITH CACHE ALIGNED OPERATIONS

Number: US20170091144A1
Assignee:

A system for optimizing remote direct memory accesses (RDMA) is provided. The system includes a first computing device and a second computing device disposed in signal communication with the first computing device. The first and second computing devices are respectively configured to exchange RDMA credentials during a setup of a communication link between the first and second computing devices. The exchanged RDMA credentials include cache line size information of the first computing device by which a cache aligned RDMA write operation is executable on a cache of the first computing device in accordance with the cache line size information by the second computing device.

1. A system for optimizing remote direct memory accesses (RDMA), the system comprising: a first computing device; and a second computing device disposed in signal communication with the first computing device, the first and second computing devices being respectively configured to exchange RDMA credentials during a setup of a communication link between the first and second computing devices, and the exchanged RDMA credentials comprising cache line size information of the first computing device by which a cache aligned RDMA write operation is executable on a cache of the first computing device in accordance with the cache line size information by the second computing device.
2. The system according to claim 1, wherein: the second computing device is configured to issue a link request to the first computing device, and the first computing device is configured to issue a link response to the second computing device in response to the link request, the link response comprising one of: a first indication that an align RDMA write option (ARW) is unsupported by the first computing device; and a second indication that the ARW is supported by the first computing device for a predefined cache size.
3. The system according to claim 1, wherein the cache aligned RDMA write operation comprises an RDMA write operation ...

19-06-2014 publication date

RAPID VIRTUAL MACHINE SUSPEND AND RESUME

Number: US20140173196A1
Assignee: VMWARE, INC.

A method of enabling "fast" suspend and "rapid" resume of virtual machines (VMs) employs a cache that is able to perform input/output operations at a faster rate than a storage device provisioned for the VMs. The cache may be local to a computer system that is hosting the VMs or may be a shared cache commonly accessible to VMs hosted by different computer systems. The method includes the steps of saving the state of the VM to a checkpoint file stored in the cache and locking the checkpoint file so that data blocks of the checkpoint file are maintained in the cache and are not evicted, and resuming execution of the VM by reading into memory the data blocks of the checkpoint file stored in the cache.

1. In a virtualized computer system including a plurality of host computers, a first storage device accessible by the host computers, and a second storage device accessible by the host computers that has lower input/output latency and higher input/output throughput than the first storage device, a method of managing physical resources of the virtualized computer system including the first storage device and the second storage device, said method comprising: making a reservation against available capacity of the second storage device when a VM is powered on in one of the host computers; upon suspending the VM and receiving acknowledgement that a state of the VM has been saved in the second storage device, decreasing the available capacity of the second storage device by the size of the saved state of the VM; and upon resuming the VM, increasing the available capacity of the second storage device by the size of the saved state of the VM and making a reservation against the available capacity of the second storage device.
2. The method of claim 1, wherein the reservation made against the available capacity of the second storage device when the VM is powered on in one of the host computers is for capacity of the second storage device equal to the size of the saved state of the ...

30-03-2017 publication date

INFORMATION TRANSMISSION BASED ON MODAL CHANGE

Number: US20170094008A1
Assignee:

A dual-mode, dual-display shared resource computing (SRC) device is usable to stream SRC content from a host SRC device while in an on-line mode and maintain functionality with the content during an off-line mode. Such remote SRC devices can be used to maintain multiple user-specific caches and to back up cached content for multi-device systems.

1.-20. (canceled)
21. One or more computer-readable storage media storing executable instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: receiving, at a host device, first cache content transmitted from a first device associated with a first user; storing, in a first user-specific memory cache of the host device, the first cache content of the first device associated with the first user; determining when one or more additional devices are associated with the first user; and transmitting the cache content to the one or more additional devices associated with the first user.
22. The one or more computer-readable storage media of claim 21, the acts further comprising: receiving second cache content transmitted from a second device associated with a second user, the second user being different from the first user; and storing, in a second user-specific memory cache of the host device, the second cache content of the second device associated with the second user, the second user-specific memory cache being different from the first user-specific memory cache.
23. The one or more computer-readable storage media of claim 21, wherein the first cache content comprises content previously streamed from the host device to the first device during a last session of the first user on the first device.
24. The one or more computer-readable storage media of claim 21, the acts further comprising: receiving second cache content from the first device; determining that the first device is logged in by a second user different from the first user; and in response to determining that the ...

19-03-2020 publication date

Coherent Caching of Data for High Bandwidth Scaling

Number: US20200089611A1
Assignee:

A method, computer readable medium, and system are disclosed for a distributed cache that provides multiple processing units with fast access to a portion of data, which is stored in local memory. The distributed cache is composed of multiple smaller caches, and each of the smaller caches is associated with at least one processing unit. In addition to a shared crossbar network through which data is transferred between processing units and the smaller caches, a dedicated connection is provided between two or more smaller caches that form a partner cache set. Transferring data through the dedicated connections reduces congestion on the shared crossbar network. Reducing congestion on the shared crossbar network increases the available bandwidth and allows the number of processing units to increase. A coherence protocol is defined for accessing data stored in the distributed cache and for transferring data between the smaller caches of a partner cache set.

1. A distributed cache storage, comprising: a first cache storage within a first processor coupled to a first slice of memory, the first cache storage including a first cache line that stores first data from a first location in the first slice of memory and is coherent with the first location, wherein the first cache storage is directly coupled to a second cache storage within a second processor through a dedicated connection and indirectly coupled to the second cache storage through a shared connection; and the second cache storage coupled to a second slice of the memory and including a second cache line that stores second data from a second location in the second slice of memory and is coherent with the second location, wherein the first cache line is written with the second data through the dedicated connection.
2. The distributed cache storage of claim 1, wherein state information for the first cache line is modified from indicating the first cache line is coherent with the first location to indicate the first ...

19-03-2020 publication date

CACHE COHERENT NODE CONTROLLER FOR SCALE-UP SHARED MEMORY SYSTEMS

Number: US20200089612A1
Assignee: Numascale AS

The present invention relates to cache coherent node controllers for scale-up shared memory systems. In particular, a computer system is disclosed at least comprising a first group of CPU modules connected to at least one first FPGA Node Controller configured to execute transactions directly or through a first interconnect switch to at least one second FPGA Node Controller connected to a second group of CPU modules running a single instance of an operating system.

1. A multiprocessor memory sharing system at least comprising two or more nodes, a first node, a second node and optionally an interconnect switch, each node comprises a group of CPU's and an FPGA Node Controller, the first node comprises a first group of CPU's and at least one first FPGA Node Controller configured to execute transactions directly or through the optional interconnect switch to at least one second FPGA Node Controller connected to a second group of CPU's running a single instance of an operating system, where at least one of the FPGA Node Controllers at least comprises: a) at least one Coherent Interface configured to connect the FPGA Node Controller to one or more CPU's within the same node; b) a CPU interface unit configured to translate transactions specific to a particular CPU architecture into a global cache coherence protocol; c) a remote memory protocol engine, RMPE, configured to handle memory transactions that are destined to operate on memory connected with CPU's that are located on another side of a Coherent Interface Fabric, the RMPE is controlled by microcode firmware and programmed to be compliant with the cache coherence protocol of the particular CPU architecture; d) a local memory protocol engine, LMPE, specifically designed to handle memory transactions through executing microcode firmware and configured to handle all memory transactions that are directed from an external CPU to the memory connected to the CPU's local to the Node Controller; and e) an interconnect fabric ...

19-03-2020 publication date

MULTIPLE MEMORY TYPE MEMORY MODULE SYSTEMS AND METHODS

Number: US20200089626A1
Author: Murphy Richard C.
Assignee:

The present disclosure provides methods, apparatuses, and systems for implementing and operating a memory module, for example, in a computing device that includes a network interface, which is coupled to a network to enable communication with a client device, and processing circuitry, which is coupled to the network interface via a data bus and programmed to perform operations based on user inputs received from the client device. The memory module includes memory devices, which may be non-volatile memory or volatile memory, and a memory controller coupled between the data bus and the memory devices. The memory controller may be programmed to determine when the processing circuitry is expected to request a data block and control data storage in the memory devices.

1. An apparatus, comprising: a plurality of memory devices, wherein: a first portion of the plurality of memory devices is implemented to provide volatile memory; and a second portion of the plurality of memory devices is implemented to provide non-volatile memory; and a memory controller comprising buffer memory, wherein: the buffer memory comprises static random-access memory; the memory controller is communicatively coupled to each of the plurality of memory devices; and the memory controller is configured to deterministically store data blocks in either the volatile memory on the apparatus or the non-volatile memory implemented on the apparatus based at least in part on when the data blocks are expected to be requested from the apparatus.
2. The apparatus of claim 1, wherein the memory controller is configured to predict when the data blocks will be requested based at least in part on analysis of data access parameters associated with the data blocks using machine learning techniques.
3. The apparatus of claim 1, wherein, to deterministically store data blocks in either the volatile memory or the non-volatile memory, the memory controller is configured to: store a first ...

More
05-04-2018 publication date

RESTORING DISTRIBUTED SHARED MEMORY DATA CONSISTENCY WITHIN A RECOVERY PROCESS FROM A CLUSTER NODE FAILURE

Number: US20180095848A1

A DSM component is organized as a matrix of pages. A data structure of a set of data structures occupies a column in the matrix of pages. A recovery file is maintained in persistent storage. The recovery file consists of entries, and each entry corresponds to a column in the matrix of pages by the location of that entry. The set of data structures is stored in the DSM component and in the persistent storage. Incorporated into each of the entries in the recovery file is an indication of whether the associated column in the matrix of pages is assigned with a data structure of the set of data structures; additionally incorporated into each entry are identifying key properties of that data structure.

1. A method for restoring distributed shared memory (DSM) data consistency within a recovery process from a failure of a node in a cluster of nodes by a processor device, comprising:
organizing a DSM component as a matrix of pages, wherein a data structure of a set of data structures occupies a column in the matrix of pages;
maintaining a recovery file in a persistent storage, wherein the recovery file consists of a plurality of entries and each one of the plurality of entries corresponds to a column in the matrix of pages by a location of each one of the plurality of entries;
storing the set of data structures in the DSM component and in the persistent storage;
incorporating into each one of the plurality of entries in the recovery file an indication if an associated column in the matrix of pages is assigned with the data structure of the set of data structures; and
incorporating into each one of the plurality of entries in the recovery file identifying key properties of the data structure of the set of data structures and a specification of the location of the data structure in the persistent storage if the associated column in the matrix of pages is assigned.
2. ...
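The recovery-file layout lends itself to a short illustration: one entry per matrix column, each carrying an assigned flag, the identifying key properties, and a persistent-storage location. The field names, the column count, and the path string below are hypothetical; this is a sketch of the bookkeeping, not the patented recovery procedure.

```python
# One recovery-file entry per column of the DSM page matrix; recovery rebuilds
# only the columns whose entries are flagged as assigned.

import json

NUM_COLUMNS = 4
recovery_file = [{"assigned": False} for _ in range(NUM_COLUMNS)]

def assign_column(col, key_props, storage_location):
    recovery_file[col] = {
        "assigned": True,
        "key_properties": key_props,           # identifies the data structure
        "storage_location": storage_location,  # where it lives in persistent storage
    }

def recover():
    # After a node failure, return the (column, entry) pairs to restore.
    return [(i, e) for i, e in enumerate(recovery_file) if e["assigned"]]

assign_column(2, key_props={"name": "inode-map", "version": 7},
              storage_location="/seg/12")
print(json.dumps(recover(), indent=1))
```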

More
28-03-2019 publication date

PROCESSORS, METHODS, AND SYSTEMS FOR A MEMORY FENCE IN A CONFIGURABLE SPATIAL ACCELERATOR

Number: US20190095369A1
Assignee:

Systems, methods, and apparatuses relating to a memory fence mechanism in a configurable spatial accelerator are described. In one embodiment, a processor includes a plurality of processing elements and an interconnect network between the plurality of processing elements to receive an input of a dataflow graph comprising a plurality of nodes, wherein the dataflow graph is to be overlaid into the interconnect network and the plurality of processing elements with each node represented as a dataflow operator in the plurality of processing elements, and the plurality of processing elements are to perform a plurality of operations, each by a respective, incoming operand set arriving at each of the dataflow operators of the plurality of processing elements. The processor also includes a fence manager to manage a memory fence between a first operation and a second operation of the plurality of operations.

1. A processor comprising:
a plurality of processing elements;
an interconnect network between the plurality of processing elements to receive an input of a dataflow graph comprising a plurality of nodes, wherein the dataflow graph is to be overlaid into the interconnect network and the plurality of processing elements with each node represented as one of a plurality of dataflow operators in the plurality of processing elements, and the plurality of processing elements are to perform a plurality of operations, each by a respective, incoming operand set arriving at each of the dataflow operators of the plurality of processing elements; and
a fence manager to manage a memory fence between a first operation and a second operation of the plurality of operations.
2. The processor of claim 1, further comprising a plurality of request address file (RAF) circuits including a first RAF circuit to request the memory fence by sending a fence-request message to the fence manager.
3. The processor of claim 2, wherein, in response to the fence-request message, the fence ...
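The ordering rule a memory fence enforces can be shown with a very small model: operations issued after the fence must wait until every operation issued before it has completed. The counter-based bookkeeping below is a single-threaded assumption for illustration; it models the ordering semantics only, not the RAF circuits or any hardware protocol from the patent.

```python
# A fence manager that blocks new operations while a fence is pending and
# completes the fence once all earlier operations have retired.

class FenceManager:
    def __init__(self):
        self.outstanding = 0
        self.fence_pending = False

    def issue(self):
        if self.fence_pending:
            raise RuntimeError("blocked: fence not yet complete")
        self.outstanding += 1

    def complete(self):
        self.outstanding -= 1
        if self.outstanding == 0:
            self.fence_pending = False  # fence drains once all ops retire

    def fence_request(self):
        # Fence completes immediately if nothing is in flight.
        self.fence_pending = self.outstanding > 0

fm = FenceManager()
fm.issue()           # first operation in flight
fm.fence_request()   # fence between first and second operation
fm.complete()        # first operation retires; fence completes
fm.issue()           # second operation may now proceed
print("ordering enforced")
```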

More
16-04-2015 publication date

STORAGE SYSTEM AND DATA ACCESS METHOD

Number: US20150106468A1
Author: KAN MASAKI, Kobayashi Dai
Assignee: NEC Corporation

A distributed storage system which achieves high access performance while maintaining flexibility in the allocation of data objects is disclosed. A client terminal includes an asynchronous cache that retains a correspondence relationship between an identifier of object data and an identifier of the storage node that is to handle an access request for the object data, and an access unit that determines the storage node that is to handle the access request on the basis of the correspondence relationship stored in the asynchronous cache and transmits the access request to the determined storage node. The storage node includes a determination unit that determines, upon receiving the access request from the client terminal, whether the access request is to be handled by itself, and notifies the client terminal of the determined result, and an update unit that updates the storage node that is to handle the access request. The asynchronous cache changes the correspondence relationship in accordance with the update, the change being made asynchronously with the update by the storage nodes.

1. A storage system comprising:
a client terminal; and a storage node,
wherein the client terminal includes an asynchronous cache that retains a correspondence relationship between an identifier of object data and an identifier of the storage node that is to handle an access request for the object data, and an access unit that determines the storage node that is to handle the access request on the basis of the correspondence relationship stored in the asynchronous cache, and that transmits the access request to the determined storage node,
wherein the storage node includes a determination unit that determines, upon receiving the access request from the client terminal, whether the access request is to be handled by itself, and notifies the client terminal of the determined result, and an update unit that updates the storage node that is to handle the access ...
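The request flow is easy to trace in miniature: the client picks a node from its possibly stale asynchronous cache, the node decides for itself whether it still owns the object and replies with a redirect if not, and the client updates its cache from the reply. Node names, the ownership tables, and the redirect protocol below are illustrative assumptions, not the patent's interfaces.

```python
# Client-side asynchronous cache with node-driven redirects after ownership
# of an object moves between storage nodes.

nodes = {
    "node-a": {"obj1"},          # node-a no longer handles obj2
    "node-b": {"obj2"},          # ownership of obj2 moved here
}
client_cache = {"obj1": "node-a", "obj2": "node-a"}  # stale entry for obj2

def node_handle(node_id, obj_id):
    """The node determines itself whether it should handle the request."""
    if obj_id in nodes[node_id]:
        return ("ok", node_id)
    owner = next(n for n, objs in nodes.items() if obj_id in objs)
    return ("redirect", owner)

def client_access(obj_id):
    status, node_id = node_handle(client_cache[obj_id], obj_id)
    if status == "redirect":
        client_cache[obj_id] = node_id   # cache updated from the reply
        status, node_id = node_handle(node_id, obj_id)
    return node_id

print(client_access("obj2"))  # node-b, reached after one redirect
print(client_cache["obj2"])   # cache now points at node-b
```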

More
16-04-2015 publication date

SYSTEM AND METHOD FOR MANAGING CACHE COHERENCE IN A NETWORK OF PROCESSORS PROVIDED WITH CACHE MEMORIES

Number: US20150106571A1
Assignee:

A cache coherence management system includes: a set of directories distributed between nodes of a network for interconnecting processors including cache memories, each directory including a correspondence table between cache lines and information fields on the cache lines; and a mechanism updating the directories by adding, modifying, or deleting cache lines in the correspondence tables. In each correspondence table and for each cache line identified, at least one field is provided for indicating a possible blocking of a transaction relative to the cache line considered, when the blocking occurs in the node associated with the correspondence table considered. The system further includes a mechanism detecting fields indicating a transaction blocking and restarting each transaction detected as blocked from the node in which it is indicated as blocked.

1.-10. (canceled)
11. A system for managing cache coherence in a network of processors including cache memories, the network including a main memory shared between the processors and a plurality of nodes for access to the main memory interconnected with one another and the processors, the system comprising:
a set of directories distributed between the nodes of the network, each directory comprising a correspondence table between cache lines and information fields on the cache lines;
means for updating the directories by adding cache lines, modifying information fields of cache lines, or deleting cache lines in the correspondence tables;
wherein in each correspondence table and for each cache line identified, at least one field is provided for indicating whether a transaction relative to the cache line considered is blocked in the node associated with the correspondence table considered; and
the system further comprising means for detecting fields indicating a transaction blocking and for restarting each transaction detected as being blocked from the node in which it is indicated as blocked.
12. The system for managing cache ...
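A compact way to picture the blocking indicator is a per-line field in the directory's correspondence table plus a sweep that detects set fields and restarts the marked transactions. The dictionary layout, field names, and restart stub below are assumptions for illustration only.

```python
# Directory entries carry a per-cache-line field marking a blocked
# transaction; a sweep detects the marks and restarts those transactions
# from the node where they blocked.

directory = {
    # cache line address -> information fields, including the blocking field
    0x40: {"owner": "P1", "blocked_txn": None},
    0x80: {"owner": "P2", "blocked_txn": "read-from-P3"},
}

def restart(line, txn):
    print(f"restarting {txn} for line {hex(line)} from this node")

def sweep_and_restart(node_directory):
    for line, fields in node_directory.items():
        if fields["blocked_txn"] is not None:
            restart(line, fields["blocked_txn"])
            fields["blocked_txn"] = None   # clear the indicator after restart

sweep_and_restart(directory)
```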

More
28-03-2019 publication date

DISTRIBUTED KEY CACHING FOR ENCRYPTED KEYS

Number: US20190097791A1
Assignee:

Methods, systems, and devices for distributed caching of encrypted encryption keys are described. Some multi-tenant database systems may support encryption of data records. To efficiently handle multiple encryption keys across multiple application servers, the database system may store the encryption keys in a distributed cache accessible by each of the application servers. To securely cache the encryption keys, the database system may encrypt (e.g., wrap) each data encryption key (DEK) using a second encryption key (e.g., a key encryption key (KEK)). The database system may store the DEKs and KEKs in separate caches to further protect the encryption keys. For example, while the encrypted DEKs may be stored in the distributed cache, the KEKs may be stored locally on application servers. The database system may further support "bring your own key" (BYOK) functionality, where a user may upload a tenant secret or tenant-specific encryption key to the database.

1. A method for data encryption, comprising:
receiving, at a distributed cache, a first encryption key parameter associated with a tenant;
receiving a second encryption key parameter associated with the first encryption key parameter;
transmitting, to a key derivation server, the first encryption key parameter and the second encryption key parameter;
receiving, from the key derivation server, an encrypted encryption key associated with the first encryption key parameter and encrypted using an encryption key associated with the second encryption key parameter; and
transmitting, to an application server, the encrypted encryption key.
2. The method of claim 1, further comprising:
storing, at the distributed cache, the encrypted encryption key;
receiving, from the application server, a request message for the encrypted encryption key; and
transmitting, to the application server, the encrypted encryption key based at least in part on storing the encrypted encryption key at the distributed cache.
3. The method of claim 1, ...
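The wrap-and-cache pattern behind this design (envelope encryption) can be sketched in a few lines: only the DEK wrapped by the KEK goes into the shared cache, while the KEK stays local to the application server. The sketch below uses the third-party `cryptography` package's Fernet API as a stand-in cipher; the cache key name and the overall flow are assumptions, not the patent's key-derivation protocol.

```python
# Envelope encryption: wrap a data encryption key (DEK) with a key encryption
# key (KEK), cache only the wrapped DEK, and unwrap it where the KEK lives.
# Requires: pip install cryptography

from cryptography.fernet import Fernet

kek_local = Fernet.generate_key()   # key encryption key, kept on the app server
dek_plain = Fernet.generate_key()   # data encryption key

# Only the wrapped form of the DEK goes into the shared distributed cache.
distributed_cache = {"tenant-1/dek": Fernet(kek_local).encrypt(dek_plain)}

# An application server fetches the wrapped DEK and unwraps it with its KEK.
dek_unwrapped = Fernet(kek_local).decrypt(distributed_cache["tenant-1/dek"])
ciphertext = Fernet(dek_unwrapped).encrypt(b"tenant record")
print(Fernet(dek_unwrapped).decrypt(ciphertext))  # b'tenant record'
```

Keeping the two key types in separate caches means a compromise of the distributed cache alone yields only wrapped keys, which is the point of the split described above.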

More
12-04-2018 publication date

GRANTING EXCLUSIVE CACHE ACCESS USING LOCALITY CACHE COHERENCY STATE

Number: US20180101474A1
Assignee:

A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the other node are local to one another as defined by a predetermined criterion. The node requesting exclusive access checks a locality cache coherency state of the other node, the locality cache coherency state being specific to that node and indicating whether it has access to the cache line. Based on the checking indicating that the other node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line, the determination being independent of transmission of information relating to the cache line from nodes of the other regions of nodes.

1. A computer system for managing exclusive access to cache lines of a cache of a computing environment, said computer system comprising:
a memory; and
requesting, by a node of the computing environment, exclusive access to a selected cache line of the cache, the computing environment including a plurality of regions of nodes and the requesting comprising sending a request for exclusive access to at least multiple regions of nodes of the plurality of regions of nodes, wherein one region of nodes of the plurality of regions of nodes includes a plurality of nodes, the plurality of nodes comprising the node requesting exclusive access and another node of the computing environment, wherein the node requesting exclusive access and the another node are local to one another as defined by a predetermined criteria, and wherein at least one node of the node requesting exclusive access and the another node includes a ...
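A toy rendering of the latency win: before waiting on remote regions, the requester consults a locality state kept for a nearby node in its own region, and if that state already shows the line is held locally, exclusivity can be granted without remote-region traffic. The state encoding as a set of line addresses, and all the names below, are assumptions for illustration.

```python
# Check a per-node locality cache coherency state before falling back to
# remote-region responses when granting exclusive access.

locality_state = {
    # (region, node) -> cache lines the node is known to have access to
    ("region-0", "node-1"): {0x100},
}

def request_exclusive(region, requester, neighbor, line):
    if line in locality_state.get((region, neighbor), set()):
        # Local information suffices; no wait on other regions' responses.
        return f"{requester}: exclusive access to {hex(line)} granted locally"
    return f"{requester}: must wait for remote-region responses"

print(request_exclusive("region-0", "node-0", "node-1", 0x100))
print(request_exclusive("region-0", "node-0", "node-1", 0x200))
```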

More
08-04-2021 publication date

REDUNDANT, FAULT-TOLERANT, DISTRIBUTED REMOTE PROCEDURE CALL CACHE IN A STORAGE SYSTEM

Number: US20210103509A1
Assignee:

A method of operating a remote procedure call cache in a storage cluster is provided. The method includes receiving a remote procedure call at a first storage node having solid-state memory and writing information relating to the remote procedure call to a remote procedure call cache of the first storage node. The method includes mirroring the remote procedure call cache of the first storage node in a mirrored remote procedure call cache of a second storage node. A plurality of storage nodes and a storage cluster are also provided.

1. A storage system, comprising:
a plurality of storage nodes configurable to cooperate as a storage cluster;
a remote procedure call cache in a storage node of the plurality of storage nodes; and
a mirrored remote procedure call cache in the storage node, the mirrored remote procedure call cache configurable to mirror the first remote procedure call cache.
2. The storage system of claim 1, wherein each of the plurality of storage nodes is configured to check the remote procedure call cache and to determine whether a result of a remote procedure call is posted.
3. The storage system of claim 1, wherein the plurality of storage nodes support a plurality of filesystems.
4. The storage system of claim 1, wherein each of the plurality of storage nodes includes a non-volatile random access memory (NVRAM) containing the remote procedure call cache and the mirrored remote procedure call cache.
5. The storage system of claim 1, each of the plurality of storage nodes having a table configured to indicate a primary authority, a first backup authority, and a second backup authority, wherein the remote procedure call cache corresponds to the primary authority.
6. The storage system of claim 1, wherein each of the plurality of storage nodes is configured to send a copy of contents of the remote procedure call cache to a further storage node for storage in the mirrored remote procedure call cache of the further storage ...
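The duplicate-suppression behavior such a cache enables (per claim 2, checking whether a result is already posted) can be shown in miniature: a result is recorded under the request id and mirrored, so a retried call returns the cached result instead of re-executing. NVRAM, authorities, and the real replication protocol are out of scope; the two dictionaries and names below are illustrative assumptions.

```python
# Mirrored RPC cache: execute each request id exactly once, mirror the
# result to a second node, and serve duplicates from the cache.

rpc_cache_primary = {}
rpc_cache_mirror = {}   # lives on a second storage node in the real system

def handle_rpc(request_id, operation):
    if request_id in rpc_cache_primary:         # duplicate (e.g., client retry)
        return rpc_cache_primary[request_id]
    result = operation()                        # execute exactly once
    rpc_cache_primary[request_id] = result
    rpc_cache_mirror[request_id] = result       # mirror for fault tolerance
    return result

print(handle_rpc("req-7", lambda: "wrote 4KB"))
print(handle_rpc("req-7", lambda: "wrote 4KB AGAIN"))  # served from cache
```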

More
04-04-2019 publication date

TECHNIQUES TO DIRECT ACCESS REQUESTS TO STORAGE DEVICES

Number: US20190101880A1
Author: Guim Bernat Francesc
Assignee:

Examples include techniques to direct access requests to storage or memory devices. Examples include receiving an access request to remotely access storage devices, the access request included in a fabric packet routed to a target host computing node coupled with the storage devices through a networking fabric. The access request is directed to shared or dedicated storage devices based on whether the access request is characterized or defined as a sequential stream or a random stream.

1. A controller comprising:
circuitry; and
logic for execution by the circuitry to:
receive an access request to read data to or write data from at least one storage device of a plurality of storage devices coupled with a target host computing node that hosts the controller, the access request included in a fabric packet routed from a requesting computing node through a networking fabric coupled with the target host computing node;
detect whether the access request has a sequential access pattern to storage device memory addresses at the at least one storage device or a random access pattern to storage device memory addresses at the at least one storage device;
characterize the access request as a sequential stream or a random stream based, at least in part, on the detected access pattern; and
direct the access request to a dedicated storage device from among the plurality of storage devices if the access request is characterized as a sequential stream or to one or more shared storage devices from among the plurality of storage devices if the access request is characterized as a random stream.
2. The controller of claim 1, further comprising the logic to:
obtain configuration information that includes a time threshold for a period of time between when a previous access request was received from the requesting computing node and when the access request was received, the previous access request previously characterized as a random stream; and
characterize the access request as a ...
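A small sketch of the classify-then-route step: a request that starts where the requester's previous one ended is treated as part of a sequential stream and sent to a dedicated device; anything else is a random stream and goes to shared devices. The contiguity test, device names, and per-requester state are assumptions for illustration; the claims also describe a time threshold that this sketch omits.

```python
# Classify each access request as sequential or random from its offset
# relative to the requester's previous request, then pick a device.

last_end = {}   # requester -> end address of its previous request

def direct_request(requester, offset, length):
    sequential = last_end.get(requester) == offset
    last_end[requester] = offset + length
    # Sequential streams get a dedicated device; random streams share devices.
    return "dedicated-ssd-0" if sequential else "shared-ssd-pool"

print(direct_request("nodeA", 0, 4096))       # first touch -> random/shared
print(direct_request("nodeA", 4096, 4096))    # contiguous -> sequential/dedicated
print(direct_request("nodeA", 999999, 4096))  # jump -> random/shared
```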

More
13-04-2017 publication date

GLOBAL OBJECT DEFINITION AND MANAGEMENT FOR DISTRIBUTED FIREWALLS

Number: US20170104720A1
Assignee:

A method of defining distributed firewall rules in a group of datacenters is provided. Each datacenter includes a group of data compute nodes (DCNs). The method sends a set of security tags from a particular datacenter to other datacenters. The method, at each datacenter, associates a unique identifier of one or more DCNs of the datacenter with each security tag. The method associates one or more security tags with each of a set of security groups at the particular datacenter and defines a set of distributed firewall rules at the particular datacenter based on the security tags. The method sends the set of distributed firewall rules from the particular datacenter to the other datacenters. The method, at each datacenter, translates the firewall rules by mapping the unique identifier of each DCN in a distributed firewall rule to a corresponding static address associated with the DCN.

1.-20. (canceled)
21. A method of defining global objects for use in distributed firewall rules in a plurality of data centers, each data center comprising a network manager server and a plurality of data compute nodes (DCNs), the method comprising:
for each DCN, identifying a set of dynamically defined identifiers of the DCN for use in the distributed firewall rules;
storing an object corresponding to each DCN of each datacenter in a distributed cache accessible to the network manager of each datacenter, each object comprising a mapping of the set of dynamically defined identifiers of the corresponding DCN to a static identifier of the DCN;
receiving a distributed firewall rule comprising a dynamically defined identifier of a DCN of a first datacenter at a network manager of a second datacenter; and
by the network manager of the second datacenter, translating the dynamically defined identifier of the DCN of the first datacenter into the static identifier of the DCN using the object corresponding to the DCN of the first datacenter stored in the distributed cache.
22. The method of claim 21, wherein each ...
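The translation step in claim 21 reduces to a lookup: a distributed cache holds one object per DCN mapping its dynamically defined identifiers (here, security tags) to its static address, and the receiving datacenter rewrites each rule against that mapping. The object shape, tag names, and addresses below are hypothetical; this is a sketch of the mapping idea, not the product's API.

```python
# Translate a tag-based firewall rule into static addresses using a
# distributed cache of per-DCN objects.

distributed_cache = {
    # DCN identifier -> object mapping dynamic identifiers to a static address
    "vm-web-01@dc1": {"tags": {"web-tier"}, "static_ip": "10.1.0.5"},
    "vm-db-01@dc1":  {"tags": {"db-tier"},  "static_ip": "10.1.0.9"},
}

def translate_rule(rule):
    """Replace security-tag sources/destinations with matching static IPs."""
    def resolve(ref):
        return sorted(o["static_ip"] for o in distributed_cache.values()
                      if ref in o["tags"]) or [ref]   # pass through literals
    return {"src": resolve(rule["src"]), "dst": resolve(rule["dst"]),
            "action": rule["action"]}

rule = {"src": "web-tier", "dst": "db-tier", "action": "allow-3306"}
print(translate_rule(rule))
# {'src': ['10.1.0.5'], 'dst': ['10.1.0.9'], 'action': 'allow-3306'}
```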

More