Total found: 6357. Displayed: 199.
04-03-2004 publication date

Semiconductor memory with a first table memory and a buffer memory

Number: DE0010238914A1

The invention relates to a semiconductor memory (2) with a first random-access table memory (4) in which data for a computing device (3), in particular a microprocessor, are stored. The semiconductor memory (2) further comprises an additional table memory (5), which has a lower maximum access time than the first table memory (4) and which is arranged, as an integral part of the semiconductor memory (2), between the first table memory (4) and the computing device (3) as a buffer for the data to be transferred between the first table memory (4) and the computing device (3). In order to shorten the access times of the computing device (3) to the semiconductor memory (2), in particular when accessing new address ranges, it is proposed that the semiconductor memory (2) comprise configurable logic (14) for evaluating the data requested by the computing device (3) and for determining the memory addresses of data to be requested by the computing device (3) in the future ...

09-06-2010 publication date

Preload instruction control

Number: GB0201006758D0

03-04-1996 publication date

Accessing data memories.

Number: GB0002293668A

A data memory (e.g. a cache memory) having an addressable array of memory cells which can be accessed as predetermined groups of memory cells, comprises output buffer means for storing the contents of at least the most recently read group of memory cells and another previously read group of memory cells; and reading means, responsive to an indication that the contents of the group of memory cells containing the required memory cell are not stored in the output buffer means, for reading the contents of the group of memory cells containing the required memory cell into the output buffer means; the contents of at least the required memory cell being supplied as an output from the output buffer means. ...
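
The mechanism described above — an output buffer that retains the most recently read group of cells plus one earlier group, serving reads from the buffer and refilling it on a miss — can be sketched as follows. The class name, group size, and two-entry capacity are illustrative assumptions, not values from the patent:

```python
class GroupBuffer:
    """Toy model of an output buffer holding the contents of the two
    most recently read groups of memory cells."""

    def __init__(self, memory, group_size=4, capacity=2):
        self.memory = memory          # backing array of cells
        self.group_size = group_size  # cells per addressable group
        self.capacity = capacity      # groups retained in the buffer
        self.groups = {}              # group index -> list of cell values
        self.order = []               # LRU order of buffered group indices

    def read(self, addr):
        g = addr // self.group_size
        if g not in self.groups:      # miss: read the whole group in
            base = g * self.group_size
            self.groups[g] = self.memory[base:base + self.group_size]
            self.order.append(g)
            if len(self.order) > self.capacity:
                evicted = self.order.pop(0)
                del self.groups[evicted]
        else:                         # hit: serve straight from the buffer
            self.order.remove(g)
            self.order.append(g)
        return self.groups[g][addr % self.group_size]
```

Consecutive reads that fall within the two buffered groups are then served without touching the memory array again.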

31-10-2012 publication date

Cache memory system

Number: GB0002454808A8

10-02-1982 publication date

Improvements in or relating to data processing systems including cache stores

Number: GB0002080989A

A data processing system includes a cache store to provide an interface with a main storage unit for a central processing unit. The central processing unit includes a microprogram control unit in addition to control circuits for establishing the sequencing of the processing unit during the execution of program instructions. Both the microprogram control unit and control circuits include means for generating pre-read commands to the cache store in conjunction with normal processing operations during the processing of certain types of instructions. In response to pre-read commands, the cache store, during predetermined points of the processing of each such instruction, fetches information which is required by such instruction at a later point in the processing thereof.

25-02-1981 publication date

Data processing system including a cache store

Number: GB0002055233A

A data processing system includes a main memory system 3, a high speed buffer cache store, a central processor unit (CPU) and an Input/Output processor (IOP) all connected to a system bus. Apparatus in the cache store reads all information on the system bus into a first in, first out buffer comprising a plurality of registers, a write address counter, a read address counter and a means for selectively processing the information. The cache store is word oriented and further comprises a directory 202, a data buffer 201 and associated control logic. The CPU requests data words by sending a main memory address of the requested data word to the cache store. If the cache store does not have the requested data word, apparatus in the cache store requests the data word from the main memory system, and in addition, the apparatus requests additional data words from consecutively higher addresses. If the main memory system is busy, the cache store has apparatus to request fewer words. The cache store ...

27-12-2007 publication date

Cache memory

Number: GB0000722707D0

24-06-2020 publication date

Gateway pull model

Number: GB0002579412A
Author: Brian Manula

A gateway for interfacing a host with a subsystem acting as a work accelerator to the host. A computer system comprising: (i) a computer subsystem configured to act as a work accelerator, and (ii) a gateway connected to the computer subsystem, the gateway enabling the transfer of data to the computer subsystem from external storage at pre-compiled data exchange synchronisation points attained by the computer subsystem, which act as a barrier between a compute phase and an exchange phase of the computer subsystem, wherein the computer subsystem is configured to pull data from a gateway transfer memory of the gateway in response to the pre-compiled data exchange synchronisation point attained by the subsystem, wherein the gateway comprises at least one processor configured to perform at least one operation to pre-load at least some of the data from a first memory of the gateway to the gateway transfer memory in advance of the pre-compiled data exchange synchronisation point attained by ...

30-03-1992 publication date

COMPUTER MEMORY ARRAY CONTROL

Number: AU0008508191A

27-10-1990 publication date

METHOD AND APPARATUS FOR RELATING DIAGNOSTIC INFORMATION TO SPECIFIC COMPUTER INSTRUCTIONS

Number: CA0002013735A1

A computer system 10 employs an apparatus for controllably generating interrupts to the computer system processor 12 in order to correlate a significant event occurring within the computer system to the value stored within the program counter at the time of the significant event. In this manner, the apparatus is useful for diagnosing the performance of the computer system. The apparatus includes at least one pair of 16-bit binary counters 42, 44, each having inputs connected through respective multiplexers 50, 52 to a variety of significant event signals. For example, the computer system 10 typically employs a cache where significant time can be wasted by repeated "misses" within the cache. By selecting the cache "miss" signal to be counted by the 16-bit binary counters 42, 44, a signal is generated approximately every 64,000 "misses." The outputs of the counters 42, 44 are connected to high priority ...

23-09-2003 publication date

SYSTEM FOR ACCESSING DISTRIBUTED DATA CACHE CHANNEL AT EACH NETWORK NODE TO PASS REQUESTS AND DATA

Number: CA0002136727C
Assignee: PITTS, WILLIAM M.

Network Distributed Caches ("NDCs") (50) permit accessing a named dataset stored at an NDC server terminator site (22) in response to a request submitted to an NDC client terminator site (24) by a client workstation (42). In accessing the dataset, the NDCs (50) form an NDC data conduit (62) that provides an active virtual circuit ("AVC") from the NDC client site (24) through intermediate NDC sites (26B, 26A) to the NDC server site (22). Through the AVC provided by the conduit (62), the NDC sites (22, 26A and 26B) project an image of the requested portion of the named dataset into the NDC client site (24). The NDCs (50) maintain absolute consistency between the source dataset and its projections at all NDC client terminator sites (24, 204B and 206) at which client workstations access the dataset. Channels (116) in each NDC (50) accumulate profiling data from the requests to access the dataset for which they have been claimed. The NDCs (50) use the profile data stored in channels (116 ...

23-06-2020 publication date

Cache prefetching

Number: CN0111324556A

04-05-2011 publication date

Cache system of storage system

Number: CN0102043731A

The invention provides a cache system of a storage system, comprising an external interface layer, a cache management module and a block device hardware layer, wherein the external interface layer comprises a user interface and a standard block device interface; the cache management module comprises a virtual device mapping layer and a core management layer; and the block device hardware layer comprises a high-speed block device and a conventional standard block device based on RAM (Random Access Memory). The invention has the advantages of low system price, high performance and large capacity.

20-05-2015 publication date

Data prefetch method and microprocessor

Number: CN104636274A

19-01-2018 publication date

Memory physical-address query method and memory physical-address query devices

Number: CN0107608912A

21-06-2019 publication date

Number: KR1020190070981A

21-10-2020 publication date

MEMORY SYSTEM AND OPERATING METHOD THEREOF

Number: KR1020200120113A

18-11-2022 publication date

Modification of machine learning models to improve locality

Number: KR20220153689A

... Methods, systems, and apparatus for updating machine learning models to improve locality are described. In one aspect, a method includes receiving data of a machine learning model. The data represents the operations of the machine learning model and the data dependencies between the operations. Data specifying characteristics of a memory hierarchy for a machine learning processor on which the machine learning model will be deployed is received. The memory hierarchy includes multiple memories at multiple memory levels for storing machine learning data used by the machine learning processor when performing machine learning computations using the machine learning model. An updated machine learning model is generated by modifying the operations and control dependencies of the machine learning model to account for the characteristics of the memory hierarchy. The machine learning computations are performed using the updated machine learning model.

17-01-2019 publication date

MEMORY SYSTEM FOR A DATA PROCESSING NETWORK

Number: WO2019012290A1

A data processing network includes a network of devices addressable via a system address space, the network including a computing device configured to execute an application in a virtual address space. A virtual-to-system address translation circuit is configured to translate a virtual address to a system address. A memory node controller has a first interface to a data resource addressable via a physical address space, a second interface to the computing device, and a system-to-physical address translation circuit, configured to translate a system address in the system address space to a corresponding physical address in the physical address space of the data resource. The virtual-to-system mapping may be a range table buffer configured to retrieve a range table entry comprising an offset address of a range together with a virtual address base and an indicator of the extent of the range.

25-01-2018 publication date

SELECTING CACHE TRANSFER POLICY FOR PREFETCHED DATA BASED ON CACHE TEST REGIONS

Number: US20180024931A1
Author: Paul James Moyer

A processor applies a transfer policy to a portion of a cache based on access metrics for different test regions of the cache, wherein each test region applies a different transfer policy for data in cache entries that were stored in response to prefetch requests but were not the subject of demand requests. One test region applies a transfer policy under which unused prefetches are transferred to a higher level cache in a cache hierarchy upon eviction from the test region of the cache. The other test region applies a transfer policy under which unused prefetches are replaced without being transferred to a higher level cache (or are transferred to the higher level cache but stored as invalid data) upon eviction from the test region of the cache.
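
The test-region idea follows the familiar set-dueling pattern: two small regions each run a fixed policy, and the rest of the cache follows whichever region performs better. A minimal sketch, assuming set-index-based region assignment and simple hit counters (neither is specified by the abstract):

```python
class DuelingCache:
    """Pick a transfer policy for unused prefetches by comparing hit
    counts in two dedicated test regions of the cache."""

    def __init__(self, num_sets=64):
        self.num_sets = num_sets
        self.hits = {"transfer": 0, "drop": 0}

    def region(self, set_idx):
        # First few sets test "transfer to higher-level cache on eviction",
        # the next few test "drop on eviction"; the rest are followers.
        if set_idx < 4:
            return "transfer"
        if set_idx < 8:
            return "drop"
        return "follower"

    def record_hit(self, set_idx):
        r = self.region(set_idx)
        if r in self.hits:
            self.hits[r] += 1

    def eviction_policy(self, set_idx):
        r = self.region(set_idx)
        if r != "follower":
            return r
        # Followers adopt whichever test region has the better hit count.
        return "transfer" if self.hits["transfer"] >= self.hits["drop"] else "drop"
```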

03-04-2018 publication date

Memory module with embedded access metadata

Number: US0009934148B2

A memory module stores memory access metadata reflecting information about memory accesses to the memory module. The memory access metadata can indicate the number of times a particular unit of data (e.g., a row of data, a unit of data corresponding to a cache line, and the like) has been read, written, had one or more of its bits flipped, and the like. Modifications to the embedded access metadata can be made by a control module at the memory module itself, thereby reducing overhead at a processor core. In addition, the control module can be configured to record different access metadata for different memory locations of the memory module.

25-08-2020 publication date

Software translation prefetch instructions

Number: US0010754791B2

Examples of techniques for software translation prefetch instructions are described herein. An aspect includes, based on encountering a translation prefetch instruction in software that is being executed by a processor, determining whether an address translation corresponding to the translation prefetch instruction is located in a translation lookaside buffer (TLB) of the processor. Another aspect includes, based on determining that the address translation is not located in the TLB, issuing an address translation request corresponding to the translation prefetch instruction. Another aspect includes storing an address translation corresponding to the address translation request in the TLB.

16-01-2020 publication date

INTELLIGENT THREAD DISPATCH AND VECTORIZATION OF ATOMIC OPERATIONS

Number: US20200019401A1
Assignee: Intel Corporation

A mechanism is described for facilitating intelligent dispatching and vectorizing at autonomous machines. A method of embodiments, as described herein, includes detecting a plurality of threads corresponding to a plurality of workloads associated with tasks relating to a graphics processor. The method may further include determining a first set of threads of the plurality of threads that are similar to each other or have adjacent surfaces, and physically clustering the first set of threads close together using a first set of adjacent compute blocks.

04-09-2018 publication date

Cache aware searching of buckets in remote storage

Number: US0010067944B2
Assignee: Splunk, Inc.

Embodiments are disclosed for performing cache aware searching. In response to a search query, a first bucket and a second bucket in remote storage are identified for processing the search query. A determination is made that a first file in the first bucket is present in a cache when the search query is received. In response to the search query, a search is performed using the first file based on the determination that the first file is present in the cache when the search query is received, and the search is performed using a second file from the second bucket once the second file is stored in the cache.

07-03-2019 publication date

DEFERRED RESPONSE TO A PREFETCH REQUEST

Number: US2019073309A1

Modifying prefetch request processing. A prefetch request is received by a local computer from a remote computer. The local computer responds to a determination that execution of the prefetch request is predicted to cause an address conflict during an execution of a transaction of the local processor by comparing a priority of the prefetch request with a priority of the transaction. Based on a result of the comparison, the local computer modifies program instructions that govern execution of the program instructions included in the prefetch request to include program instructions to perform one or both of: (i) a quiesce of the prefetch request prior to execution of the prefetch request, and (ii) a delay in execution of the prefetch request for a predetermined delay period.

20-06-2019 publication date

TWO ADDRESS TRANSLATIONS FROM A SINGLE TABLE LOOK-ASIDE BUFFER READ

Number: US20190188151A1

A streaming engine employed in a digital data processor specifies a fixed read only data stream. An address generator produces virtual addresses of data elements. An address translation unit converts these virtual addresses to physical addresses by comparing the most significant bits of a next address N with the virtual address bits of each entry in an address translation table. Upon a match, the translated address is the physical address bits of the matching entry and the least significant bits of address N. The address translation unit can generate two translated addresses. If the most significant bits of address N+1 match those of address N, the same physical address bits are used for translation of address N+1. The sequential nature of the data stream increases the probability that consecutive addresses match the same address translation entry and can use this technique.
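
The reuse trick above — comparing the upper bits of address N+1 against those of address N and recycling the same physical page bits when they match — can be sketched like this. The 4 KiB page size and dictionary-as-TLB are assumptions for illustration:

```python
PAGE_BITS = 12  # assumed 4 KiB pages

def translate_pair(tlb, vaddr_n, vaddr_n1):
    """Translate two consecutive stream addresses. When both fall in the
    same page, the second translation reuses the entry found for the
    first, so only one table lookup is performed.
    tlb: dict mapping virtual page number -> physical page number."""
    offset_mask = (1 << PAGE_BITS) - 1
    vpn_n = vaddr_n >> PAGE_BITS
    ppn = tlb[vpn_n]                            # single TLB read
    phys_n = (ppn << PAGE_BITS) | (vaddr_n & offset_mask)
    if (vaddr_n1 >> PAGE_BITS) == vpn_n:        # upper bits match: reuse
        phys_n1 = (ppn << PAGE_BITS) | (vaddr_n1 & offset_mask)
        lookups = 1
    else:                                       # crossed a page boundary
        phys_n1 = (tlb[vaddr_n1 >> PAGE_BITS] << PAGE_BITS) | (vaddr_n1 & offset_mask)
        lookups = 2
    return phys_n, phys_n1, lookups
```

For a sequential stream the same-page case dominates, which is exactly why the technique pays off.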

06-07-2017 publication date

DATA CACHE VIRTUAL HINT WAY PREDICTION, AND APPLICATIONS THEREOF

Number: US20170192894A1

A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.

04-05-2021 publication date

Increasing the lookahead amount for prefetching

Number: US0010997077B2

A data structure (e.g., a table) stores a listing of prefetches. Each entry in the data structure includes a respective virtual address and a respective prefetch stride for a corresponding prefetch. If the virtual address of a memory request (e.g., a request to load or fetch data) matches an entry in the data structure, then the value of a counter associated with that entry is incremented. If the value of the counter satisfies a threshold, then the lookahead amount associated with the memory request is increased.
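
The table-plus-counter scheme described above can be sketched as follows; the threshold value, the growth step, and the entry layout are illustrative assumptions, not values from the patent:

```python
class PrefetchTable:
    """Toy prefetch table: one entry per tracked address, holding a
    stride, a match counter, and a lookahead amount. When the counter
    reaches the threshold, the lookahead distance is increased."""

    THRESHOLD = 3  # assumed value

    def __init__(self):
        self.entries = {}  # vaddr -> {"stride", "count", "lookahead"}

    def access(self, vaddr, stride):
        e = self.entries.get(vaddr)
        if e is None:
            e = {"stride": stride, "count": 0, "lookahead": 1}
            self.entries[vaddr] = e
        else:
            e["count"] += 1                  # address matched an entry
            if e["count"] >= self.THRESHOLD:
                e["lookahead"] += 1          # fetch further ahead of demand
                e["count"] = 0
        # prefetch addresses issued for this access
        return [vaddr + e["stride"] * (i + 1) for i in range(e["lookahead"])]
```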

22-11-2022 publication date

Warping data

Number: US0011508031B2
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method of warping data includes the steps of providing a set of target coordinates x ∈ ℝ^N, calculating, by a warping engine, source coordinates x′ ∈ ℝ^N for the target coordinates x ∈ ℝ^N, requesting, by the warping engine, data values for a plurality of source coordinates from a cache, and computing, by the warping engine, interpolated data values for each x in a neighborhood of x′ from the data values of the source coordinates returned from the cache. Requesting data values from the cache includes notifying the cache that data values for a particular group of source points will be needed for computing interpolated data values for a particular target point, and fetching the data values for the particular group of source points when they are needed for computing interpolated data values for the particular target point.

19-09-2023 publication date

Data management device for supporting high speed artificial neural network operation by using data caching based on data locality of artificial neural network

Number: US0011763147B2
Author: Lok Won Kim
Assignee: DEEPX CO., LTD.

Disclosed is a data cache or data management device for caching data between at least one processor and at least one memory, and supporting an artificial neural network (ANN) operation executed by the at least one processor. The data cache device or the data management device can comprise an internal controller for predicting the next data operation request on the basis of ANN data locality of the ANN operation. The internal controller monitors data operation requests associated with the ANN operation from among data operation requests actually made between the at least one processor and the at least one memory, thereby extracting the ANN data locality of the ANN operation.

22-06-2023 publication date

GRAPHICS PROCESSORS AND GRAPHICS PROCESSING UNITS HAVING DOT PRODUCT ACCUMULATE INSTRUCTION FOR HYBRID FLOATING POINT FORMAT

Number: US20230195685A1
Assignee: Intel Corporation

Described herein is a graphics processing unit (GPU) configured to receive an instruction having multiple operands, where the instruction is a single instruction multiple data (SIMD) instruction configured to use a bfloat16 (BF16) number format and the BF16 number format is a sixteen-bit floating point format having an eight-bit exponent. The GPU can process the instruction using the multiple operands, where to process the instruction includes to perform a multiply operation, perform an addition to a result of the multiply operation, and apply a rectified linear unit function to a result of the addition.

21-02-2023 publication date

Memory cache with partial cache line valid states

Number: US0011586552B2
Assignee: Apple Inc.

An apparatus includes a cache memory circuit configured to store cache lines, and a cache controller circuit. The cache controller circuit is configured to receive a read request to an address associated with a portion of a cache line. In response to an indication that the portion of the cache line currently has at least a first sub-portion that is invalid and at least a second sub-portion that is modified relative to a version in a memory, the cache controller circuit is further configured to fetch values corresponding to the address from the memory, to generate an updated version of the portion of the cache line by using the fetched values to update the first sub-portion, but not the second sub-portion, of the portion of the cache line, and to generate a response to the read request that includes the updated version of the portion of the cache line.
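
The merge step — fill invalid sub-portions from the memory fetch while keeping sub-portions the cache has modified — is the core of the abstract and is easy to show. A minimal sketch with a hypothetical per-sub-portion state label (the patent's actual state encoding is not given):

```python
def merge_partial_line(cached, state, fetched):
    """Build the response for a read of a partially valid cache line.
    cached/fetched: parallel lists of sub-portion values.
    state[i] is 'invalid', 'modified', or 'clean': invalid sub-portions
    take the fetched memory values; modified and clean sub-portions
    keep the cached values."""
    return [f if s == "invalid" else c
            for c, s, f in zip(cached, state, fetched)]
```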

21-02-2023 publication date

Data prefetching method and terminal device

Number: US0011586544B2
Assignee: HUAWEI TECHNOLOGIES CO., LTD.

A data prefetching method and a terminal device are provided. The CPU core cluster is configured to deliver a data access request to a first cache of the at least one level of cache, where the data access request carries a first address, and the first address is an address of data that the CPU core cluster currently needs to access in the memory. The prefetcher in the terminal device provided in embodiments of this application may generate a prefetch-from address, and load data corresponding to the generated prefetch-from address to the first cache. When needing to access the data, the CPU core cluster can read from the first cache, without a need to read from the memory. This helps increase an operating rate of the CPU core cluster.

05-04-2022 publication date

Method and apparatus for vector permutation

Number: US0011294826B2
Assignee: Texas Instruments Incorporated

A method is provided that includes performing, by a processor in response to a vector permutation instruction, permutation of values stored in lanes of a vector to generate a permuted vector, wherein the permutation is responsive to a control storage location storing permute control input for each lane of the permuted vector, wherein the permute control input corresponding to each lane of the permuted vector indicates a value to be stored in the lane of the permuted vector, wherein the permute control input for at least one lane of the permuted vector indicates a value of a selected lane of the vector is to be stored in the at least one lane, and storing the permuted vector in a storage location indicated by an operand of the vector permutation instruction.
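
Stripped of claim language, the operation is: for each lane of the result, the control input names a source lane of the input vector whose value is written there (the same source lane may be named more than once). A one-line functional sketch:

```python
def vpermute(vec, control):
    """Permute the lanes of `vec`: control[i] selects the source lane
    whose value is stored in lane i of the permuted vector. Repeating
    a source lane index broadcasts that lane's value."""
    return [vec[c] for c in control]
```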

12-04-2022 publication date

Memory control method, memory storage device, and memory control circuit unit

Number: US0011301311B2
Author: Chen Yap Tan
Assignee: PHISON ELECTRONICS CORP.

A memory control method for a rewritable non-volatile memory module is provided according to embodiments of the disclosure. The method includes: receiving at least one first read command from a host system; and determining, according to a total data amount of to-be-read data indicated by the at least one first read command, whether to start a pre-read operation. The pre-read operation is configured to pre-read data stored in at least one first logical unit, and the first logical unit is mapped to at least one physical unit.

28-11-2023 publication date

Managing write requests for drives in cloud storage systems

Number: US0011829642B2
Assignee: RED HAT, INC.

Systems and methods are provided for managing write requests for drives in a cloud storage system. For example, a system can receive a plurality of write requests for writing a first set of data to a first drive of a plurality of drives. The first drive may be powered off. The system can write the first set of data to a cache in response to receiving the plurality of write requests. The system can determine that a number of the plurality of write requests exceeds a predetermined write request threshold. The system can power on the first drive in response to determining that the number of the plurality of write requests exceeds the predetermined write request threshold. The system can write the first set of data stored in the cache to the first drive.

23-01-2024 publication date

Prediction confirmation for cache subsystem

Number: US0011880308B2
Assignee: Apple Inc.

A cache subsystem is disclosed. The cache subsystem includes a cache configured to store information in cache lines arranged in a plurality of ways. A requestor circuit generates a request to access a particular cache line in the cache. A prediction circuit is configured to generate a prediction of which of the ways includes the particular cache line. A comparison circuit verifies the prediction by comparing a particular address tag associated with the particular cache line to a cache tag corresponding to a predicted one of the ways. Responsive to determining that the prediction was correct, a confirmation indication is stored indicating the correct prediction. For a subsequent request for the particular cache line, the cache is configured to forego a verification of the prediction that the particular cache line is included in the one of the ways based on the confirmation indication.

21-05-2024 publication date

Device for packet processing acceleration

Number: US0011989130B2
Author: Kuo-Cheng Lu
Assignee: REALTEK SEMICONDUCTOR CORPORATION

A device for packet processing acceleration includes a CPU, a tightly coupled memory (TCM), a buffer descriptor (BD) prefetch circuit, and a BD write back circuit. The BD prefetch circuit reads reception-end (RX) BDs from an RX BD ring of a memory to write them into an RX ring of the TCM, and reads RX header data from a buffer of the memory to write them into the RX ring. The CPU accesses the RX ring to process the RX BDs and RX header data, and generates transmission-end (TX) BDs and TX header data; afterwards, the CPU writes the TX BDs and TX header data into a TX ring of the TCM. The BD write back circuit reads the TX BDs and TX header data from the TX ring, writes the TX BDs into a TX BD ring of the memory, and writes the TX header data into the buffer.

13-06-2024 publication date

Storage System and Method for Accessing Same

Number: US20240193084A1
Author: Sehat Sutardja

A data access system including a processor and a storage system including a main memory and a cache module. The cache module includes a FLC controller and a cache. The cache is configured as a FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. The processor generates, in response to data required by the processor not being in the levels of cache, a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address. The virtual address corresponds to a physical location within the FLC or the main memory. The cache module causes, in response to the virtual address not corresponding to the physical location within the FLC, the data required by the processor to be retrieved from the main memory.

10-06-2015 publication date

Data processing system and method for data processing in a multiple processor system

Number: GB0002520942A

Disclosed is a multi-processor system 1 with a multi-level cache L1, L2, L3, L4 structure between the processors 10, 20, 30 and the main memory 60. The memories of at least one of the cache levels is shared between the processors. A page mover 50 is positioned closer to the main memory and is connected to the cache memories of the shared cache level, to the main memory and to the processors. In response to a request from a processor the page mover fetches data of a storage area line-wise from one of the shared cache memories or the main memory, while maintaining cache memory access coherency. The page mover has a data processing engine that performs aggregation and filtering of the fetched data. The page mover moves processed data to the cache memories, the main memory or the requesting processor. The data processing engine may have a filter engine that filters data by comparing all elements of a fetched line from a source address of the shared cache ...

01-06-2016 publication date

Apparatus and method of throttling hardware pre-fetch

Number: GB0002532851A

Hardware based prefetching for processor systems is implemented. A prefetch unit in a cache subsystem allocates a prefetch tracker in response to a demand request for a cache line that missed. Prefetch trackers track a match window of cachelines. In response to subsequent demand requests to consecutive cachelines in the match window, a confidence indicator is increased. In response to further demand misses 327 and a confidence indicator value 325, a prefetch tier 118 is increased 330, which allows the prefetch tracker to initiate prefetch requests for more cachelines. Requests for cachelines that are more than two cachelines apart within a match window for the allocated prefetch tracker decrease the confidence faster than requests for consecutive cachelines increase confidence. An age counter tracks 335 when a last demand request within the match window was received. The prefetch tier can be decreased 338 in response to reduced confidence and increased age.
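The confidence/tier interplay above can be sketched as a small state machine: consecutive demand requests raise confidence, widely spaced ones lower it faster, and a miss arriving with high confidence bumps the prefetch tier. All constants here are illustrative, not the patent's values:

```python
class PrefetchTracker:
    """Toy model of a throttled prefetch tracker with a confidence
    counter and a prefetch tier."""

    def __init__(self, base_line):
        self.last = base_line
        self.confidence = 0
        self.tier = 0

    def demand(self, line, missed=False):
        gap = abs(line - self.last)
        if gap == 1:                 # consecutive cacheline: +1 confidence
            self.confidence += 1
        elif gap > 2:                # far apart: decrease confidence faster
            self.confidence = max(0, self.confidence - 2)
        self.last = line
        if missed and self.confidence >= 2:
            self.tier += 1           # allow prefetching more cachelines
            self.confidence = 0
        return self.tier
```

An age counter that decays the tier when no requests arrive in the match window (also described in the abstract) is omitted for brevity.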

24-12-2008 publication date

Cache memory system

Number: GB0000821080D0

23-12-2020 publication date

Gateway pull model

Number: GB0002579412B
Author: Brian Manula
Assignee: Graphcore Limited

24-11-1993 publication date

MULTI-LEVEL CACHE SYSTEM

Number: GB0009320511D0

20-05-2009 publication date

Cache memory which evicts data which has been accessed in preference to data which has not been accessed

Number: GB2454810A

A cache memory 1, for providing rapid access to data from a system memory 105, can identify whether a region has been accessed by an external device, such as a processor 101 or DMA controller 107. When data is loaded into the cache, it is loaded into a region which has been accessed in preference to one which has not. If there are no free regions and no regions that have been accessed, the data may not be loaded into the cache. The loading of data may be a pre-fetch operation. The cache may have a status value for each line in the cache which indicates whether the line has been accessed. The status value may be stored with the address tag. The cache memory may be a level 2 cache.

Подробнее
11-03-1981 дата публикации

Data processing system including a cache store.

Номер: GB0002056134A
Принадлежит:

A data processing system includes a main memory system, a high speed buffer cache store, a central processor unit (CPU) and an Input/Output processor (IOP) all connected to a system bus. Apparatus in the cache store reads all information on the system bus into a first in, first out buffer comprising a plurality of registers, a write address counter, a read address counter and a means for selectively processing the information. The cache store is word oriented and further comprises a directory, a data buffer and associated control logic. The CPU requests data words by sending a main memory address of the requested data word to the cache store. If the cache store does not have the requested data word apparatus in the cache store requests the data word from the main memory system, and in addition, the apparatus requests additional data words from consecutively higher addresses. If the main memory system is busy, the cache store repetitively supplies the requested address to the main memory ...

Подробнее
27-02-2013 дата публикации

Improved control of pre-fetch traffic

Номер: GB0201300646D0
Автор:
Принадлежит:

Подробнее
06-09-2018 дата публикации

Providing memory bandwidth compression using multiple last-level cache (LLC) lines in a central processing unit (CPU)-based system

Номер: AU2017240430A1
Принадлежит: Madderns Patent & Trade Mark Attorneys

Providing memory bandwidth compression using multiple last-level cache (LLC) lines in a central processing unit (CPU)-based system is disclosed. In some aspects, a compressed memory controller (CMC) provides an LLC comprising multiple LLC lines, each providing a plurality of sub-lines the same size as a system cache line. The contents of the system cache line(s) stored within a single LLC line are compressed and stored in system memory within the memory sub-line region corresponding to the LLC line. A master table stores information indicating how the compressed data for an LLC line is stored in system memory by storing an offset value and a length value for each sub-line within each LLC line. By compressing multiple system cache lines together and storing compressed data in a space normally allocated to multiple uncompressed system lines, the CMC enables compression sizes to be smaller than the memory read/write granularity of the system memory.
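A toy model of the master table described above; packing the compressed sub-lines back-to-back and the region size are illustrative assumptions.

```python
class MasterTable:
    """Toy master table mapping each LLC sub-line to an (offset, length)
    pair within the memory region backing its LLC line. Field widths,
    region size, and packing order are assumptions."""

    def __init__(self, region_size=256):
        self.region_size = region_size
        self.entries = {}                     # llc_line_addr -> [(offset, length)]

    def record(self, llc_line, compressed_lengths):
        """Pack compressed sub-lines contiguously and record placement."""
        offset, placement = 0, []
        for length in compressed_lengths:
            placement.append((offset, length))
            offset += length
        # If the compressed data doesn't fit, a real CMC would fall back
        # to storing the sub-lines uncompressed.
        assert offset <= self.region_size, "does not fit compressed"
        self.entries[llc_line] = placement

    def locate(self, llc_line, subline_idx):
        """Return (offset, length) of one sub-line's compressed data."""
        return self.entries[llc_line][subline_idx]
```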

Подробнее
27-09-2019 дата публикации

Cache data prefetching method for Spark time-window data analysis

Номер: CN0110287010A
Автор:
Принадлежит:

Подробнее
31-05-2019 дата публикации

Microprocessor, method for prefetching data, and computer medium therefor

Номер: CN0106168929B
Автор:
Принадлежит:

Подробнее
27-10-2020 дата публикации

Prefetch tags for facilitating eviction

Номер: CN0108139976B
Автор:
Принадлежит:

Подробнее
31-10-2017 дата публикации

Cache apparatus and method for a block-device-based thin provisioning system

Номер: CN0104035887B
Автор:
Принадлежит:

Подробнее
06-06-2023 дата публикации

Method and device for improving data consistency of cache and data source based on log pre-writing mechanism

Номер: CN116225779A
Принадлежит:

The invention relates to a method and a device for improving data consistency of a cache and a data source based on a log pre-writing mechanism. The method comprises the following steps: before executing cache deletion, storing a deletion operation record in a redo log file; after executing cache deletion, storing a deletion completion record in the redo log file; and after the application crashes and recovers again, executing the uncompleted effective deletion operation recorded in the redo log file, so that the data of the cache and the data of the data source are consistent. The method can also execute redo log recovery processing when the application is started. According to the method and the device, the uncompleted effective cache deletion in the redo log can be recovered and executed when the application is restarted after the abnormal crash, and the situation of data inconsistency possibly caused by the abnormal crash of the application is avoided. The method does not depend on ...
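A minimal sketch of the redo-log idea in Python; the file format, the `delete`/`done` record types, and the recovery policy are assumptions, not the patent's actual design.

```python
import json
import os

class CacheDeleteWAL:
    """Write-ahead (redo) log for cache deletions: record the intent,
    perform the deletion, then record completion. On restart, replay
    any deletion that was logged but never marked done."""

    def __init__(self, path, cache):
        self.path, self.cache = path, cache

    def delete(self, key):
        with open(self.path, 'a') as f:
            f.write(json.dumps({'op': 'delete', 'key': key}) + '\n')
            f.flush()
            os.fsync(f.fileno())               # intent is durable first
        self.cache.pop(key, None)              # then delete from the cache
        with open(self.path, 'a') as f:
            f.write(json.dumps({'op': 'done', 'key': key}) + '\n')

    def recover(self):
        """Replay deletions whose completion record is missing."""
        pending = set()
        try:
            with open(self.path) as f:
                for line in f:
                    rec = json.loads(line)
                    if rec['op'] == 'delete':
                        pending.add(rec['key'])
                    else:
                        pending.discard(rec['key'])
        except FileNotFoundError:
            return
        for key in pending:
            self.cache.pop(key, None)
```

Running `recover()` at application start is what restores cache/data-source consistency after a crash between the intent record and the actual deletion.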

Подробнее
11-04-2019 дата публикации

Номер: KR0101940382B1
Автор:
Принадлежит:

Подробнее
01-09-2022 дата публикации

Data cache with a prediction hint for cache hits

Номер: KR20220121908A
Принадлежит:

A data cache with a prediction hint for cache hits is described. The data cache includes a plurality of cache lines, where a cache line includes a data field, a tag field, and a prediction hint field. The prediction hint field is configured to store a prediction hint that directs replacement behavior on a cache hit to the cache line. The prediction hint field is integrated with the tag field or with a way predictor field.

Подробнее
16-11-2022 дата публикации

Data cache region prefetcher

Номер: KR102467817B1

A data cache region prefetcher creates a region when a data cache miss occurs. Each region includes a predetermined range of data lines proximate to the respective data cache miss and is tagged with the related instruction pointer register (RIP). The data cache region prefetcher compares subsequent memory requests against the predetermined range of data lines of each existing region. For each match, the data cache region prefetcher sets an access bit and attempts to identify a pseudo-random access pattern based on the set access bits. The data cache region prefetcher increments or decrements appropriate counters to track how often each pseudo-random access pattern occurs. If a pseudo-random access pattern occurs frequently, then the next time a memory request is processed with the same RIP and pattern, the data cache region prefetcher prefetches data lines according to the pseudo-random access pattern for that RIP.

Подробнее
15-04-2024 дата публикации

Apparatus and method for spatial memory streaming training

Номер: KR102657076B1
Принадлежит: Samsung Electronics Co., Ltd. (삼성전자주식회사)

An apparatus, system, and method for a spatial memory streaming (SMS) prefetch engine are described. According to one aspect, the SMS prefetch engine promotes training table entries to pattern history table (PHT) entries and uses inter-trigger stride detection to drive spatially related prefetches into more distant regions. According to another aspect, the SMS prefetch engine maintains a blacklist of program counter (PC) values that are not to be used as trigger values. According to yet another aspect, the SMS prefetch engine uses hashed values of specific fields, such as the trigger PC in entries of the filter table, the training table, and the PHT, as index values into those tables.

Подробнее
11-03-2021 дата публикации

DEFERRING CACHE STATE UPDATES IN A NON-SPECULATIVE CACHE MEMORY IN A PROCESSOR-BASED SYSTEM IN RESPONSE TO A SPECULATIVE DATA REQUEST UNTIL THE SPECULATIVE DATA REQUEST BECOMES NON-SPECULATIVE

Номер: WO2021045814A1
Принадлежит:

Deferring cache state updates in a non-speculative cache memory in a processor-based system in response to a speculative data request until the speculative data request becomes non-speculative is disclosed. The updating of at least one cache state in the cache memory resulting from a data request is deferred until the data request becomes non-speculative. Thus, a cache state in the cache memory is not updated for requests resulting from mispredictions. Deferring the updating of a cache state in the cache memory can include deferring the storing of received speculative requested data in the main data array of the cache memory as a result of a cache miss until the data request becomes non-speculative. The received speculative requested data can first be stored in a speculative buffer memory associated with a cache memory, and then stored in the main data array if the data request becomes non-speculative.
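The deferral mechanism can be sketched as follows; the names (`fill`, `resolve`, keying the speculative buffer by request id) are assumptions.

```python
class SpeculativeCache:
    """Toy model: data fetched for a speculative miss is parked in a
    speculative buffer and only promoted to the main data array once
    the request becomes non-speculative. Mispredicted requests leave
    no trace in the main array."""

    def __init__(self):
        self.main = {}       # main data array: addr -> data
        self.spec = {}       # speculative buffer: request_id -> (addr, data)

    def fill(self, request_id, addr, data, speculative):
        if speculative:
            self.spec[request_id] = (addr, data)   # defer the state update
        else:
            self.main[addr] = data

    def resolve(self, request_id, mispredicted):
        """Called when the request becomes non-speculative or is squashed."""
        addr, data = self.spec.pop(request_id)
        if not mispredicted:
            self.main[addr] = data                 # commit on resolution
```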

Подробнее
29-06-2017 дата публикации

INSTRUCTIONS AND LOGIC FOR LOAD-INDICES-AND-PREFETCH-SCATTERS OPERATIONS

Номер: WO2017112171A1
Принадлежит:

A processor includes an execution unit to execute instructions to load indices from an array of indices, optionally perform scatters, and prefetch (to a specified cache) contents of target locations for future scatters from arbitrary locations in memory. The execution unit includes logic to load, for each target location of a scatter or prefetch operation, an index value to be used in computing the address in memory for the operation. The index value may be retrieved from an array of indices identified for the instruction. The execution unit includes logic to compute the addresses based on the sum of a base address specified for the instruction, the index value retrieved for the location, and a prefetch offset (for prefetch operations), with optional scaling. The execution unit includes logic to retrieve data elements from contiguous locations in a source vector register specified for the instruction to be scattered to the memory.
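The address computation can be illustrated as below; the function and parameter names, and the exact placement of the scaling factor, are assumptions rather than the instruction's actual operand semantics.

```python
def scatter_prefetch_addrs(base, indices, scale=1, prefetch_offset=0):
    """Compute target addresses per the scheme above: the base address
    specified for the instruction, plus the (optionally scaled) index
    value loaded from the array of indices, plus a prefetch offset for
    prefetch operations (zero for the scatter itself)."""
    return [base + idx * scale + prefetch_offset for idx in indices]
```

A scatter would use `prefetch_offset=0`, while the paired prefetch runs the same indices with a non-zero offset to touch the future target lines ahead of time.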

Подробнее
18-07-2019 дата публикации

SPECULATIVE CACHE STORAGE REGION

Номер: WO2019138206A1
Принадлежит:

An apparatus (2) comprises processing circuitry (4) to perform speculative execution of instructions; a main cache storage region (30); a speculative cache storage region (32); and cache control circuitry (34) to allocate an entry, for which allocation is caused by a speculative memory access triggered by the processing circuitry, to the speculative cache storage region instead of the main cache storage region while the speculative memory access remains speculative. This can help protect against potential security attacks which exploit cache timing side-channels to gain information about allocations into the cache caused by speculative memory accesses.

Подробнее
17-03-2022 дата публикации

CONTROLLING CLIENT/SERVER INTERACTION BASED UPON INDICATIONS OF FUTURE CLIENT REQUESTS

Номер: WO2022055573A1
Автор: RAJASEKARAN, Adhithya
Принадлежит:

A client component identifies a dependent request, that is dependent from an initial request to be made to a service. The client component generates preview information, indicative of the dependent request, and provides the preview information to the service computing system before receiving a response to the initial request. The preview information enables the service computing system to obtain information responsive to the dependent request, before the dependent request is made by the client computing system.

Подробнее
26-05-2020 дата публикации

Instruction prefetching in a computer processor using a prefetch prediction vector

Номер: US0010664279B2

Instruction prefetching in a computer processor includes, upon a miss in an instruction cache for an instruction cache line: retrieving, for the instruction cache line, a prefetch prediction vector, the prefetch prediction vector representing one or more cache lines of a set of contiguous instruction cache lines following the instruction cache line to prefetch from backing memory; and prefetching, from backing memory into the instruction cache, the instruction cache lines indicated by the prefetch prediction vector.
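The prediction vector can be read as a bitmask over the lines following the missed instruction cache line; the exact bit convention (bit i = the (i+1)-th following line) is an assumption.

```python
def lines_to_prefetch(miss_line, prediction_vector, vector_bits=8):
    """Expand a per-cache-line prefetch prediction vector into the
    addresses of the contiguous following lines to prefetch from
    backing memory. Bit i set means prefetch line miss_line + i + 1
    (an assumed convention)."""
    return [miss_line + i + 1
            for i in range(vector_bits)
            if (prediction_vector >> i) & 1]
```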

Подробнее
27-07-2021 дата публикации

Slot/sub-slot prefetch architecture for multiple memory requestors

Номер: US0011074190B2

A prefetch unit generates a prefetch address in response to an address associated with a memory read request received from the first or second cache. The prefetch unit includes a prefetch buffer that is arranged to store the prefetch address in an address buffer of a selected slot of the prefetch buffer, where each slot of the prefetch unit includes a buffer for storing a prefetch address, and two sub-slots. Each sub-slot includes a data buffer for storing data that is prefetched using the prefetch address stored in the slot, and one of the two sub-slots of the slot is selected in response to a portion of the generated prefetch address. Subsequent hits on the prefetcher result in returning prefetched data to the requestor in response to a subsequent memory read request received after the initial received memory read request.

Подробнее
18-07-2017 дата публикации

Method and apparatus for accessing physical memory from a CPU or processing element in a high performance manner

Номер: US0009710385B2
Принадлежит: Intel Corporation

A method and apparatus is described herein for accessing a physical memory location referenced by a physical address with a processor. The processor fetches/receives instructions with references to virtual memory addresses and/or references to physical addresses. Translation logic translates the virtual memory addresses to physical addresses and provides the physical addresses to a common interface. Physical addressing logic decodes references to physical addresses and provides the physical addresses to a common interface based on a memory type stored by the physical addressing logic.

Подробнее
01-06-2021 дата публикации

Non-volatile storage system with filtering of data samples for a monitored operational statistic

Номер: US0011023380B2

A non-volatile storage device includes a compact and efficient filter of data samples for a monitored statistic about operation of the storage device. The non-volatile storage device comprises a plurality of non-volatile memory cells and a control circuit connected to the non-volatile memory cells. The control circuit is configured to maintain at the non-volatile storage device a sum of samples of the statistic for a moving window of the samples such that during operation new samples are added to the sum and contributions from old samples are removed from the sum by the control circuit multiplying the sum by a weight when adding the new samples.
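The filter amounts to a geometrically decaying running sum: multiplying by a weight on each update removes old samples' contribution without storing the window. One update step, with an assumed weight value, looks like:

```python
def update_sum(current_sum, new_sample, weight=0.9):
    """One step of the filtered statistic described above: decay the
    running sum by `weight` (so old samples fade geometrically) and
    add the new sample. The weight of 0.9 is an assumption."""
    return current_sum * weight + new_sample
```

For a constant input the sum converges to `sample / (1 - weight)`, which is what makes this a compact stand-in for a true moving-window sum.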

Подробнее
01-04-2021 дата публикации

PREFETCHER, OPERATING METHOD OF PREFETCHER, AND PROCESSOR

Номер: US20210096995A1

A prefetcher, an operating method of the prefetcher, and a processor including the prefetcher are provided. The prefetcher includes a prefetch address generating circuit, an address tracking circuit, and an offset control circuit. The prefetch address generating circuit generates a prefetch address based on first prefetch information and an offset amount. The address tracking circuit stores the prefetch address and a plurality of historical prefetch addresses. When receiving an access address, the offset control circuit updates the offset amount based on second prefetch information, the access address, the prefetch address, and the historical prefetch addresses, and provides the prefetch address generating circuit with the updated offset amount.

Подробнее
01-04-2021 дата публикации

DYNAMIC GENERATION OF DEVICE IDENTIFIERS

Номер: US20210096821A1
Принадлежит:

Dynamic generation of device identifiers is disclosed, including: issuing an identifier configuration request in response to a user operation; after receiving the identifier configuration request, calling a true random number generator source to generate random information; and writing the random information or a data processing result from the random information as the identifier into a predetermined storage area.

Подробнее
14-07-2020 дата публикации

Cache hit ratio simulation using a partial data set

Номер: US0010713164B1

A method of cache hit ratio simulation using a partial data set includes determining a set of sampled addresses, the set of sampled addresses being a subset of all addresses of a storage system of a storage environment. The method further includes using, by a simulation engine, a cache management algorithm to determine a cache hit ratio of the sampled addresses, the cache management algorithm being also used by a cache manager to place a portion of the addresses of the storage system into cache during a runtime operation. The method further includes determining a quantity of memory access operations to frequently accessed addresses in the set of sampled addresses, and correcting, by the simulation engine, the cache hit ratio of the sampled addresses based on the quantity of memory access operations to the frequently accessed addresses in the set of sampled addresses. The simulation also handles sequential operations accurately.
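A sketch of simulating the hit ratio over only a sampled subset of addresses; LRU stands in here for the unspecified cache management algorithm, and the correction step for frequently accessed addresses is omitted.

```python
from collections import OrderedDict

def simulate_hit_ratio(trace, sampled, cache_size):
    """Run an LRU model over only the sampled addresses and report
    the hit ratio among them (the partial-data-set idea: the full
    trace is filtered down to the sampled subset before simulation)."""
    cache, hits, refs = OrderedDict(), 0, 0
    for addr in trace:
        if addr not in sampled:
            continue                      # ignore unsampled addresses
        refs += 1
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)       # refresh LRU position
        else:
            cache[addr] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / refs if refs else 0.0
```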

Подробнее
31-03-2020 дата публикации

Effective address based load store unit in out of order processors

Номер: US0010606590B2

Technical solutions are described for out-of-order (OoO) execution of one or more instructions by a processing unit. An example method includes looking up, by a load-store unit (LSU), an entry in an effective address directory (EAD) for an effective address (EA) of an operand of an instruction to be launched. Further, the method includes, in response to the EA being present in the EAD, launching, by the LSU, the instruction with the real address (RA) from the EAD, and in response to the EA not being present in the EAD, looking up, by the LSU, the EA in an effective real table (ERT) entry, and launching the instruction with the RA from the ERT entry. Further, in response to the ERT entry being removed, the ERT entry including an ERT index and a mapping between the EA and the RA, the entry for the EA is removed from the EAD.

Подробнее
17-03-2020 дата публикации

Hierarchical pre-fetch pipelining in a hybrid memory server

Номер: US0010592118B2

A method, hybrid server system, and computer program product, prefetch data. A set of prefetch requests associated with one or more given datasets residing on the server system are received from a set of accelerator systems. A set of data is prefetched from a memory system residing at the server system for at least one prefetch request in the set of prefetch requests. The set of data satisfies the at least one prefetch request. The set of data that has been prefetched is sent to at least one accelerator system, in the set of accelerator systems, associated with the at least one prefetch request.

Подробнее
11-04-2019 дата публикации

MODE SWITCHING FOR INCREASED OFF-CHIP BANDWIDTH

Номер: US20190108128A1
Принадлежит:

Embodiments of the present invention include methods for increasing off-chip bandwidth. The method includes designing a circuit of switchable pins, replacing a portion of allocated pins of a processor with switchable pins, connecting the processor to a memory interface configured to switch the switchable pins between a power mode and a signal mode, providing a metric configured to identify which of the power mode and the signal mode is most beneficial during 1 millisecond intervals, and switching the switchable pins to signal mode during intervals where the signal mode provides more benefit than the power mode.

Подробнее
18-07-2017 дата публикации

Hierarchical translation structures providing separate translations for instruction fetches and data accesses

Номер: US0009710382B2

Hierarchical address translation structures providing separate translations for instruction fetches and data accesses. An address is to be translated from the address to another address using a hierarchy of address translation structures. The hierarchy of address translation structures includes a plurality of levels, and a determination is made as to which level of the plurality of levels it is indicated that translation through the hierarchy of address translation structures is to split into a plurality of translation paths. The hierarchy of address translation structures is traversed to obtain information to be used to translate the address to the another address, in which the traversing selects, based on a determination of the level that indicates the split and based on an attribute of the address to be translated, one translation path of the plurality of translation paths to obtain the information to be used to translate the address to the another address. The information is then used ...

Подробнее
24-05-2018 дата публикации

CONTINUOUS PAGE READ FOR MEMORY

Номер: US20180143908A1
Автор: Yihua Zhang, Jun Shen
Принадлежит:

Subject matter disclosed herein relates to techniques to read memory in a continuous fashion.

Подробнее
23-10-2018 дата публикации

Method and apparatus for pre-fetching data in a system having a multi-level system memory

Номер: US0010108549B2
Принадлежит: Intel Corporation, INTEL CORP

A method is described that includes creating a first data pattern access record for a region of system memory in response to a cache miss at a host side cache for a first memory access request. The first memory access request specifies an address within the region of system memory. The method includes fetching a previously existing data access pattern record for the region from the system memory in response to the cache miss. The previously existing data access pattern record identifies blocks of data within the region that have been previously accessed. The method includes pre-fetching the blocks from the system memory and storing the blocks in the cache.
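Given a fetched access-pattern record, the pre-fetch set can be expanded like this; encoding the record as an integer bitmap and the 64-byte block size are assumptions.

```python
def prefetch_from_pattern(region_base, pattern_bits, block_size=64):
    """Expand a region's previously recorded data access pattern
    record into block addresses to pre-fetch: bit i set means block i
    of the region was accessed before (an assumed encoding)."""
    return [region_base + i * block_size
            for i in range(pattern_bits.bit_length())
            if (pattern_bits >> i) & 1]
```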

Подробнее
09-01-2020 дата публикации

HIERARCHICAL PRE-FETCH PIPELINING IN A HYBRID MEMORY SERVER

Номер: US20200012428A1
Принадлежит:

A method, hybrid server system, and computer program product, prefetch data. A set of prefetch requests associated with one or more given datasets residing on the server system are received from a set of accelerator systems. A set of data is prefetched from a memory system residing at the server system for at least one prefetch request in the set of prefetch requests. The set of data satisfies the at least one prefetch request. The set of data that has been prefetched is sent to at least one accelerator system, in the set of accelerator systems, associated with the at least one prefetch request.

Подробнее
06-12-2018 дата публикации

SYSTEM AND METHOD FOR ADAPTIVE COMMAND FETCH AGGREGATION

Номер: US20180349026A1
Принадлежит: Western Digital Technologies, Inc.

Systems and methods for adaptive fetch coalescing are disclosed. NVM Express (NVMe) implements a paired submission queue and completion queue mechanism, with host software on the host device placing commands into the submission queue. The host device notifies the memory device, via a doorbell update, of commands on the submission queue. Instead of fetching the command responsive to the doorbell update, the memory device may analyze one or more aspects in order to determine whether and how to coalesce fetching of the commands. In this way, the memory device may include the intelligence to coalesce fetching in order to more efficiently fetch the commands from the host device.

Подробнее
08-11-2018 дата публикации

STREAMING ENGINE WITH COMPRESSED ENCODING FOR LOOP CIRCULAR BUFFER SIZES

Номер: US20180322061A1
Принадлежит:

A streaming engine employed in a digital data processor specifies a fixed read-only data stream defined by plural nested loops. An address generator produces addresses of data elements for the nested loops. A stream head register stores the data elements next to be supplied to functional units for use as operands. A stream template register specifies a circular address mode for the loop, first and second block size numbers, and a circular address block size selection. For a first circular address block size selection, the block size corresponds to the first block size number. For a second circular address block size selection, the block size corresponds to the sum of the first block size number and the second block size number.

Подробнее
09-10-2018 дата публикации

Storage device and data processing method thereof

Номер: US0010095613B2

A data storage device which exchanges multi-stream data with a host includes a nonvolatile memory device; a buffer memory configured to temporarily store data to be stored in the nonvolatile memory device or data read from the nonvolatile memory device; and a storage controller configured to receive from the host an access command for accessing segments of the multi-stream data, the accessing including reading the segments of the multi-stream data from or writing the segments of the multi-stream data to the nonvolatile memory device, wherein the storage controller is configured to store the access-requested segments in the buffer memory, the access-requested segments being the segments of data for which access is requested in the access command, the multi-stream data including a plurality of data streams that correspond respectively to a plurality of multi-stream indexes, the first multi-stream index being one of a plurality of multi-stream indexes.

Подробнее
27-09-2018 дата публикации

ELECTRONIC APPARATUS AND METHOD OF OPERATING THE SAME

Номер: US20180276130A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

An electronic apparatus and an operating method thereof. The electronic apparatus includes a memory which stores one or more instructions and a processor which executes the one or more instructions stored in the memory. The processor executes the instructions to obtain one or more contents to be pre-fetched, to obtain one or more resources available in the electronic apparatus, to determine a priority of the one or more resources, and to allocate one or more of the obtained resources, based on the determined priority, forming a pipeline in which the obtained one or more contents are processed.

Подробнее
04-01-2018 дата публикации

Translation Lookup and Garbage Collection Optimizations on Storage System with Paged Translation Table

Номер: US20180004652A1
Принадлежит:

A system comprising a processor and a memory storing instructions that, when executed, cause the system to receive a request for garbage collection, identify a range of physical blocks in a storage device, query a bitmap, the bitmap having a bit for each physical block in the range of physical blocks, determine a status associated with a first bit from the bitmap, in response to determining the status associated with the first bit is a first state, add a first physical block associated with the first bit to a list of physical blocks for relocation, and relocate the list of physical blocks.

1. A method comprising:
receiving a request for garbage collection;
identifying a range of physical blocks in a storage device;
querying a bitmap, the bitmap having a bit for each physical block in the range of physical blocks;
determining a status associated with a first bit from the bitmap;
in response to determining the status associated with the first bit is a first state, adding a first physical block associated with the first bit to a list of physical blocks for relocation; and
relocating the list of physical blocks.
2. The method of claim 1, wherein a size of the bitmap corresponds to a size of the storage device.
3. The method of claim 1, wherein the first state indicates an active mapping associated with the first physical block.
4. The method of claim 1, further comprising:
receiving a request to pre-fetch a translation table entry;
in response to receiving the request to pre-fetch, marking the translation table entry in the memory; and
generating a non-zero reference count for the translation table entry.
5. The method of claim 3, wherein the marked translation table entry is associated with an expiration timeout.
6. The method of claim 1, further comprising:
receiving a write request for a first logical block;
mapping the first logical block to a second physical block;
allocating a second bit associated with the second physical block;
assigning the first state to the second bit ...
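The bitmap query at the heart of the method can be sketched as follows; encoding the bitmap as an integer, and treating the "first state" as an active mapping (per claim 3), are assumptions.

```python
def blocks_to_relocate(bitmap, start, count):
    """Scan `count` physical blocks from `start`; a set bit (taken
    here to mean an active mapping, the 'first state') marks the
    block for relocation during garbage collection."""
    return [blk for blk in range(start, start + count)
            if (bitmap >> blk) & 1]
```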

Подробнее
28-09-2023 дата публикации

PROCESSOR MICRO-OPERATIONS CACHE ARCHITECTURE FOR INTERMEDIATE INSTRUCTIONS

Номер: US20230305843A1
Автор: Pranjal Kumar Dutta
Принадлежит:

Various example embodiments for supporting processor capabilities are presented herein. Various example embodiments may be configured to support a micro-architecture for a micro-operations cache (UC) of a processor. Various example embodiments for supporting a micro-architecture for a UC of a processor may be configured to implement the UC of a processor using an intermediate vector UC (IV-UC). Various example embodiments for supporting an IV-UC for a processor may be configured to support a processor including an IV-UC where the IV-UC includes a micro-operations cache (UC) configured to store a cache line including sets of micro-operations (UOPs) from instructions decoded by the processor and an intermediate vector cache (IVC) configured to store indications of locations of the sets of UOPs in the cache line of the UC for intermediate instructions of the cache line of the UC.

Подробнее
06-06-2023 дата публикации

Data prefetching method and apparatus

Номер: US0011669453B2
Автор: Tao Liu
Принадлежит: HUAWEI TECHNOLOGIES CO., LTD.

This application discloses a data prefetching method, including: receiving, by a home node, a write request sent by a first cache node after the first cache node processes received data; performing, by the home node, an action of determining whether the second cache node needs to perform a data prefetching operation on the to-be-written data; and when determining that the second cache node needs to perform a data prefetching operation on the to-be-written data, sending, by the home node, the to-be-written data to the second cache node. Embodiments of this application help improve accuracy and certainty of a data prefetching time point, and reduce a data prefetching delay.

Подробнее
16-01-2024 дата публикации

System and method for facilitating efficient address translation in a network interface controller (NIC)

Номер: US0011876702B2

A network interface controller (NIC) capable of facilitating efficient memory address translation is provided. The NIC can be equipped with a host interface, a cache, and an address translation unit (ATU). During operation, the ATU can determine an operating mode. The operating mode can indicate whether the ATU is to perform a memory address translation at the NIC. The ATU can then determine whether a memory address indicated in the memory access request is available in the cache. If the memory address is not available in the cache, the ATU can perform an operation on the memory address based on the operating mode.

Подробнее
10-03-2021 дата публикации

EXTENDED CACHING AND QUERY-TIME VALIDATION

Номер: EP3789878A1
Принадлежит:

In a distributed computing environment comprising a frontend system with a search platform having a cache of pre-computed search results and a backend system with one or more databases and a validation instance, a request comprising one or more first key-values indicating a first data record is received at the search platform from a client, and at least a first pre-computed search result and a second pre-computed search result for the first data record are retrieved from the cache. The validation instance evaluates the current validity of the first pre-computed search result and the second pre-computed search result retrieved from the cache and returns the first pre-computed search result to the client device, or, in response to evaluating that the first pre-computed search result is invalid and the second pre-computed search result is valid, returns the second pre-computed search result to the client.

Подробнее
23-06-2021 дата публикации

PREFETCH KILL AND REVIVAL IN AN INSTRUCTION CACHE

Номер: EP3837610A1
Принадлежит:

Подробнее
24-07-2023 дата публикации

System for multi-level data retrieval with on-screen visual display of the retrieval addresses of document images

Номер: RU2800572C1

The invention relates to the field of automation and computer engineering, in particular to multi-level data retrieval systems. The technical result is an expanded arsenal of technical means for multi-level data retrieval with on-screen visual display of the retrieval addresses of document images. The technical result is achieved in that the system contains a module for controlling the output of data for visual display of the retrieval and object-identification areas, a memory module, a module for receiving the coordinates of the visual representation of images of the search and object-identification areas, a monitor for displaying images of the retrieval and object-identification areas, a module for selecting the position coordinates of the selected images of the retrieval and object-identification areas on the monitor screen, a module for selecting the reference addresses of the retrieval and object-identification areas in the server database, and a module for receiving the coordinates of the marker position on the monitor screen. 1 table, 6 figures.

Подробнее
16-02-2012 дата публикации

Scatter-Gather Intelligent Memory Architecture For Unstructured Streaming Data On Multiprocessor Systems

Номер: US20120042121A1
Принадлежит: Individual

A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.

Подробнее
16-02-2012 дата публикации

Intelligent cache management

Номер: US20120042123A1
Автор: Curt Kolovson
Принадлежит: Curt Kolovson

An exemplary storage network, storage controller, and methods of operation are disclosed. In one embodiment, a method of managing cache memory in a storage controller comprises receiving, at the storage controller, a cache hint generated by an application executing on a remote processor, wherein the cache hint identifies a memory block managed by the storage controller, and managing a cache memory operation for data associated with the memory block in response to the cache hint received by the storage controller.
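A minimal sketch of hint-driven cache management follows; the hint vocabulary (`WILL_NEED`, `DONE`) and method names are assumptions for illustration, not the patent's API:

```python
class StorageController:
    """Manages its cache in response to hints from a remote application."""
    def __init__(self, disk):
        self.disk = disk      # block id -> data on backing storage
        self.cache = {}

    def on_hint(self, hint):
        action, block = hint  # hint identifies a memory block it manages
        if action == "WILL_NEED":
            self.cache[block] = self.disk[block]   # prefetch into controller cache
        elif action == "DONE":
            self.cache.pop(block, None)            # release the entry early

    def read(self, block):
        # serve from cache when the hint already staged the block
        return self.cache.get(block, self.disk[block])
```

The design point is that the application, not the controller, knows future access patterns, so the hint channel moves that knowledge to where the cache decision is made.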

Подробнее
01-03-2012 дата публикации

Method and apparatus for fuzzy stride prefetch

Номер: US20120054449A1
Автор: Shiliang Hu, Youfeng Wu
Принадлежит: Intel Corp

In one embodiment, the present invention includes a prefetching engine to detect when data access strides in a memory fall into a range, to compute a predicted next stride, to selectively prefetch a cache line using the predicted next stride, and to dynamically control prefetching. Other embodiments are also described and claimed.
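A fuzzy stride prefetcher can be sketched as follows; the stride window `[lo, hi]` and line size are illustrative parameters, and "fuzzy" means any stride inside the window is treated as one predicted line-sized stride:

```python
class FuzzyStridePrefetcher:
    def __init__(self, lo=48, hi=80, line=64):
        self.lo, self.hi, self.line = lo, hi, line
        self.last = None
        self.issued = []   # addresses selected for prefetch

    def access(self, addr):
        if self.last is not None and self.lo <= addr - self.last <= self.hi:
            # strides fell into the range: compute the predicted next stride
            # and selectively prefetch one line ahead
            self.issued.append(addr + self.line)
        self.last = addr
```

Dynamic control (throttling on low accuracy) would sit on top of this, e.g. by counting how many `issued` addresses are later demanded.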

Подробнее
29-03-2012 дата публикации

Method and apparatus for reducing processor cache pollution caused by aggressive prefetching

Номер: US20120079205A1
Автор: Patrick Conway
Принадлежит: Advanced Micro Devices Inc

A method and apparatus for controlling a first and second cache is provided. A cache entry is received in the first cache, and the entry is identified as having an untouched status. Thereafter, the status of the cache entry is updated to accessed in response to receiving a request for at least a portion of the cache entry, and the cache entry is subsequently cast out according to a preselected cache line replacement algorithm. The cast out cache entry is stored in the second cache according to the status of the cast out cache entry.
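The untouched/accessed filtering can be sketched with an LRU first cache; class and status names are illustrative, and LRU stands in for the patent's "preselected cache line replacement algorithm":

```python
from collections import OrderedDict

UNTOUCHED, ACCESSED = "untouched", "accessed"

class TwoLevelCache:
    def __init__(self, l1_size=2):
        self.l1 = OrderedDict()   # line -> status; insertion order models LRU
        self.l1_size = l1_size
        self.l2 = {}

    def fill(self, line):
        """A prefetched entry arrives in the first cache marked untouched."""
        self.l1[line] = UNTOUCHED
        if len(self.l1) > self.l1_size:
            victim, status = self.l1.popitem(last=False)  # cast out per LRU
            if status == ACCESSED:
                self.l2[victim] = True   # only demanded lines enter the second cache
            # untouched prefetches are dropped, so they never pollute L2

    def demand(self, line):
        if line in self.l1:
            self.l1[line] = ACCESSED    # status updated on a request for the entry
```

Storing the cast-out entry "according to its status" is what keeps aggressive prefetching from flooding the second cache with never-used lines.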

Подробнее
05-04-2012 дата публикации

Circuit and method for determining memory access, cache controller, and electronic device

Номер: US20120084513A1
Автор: Kazuhiko Okada
Принадлежит: Fujitsu Semiconductor Ltd

A memory access determination circuit includes a counter that switches between a first reference value and a second reference value in accordance with a control signal to generate a count value based on the first reference value or the second reference value. A controller performs a cache determination based on an address that corresponds to the count value and outputs the control signal in accordance with the cache determination. A changing unit changes the second reference value in accordance with the cache determination.

Подробнее
19-04-2012 дата публикации

Cache memory device, cache memory control method, program and integrated circuit

Номер: US20120096213A1
Автор: Kazuomi Kato
Принадлежит: Panasonic Corp

The aim is to provide a cache memory device that performs a line size determination process for determining a refill size in advance of the refill process performed at cache-miss time. In the line size determination process, the number of reads/writes of each management-target line that belongs to a set is acquired (S51), and in the case where the numbers of reads completely match one another and the numbers of writes completely match one another (S52: Yes), the refill size is determined to be large (S54). Otherwise (S52: No), the refill size is determined to be small (S55).
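The decision rule itself is tiny; a sketch with an illustrative function name, taking per-line (reads, writes) counters for the set being refilled:

```python
def choose_refill_size(set_lines):
    """set_lines: (reads, writes) counters for each management-target line."""
    reads  = [r for r, _ in set_lines]
    writes = [w for _, w in set_lines]
    # identical counts across the whole set indicate a uniform (e.g. streaming)
    # access pattern, so a large refill is likely to be used in full
    if len(set(reads)) == 1 and len(set(writes)) == 1:
        return "large"
    return "small"
```

Running the check before the miss occurs means the refill size is already known when the refill must be issued.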

Подробнее
26-04-2012 дата публикации

Multiplexing Users and Enabling Virtualization on a Hybrid System

Номер: US20120102138A1
Принадлежит: International Business Machines Corp

A method, hybrid server system, and computer program product, support multiple users in an out-of-core processing environment. At least one accelerator system in a plurality of accelerator systems is partitioned into a plurality of virtualized accelerator systems. A private client cache is configured on each virtualized accelerator system in the plurality of virtualized accelerator systems. The private client cache of each virtualized accelerator system stores data that is one of accessible by only the private client cache and accessible by other private client caches associated with a common data set. Each user in a plurality of users is assigned to a virtualized accelerator system from the plurality of virtualized accelerator systems.

Подробнее
10-05-2012 дата публикации

Hybrid Server with Heterogeneous Memory

Номер: US20120117312A1
Принадлежит: International Business Machines Corp

A method, hybrid server system, and computer program product, for managing access to data stored on the hybrid server system. A memory system residing at a server is partitioned into a first set of memory managed by the server and a second set of memory managed by a set of accelerator systems. The set of accelerator systems are communicatively coupled to the server. The memory system comprises heterogeneous memory types. A data set stored within at least one of the first set of memory and the second set of memory that is associated with at least one accelerator system in the set of accelerator systems is identified. The data set is transformed from a first format to a second format, wherein the second format is a format required by the at least one accelerator system.

Подробнее
24-05-2012 дата публикации

Signal processing system, integrated circuit comprising buffer control logic and method therefor

Номер: US20120131241A1
Принадлежит: FREESCALE SEMICONDUCTOR INC

A signal processing system comprising buffer control logic arranged to allocate a plurality of buffers for the storage of information fetched from at least one memory element. Upon receipt of fetched information to be buffered, the buffer control logic is arranged to categorise the information to be buffered according to at least one of: a first category associated with sequential flow and a second category associated with change of flow, and to prioritise respective buffers from the plurality of buffers storing information relating to the first category associated with sequential flow ahead of buffers storing information relating to the second category associated with change of flow when allocating a buffer for the storage of the fetched information to be buffered.
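The allocation priority can be sketched as a small pool where change-of-flow buffers are victimised before sequential-flow buffers; class and category names are illustrative:

```python
SEQ, COF = "sequential", "change-of-flow"

class FetchBufferPool:
    def __init__(self, n=3):
        self.slots = [None] * n   # each slot: (category, fetched_info) or None

    def allocate(self, category, info):
        for i, s in enumerate(self.slots):        # prefer an empty buffer
            if s is None:
                self.slots[i] = (category, info)
                return i
        # all occupied: reclaim a change-of-flow buffer first, so buffers
        # holding sequential-flow information are kept ahead of it
        for i, (cat, _) in enumerate(self.slots):
            if cat == COF:
                self.slots[i] = (category, info)
                return i
        self.slots[0] = (category, info)          # last resort
        return 0
```

The rationale is that sequential-flow information is almost certain to be consumed next, while change-of-flow targets are speculative.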

Подробнее
24-05-2012 дата публикации

Correlation-based instruction prefetching

Номер: US20120131311A1
Автор: Yuan C. Chou
Принадлежит: Oracle International Corp

The disclosed embodiments provide a system that facilitates prefetching an instruction cache line in a processor. During execution of the processor, the system performs a current instruction cache access which is directed to a current cache line. If the current instruction cache access causes a cache miss or is a first demand fetch for a previously prefetched cache line, the system determines whether the current instruction cache access is discontinuous with a preceding instruction cache access. If so, the system completes the current instruction cache access by performing a cache access to service the cache miss or the first demand fetch, and also prefetching a predicted cache line associated with a discontinuous instruction cache access which is predicted to follow the current instruction cache access.
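A correlation table keyed by the predecessor of a discontinuous access captures the idea; this sketch (illustrative names, simplified to one predecessor per entry) learns pairs on discontinuous misses and prefetches the predicted successor:

```python
class CorrelationPrefetcher:
    def __init__(self, line=64):
        self.line = line
        self.table = {}        # predecessor line -> predicted discontinuous successor
        self.prev_line = None
        self.prefetched = []

    def access(self, addr, miss):
        cur = addr // self.line
        # discontinuous: not the same line and not the sequential next line
        discontinuous = (self.prev_line is not None
                         and cur not in (self.prev_line, self.prev_line + 1))
        if miss and discontinuous:
            self.table[self.prev_line] = cur          # learn the correlation
            if cur in self.table:
                # prefetch the cache line predicted to follow this access
                self.prefetched.append(self.table[cur])
        self.prev_line = cur
```

A real engine would also trigger on the first demand fetch of a previously prefetched line, which this sketch folds into the `miss` flag.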

Подробнее
07-06-2012 дата публикации

Method and apparatus of route guidance

Номер: US20120143504A1
Принадлежит: Google LLC

Systems and methods of route guidance on a user device are provided. In one aspect, a system and method transmit partitions of map data to a client device. Each map partition may contain road geometries, road names, road network topology, or any other information needed to provide turn-by-turn navigation or driving directions within the partition. Each map partition may be encoded with enough data to allow them to be stitched together to form a larger map. Map partitions may be fetched along each route to be used in the event of a network outage or other loss of network connectivity. For example, if a user deviates from the original route and a network outage occurs, the map data may be assembled and a routing algorithm may be applied to the map data in order to direct the user back to the original route.

Подробнее
07-06-2012 дата публикации

Read-ahead processing in networked client-server architecture

Номер: US20120144123A1
Принадлежит: International Business Machines Corp

Various embodiments for read-ahead processing in a networked client-server architecture by a processor device are provided. Read messages are grouped by a plurality of unique sequence identifications (IDs), where each of the sequence IDs corresponds to a specific read sequence consisting of all read and read-ahead requests related to a specific storage segment that is being read sequentially by a thread of execution in a client application. The storage system uses the sequence ID value to identify and filter read-ahead messages that are obsolete when received, as the client application has already moved on to read a different storage segment. Essentially, a message is discarded when its sequence ID value is less recent than the most recent value already seen by the storage system. The sequence IDs are used by the storage system to determine the corresponding read-ahead data to be loaded into a read-ahead cache.
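The discard rule reduces to a per-client high-water mark on sequence IDs; a sketch with illustrative names:

```python
class ReadAheadServer:
    def __init__(self):
        self.latest_seq = {}   # client id -> most recent sequence ID seen

    def handle(self, client, seq_id, offset):
        """Process a read/read-ahead message, dropping obsolete ones."""
        latest = self.latest_seq.get(client, -1)
        if seq_id < latest:
            return None        # obsolete: client already moved to a new segment
        self.latest_seq[client] = seq_id
        # a live sequence ID selects which read-ahead data to stage in cache
        return ("load-readahead", seq_id, offset)
```

Because read-ahead messages may arrive out of order over the network, comparing against the most recent ID is cheaper and safer than trying to cancel in-flight requests.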

Подробнее
14-06-2012 дата публикации

Cache Line Fetching and Fetch Ahead Control Using Post Modification Information

Номер: US20120151150A1
Принадлежит: LSI Corp

A method is provided for performing cache line fetching and/or cache fetch ahead in a processing system including at least one processor core and at least one data cache operatively coupled with the processor. The method includes the steps of: retrieving post modification information from the processor core and a memory address corresponding thereto; and the processing system performing, as a function of the post modification information and the memory address retrieved from the processor core, cache line fetching and/or cache fetch ahead control in the processing system.

Подробнее
30-08-2012 дата публикации

Opportunistic block transmission with time constraints

Номер: US20120221792A1
Принадлежит: Endeavors Technology Inc

A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.

Подробнее
06-09-2012 дата публикации

Systems and methods thereto for acceleration of web pages access using next page optimization, caching and pre-fetching techniques

Номер: US20120226766A1
Принадлежит: Limelight Networks Inc

A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests.

Подробнее
06-09-2012 дата публикации

Method, apparatus, and system for speculative execution event counter checkpointing and restoring

Номер: US20120227045A1
Принадлежит: Intel Corp

An apparatus, method, and system are described herein for providing programmable control of performance/event counters. An event counter is programmable to track different events, and to be checkpointed when speculative code regions are encountered. When a speculative code region is aborted, the event counter can be restored to its pre-speculation value. Moreover, the difference between the cumulative event count of committed and uncommitted execution and the count for committed execution alone represents the event contribution of uncommitted execution. From information on the uncommitted execution, hardware/software may be tuned to enhance future execution and avoid wasted execution cycles.
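The checkpoint/restore arithmetic can be sketched directly; the class below uses illustrative names and models a single counter:

```python
class SpeculativeEventCounter:
    def __init__(self):
        self.count = 0          # cumulative: committed + uncommitted events
        self.checkpoint = None

    def event(self, n=1):
        self.count += n

    def begin_speculation(self):
        self.checkpoint = self.count          # save the pre-speculation value

    def abort(self):
        speculative = self.count - self.checkpoint   # uncommitted contribution
        self.count = self.checkpoint                 # restore pre-speculation value
        self.checkpoint = None
        return speculative      # usable to tune away wasted execution

    def commit(self):
        self.checkpoint = None  # speculative events become committed
```

The returned difference is exactly the "cumulative minus committed" quantity the abstract describes.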

Подробнее
27-09-2012 дата публикации

Communication device, communication method, and computer- readable recording medium storing program

Номер: US20120246402A1
Автор: Shunsuke Akimoto
Принадлежит: NEC Corp

A communication device that reduces the processing time to install data from a disc storage medium onto multiple servers is provided. A protocol serializer 10 of a communication device 5 serializes read requests received from servers A1 to A2 for target data stored on a disc storage medium K into a processing order. A cache controller 11 determines whether the target data corresponding to the read requests are present in a cache memory 4, in the order of the serialized read requests, and, if present, receives the target data from the cache memory 4 via a memory controller 12. If not present, the cache controller 11 acquires the target data from the disc storage medium K via a DVD/CD controller 13. The protocol serializer 10 then sends the target data acquired by the cache controller 11 to the server that was the transmission source of the corresponding read request.

Подробнее
04-10-2012 дата публикации

Method for giving read commands and reading data, and controller and storage system using the same

Номер: US20120254522A1
Автор: Chih-Kang Yeh
Принадлежит: Phison Electronics Corp

A method for giving a read command to a flash memory chip to read data to be accessed by a host system is provided. The method includes receiving a host read command; determining whether the received host read command follows a last host read command; if yes, giving a cache read command to read data from the flash memory chip; and if no, giving a general read command and the cache read command to read data from the flash memory chip. Accordingly, the method can effectively reduce time needed for executing the host read commands by using the cache read command to combine the host read commands which access continuous physical addresses and pre-read data stored in a next physical address.
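The command-selection rule maps naturally to a small function; the command names and the convention that a standalone request issues both a general read and a cache read (pre-reading the next page into the chip's cache register) follow the abstract, while the function name and tuple format are illustrative:

```python
def issue_flash_commands(host_reads):
    """host_reads: ordered physical page numbers requested by the host."""
    cmds = []
    prev = None
    for page in host_reads:
        if prev is not None and page == prev + 1:
            # follows the last host read: the page was already pre-read into
            # the chip's cache register, so a cache read suffices
            cmds.append(("CACHE_READ", page))
        else:
            # new run: general read, then cache read to pre-read the next page
            cmds.append(("READ", page))
            cmds.append(("CACHE_READ", page))
        prev = page
    return cmds
```

For a long sequential run this replaces N general reads with one general read plus N cache reads, which is where the time saving comes from.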

Подробнее
25-10-2012 дата публикации

Efficient data prefetching in the presence of load hits

Номер: US20120272004A1
Принадлежит: Via Technologies Inc

A memory subsystem in a microprocessor includes a first-level cache, a second-level cache, and a prefetch cache configured to speculatively prefetch cache lines from a memory external to the microprocessor. The second-level cache and the prefetch cache are configured to allow the same cache line to be simultaneously present in both. If a request by the first-level cache for a cache line hits in both the second-level cache and in the prefetch cache, the prefetch cache invalidates its copy of the cache line and the second-level cache provides the cache line to the first-level cache.
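The dual-hit rule can be sketched with two dictionaries standing in for the second-level cache and the prefetch cache; names are illustrative:

```python
class MemorySubsystem:
    def __init__(self):
        self.l2 = {}   # second-level cache: line -> data
        self.pf = {}   # prefetch cache; may hold the same line as L2

    def prefetch(self, line, data):
        self.pf[line] = data    # speculative fill from external memory

    def l1_request(self, line):
        if line in self.l2 and line in self.pf:
            del self.pf[line]        # prefetch cache invalidates its copy...
            return self.l2[line]     # ...and the second-level cache provides it
        if line in self.pf:
            return self.pf[line]
        return self.l2.get(line)
```

Allowing the duplicate, then resolving it lazily on a hit, avoids a cross-cache invalidation probe on every prefetch fill.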

Подробнее
22-11-2012 дата публикации

Dynamic hierarchical memory cache awareness within a storage system

Номер: US20120297142A1
Принадлежит: International Business Machines Corp

Described is a system and computer program product for implementing dynamic hierarchical memory cache (HMC) awareness within a storage system. Specifically, when performing dynamic read operations within a storage system, a data module evaluates a data prefetch policy by determining whether data exists in a hierarchical memory cache and thereafter amending the data prefetch policy, if warranted. The system then uses the data prefetch policy to perform a read operation from the storage device so as to minimize future data retrievals from the storage device. Further, in a distributed storage environment that includes multiple storage nodes cooperating to satisfy data retrieval requests, dynamic hierarchical memory cache awareness can be implemented for every storage node without degrading the overall performance of the distributed storage environment.

Подробнее
06-12-2012 дата публикации

Pre-Caching Resources Based on A Cache Manifest

Номер: US20120311020A1
Принадлежит: Research in Motion Ltd

A method executed on a first electronic device for accessing an application server on a second electronic device includes receiving a cache manifest for an application, the cache manifest identifying a resource item that can be pre-cached on the first electronic device, pre-caching the resource item as a cached resource item in a cache memory of the first electronic device prior to launching an application client on the first electronic device. The method further includes, upon launching the application client on the first electronic device, retrieving data from the application server, wherein the data includes content and a reference to the resource item, obtaining, from the cache memory, the cached resource item that corresponds to the resource item, and displaying an output based upon the content and the cached resource item.

Подробнее
13-12-2012 дата публикации

Cache prefetching from non-uniform memories

Номер: US20120317364A1
Автор: Gabriel H. Loh
Принадлежит: Advanced Micro Devices Inc

An apparatus is disclosed for performing cache prefetching from non-uniform memories. The apparatus includes a processor configured to access multiple system memories with different respective performance characteristics. Each memory stores a respective subset of system memory data. The apparatus includes caching logic configured to determine a portion of the system memory to prefetch into the data cache. The caching logic determines the portion to prefetch based on one or more of the respective performance characteristics of the system memory that stores the portion of data.

Подробнее
20-12-2012 дата публикации

List based prefetch

Номер: US20120324142A1
Принадлежит: International Business Machines Corp

A list prefetch engine improves a performance of a parallel computing system. The list prefetch engine receives a current cache miss address. The list prefetch engine evaluates whether the current cache miss address is valid. If the current cache miss address is valid, the list prefetch engine compares the current cache miss address and a list address. A list address represents an address in a list. A list describes an arbitrary sequence of prior cache miss addresses. The prefetch engine prefetches data according to the list, if there is a match between the current cache miss address and the list address.
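List prefetching replays a recorded sequence of prior miss addresses; the sketch below uses illustrative names, a small matching window, and a `depth` parameter for how far ahead of the matched list address to prefetch:

```python
class ListPrefetchEngine:
    def __init__(self, miss_list, depth=2):
        self.list = miss_list   # recorded sequence of prior cache-miss addresses
        self.pos = 0            # current position in the list
        self.depth = depth      # how far ahead to prefetch along the list

    def on_miss(self, addr):
        if addr is None:
            return []           # current miss address is invalid: do nothing
        # compare the current miss against a small window of list addresses
        for i in range(self.pos, min(self.pos + 4, len(self.list))):
            if self.list[i] == addr:
                self.pos = i + 1
                # match: prefetch the next addresses the list predicts
                return self.list[self.pos:self.pos + self.depth]
        return []
```

This is useful when the miss sequence is arbitrary but repeatable across runs (e.g. iterative solvers), which stride prefetchers cannot capture.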

Подробнее
14-03-2013 дата публикации

Caching for a file system

Номер: US20130067168A1
Принадлежит: Microsoft Corp

Aspects of the subject matter described herein relate to caching data for a file system. In aspects, in response to requests from applications and storage and cache conditions, cache components may adjust throughput of writes from cache to the storage, adjust priority of I/O requests in a disk queue, adjust cache available for dirty data, and/or throttle writes from the applications.

Подробнее
14-03-2013 дата публикации

Information processing method, information processing system, information processing apparatus, and program

Номер: US20130067177A1
Принадлежит: Sony Corp

An information processing method includes: grouping temporally consecutive data into a plurality of groups based on a reference defined in advance and storing the grouped data; reading, in response to an access request from an external apparatus, target data that is the target of the request from a first group including the target data and outputting the read target data to the external apparatus; and reading, in response to the reading of the target data, at least part of the data from a second group different from the first group as read-ahead target data.

Подробнее
18-04-2013 дата публикации

STORAGE DEVICE AND REBUILD PROCESS METHOD FOR STORAGE DEVICE

Номер: US20130097375A1
Автор: Iida Takashi
Принадлежит: NEC Corporation

A storage device includes a plurality of magnetic disk devices each having a write cache, a processor unit that redundantly stores data, a rebuild execution control unit that performs a rebuild process, a write cache control unit that, at the time of the rebuild process, enables a write cache of a storage device that stores rebuilt data, and a rebuild progress management unit that is configured using a nonvolatile memory and manages progress information of the rebuild process. In the case where power discontinuity occurs during the rebuild process and power is then restored, the rebuild execution control unit calculates an address that is before the address of the last written rebuilt data by an amount corresponding to the capacity of the write cache, based on the progress information of the rebuild process managed by the progress management unit, and resumes the rebuild process from that calculated address. 1. A storage device comprising:a storage unit including a plurality of memory devices each having a write cache;a first control unit that redundantly stores data in the plurality of memory devices;a second control unit that performs a rebuild process of rebuilding the data;a write cache control unit that, at a time of the rebuild process, enables a write cache of a memory device that stores rebuilt data; anda progress management unit that is configured using a nonvolatile memory and manages, as progress information of the rebuild process, an address of rebuilt data for which rebuilding is completed and which is written in the write cache,wherein, in a case where power discontinuity is caused during the rebuild process and then power is restored, the second control unit calculates an address that is before an address of last written rebuilt data by an amount corresponding to a capacity of the write cache based on the progress information of the rebuild process managed by the progress management unit and resumes the rebuild process from that calculated address.2.
...
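The resume-address calculation is simple arithmetic: back off from the last written address by the write-cache capacity, since that tail may never have reached the medium. A sketch (function name and block alignment are illustrative):

```python
def rebuild_resume_address(last_written_addr, write_cache_bytes, block=512):
    """Address from which to resume rebuilding after power is restored."""
    # the last `write_cache_bytes` of rebuilt data may still have been
    # sitting in the volatile write cache when power was lost
    resume = last_written_addr - write_cache_bytes
    return max(0, (resume // block) * block)   # align down; never before start
```

Re-rebuilding one cache-capacity of data is the price paid for keeping the write cache enabled (and the rebuild fast) during the process.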

Подробнее
18-04-2013 дата публикации

Memory-based apparatus and method

Номер: US20130097387A1
Принадлежит: Leland Stanford Junior University

Aspects of various embodiments are directed to memory circuits, such as cache memory circuits. In accordance with one or more embodiments, cache-access to data blocks in memory is controlled as follows. In response to a cache miss for a data block having an associated address on a memory access path, data is fetched for storage in the cache (and serving the request), while one or more additional lookups are executed to identify candidate locations to store data. An existing set of data is moved from a target location in the cache to one of the candidate locations, and the address of the one of the candidate locations is associated with the existing set of data. Data in this candidate location may, for example, thus be evicted. The fetched data is stored in the target location and the address of the target location is associated with the fetched data.
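The relocate-on-miss behaviour resembles cuckoo-style placement; the sketch below is a minimal model with two hash locations per address (the hash functions, slot count, and class name are illustrative placeholders):

```python
class RelocatingCache:
    """On a miss, the existing set in the target slot is moved to a candidate
    location found by an additional lookup, instead of being evicted."""
    def __init__(self, nslots=8):
        self.slots = [None] * nslots   # each: (address, data) or None

    def _locations(self, addr):
        n = len(self.slots)
        return [addr % n, (addr * 7 + 3) % n]   # target, then candidate

    def miss_fill(self, addr, data):
        target = self._locations(addr)[0]
        victim = self.slots[target]
        if victim is not None and victim[0] != addr:
            candidate = self._locations(victim[0])[1]  # extra lookup for victim
            self.slots[candidate] = victim   # move the existing set; its address
                                             # now maps to the candidate location
        self.slots[target] = (addr, data)    # fetched data goes to the target

    def lookup(self, addr):
        for i in self._locations(addr):
            entry = self.slots[i]
            if entry is not None and entry[0] == addr:
                return entry[1]
        return None
```

Whatever previously occupied the candidate slot is the data that actually gets evicted, matching the abstract's note that the candidate's contents "may, for example, thus be evicted."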

Подробнее
18-04-2013 дата публикации

Data prefetching method for distributed hash table dht storage system, node, and system

Номер: US20130097402A1
Автор: Deping Yang, Dong Bao
Принадлежит: Huawei Technologies Co Ltd

Embodiments of the present disclosure provide a data prefetching method, a node, and a system. The method includes: a first storage node receives a read request sent by a client, determines a to-be-prefetched data block and a second storage node where the to-be-prefetched data block resides according to a read data block and a set to-be-prefetched data block threshold, and sends a prefetching request to the second storage node, the prefetching request includes identification information of the to-be-prefetched data block, and the identification information is used to identify the to-be-prefetched data block; and the second storage node reads the to-be-prefetched data block from a disk according to the prefetching request, and stores the to-be-prefetched data block in a local buffer, so that the client reads the to-be-prefetched data block from the local buffer of the second storage node.

Подробнее
25-04-2013 дата публикации

METHOD AND APPARATUS FOR IMPLEMENTING PROTECTION OF REDUNDANT ARRAY OF INDEPENDENT DISKS IN FILE SYSTEM

Номер: US20130103902A1
Автор: WEI Mingchang, ZHANG Wei
Принадлежит: Huawei Technologies Co., Ltd.

Embodiments of the present invention disclose a method and an apparatus for implementing protection of RAID in a file system, and are applied in the field of communications technologies. In the embodiments of the present invention, after receiving a file operation request, the file system determines the type of the file to be operated on as requested by the file operation request, and performs file operations on a hard disk drive of the file system directly according to a file operation method corresponding to the determined file type, that is, a RAID data protection method. Therefore, corresponding file operations may be performed with a proper operation method according to each different file type, and data of important file types is primarily protected, thereby improving the reliability of data storage. 1. In a file system of one or more computers, a method for implementing protection of a redundant array of independent disks (RAID), comprising:receiving a file operation request;determining a type of a file to be operated as requested by the file operation request, wherein the type of the file comprises at least one of file metadata and file data;selecting a file operation method according to the determined file type, wherein the file operation method is a RAID data protection method; andperforming file operations on one or more hard disk drives according to the selected file operation method.2. The method according to claim 1, wherein the file operation method selected according to the determined file type is a multi-mirroring redundant algorithm if the determined file type is the file metadata, and the file metadata is backed up with multiple copies and the multiple copies are stored in at least two hard disks according to the multi-mirroring redundant algorithm.35. The method according to claim 1, wherein the file operation method selected according to the determined file type is a data protection method of RAID if the type of the file is the file ...

Подробнее
16-05-2013 дата публикации

PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP

Номер: US20130124803A1

A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage. 1-19. (canceled) 20. A method, comprising:maintaining a point-in-time copy relationship associating tracks in a source storage with tracks in a target storage, wherein the target storage stores the tracks in the source storage as of a point-in-time;receiving a write request including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship, wherein the point-in-time source track was in the source storage at the point-in-time the copy relationship was established;storing the updated source track in a first cache device;sending a prefetch request to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device;generating a read request to read the source track in the source storage following the sending of the prefetch request; andcopying the read source track to a corresponding target track in the target storage.21.
The method of claim 20 , further comprising:destaging the updated source track to the source track in the source volume in response to ...

Подробнее
23-05-2013 дата публикации

Optimizing distributed data analytics for shared storage

Номер: US20130132967A1
Принадлежит: NetApp Inc

Methods, systems, and computer executable instructions for performing distributed data analytics are provided. In one exemplary embodiment, a method of performing a distributed data analytics job includes collecting application-specific information in a processing node assigned to perform a task to identify data necessary to perform the task. The method also includes requesting a chunk of the necessary data from a storage server based on location information indicating one or more locations of the data chunk and prioritizing the request relative to other data requests associated with the job. The method also includes receiving the data chunk from the storage server in response to the request and storing the data chunk in a memory cache of the processing node which uses a same file system as the storage server.

Подробнее
06-06-2013 дата публикации

Information Processing Apparatus and Driver

Номер: US20130145094A1
Автор: Kurashige Takehiko
Принадлежит:

According to one embodiment, an information processing apparatus includes a memory that comprises a buffer area, a first external storage, a second external storage and a driver. The driver is configured to control the first and second external storages in units of predetermined blocks. The driver comprises a cache reservation module configured to (i) reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage and (ii) manage the cache area. The cache area operates as a primary cache for the second external storage and a cache for the first external storage. Part or the entire first external storage is used as a secondary cache for the second external storage. The buffer area is used to transfer data between the driver and a host system that requests data reads/writes. 1. An information processing apparatus comprising:a memory comprising a buffer area;a first external storage separate from the memory;a second external storage separate from the memory; anda driver configured to control the first and second external storages in units of predetermined blocks,wherein the driver comprises a cache reservation module configured to reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage, and the cache reservation module is configured to manage the cache area in units of the predetermined blocks, using the cache area, secured on the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and using part or the entire first external storage as a secondary cache for the second external storage, the buffer area being reserved in order to transfer data between the driver and a host system that requests for data writing and data reading.2. 
A driver stored in a ...

Подробнее
13-06-2013 дата публикации

Method and apparatus for caching

Номер: US20130150015A1
Принадлежит: Telefonaktiebolaget LM Ericsson AB

A method and caching server for enabling caching of a portion of a media file in a User Equipment (UE) in a mobile telecommunications network. The caching server selects the media file and determines a size of the portion to be cached in the UE. The size may be determined depending on radio network conditions for the UE and/or characteristics of the media file. The caching server sends an instruction to the UE to cache the determined size of the portion of the media file in the UE.

More details
Publication date: 13-06-2013

Information Processing Apparatus and Driver

Number: US20130151775A1
Author: Kurashige Takehiko
Assignee:

According to one embodiment, an information processing apparatus includes a memory comprising a buffer area, a first external storage, a second external storage, and a driver. The driver controls the first and second external storages and comprises a cache reservation module configured to reserve a cache area in the memory. The cache area is logically between the buffer area and the first external storage and between the buffer area and the second external storage. The driver is configured to use the cache area, secured on the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and to use part or all of the first external storage as a secondary cache for the second external storage. The buffer area is reserved in order to transfer data between the driver and a host system that requests data writing and data reading. 1. An information processing apparatus comprising: a memory comprising a buffer area; a first external storage separate from the memory; a second external storage separate from the memory; and a driver configured to control the first and second external storages, wherein the driver comprises a cache reservation module configured to reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage, the driver being configured to use the cache area, secured on the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and uses part or the entire first external storage as a secondary cache for the second external storage, the buffer area being reserved in order to transfer data between the driver and a host system that requests for data writing and data reading. 2. A driver stored in a non-transitory computer readable medium which operates in an information processing apparatus comprising a memory ...

More details
Publication date: 27-06-2013

DESTAGING OF WRITE AHEAD DATA SET TRACKS

Number: US20130166837A1

Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). 1. A method for efficient destaging of a write ahead data set (WADS) track in a volume by a processor device in a computing storage environment, comprising: preventing destage of a plurality of tracks in cache selected for writing to a storage device; and for a track N in a stride Z of the selected plurality of tracks: if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). 2. The method of claim 1, further including prestaging data to the plurality of tracks such that the stride Z includes complete tracks, enabling subsequent destage of complete WADS tracks. 3. The method of claim 1, further including incrementing the at least one temporal bit. 4. The method of claim 1, further including taking a track access to the WADS track and completing a write operation on the WADS track. 5. The method of claim 1, further including ending a track access to the WADS track upon a completion of a write operation and adding the WADS track to a wise order writing (WOW) list. 6. The method of claim 5, further including checking the WOW list and examining a left neighbor and a right ...
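A minimal sketch of the temporal-bit bookkeeping the abstract describes, assuming a cache modeled as a dict keyed by (stride, track) and a default X of 2; both are illustrative assumptions.

```python
def handle_wads_track(cache, stride_z, track_n, first_wads_in_stride,
                      sequential, x=2):
    """Clear temporal bits so older, complete tracks become eligible
    for destage. `cache` maps (stride, track) -> temporal bit count;
    the layout and the default X are assumptions for illustration."""
    if first_wads_in_stride:
        # First WADS track of stride Z: clear bits for stride Z-2,
        # whose tracks are now complete and safe to destage.
        for key in cache:
            if key[0] == stride_z - 2:
                cache[key] = 0
    if sequential:
        # Sequential track: clear the bit of track N-X in this stride.
        lagging = (stride_z, track_n - x)
        if lagging in cache:
            cache[lagging] = 0
```

Tracks with a zero temporal bit would then be picked up by a normal destage scan.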

More details
Publication date: 11-07-2013

Streaming Translation in Display Pipe

Number: US20130179638A1
Assignee: Apple Inc

In an embodiment, a display pipe includes one or more translation units corresponding to images that the display pipe is reading for display. Each translation unit may be configured to prefetch translations ahead of the image data fetches, which may prevent translation misses in the display pipe (at least in most cases). The translation units may maintain translations in first-in, first-out (FIFO) fashion, and the display pipe fetch hardware may inform the translation unit when a given translation or translations are no longer needed. The translation unit may invalidate the identified translations and prefetch additional translations for virtual pages that are contiguous with the most recently prefetched virtual page.
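The FIFO behavior described above can be sketched as follows; the page-table callback and the FIFO depth are assumptions.

```python
from collections import deque

class TranslationFIFO:
    """Sketch of a display-pipe translation FIFO: translations are
    prefetched in order, invalidated when the fetch hardware is done
    with them, and refilled with the next contiguous virtual pages.
    The translate() page-table stand-in and depth are assumptions."""
    def __init__(self, translate, first_page, depth=4):
        self.translate = translate   # virtual page -> physical page
        self.depth = depth
        self.fifo = deque()
        self.next_page = first_page
        self._fill()

    def _fill(self):
        # Prefetch translations for contiguous virtual pages.
        while len(self.fifo) < self.depth:
            self.fifo.append((self.next_page,
                              self.translate(self.next_page)))
            self.next_page += 1

    def done_with(self, page):
        # Fetch hardware reports this translation is no longer needed:
        # invalidate it (and anything older), then prefetch more.
        while self.fifo and self.fifo[0][0] <= page:
            self.fifo.popleft()
        self._fill()
```

Because pages are consumed in order, a simple deque suffices to model the FIFO discipline.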

More details
Publication date: 18-07-2013

Techniques for improving throughput and performance of a distributed interconnect peripheral bus

Number: US20130185472A1
Assignee: Wilocity Ltd

A method for accelerating execution of read operations in a distributed interconnect peripheral bus is provided. The method comprises generating a first number of speculative read requests addressed to an address space related to a last read request served on the bus; sending the speculative read requests to a root component connected to the bus; receiving a second number of read completion messages from the root component of the bus; and sending a read completion message, out of the received read completion messages, to the endpoint component only if the read completion message is respective of a real read request or a valid speculative read request out of the speculative read requests, wherein a real read request is issued by the endpoint component.
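A sketch of the two halves of the scheme: generating speculative reads contiguous with the last served request, and filtering completions before they reach the endpoint. The addresses and the 64-byte stride are assumptions.

```python
def speculative_reads(last_served_addr, count, stride=64):
    """Generate `count` speculative read addresses contiguous with the
    last read request served on the bus (stride is an assumption)."""
    return [last_served_addr + stride * i for i in range(1, count + 1)]

def forward_completion(addr, real_requests, valid_speculative):
    """A completion is sent on to the endpoint only if it answers a
    real request or a still-valid speculative request."""
    return addr in real_requests or addr in valid_speculative
```

Completions for speculative requests that were invalidated in the meantime would simply be dropped.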

More details
Publication date: 18-07-2013

DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE

Number: US20130185502A1

A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache. 1. A method for managing data in a cache system comprising a first cache, a second cache, and a storage system, comprising: determining a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors; determining whether the second cache includes a stale version of the track being demoted from the first cache; in response to determining that the second cache includes the stale version of the track, determining whether the stale version of the track includes track sectors not included in the track being demoted from the first cache; combining the sectors from the track demoted from the first cache with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track; and writing the new version of the track to the second cache. 2. The method of claim 1, wherein the operations further comprise: invalidating the stale version of the track in the second cache in response to writing the new version of the track to the second cache. 3. The method of claim 1, wherein the operations further comprise: determining ...
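The sector-combining step can be sketched with tracks modeled as sector-index-to-bytes maps, a simplification of the on-disk track layout:

```python
def merge_demoted_track(demoted, stale):
    """Combine the sectors of the track demoted from the first cache
    with sectors present only in the stale second-cache copy.

    Tracks are modeled as {sector_index: bytes}; this dict model is a
    simplification for illustration."""
    new_version = dict(stale)    # start from the stale sectors...
    new_version.update(demoted)  # ...and let demoted sectors win
    return new_version
```

Writing `new_version` to the second cache and invalidating the stale copy completes the demotion.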

More details
Publication date: 18-07-2013

Use of Loop and Addressing Mode Instruction Set Semantics to Direct Hardware Prefetching

Number: US20130185516A1
Assignee: Qualcomm Inc

Systems and methods for prefetching cache lines into a cache coupled to a processor. A hardware prefetcher is configured to recognize a memory access instruction as an auto-increment-address (AIA) memory access instruction, infer a stride value from an increment field of the AIA instruction, and prefetch lines into the cache based on the stride value. Additionally or alternatively, the hardware prefetcher is configured to recognize that prefetched cache lines are part of a hardware loop, determine a maximum loop count of the hardware loop, and a remaining loop count as a difference between the maximum loop count and a number of loop iterations that have been completed, select a number of cache lines to prefetch, and truncate an actual number of cache lines to prefetch to be less than or equal to the remaining loop count, when the remaining loop count is less than the selected number of cache lines.
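The loop-count truncation can be sketched as below, assuming a list of address offsets stands in for actual prefetch requests and `preferred=4` is an arbitrary default.

```python
def prefetch_offsets(stride, max_loop_count, completed_iterations,
                     preferred=4):
    """Offsets (from the current access) of cache lines to prefetch.

    The preferred prefetch distance is truncated to the remaining loop
    count so nothing past the loop's last iteration is fetched. The
    stride would be inferred from the AIA instruction's increment
    field; `preferred` is an illustrative default."""
    remaining = max_loop_count - completed_iterations
    n = min(preferred, max(remaining, 0))
    return [stride * i for i in range(1, n + 1)]
```

With two iterations left, only two lines ahead are prefetched regardless of the preferred distance.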

More details
Publication date: 18-07-2013

TECHNIQUES FOR IMPROVING THROUGHPUT AND PERFORMANCE OF A DISTRIBUTED INTERCONNECT PERIPHERAL BUS CONNECTED TO A HOST CONTROLLER

Number: US20130185517A1
Assignee: WILOCITY, LTD.

A method for accelerating execution of read operations in a distributed interconnect peripheral bus, the distributed interconnect peripheral bus is coupled to a host controller being connected to a universal serial bus (USB) device. The method comprises synchronizing on at least one ring assigned to the USB device; pre-fetching transfer request blocks (TRBs) maintained in the at least one ring, wherein the TRBs are saved in a host memory; saving the pre-fetched TRBs in an internal cache memory; upon reception of a TRB read request from the host controller, serving the request by transferring the requested TRB from the internal cache memory to the host controller; and sending a TRB read completion message to the host controller. 1. A method for accelerating execution of read operations in a distributed interconnect peripheral bus , the distributed interconnect peripheral bus is coupled to a host controller being connected to a universal serial bus (USB) device , comprising:synchronizing on at least one ring assigned to the USB device;pre-fetching transfer request blocks (TRBs) maintained in the at least one ring, wherein the TRBs are saved in a host memory;saving the pre-fetched TRBs in an internal cache memory;upon reception of a TRB read request from the host controller, serving the request by transferring the requested TRB from the internal cache memory to the host controller; andsending a TRB read completion message to the host controller.2. The method of claim 1 , further comprising:checking if the requested TRB resides in the internal cache memory; andsending a dummy TRB when the requested TRB does not reside in internal cache memory.3. The method of claim 2 , wherein the dummy TRB includes at least one No-Op command causing the host controller to retrieve TRBs from a different ring.4. The method of claim 3 , wherein synchronizing on the at least one ring further comprising:monitoring transactions flow from and to the host controller;identifying a first TRB ...
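The pre-fetch-and-dummy-TRB behavior can be sketched as below; the dict-based host memory and TRB encoding are assumptions, not the xHCI wire format.

```python
class TRBCache:
    """Sketch of the TRB pre-fetch cache: ring entries are pre-fetched
    from host memory into a local map; a miss returns a dummy No-Op
    TRB so the host controller moves on to a different ring."""
    NOOP = {'type': 'No-Op'}

    def __init__(self, host_memory, ring_addrs):
        # Pre-fetch every TRB of the ring from host memory.
        self.cache = {a: host_memory[a] for a in ring_addrs}

    def serve(self, addr):
        # Hit: answer from the internal cache. Miss: dummy No-Op TRB.
        return self.cache.get(addr, self.NOOP)
```

Serving from the local cache avoids a round trip over the distributed bus for each TRB read.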

More details
Publication date: 18-07-2013

Determining data contents to be loaded into a read-ahead cache in a storage system

Number: US20130185518A1
Assignee: International Business Machines Corp

Read messages are issued by a client for data stored in a storage system of the networked client-server architecture. A client agent mediates between the client and the storage system. Each sequence of read requests generated by a single thread of execution in the client to read a specific data segment in the storage is defined as a client read session. Each read request sent from the client agent to the storage system includes positions and size for reading. A read-ahead cache is maintained for each client read session. The read-ahead cache is partitioned into two buffers. Data is loaded into the logical buffers according to the changes of the positions in the read requests of the client read session and loading of new data into the buffers is triggered by the read requests positions exceeding a position threshold in the data covered by the second logical buffer.
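The two-buffer threshold logic can be sketched as follows, with the buffer size, threshold fraction, and fetch callback as assumptions:

```python
class ReadAheadCache:
    """Two logical buffers over a sequential stream: when the read
    position passes a threshold inside the second buffer, the window
    slides forward and fresh data is loaded behind it. Buffer size,
    threshold fraction, and the fetch callback are assumptions."""
    def __init__(self, fetch, buf_size=1024, threshold=0.5):
        self.fetch = fetch             # fetch(pos, size) -> bytes
        self.buf_size = buf_size
        self.threshold = threshold
        self.base = 0                  # start of the first buffer
        self.loads = [0, buf_size]     # positions loaded so far

    def read(self, pos, size):
        second_start = self.base + self.buf_size
        trigger = second_start + self.threshold * self.buf_size
        if pos >= trigger:
            # Slide the window: the second buffer becomes the first,
            # and a new second buffer is loaded after it.
            self.base = second_start
            self.loads.append(self.base + self.buf_size)
        return self.fetch(pos, size)
```

Tracking `loads` makes the trigger behavior visible: only reads past the threshold cause a new load.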

More details
Publication date: 25-07-2013

CALCULATING READ OPERATIONS AND FILTERING REDUNDANT READ REQUESTS IN A STORAGE SYSTEM

Number: US20130191602A1

Read messages are issued by a client for data stored in a storage system of the networked client-server architecture. A client agent mediates between the client and the storage system. Each sequence of read requests generated by a single thread of execution in the client to read a specific data segment in the storage is defined as a client read session. The client agent maintains a read-ahead cache for each client read session and generates read-ahead requests to load data into the read-ahead cache. Each read request and read-ahead request sent from the client agent to the storage system includes positions and a size for reading and a sequence id value. The storage system filters and modifies incoming read request and read-ahead requests based on sequence ID values, positions and sizes of the incoming read request and read-ahead requests. 1. A method for calculating a read operation and filtering redundant read requests in a computing environment by a processor device , comprising:issuing read requests by a client for data stored in a storage system;mediating between the client and the storage system by a client agent;defining, as a client read session, each sequence of the read requests generated by a single thread of execution in the client to read a specific data segment in the storage system;maintaining a read-ahead cache for each client read session by the client agent;generating read-ahead requests by the client agent to load data into the read-ahead cache, wherein each of the read requests and the read-ahead requests sent from the client agent to the storage system includes a plurality of positions and sizes for reading and a sequence identification (ID) value; andfiltering and modifying incoming read requests and the read-ahead requests by the storage system based on sequence ID values, the plurality of positions and the sizes of the incoming read requests and read-ahead requests.2. 
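A sketch of the storage-side filter, assuming a session record with a current sequence ID and a high-water mark of data already loaded into the read-ahead cache (the field names are assumptions):

```python
def filter_request(req, session):
    """Drop requests with a stale sequence ID, drop fully redundant
    requests, and trim the span already covered by the read-ahead
    cache. `req` and `session` are dicts with assumed field names."""
    if req['seq'] < session['current_seq']:
        return None                        # superseded session: drop
    session['current_seq'] = req['seq']
    start, end = req['pos'], req['pos'] + req['size']
    cached_end = session['cached_upto']
    if end <= cached_end:
        return None                        # fully redundant: drop
    if start < cached_end:
        start = cached_end                 # trim the cached prefix
    session['cached_upto'] = end
    return {'pos': start, 'size': end - start, 'seq': req['seq']}
```

The returned request, if any, is what the storage system actually services.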
The method of claim 1 , further including allowing for the client and the ...

More details
Publication date: 25-07-2013

Method And Apparatus For Accessing Physical Memory From A CPU Or Processing Element In A High Performance Manner

Number: US20130191603A1
Assignee:

A method and apparatus is described herein for accessing a physical memory location referenced by a physical address with a processor. The processor fetches/receives instructions with references to virtual memory addresses and/or references to physical addresses. Translation logic translates the virtual memory addresses to physical addresses and provides the physical addresses to a common interface. Physical addressing logic decodes references to physical addresses and provides the physical addresses to a common interface based on a memory type stored by the physical addressing logic. 1. A processor comprising: an execution unit to execute one or more operations; a memory type register to store one of a first value and a second value, the first value associated with a cacheable memory type and the second value associated with an un-cacheable memory type; and translation logic adapted to: translate a linear address referenced by a first operation to a first physical address that references a first memory location that stores a first element in a system memory; responsive to the memory type register storing the first value, provide a representation of the first physical address to a cache interface to determine whether the first element is present in a cache, and if present in the cache to read the first element from the cache, and if not present in the cache to cause data fetch logic to read the first element from the system memory; and responsive to the memory type register storing the second value, provide the first physical address to the data fetch logic that is to read the first element from the system memory. 2. The processor of claim 1, further comprising a control register to store one of the first value and the second value; and receive a second operation that references a second physical address; responsive to the control register storing the first value, provide a representation of the second physical address, without address translation of the ...
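The memory-type dispatch can be sketched as below; the dict-based cache and system memory are simplifications of the hardware structures.

```python
def load(phys_addr, cacheable, cache, system_memory):
    """Sketch of the memory-type check: a cacheable access consults
    the cache and fills it on a miss; an un-cacheable access always
    reads system memory directly. Dict-based stores are assumptions."""
    if cacheable:
        if phys_addr not in cache:
            cache[phys_addr] = system_memory[phys_addr]  # fill on miss
        return cache[phys_addr]
    return system_memory[phys_addr]
```

The `cacheable` flag plays the role of the memory type register's first/second value.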

More details
Publication date: 01-08-2013

Computer system and storage control method

Number: US20130198457A1
Assignee: HITACHI LTD

The entirety or a part of free space of a second storage device included in a host computer is used as a cache memory region (external cache) outside of a storage apparatus. If Input/Output (I/O) in the host computer is Write, a Write request is transmitted from the host computer to a storage apparatus, the storage apparatus writes data associated with the Write request into a main cache that is a cache memory region included in this storage apparatus, and the storage apparatus writes the data in the main cache into a first storage device included in the storage apparatus. The storage apparatus writes the data in the main cache into an external cache included in the host computer. If the I/O in the host computer is Read, the host computer determines whether or not Read data as target data of the Read exists in the external cache. If a result of the determination is positive, the host computer reads the Read data from the external cache.
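The read/write split can be sketched with dict-based stores standing in for the main cache, the first storage device, and the host-side external cache:

```python
class HostExternalCache:
    """Sketch of the scheme: writes go through the storage apparatus,
    which mirrors them into the host-side external cache; reads are
    served from the external cache on a hit. The dict stores are
    simplifications of the real devices."""
    def __init__(self):
        self.main_cache = {}   # cache inside the storage apparatus
        self.disk = {}         # first storage device
        self.external = {}     # free host memory used as a cache

    def write(self, block, data):
        self.main_cache[block] = data
        self.disk[block] = data       # destage to the storage device
        self.external[block] = data   # copy out to the host-side cache

    def read(self, block):
        if block in self.external:    # host-side hit: no apparatus I/O
            return self.external[block]
        return self.disk[block]       # miss: go to the apparatus
```

A hit in the external cache avoids the round trip to the storage apparatus entirely.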

More details
Publication date: 08-08-2013

VIRTUAL TAPE DEVICE AND CONTROL METHOD OF VIRTUAL TAPE DEVICE

Number: US20130205082A1
Assignee: FUJITSU LIMITED

A virtual tape device includes a storage unit, a cache determining unit, a selector, and a cache controller. The storage unit records logical volume information associated with an identifier of a logical volume, an updated time of the logical volume, information indicating whether the logical volume is allocated to a cache, an identifier of a physical volume storing data of the logical volume, and information indicating whether the physical volume is mounted in a physical tape drive. The cache determining unit determines, based on the logical volume information, whether the logical volume exists on the cache, when a request to store the logical volume on the cache is received and the cache does not have an available capacity. The selector selects the logical volume based on the determined result as an off-cache target logical volume. The selected logical volume is off-cached by the cache controller. 1. A virtual tape device comprising: a storage unit that records logical volume information associated with an identifier of a logical volume, an updated time of the logical volume, information indicating whether the logical volume is allocated to a cache or not, an identifier of a physical volume storing data of the logical volume, and information indicating whether the physical volume is mounted in a physical tape drive or not; a cache determining unit that determines, based on the logical volume information, whether the logical volume exists on the cache or not, when a request to store the logical volume on the cache is received and the cache does not have an available capacity, the logical volume being updated and stored in the physical volume mounted in the physical tape drive and being allocated to the cache; a selector that selects the logical volume based on the result of the determination made by the cache determining unit as an off-cache target logical volume to be off-cached from the cache; and a cache controller that off-caches the off-cache target logical ...
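The selector's choice can be sketched as below; the record fields and the least-recently-updated fallback are assumptions, not the patent's exact criteria.

```python
def select_off_cache_target(volumes):
    """Pick a logical volume to evict from the cache.

    Prefer a volume whose data is already safely on a physical volume
    that is still mounted (so no tape mount is needed to re-read it);
    otherwise fall back to any cached volume. Break ties by least
    recent update. Field names are assumptions."""
    candidates = [v for v in volumes
                  if v['on_cache'] and v['saved_to_pv'] and v['pv_mounted']]
    if not candidates:
        candidates = [v for v in volumes if v['on_cache']]
    return min(candidates, key=lambda v: v['updated'])['id']
```

The returned volume id would then be handed to the cache controller for off-caching.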

More details
Publication date: 08-08-2013

VIRTUAL TAPE DEVICE AND CONTROL METHOD OF VIRTUAL TAPE DEVICE

Number: US20130205083A1
Assignee: FUJITSU LIMITED

A virtual tape device includes a memory to record logical volume information that includes an identifier of a logical volume, an identifier of a physical volume that stores data of the logical volume, and information that indicates whether the data of the logical volume is cached in a cache unit, in association with each other. The device also includes a determining unit that, when a copy command to copy data of the logical volume stored in a first physical volume to a second physical volume is received, determines whether a logical volume cached in the cache unit exists among the logical volumes, and a storage control unit that, when it is determined that the logical volume cached in the cache unit exists among the logical volumes, stores the data of the logical volume cached in the cache unit to the second physical volume without reference to an order indicated in the copy command. 1. A virtual tape device comprising: a memory to record logical volume information that is associated with an identifier of a logical volume, an identifier of a physical volume that stores data of the logical volume, and information that indicates whether the data of the logical volume is cached in a cache unit or not; a determining unit that, when a copy command to copy data of the logical volume stored in a first physical volume to a second physical volume is received, determines whether a logical volume cached in the cache unit exists among the logical volumes indicated in the copy command or not, based on the logical volume information; and a storage control unit that, when the determining unit determines that the logical volume cached in the cache unit exists among the logical volumes at receiving the copy command, stores the data of the logical volume cached in the cache unit among the logical volumes at receiving the copy command to the second physical volume, without reference to an order indicated in the copy command. 2.
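The reordering can be sketched in a few lines; the cached-set membership test stands in for the logical volume information lookup.

```python
def copy_order(copy_list, cached):
    """Reorder a copy command's volume list: volumes already in the
    cache unit are copied first, without reference to the command's
    order, so no physical mount is needed for them; the remaining
    volumes keep the command's original order."""
    hit = [lv for lv in copy_list if lv in cached]    # cache hits first
    miss = [lv for lv in copy_list if lv not in cached]
    return hit + miss
```

Copying cached volumes first lets the device defer (and batch) the tape mounts the misses require.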
The virtual tape device according to claim 1 , further comprising:a mounting control ...

More details
Publication date: 08-08-2013

PROCESSING READ REQUESTS BY A STORAGE SYSTEM

Number: US20130205095A1

Read messages are issued by a client for data stored in a storage system. A client agent mediates between the client and the storage system. Each sequence of read requests generated by a single thread of execution in the client to read a specific data segment in the storage is defined as a client read session. Each read request sent from the client agent to the storage system includes a position and a size for reading. The read-ahead cache and a current sequence ID value for each client read session are maintained. For each incoming read request, the storage system determines whether to further process the read request based on a sequence ID value of the read request, and the source from which to obtain data for the read request, and which of the data to load into the read-ahead cache according to data positions of the read request. 1. A method for processing reads requests by a storage system in a computing environment by a processor device , comprising:issuing the read requests by a client for data stored in the storage system;mediating between the client and the storage system by a client agent;defining, as a client read session, each sequence of the read requests generated by a single thread of execution in the client to read a specific data segment in the storage system;including in each of the read requests sent from the client agent to the storage system a position and size for reading;maintaining a read-ahead cache and a current sequence identification (ID) value for each client read session by the storage system;determining by the storage system whether to further process each of the incoming read requests based on a sequence ID value of the read request; anddetermining by the storage system a source from which to obtain data for a read request and which of the data to load into the read-ahead cache according to data position of the read request.2. 
The method of claim 1 , further including allowing for the client and the storage system to communicate in a ...

More details
Publication date: 22-08-2013

Systems and methods thereto for acceleration of web pages access using next page optimization, caching and pre-fetching techniques

Number: US20130219007A1
Assignee: Limelight Networks Inc

A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests.

More details
Publication date: 29-08-2013

Data Migration between Memory Locations

Number: US20130227218A1
Assignee: Hewlett Packard Development Co LP

Migrating data may include determining to copy a first data block in a first memory location to a second memory location and determining to copy a second data block in the first memory location to the second memory location based on a migration policy.

More details
Publication date: 19-09-2013

Adaptive prestaging in a storage controller

Number: US20130246691A1
Assignee: International Business Machines Corp

In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount, may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Other features and aspects may be realized, depending upon the particular application.
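One way to sketch such a modification, assuming (purely for illustration) a linear scaling of both values with drive speed relative to a reference speed; the patent does not commit to this formula.

```python
def adapt_prestage(base_trigger, base_amount, drive_mb_s,
                   reference_mb_s=100):
    """Scale the prestage trigger and prestage amount with drive
    speed. The linear scaling and the reference speed are assumptions
    made for this sketch, not the patented policy."""
    scale = drive_mb_s / reference_mb_s
    trigger = max(1, round(base_trigger * scale))
    amount = max(1, round(base_amount * scale))
    return trigger, amount
```

A faster drive here gets a larger trigger and prestage amount; a slower drive gets smaller ones.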

More details
Publication date: 24-10-2013

Pre-Fetching Data into a Memory

Number: US20130282970A1
Assignee:

Systems and methods for pre-fetching of data in a memory are provided. By pre-fetching stored data from a slower memory into a faster memory, the amount of time required for data retrieval and/or processing may be reduced. First, data is received and pre-scanned to generate a sample fingerprint. Fingerprints stored in a faster memory that are similar to the sample fingerprint are identified. Data stored in the slower memory associated with the identified stored fingerprints is copied into the faster memory. The copied data may be compared to the received data. Various embodiments may be included in a network memory architecture to allow for faster data matching and instruction generation in a central appliance. 1. A method for copying stored data from a slower memory into a faster memory , the method comprising:receiving at a first device data from a network;pre-scanning the received data to generate a sample fingerprint;copying stored data associated with the sample fingerprint from the slower memory into the faster memory;comparing first data bytes in the received data with first memory bytes within the faster memory;determining a mismatch between one of the first data bytes and one of the first memory bytes;accessing second memory bytes in the faster memory that are non-consecutive with the first memory bytes to determine second data bytes that match the second memory bytes; andsending instructions to a second device to reconstruct the received data.2. The method of claim 1 , further comprising retrieving the stored data associated with the sample fingerprint according to a prioritization algorithm.3. The method of claim 1 , wherein the faster memory comprises random access memory and a slower memory comprises a hard disk.4. The method of claim 1 , further comprising determining whether the received data represented by the sample fingerprint is similar to positioned data in the faster memory.5. 
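The pre-scan and prefetch steps can be sketched as below; the windowed-hash fingerprint and the overlap threshold are assumptions, not the patented fingerprint scheme.

```python
def prescan(data, window=4):
    # Sample fingerprint: the set of hashes of overlapping byte
    # windows (window size is an assumption).
    return {hash(data[i:i + window]) for i in range(len(data) - window + 1)}

def prefetch_similar(received, fingerprints, slow_store, fast_cache,
                     min_overlap=1):
    """Fingerprints live in the faster memory; when one overlaps the
    sample enough, the associated data is copied from the slower
    store into the faster cache so byte comparison can run there."""
    sample = prescan(received)
    for key, fp in fingerprints.items():
        if len(sample & fp) >= min_overlap and key not in fast_cache:
            fast_cache[key] = slow_store[key]   # copy slow -> fast
```

Only data judged similar is pulled into the faster memory, keeping it small.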
The method of claim 1 , wherein the faster memory is associated with ...

More details
Publication date: 24-10-2013

ANTICIPATORY RESPONSE PRE-CACHING

Number: US20130282986A1
Assignee:

Interaction between a client and a service in which the service responds to requests from the client. In addition to responding to specific client requests, the service also anticipates or speculates about what the client may request in the future. Rather than await the client request (that may or may not ultimately be made), the service provides the unrequested anticipatory data to the client in the same data stream as the response data that actually responds to the specific client requests. The client may then use the anticipatory data to fully or partially respond to future requests from the client, if the client does make the request anticipated by the service. Thus, in some cases, latency may be reduced when responding to requests in which anticipatory data has already been provided. The service may give priority to the actual requested data, and gives secondary priority to the anticipatory data. 1. One or more computer-readable storage device having stored computer-executable instructions that, when executed by one or more processors of a computing system, cause the computing system to obtain unrequested pre-fetched data in anticipation of one or more future client requests by implementing a method that includes: an act of identifying one or more client requests; in response to the one or more client requests, an act of generating response data that is responsive to the one or more client requests; an act of providing the response data to the client; an act of attempting to predict a future client request; and an act of generating anticipatory data that is responsive to the predicted future client request without being directly responsive to the one or more client requests. 2. The one or more computer-readable storage device of claim 1, wherein the generated anticipatory data is used to at least partially satisfy one or more future client requests. 3. The one or more computer-readable storage device of claim 1, wherein the method further includes providing the ...
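The priority ordering and the client-side use of anticipatory data can be sketched as follows; the tagged-list stream is a simplification of a real transport.

```python
def build_stream(responses, anticipatory):
    """Build the shared data stream: data answering actual requests
    goes first (primary priority), anticipatory data follows at
    secondary priority. The tagged list stands in for a transport."""
    return ([('response', r) for r in responses]
            + [('anticipatory', a) for a in anticipatory])

def serve(request, anticipatory_cache, compute):
    """Satisfy a later request from pre-fetched anticipatory data
    when possible, avoiding a round trip; otherwise compute it."""
    if request in anticipatory_cache:
        return anticipatory_cache.pop(request)
    return compute(request)
```

A request the service correctly anticipated is answered locally, which is where the latency saving comes from.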

More details
Publication date: 07-11-2013

METHOD AND SYSTEM FOR MANAGING POWER GRID DATA

Number: US20130297868A1
Assignee: BATTELLE MEMORIAL INSTITUTE

A system and method of managing time-series data for smart grids is disclosed. Data is collected from a plurality of sensors. An index is modified for a newly created block. A one disk operation per read or write is performed. The one disk operation per read includes accessing and looking up the index to locate the data without movement of an arm of the disk, and obtaining the data. The one disk operation per write includes searching the disk for free space, calculating an offset, modifying the index, and writing the data contiguously into a block of the disk the index points to. 1. A method of managing time-series data for smart grids, comprising: a. collecting data from a plurality of sensors; b. modifying an index for a newly created block; and c. performing a one disk operation per read or write. 2. The method of further comprising adding a look-up capability to the index. 3. The method of claim 1, wherein the index is stored in at least one of the following: main memory of a local machine, main memory from a remote machine, a solid-state storage device (SSD) from the local machine, and the SSD from the remote machine. 4. The method of claim 1, wherein the performing a one disk operation per read comprises accessing and looking up the index to locate the data without movement of an arm of the disk, and obtaining the data. 5. The method of claim 1, wherein the performing a one disk operation per write comprises searching the disk for free space, calculating an offset, modifying the index, and writing the data contiguously into a block of the disk the index points to. 6. The method of wherein the data is first written into a main memory buffer before being written into the disk. 7. The method of wherein the collecting data from a plurality of sensors further comprises organizing the data contiguously in the disk. 8. The method of wherein the data is reorganized contiguously in main memory before being written into the disk. 9.
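A sketch of the one-disk-operation design, with a bytearray standing in for the raw device and an in-memory index mapping block ids to (offset, length):

```python
class TimeSeriesStore:
    """Sketch of one-disk-op access: an in-memory index maps a block
    id to (offset, length), so a read is a single positioned access
    (no seek scan), and a write appends contiguously and updates the
    index. The bytearray 'disk' stands in for a raw device."""
    def __init__(self):
        self.disk = bytearray()
        self.index = {}                 # block id -> (offset, length)

    def write(self, block_id, data):
        offset = len(self.disk)         # free space: append at the end
        self.disk.extend(data)          # one contiguous disk write
        self.index[block_id] = (offset, len(data))

    def read(self, block_id):
        off, length = self.index[block_id]  # memory lookup, no disk scan
        return bytes(self.disk[off:off + length])
```

All index lookups happen in memory, so each read or write touches the device exactly once.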
The method ...
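The abstract's "one disk operation per read or write" hinges on keeping the index in memory so the disk is touched exactly once per access. A minimal sketch of that idea, with all names invented for illustration (not taken from the patent):

```python
# Sketch: an in-memory index maps (sensor, block) to (offset, length), so a
# read needs only one contiguous disk access; writes append to free space
# and update the index in memory. A bytearray stands in for the disk.

class TimeSeriesStore:
    def __init__(self, capacity=1 << 16):
        self.disk = bytearray(capacity)  # stand-in for the disk
        self.free_offset = 0             # next contiguous free byte
        self.index = {}                  # (sensor_id, block_no) -> (offset, length)
        self.reads = self.writes = 0     # count simulated disk operations

    def write_block(self, sensor_id, block_no, payload: bytes):
        # "one disk operation per write": find free space, compute the offset,
        # modify the in-memory index, then do a single contiguous write.
        offset = self.free_offset
        self.index[(sensor_id, block_no)] = (offset, len(payload))
        self.disk[offset:offset + len(payload)] = payload
        self.free_offset += len(payload)
        self.writes += 1

    def read_block(self, sensor_id, block_no) -> bytes:
        # "one disk operation per read": the lookup happens in memory,
        # so only a single contiguous disk read remains.
        offset, length = self.index[(sensor_id, block_no)]
        self.reads += 1
        return bytes(self.disk[offset:offset + length])

store = TimeSeriesStore()
store.write_block("meter-1", 0, b"v=230.1")
store.write_block("meter-1", 1, b"v=229.8")
```

Because blocks are written contiguously from `free_offset`, sequential sensor data also ends up physically adjacent, which is what makes the single-seek read plausible on a real spinning disk.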

Publication date: 07-11-2013

METHODS AND APPARATUS FOR CUT-THROUGH CACHE MANAGEMENT FOR A MIRRORED VIRTUAL VOLUME OF A VIRTUALIZED STORAGE SYSTEM

Number: US20130297870A1
Author: Young Howard
Assignee: NETAPP, INC.

Methods and apparatus for cut-through cache memory management in write command processing on a mirrored virtual volume of a virtualized storage system, the virtual volume comprising a plurality of physical storage devices coupled with the storage system. Features and aspects hereof within the storage system provide for receipt of a write command and associated write data from an attached host. Using a cut-through cache technique, the write data is stored in a cache memory and transmitted to a first of the plurality of storage devices as the write data is stored in the cache memory, thus eliminating one read-back of the write data for transfer to a first physical storage device. Following receipt of the write data and storage in the cache memory, the write data is transmitted from the cache memory to the other physical storage devices.

1. A method operable in a virtualized storage system, the method comprising:
receiving a write command from a host system directed to a virtual volume of the storage system, the storage system comprising a plurality of storage devices, the virtual volume comprising data on multiple storage devices;
detecting that a first storage device of the multiple storage devices is ready to receive write data associated with the write command;
receiving the write data from the host system responsive to detecting that the first storage device is ready to receive write data;
storing the write data in a cache memory;
transmitting the write data to the first storage device as the write data is stored in the cache memory; and
transmitting the write data from the cache memory to other storage devices of the multiple storage devices, responsive to receipt of the write data by the first storage device.
2. The method of claim 1, further comprising:
detecting that the transmission of the write data to the first storage device was successful.
3. The method of wherein the step of transmitting the write data from the cache memory to the other storage devices further comprises: ...
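The cut-through behavior described above (first copy streamed while it is cached, remaining mirrors fed from the cache) can be sketched in a few lines. This is an illustrative toy, not the patented implementation; device and function names are made up:

```python
# Sketch: each incoming chunk is appended to the cache and forwarded to the
# primary mirror in the same pass, eliminating a read-back for the first
# copy; the remaining mirrors are then written from the cache.

def mirrored_write(chunks, devices):
    cache = bytearray()
    primary, *secondaries = devices
    for chunk in chunks:          # cut-through: cache and forward together
        cache.extend(chunk)
        primary.extend(chunk)     # first copy streams as it is cached
    for dev in secondaries:       # remaining copies come from the cache,
        dev.extend(cache)         # not from the host or the primary device
    return bytes(cache)

# bytearrays stand in for the physical storage devices of the mirror set
mirror_a, mirror_b, mirror_c = bytearray(), bytearray(), bytearray()
data = mirrored_write([b"hel", b"lo"], [mirror_a, mirror_b, mirror_c])
```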

Publication date: 07-11-2013

FILE HANDLING WITHIN A CLOUD-BASED FILE SYSTEM

Number: US20130297887A1
Assignee: GOOGLE INC.

In one general aspect, a computer-readable storage medium can be configured to store instructions that when executed cause one or more processors to perform a process. The process can include establishing at least a portion of a communication link between a computing device and a storage system operating within a cloud environment. The process can include accessing a user interface including a listing of files representing a plurality of files where at least a first portion of the plurality of files are stored in a local memory of the computing device and a second portion of the plurality of files are stored in the storage system.

1. A non-transitory computer-readable storage medium configured to store instructions that when executed cause one or more processors to perform a process, the process comprising:
establishing at least a portion of a communication link between a computing device and a storage system operating within a cloud environment;
storing a listing of files representing a plurality of files where at least a first portion of the plurality of files are stored in a local memory of the computing device and a second portion of the plurality of files are stored in the storage system; and
designating a file from the listing of files for availability offline based on a file category associated with the file from the listing of files.
2. The non-transitory computer-readable storage medium of claim 1, wherein the storage system operates as a primary storage system for the computing device, and the local memory of the computing device operates as a cache of the storage system.
3. The non-transitory computer-readable storage medium of claim 1, wherein the listing of files includes:
a reference to a web file produced and stored in the storage system using a web application,
a reference to a client file produced using a local application installed at the computing device, and
a reference to a remote source file from a remote source operating outside of the ...

Publication date: 14-11-2013

SYSTEMS AND METHODS FOR SECURE HOST RESOURCE MANAGEMENT

Number: US20130304986A1

Systems and methods are described herein to provide for secure host resource management on a computing device. Other embodiments include apparatus and system for management of one or more host device drivers from an isolated execution environment. Further embodiments include methods for querying and receiving event data from manageable resources on a host device. Further embodiments include data structures for the reporting of event data from one or more host device drivers to one or more capability modules.

1. A method, comprising:
querying at least one host device driver for event types supported by the at least one host device driver;
receiving the event types from the at least one device driver; and
caching the event types in a resource data record repository, where the resource data record repository is stored in an environment that is isolated from the host device.
2. The method of claim 1, wherein the at least one host device driver includes each host device driver on a host device.
3. The method of claim 2, further comprising:
receiving a request from a capability module for an event type;
determining which of the event types cached in the resource data record repository match the request; and
subscribing the capability module to the event types cached in the resource data record that matched the request.
4. The method of claim 3, further comprising receiving from the capability module at least one event threshold for the requested event type.
5. The method of claim 1, wherein the host device driver is a ring 3 software application.
6. The method of claim 1, wherein the host device driver is a ring 0 software application.
7. A machine-readable medium that is not a transitory propagating signal, the machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations comprising:
querying at least one host device driver for event types supported by the at least one host device driver; ...
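The query-cache-subscribe flow in the claims (drivers queried once for supported event types, answers cached in a repository, capability modules matched against the cache) can be sketched as follows. Driver names, event names, and the module are invented for the example:

```python
# Sketch: build a cached repository of event types per driver, then serve
# capability-module subscriptions from the cache with no driver round-trip.

drivers = {
    "thermal": ["temp_high", "temp_low"],
    "network": ["link_down", "temp_high"],  # an event type two drivers share
}

repository = {}     # event type -> drivers that can report it (the cache)
subscriptions = {}  # event type -> capability modules subscribed to it

# Query each host device driver once and cache the supported event types.
for name, events in drivers.items():
    for event in events:
        repository.setdefault(event, []).append(name)

def subscribe(module, requested_event):
    # Match the request against cached event types only; return the drivers
    # that supply the event, or an empty list when nothing matches.
    if requested_event in repository:
        subscriptions.setdefault(requested_event, []).append(module)
        return repository[requested_event]
    return []

sources = subscribe("health-monitor", "temp_high")
```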

Publication date: 28-11-2013

Micro-Staging Device and Method for Micro-Staging

Number: US20130318194A1
Author: Jeffrey L. Timbs
Assignee: Dell Products LP

A micro-staging device has a wireless interface module for detecting a first data request that indicates a presence of a user and an application processor that establishes a network connection to a remote data center. The micro-staging device further allocates a portion of storage in a cache memory storage device for storing pre-fetched workflow data objects associated with the detected user.

Publication date: 05-12-2013

Memory Look Ahead Engine for Video Analytics

Number: US20130322551A1
Assignee: Intel Corp

Video analytics may be used to assist video encoding by selectively encoding only portions of a frame and using, instead, previously encoded portions. Previously encoded portions may be used when succeeding frames have a level of motion less than a threshold. In such case, all or part of succeeding frames may not be encoded, increasing bandwidth and speed in some embodiments.

Publication date: 05-12-2013

Methods and Systems for Retrieving and Caching Geofence Data

Number: US20130326137A1
Assignee: QUALCOMM INCORPORATED

Mobile device systems and methods for monitoring geofences cache a subset of geofences within a likely travel perimeter determined based on speed and direction of travel, available roads, current traffic, etc. A server may download to mobile devices subsets of geofences within a likely travel perimeter determined based on a threshold travel time possible from a current location given current travel speed, direction and roads. The mobile device may receive a list of local geofences from a server, which may maintain or have access to a database containing all geofences. The mobile device may use the cached geofences in the normal manner, by comparing its location to the cached list of local geofences to detect matches. In an embodiment, the mobile device may calculate or receive from the server an update perimeter, which when crossed may prompt the mobile device to request an update to the geofences stored in cache.

1. A method for enabling a mobile computing device to monitor geofences, comprising:
determining a current location of the mobile computing device;
receiving a subset of geofences from a global database of geofences based upon the current location of the mobile computing device;
caching the subset of geofences in memory of the mobile computing device; and
comparing the current location to the geofences cached on the mobile computing device to determine if a geofence criterion is satisfied.
2. The method of claim 1, wherein the global database of geofences is located on a server, the method further comprising:
receiving the current location of the mobile computing device in the server;
selecting the subset of geofences from the global database of geofences based upon the current location of the mobile computing device;
transmitting the selected subset of geofences to the mobile computing device; and
receiving the transmitted subset of geofences in the mobile computing device,
wherein caching the subset of geofences in memory of the mobile computing device ...
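The caching scheme above (cache only nearby geofences, match locally, refresh when an update perimeter is crossed) can be illustrated with a small sketch. Planar coordinates stand in for latitude/longitude, and every constant, fence name, and function here is an invented example, not the patent's algorithm:

```python
# Sketch: a mobile device caches the subset of geofences near its position,
# checks matches against the cache only, and refreshes the cache once it
# travels beyond an "update perimeter" around the last refresh point.
import math

GLOBAL_GEOFENCES = {"home": (0, 0), "office": (3, 4), "airport": (40, 40)}
GEOFENCE_RADIUS = 2.0    # match radius around each fence center
CACHE_RADIUS = 10.0      # fences within this distance get cached
UPDATE_PERIMETER = 8.0   # crossing this distance triggers a cache refresh

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def refresh_cache(location):
    # server-side selection of the subset of geofences near the device
    return {n: c for n, c in GLOBAL_GEOFENCES.items()
            if dist(location, c) <= CACHE_RADIUS}

def check(location, cache, anchor):
    # refresh when the update perimeter around the last anchor is crossed
    if dist(location, anchor) > UPDATE_PERIMETER:
        cache, anchor = refresh_cache(location), location
    matches = [n for n, c in cache.items() if dist(location, c) <= GEOFENCE_RADIUS]
    return matches, cache, anchor

anchor = (0, 0)
cache = refresh_cache(anchor)                       # "airport" is too far to cache
hits, cache, anchor = check((3, 3), cache, anchor)  # near "office"
```

The point of the design is that the per-fix work is proportional to the cached subset, not to the global database, and the server is contacted only on perimeter crossings.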

Publication date: 05-12-2013

MEMORY MANAGEMENT SYSTEM AND PROGRAM

Number: US20130326151A1

To reduce power consumption of a computer or the like, a nonvolatile memory divided into a plurality of segments is applied to main memory used for virtual storage management. Thus, power supply even to a segment having a physical address that is being used can be stopped. As a result, power consumption of the computer or the like performing virtual storage management can be reduced.

1. A memory device comprising a program for making a computer execute virtual storage management, the program allowing the computer to perform the steps of:
stopping power supply to each of a plurality of segments in a nonvolatile memory included in the computer;
looking up a table mapping a plurality of virtual addresses and a plurality of physical addresses included in each of the plurality of segments;
supplying power to a part of the plurality of segments; and
executing a process.
2. The memory device according to claim 1, wherein during executing the process, the part of the plurality of segments is at least one of the plurality of segments including a physical address used in the process.
3. The memory device according to claim 1, wherein the part of the plurality of segments is one segment necessary to continue execution of the process.
4. The memory device according to claim 1, wherein during executing the process, the power is supplied to at least one segment among the part of the plurality of segments including k physical addresses (k is a natural number of 2 or more) used in the process, and power is temporarily supplied to at least one remaining segment among the part of the plurality of segments including a physical address of one or more and less than k used in the process.
5. The memory device according to claim 1, wherein the program further makes the computer perform the steps of:
supplying power to one of the plurality of segments after stopping power supply to each of the plurality of segments, and
writing a new correspondence between a virtual address and a physical address ...
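The segment-granular power gating described above can be modeled in a few lines: the page table is consulted first, and only the segment that holds the needed physical page receives power. Segment size, the page table, and all names are illustrative examples, not values from the patent:

```python
# Toy model: nonvolatile main memory is split into segments that are powered
# off by default; touching a virtual page powers up only the segment that
# contains its physical page.

SEGMENT_SIZE = 4                   # physical pages per segment (example value)

page_table = {0: 0, 1: 5, 2: 9}    # virtual page -> physical page
powered = set()                    # segments currently receiving power

def access(virtual_page):
    # look up the mapping, then supply power only to the needed segment
    physical = page_table[virtual_page]
    segment = physical // SEGMENT_SIZE
    powered.add(segment)
    return segment

def power_down_all():
    # nonvolatile memory keeps its contents, so this loses no data
    powered.clear()

power_down_all()
used = sorted(access(vp) for vp in (0, 2))   # touches segments 0 and 2 only
```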

Publication date: 19-12-2013

Cache memory prefetching

Number: US20130339625A1
Assignee: International Business Machines Corp

According to exemplary embodiments, a computer program product, system, and method for prefetching in memory include determining a missed access request for a first line in a first cache level and accessing an entry in a prefetch table, wherein the entry corresponds to a memory block, wherein the entry includes segments of the memory block. Further, the embodiment includes determining a demand segment of the segments in the entry, the demand segment corresponding to a segment of the memory block that includes the first line, reading a first field in the demand segment to determine if a second line in the demand segment is spatially related with respect to accesses of the demand segment and reading a second field in the demand segment to determine if a second segment in the entry is temporally related to the demand segment.

Publication date: 19-12-2013

Next Instruction Access Intent Instruction

Number: US20130339672A1
Assignee: International Business Machines Corp

Executing a Next Instruction Access Intent instruction by a computer. The processor obtains an access intent instruction indicating an access intent. The access intent is associated with an operand of a next sequential instruction. The access intent indicates usage of the operand by one or more instructions subsequent to the next sequential instruction. The computer executes the access intent instruction. The computer obtains the next sequential instruction. The computer executes the next sequential instruction, which comprises based on the access intent, adjusting one or more cache behaviors for the operand of the next sequential instruction.

Publication date: 26-12-2013

Data Cache Method, Device, and System in a Multi-Node System

Number: US20130346693A1
Author: Zhang Xiaofeng
Assignee: Huawei Technologies Co., Ltd.

A data cache method, device, and system in a multi-node system are provided. The method includes: dividing a cache area of a cache medium into multiple sub-areas, where each sub-area corresponds to a node in the system; dividing each of the sub-areas into a thread cache area and a global cache area; when a process reads a file, detecting a read frequency of the file; when the read frequency of the file is greater than a first threshold and the size of the file does not exceed a second threshold, caching the file in the thread cache area; or when the read frequency of the file is greater than the first threshold and the size of the file exceeds the second threshold, caching the file in the global cache area. Thus overheads of remote access of a system are reduced, and I/O performance of the system is improved.

1. A data cache method in a multi-node system, wherein the multi-node system comprises a cache medium and a disk array, and wherein the method comprises:
dividing a cache area in the cache medium into multiple sub-areas, wherein each sub-area corresponds to a node in the multi-node system;
dividing each of the sub-areas into a thread cache area and a global cache area, wherein a mapping is established between the thread cache area and the disk array by adopting an associative mapping manner, and wherein a mapping is established between the global cache area and the disk array by adopting a set-associative mapping manner;
detecting a read frequency of a file when a process reads the file;
caching the file in the thread cache area when the read frequency of the file is greater than a first threshold and a size of the file does not exceed a second threshold; and
caching the file in the global cache area when the read frequency of the file is greater than the first threshold and the size of the file exceeds the second threshold.
2. The method according to claim 1, wherein the method further comprises:
dividing the thread cache area into multiple small areas; ...
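The placement rule in claim 1 reduces to a two-threshold decision: files read often enough are cached, and the file's size picks the area. A minimal sketch, with both threshold values invented for the example:

```python
# Sketch of the claim-1 placement rule: hot small files go to the per-thread
# cache area, hot large files to the global cache area, cold files are not
# cached at all.

READ_FREQ_THRESHOLD = 3    # the claim's "first threshold" (example value)
SIZE_THRESHOLD = 1024      # the claim's "second threshold", bytes (example)

def place(read_frequency, size):
    if read_frequency <= READ_FREQ_THRESHOLD:
        return "uncached"        # not read often enough to cache
    if size <= SIZE_THRESHOLD:
        return "thread-cache"    # hot and small: thread cache area
    return "global-cache"        # hot and large: global cache area

placements = [place(5, 512), place(5, 4096), place(1, 512)]
```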

Publication date: 02-01-2014

MOBILE MEMORY CACHE READ OPTIMIZATION

Number: US20140006719A1
Assignee: Memory Technologies LLC

Examples of enabling cache read optimization for mobile memory devices are described. One or more access commands may be received, from a host, at a memory device. The one or more access commands may instruct the memory device to access at least two data blocks. The memory device may generate pre-fetch information for the at least two data blocks based at least in part on an order of accessing the at least two data blocks.

1.-20. (canceled)
21. A method comprising:
receiving, by a memory device, a plurality of write commands from a host, at least one write command of the plurality of write commands instructing the memory device to write a plurality of data blocks;
writing the plurality of data blocks to the memory device; and
generating, by the memory device, pre-fetch information associated with the plurality of data blocks based at least in part on an order of writing the plurality of data blocks.
22. The method of claim 21, wherein the at least one write command of the plurality of write commands is associated with index information comprising at least one of: a context identifier, a task tag, a pre-fetch identifier, or a group number.
23. The method of claim 21, wherein receiving the plurality of write commands comprises:
receiving a first write command that includes first index information;
receiving, after the first write command, a second write command that includes second index information that is different from the first index information; and
receiving, after the second write command, a third write command that includes the first index information.
24. The method of claim 23, wherein generating the pre-fetch information comprises:
generating linkage information to link a last data block that is written by the first write command with a first data block that is written by the third write command.
25. The method of claim 24, further comprising:
storing the linkage information in at least one of the first data block or the last data block ...

Publication date: 09-01-2014

ADAPTIVE MEMORY SYSTEM FOR ENHANCING THE PERFORMANCE OF AN EXTERNAL COMPUTING DEVICE

Number: US20140013039A1
Assignee: MOBILE SEMICONDUCTOR CORPORATION

An adaptive memory system is provided for improving the performance of an external computing device. The adaptive memory system includes a single controller, a first memory type (e.g., Static Random Access Memory or SRAM), a second memory type (e.g., Dynamic Random Access Memory or DRAM), a third memory type (e.g., Flash), an internal bus system, and an external bus interface. The single controller is configured to: (i) communicate with all three memory types using the internal bus system; (ii) communicate with the external computing device using the external bus interface; and (iii) allocate cache-data storage assignment to a storage space within the first memory type, and after the storage space within the first memory type is determined to be full, allocate cache-data storage assignment to a storage space within the second memory type.

1.-10. (canceled)
11. A tangible computer-readable medium having computer-executable instructions stored thereon that, if executed by a single controller of an adaptive memory system, cause the single controller to perform actions for implementing a data look-ahead training sequence; wherein the adaptive memory system includes a first memory of a first memory type, a second memory of a second memory type, a third memory of a third memory type, an internal bus system, and an external bus interface; and wherein the actions comprise:
acquiring one or more sequences of sector data from an application executed by an external computing device;
performing data reduction on the one or more sequences of sector data; and
storing the reduced sequence data in the first memory or the second memory.
12. The computer-readable medium of claim 11, wherein the actions further comprise storing a copy of the sequence data to the third memory as a backup data storage device.
13. The computer-readable medium of claim 11, wherein performing data reduction on the one or more sequences of sector data includes replacing sequences of unordered sector data ...
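The allocation policy in the abstract (fill the first memory type, then spill to the second) is a simple tiered fallback. A hedged sketch, with both capacities chosen arbitrarily for the example:

```python
# Sketch of the spill-over allocation: cache data goes to the first memory
# type (e.g. SRAM) until it is full, then to the second (e.g. DRAM).

class AdaptiveMemory:
    def __init__(self, sram_capacity=8, dram_capacity=32):
        self.capacity = {"SRAM": sram_capacity, "DRAM": dram_capacity}
        self.used = {"SRAM": 0, "DRAM": 0}

    def allocate(self, size):
        # prefer the first memory type; fall back once it is full
        for tier in ("SRAM", "DRAM"):
            if self.used[tier] + size <= self.capacity[tier]:
                self.used[tier] += size
                return tier
        return None   # both tiers exhausted

mem = AdaptiveMemory()
tiers = [mem.allocate(4) for _ in range(4)]   # four allocations of 4 units
```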

Publication date: 09-01-2014

PREFETCHING OF NEXT PHYSICALLY SEQUENTIAL CACHE LINE AFTER CACHE LINE THAT INCLUDES LOADED PAGE TABLE ENTRY

Number: US20140013058A1
Assignee: VIA TECHNOLOGIES, INC.

A microprocessor includes a translation lookaside buffer, a request to load a page table entry into the microprocessor generated in response to a miss of a virtual address in the translation lookaside buffer, and a prefetch unit. The prefetch unit receives a physical address of a first cache line that includes the requested page table entry and responsively generates a request to prefetch into the microprocessor a second cache line that is the next physically sequential cache line to the first cache line.

1: A microprocessor, comprising:
a translation lookaside buffer;
a request to load a page table entry into the microprocessor generated in response to a miss of a virtual address in the translation lookaside buffer; and
a prefetch unit, configured to receive a physical address of a first cache line that includes the requested page table entry, wherein the prefetch unit is further configured to responsively generate a request to prefetch into the microprocessor a second cache line that is the next physically sequential cache line to the first cache line.
2: The microprocessor of claim 1, wherein the second cache line is absent the page table entry implicated by the virtual address that missed in the translation lookaside buffer.
3: The microprocessor of claim 1, further comprising:
a tablewalk engine, configured to perform a tablewalk in response to the miss of the virtual address in the translation lookaside buffer, wherein the tablewalk does not access any of the page table entries in the prefetched second cache line.
4: (canceled)
5: The microprocessor of claim 1, wherein the request to load the page table entry, in response to which the prefetch unit prefetches the second cache line, is generated internally by the microprocessor in response to the miss of the virtual address in the translation lookaside buffer, rather than the load request being a load request of a program executed by the microprocessor.
6: The microprocessor of claim 1, ...

Publication date: 16-01-2014

SAVING LOG DATA USING A DISK SYSTEM AS PRIMARY CACHE AND A TAPE LIBRARY AS SECONDARY CACHE

Number: US20140019682A1

Various embodiments are provided for saving log data in a hierarchical storage management system using a disk system as a primary cache with a tape library as a secondary cache. The user data is stored in the primary cache and written into the secondary cache at a subsequent period of time. Blank tapes in the secondary cache are prepared for storing the user data and the log data based on priorities. At least one of the blank tapes is selected for copying the log data and the user data from the primary cache to the secondary cache based on priorities. The log data is stored in the primary cache. The selection of at least one of the blank tapes completely filled with the log data is delayed for writing additional amounts of the user data.

1. A system, for saving a plurality of log data in a hierarchical storage management (HSM) system using a disk system as a primary cache with a tape library as a secondary cache, comprising:
at least one tape drive; and
at least one processor device, operable with the at least one tape drive, wherein the at least one processor device:
stores user data in the primary cache, the user data being written from the primary cache into the secondary cache at a subsequent period of time;
prepares a plurality of blank tapes in the secondary cache for storing the user data and the plurality of log data based on a plurality of priorities, the plurality of blank tapes are unused until the user data is written to at least one tape media;
selects at least one of the plurality of blank tapes for copying the plurality of log data and the user data from the primary cache to the secondary cache based upon the plurality of priorities; and
stores the plurality of log data in the primary cache, wherein the plurality of log data is wrapped in the primary cache and a copy of the plurality of log data being copied into the plurality of blank tapes, the plurality of blank tapes being configured to appear blank to a user for storing the user data.

Publication date: 16-01-2014

Methods of cache preloading on a partition or a context switch

Number: US20140019689A1
Assignee: International Business Machines Corp

A scheme referred to as a “Region-based cache restoration prefetcher” (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the “cold” cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown.
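The RECAP summary above combines two ideas: group cache blocks into coarse regions, and record which blocks in each region were useful as a compact per-region bitmask that can drive prefetching on the next context switch. A rough sketch of just that bookkeeping, with region geometry and the access trace invented for the example (not taken from the paper or patent):

```python
# Sketch: record touched cache blocks as region -> bitmask (a compact,
# spatially-local summary), then expand the summary into a prefetch plan.

BLOCKS_PER_REGION = 8   # coarse-grain region size, example value

def record(trace):
    # build region -> bitmask of blocks that were touched in that region
    regions = {}
    for block in trace:
        region, slot = divmod(block, BLOCKS_PER_REGION)
        regions[region] = regions.get(region, 0) | (1 << slot)
    return regions

def prefetch_list(regions):
    # expand the per-region bitmasks back into concrete block numbers,
    # the list a prefetcher would issue when the virtual machine resumes
    blocks = []
    for region in sorted(regions):
        for slot in range(BLOCKS_PER_REGION):
            if regions[region] & (1 << slot):
                blocks.append(region * BLOCKS_PER_REGION + slot)
    return blocks

summary = record([3, 9, 3, 11, 25])   # repeated touches collapse into one bit
plan = prefetch_list(summary)
```

The bitmask-per-region encoding is what keeps the prediction state and the prefetch traffic small: one integer summarizes up to `BLOCKS_PER_REGION` blocks.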

Publication date: 16-01-2014

Processor, information processing apparatus, and control method of processor

Number: US20140019690A1
Author: Mikio Hondo, Toru Hikichi
Assignee: Fujitsu Ltd

A request storing unit in a PF port stores an expanded request. A PF port entry selecting unit controls two pre-fetch requests expanded from the expanded request so that they are input consecutively to an L2-pipe. When only one of the two expanded pre-fetch requests is aborted, the PF port entry selecting unit further controls the requests such that the aborted pre-fetch request is input to the L2-pipe as the highest-priority request. Further, the PF port entry selecting unit receives the number of available resources from a resource managing unit in order to select a pre-fetch request to be input to a pipe inputting unit based on the number of available resources.

Publication date: 13-02-2014

Opportunistic block transmission with time constraints

Number: US20140047192A1
Assignee: Numecent Holdings Inc

A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.

Publication date: 20-02-2014

Data cache prefetch hints

Number: US20140052927A1
Assignee: Advanced Micro Devices Inc

The present invention provides a method and apparatus for using prefetch hints. One embodiment of the method includes bypassing, at a first prefetcher associated with a first cache, issuing requests to prefetch data from a number of memory addresses in a sequence of memory addresses determined by the first prefetcher. The number is indicated in a request received from a second prefetcher associated with a second cache. This embodiment of the method also includes issuing, from the first prefetcher, a request to prefetch data from a memory address subsequent to the bypassed memory addresses.
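The hint mechanism described above amounts to: one prefetcher tells another how many addresses of its predicted sequence to skip (they are already being fetched elsewhere), and the second prefetcher resumes after the bypassed addresses. A minimal sketch; the stride and addresses are illustrative, not from the patent:

```python
# Sketch: the first prefetcher bypasses `bypass_hint` addresses of its own
# predicted sequence (as requested by the second prefetcher's hint) and
# issues requests only for the addresses after them.

STRIDE = 64   # bytes between consecutive lines in the predicted sequence

def issue_prefetches(start_addr, total, bypass_hint):
    # build the predicted sequence, drop the bypassed prefix, issue the rest
    sequence = [start_addr + i * STRIDE for i in range(total)]
    return sequence[bypass_hint:]

# the hint says the other cache's prefetcher already covers the first 4 lines
issued = issue_prefetches(0x1000, total=6, bypass_hint=4)
```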

Publication date: 20-02-2014

INFORMATION PROCESSING DEVICE AND METHOD

Number: US20140052928A1
Author: Shimoi Hiroyuki
Assignee: FUJITSU LIMITED

An information processing device detects a sequential access for reading first data by sequentially accessing consecutive areas or inconsecutive areas within a specified range of a first storage unit when the sequential access consecutively occurs by a specified number, calculates, based on a size of the first data, a size of second data read by a prefetch for prereading the data stored consecutively in the first storage unit and for storing the read data in a second storage unit, and performs the prefetch based on the calculated size of the second data.

1. An information processing device comprising:
a first storage unit that stores data;
a second storage unit that stores the data read from the first storage unit; and
a processor that executes a procedure, the procedure including:
detecting a sequential access for reading first data of a specified size by sequentially accessing consecutive areas or inconsecutive areas within a specified range of the first storage unit when the sequential access consecutively occurs by a specified number;
calculating, based on a size of the first data, a size of second data read by a prefetch for prereading data stored consecutively in the first storage unit and for storing the read data in the second storage unit; and
performing the prefetch based on the calculated size of the second data.
2. The information processing device according to claim 1, wherein
the detecting determines, for each sequential access, whether or not a sequential access made at a ratio of the size of the first data to the size of the second data, which is equal to or lower than a threshold value, is consecutive by a specified number, and
the calculating calculates the size of the second data based on the ratio when the sequential access made at the ratio of the size of the first data to the size of the second data, which is equal to or lower than the threshold value, is consecutive by the specified number.
3. The information processing device according to ...
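The detection step above (treat reads as one sequential stream, and after a specified number of consecutive sequential reads size a prefetch from the observed request size) can be sketched as follows. All three constants and the sizing rule are invented for the illustration; the patent derives the prefetch size from a ratio rather than a fixed factor:

```python
# Sketch: detect a sequential run of reads (each starting at or near where
# the previous one ended) and, after SEQ_RUN_NEEDED consecutive hits, return
# a prefetch size proportional to the observed request size.

SEQ_RUN_NEEDED = 3     # "specified number" of consecutive sequential reads
PREFETCH_FACTOR = 4    # prefetch size = factor * observed request size
WINDOW = 16            # tolerated gap, allowing nearby inconsecutive areas

def plan_prefetch(requests):
    # requests: list of (offset, size) reads against the first storage unit
    run = 1
    for (prev_off, prev_size), (off, size) in zip(requests, requests[1:]):
        expected = prev_off + prev_size
        run = run + 1 if expected <= off <= expected + WINDOW else 1
        if run >= SEQ_RUN_NEEDED:
            return size * PREFETCH_FACTOR   # size of the data to preread
    return 0                                # no sequential run detected

sizes = [plan_prefetch([(0, 8), (8, 8), (16, 8)]),     # sequential run
         plan_prefetch([(0, 8), (100, 8), (300, 8)])]  # random accesses
```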

Publication date: 27-02-2014

TRANSPARENT HOST-SIDE CACHING OF VIRTUAL DISKS LOCATED ON SHARED STORAGE

Number: US20140059292A1

Techniques for using a host-side cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, the hypervisor of a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The hypervisor can then process the I/O request by accessing a host-side cache that resides on one or more cache devices distinct from the shared storage device, where the accessing of the host-side cache is transparent to the VM.

1. A method for using a host-side cache to accelerate virtual machine (VM) I/O, the method comprising:
intercepting, by a hypervisor of a host system, an I/O request from a VM running on the host system, the I/O request being directed to a virtual disk residing on a shared storage device; and
processing, by the hypervisor, the I/O request by accessing a host-side cache that resides on one or more cache devices distinct from the shared storage device, the accessing of the host-side cache being transparent to the VM.
2. The method of claim 1, wherein processing the I/O request by accessing the host-side cache comprises invoking a caching module that has been preconfigured for use with the VM or the virtual disk.
3. The method of claim 2, wherein the caching module is a modular component of the hypervisor that is implemented by a third-party developer.
4. The method of claim 1, wherein the host-side cache is spread across a plurality of cache devices, and wherein the host-side cache is presented as a single logical resource to the hypervisor by pooling the plurality of cache devices using a common file system.
5. The method of claim 4, wherein the plurality of cache devices are heterogeneous devices.
6. The method of claim 1, wherein the VM or the virtual disk is allocated a portion of the host-side cache when the VM is powered on, the allocated portion having a preconfigured minimum size and a preconfigured maximum size.
7. The method of claim 6, wherein the allocated portion is freed when the VM ...

Publication date: 27-02-2014

Method, apparatus, and system for speculative abort control mechanisms

Number: US20140059333A1
Assignee: Intel Corp

An apparatus and method is described herein for providing robust speculative code section abort control mechanisms. Hardware is able to track speculative code region abort events, conditions, and/or scenarios, such as an explicit abort instruction, a data conflict, a speculative timer expiration, a disallowed instruction attribute or type, etc. And hardware, firmware, software, or a combination thereof makes an abort determination based on the tracked abort events. As an example, hardware may make an initial abort determination based on one or more predefined events or choose to pass the event information up to a firmware or software handler to make such an abort determination. Upon determining an abort of a speculative code region is to be performed, hardware, firmware, software, or a combination thereof performs the abort, which may include following a fallback path specified by hardware or software. And to enable testing of such a fallback path, in one implementation, hardware provides software a mechanism to always abort speculative code regions.

06-03-2014 publication date

PROCESSOR, INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD

Number: US20140068179A1
Assignee:

A processor includes a cache memory that holds data from a main storage device. The processor includes a first control unit that controls acquisition of data, and that outputs an input/output request that requests the transfer of the target data. The processor includes a second control unit that controls the cache memory, that determines, when an instruction to transfer the target data and a response output by the first processor on the basis of the input/output request that has been output to the first processor is received, whether the destination of the response is the processor, and that outputs, to the first control unit when the second control unit determines that the destination of the response is the processor, the response and the target data with respect to the input/output request. 1. A processor comprising: a cache memory that holds data from a main storage device connected to a first processor; a first control unit that controls acquisition of data performed by an input/output device connected to the processor and that outputs, to the first processor connected to the processor when the input/output device requests a transfer of target data stored in the main storage device connected to the first processor, an input/output request that requests the transfer of the target data; and a second control unit that controls the cache memory, that determines, when an instruction to transfer the target data and a response output by the first processor on the basis of the input/output request that has been output to the first processor is received from the first processor, whether the destination of the response is the processor, and that outputs, to the first control unit when the second control unit determines that the destination of the response is the processor, the response and the target data with respect to the input/output request. 2. The processor according to claim 1, wherein, when the second control unit determines that the destination of the ...

06-03-2014 publication date

DATA ANALYSIS SYSTEM

Number: US20140068180A1
Assignee:

A data analysis system, particularly, a system capable of efficiently analyzing big data is provided. The data analysis system includes an analyst server, at least one data storage unit, a client terminal independent of the analyst server, and a caching device independent of the analyst server. The caching device includes a caching memory, a data transmission interface, and a controller for obtaining a data access pattern of the client terminal with respect to the at least one data storage unit, performing caching operations on the at least one data storage unit according to a caching criterion to obtain and store cache data in the caching memory, and sending the cache data to the analyst server via the data transmission interface, such that the analyst server analyzes the cache data to generate an analysis result, which may be used to request a change in the caching criterion. 1. A data analysis system , comprising:an analyst server;at least one data storage unit;a client terminal independent of the analyst server; anda caching device independent of the analyst server, the caching device further comprising a cache memory, a data transmission interface, and a controller in communication with the analyst server, the client terminal, and the storage unit, wherein the controller obtains a data access pattern of the client terminal with respect to the storage unit and performs caching operations on the storage unit according to a caching criterion to obtain and store cache data in the cache memory and send the cache data to the analyst server via the data transmission interface, thereby allowing the analyst server to analyze the cache data and generate an analysis result.2. The data analysis system of claim 1 , wherein the caching criterion is specified or changeable by the analyst server.3. The data analysis system of claim 2 , wherein the caching criterion relates to a given access frequency.4. The data analysis system of claim 2 , wherein the caching criterion ...
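The controller's role can be sketched minimally: it observes the client's access pattern, caches items that meet a frequency criterion, and lets the analyst server change that criterion. The frequency-based criterion and all names are assumptions for illustration; the patent leaves the criterion open.

```python
# Hypothetical sketch of the caching device's controller: it records the
# client's access pattern and caches entries whose access frequency meets
# a criterion that the analyst server may later request to change.

from collections import Counter

class CachingController:
    def __init__(self, min_frequency):
        self.min_frequency = min_frequency   # the caching criterion
        self.pattern = Counter()             # observed access pattern
        self.cache = {}

    def access(self, key, storage):
        self.pattern[key] += 1
        if self.pattern[key] >= self.min_frequency:
            self.cache[key] = storage[key]   # meets criterion -> cache it
        return storage[key]

    def set_criterion(self, min_frequency):  # change requested by analyst server
        self.min_frequency = min_frequency

storage = {"a": 1, "b": 2}
ctl = CachingController(min_frequency=2)
ctl.access("a", storage)
ctl.access("a", storage)   # second access: "a" now satisfies the criterion
ctl.access("b", storage)   # single access: "b" stays uncached
```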

13-03-2014 publication date

CACHE OPTIMIZATION

Number: US20140075109A1
Assignee: Amazon Technologies, Inc.

A system and method for management and processing of resource requests at cache server computing devices is provided. Cache server computing devices segment content into an initialization fragment for storage in memory and one or more remaining fragments for storage in a media having higher latency than the memory. Upon receipt of a request for the content, a cache server computing device transmits the initialization fragment from the memory, retrieves the one or more remaining fragments, and transmits the one or more remaining fragments without retaining the one or more remaining fragments in the memory for subsequent processing. 1. A computer-implemented method comprising:receiving, at a cache component, an object for storage;segmenting the object into a first fragment for storage in memory, a second fragment for storage in a first media having higher latency than the memory, and a third fragment for storage in a second media having higher latency than the first media, wherein the size of the first fragment is based on a latency associated with retrieval of the second fragment, and wherein the size of the second fragment is based on a latency associated with retrieval of the third fragment;receiving a request for the object at a cache component;causing transmission of the first fragment of the object from the memory;causing transmission of the second fragment without retaining the second fragment in memory for subsequent processing; andcausing transmission of the third fragment without retaining the third fragment in the memory for subsequent processing.2. 
A computer-implemented method comprising:receiving a request for an object at a cache component;causing transmission of a first fragment of the object from memory;causing transmission of a second fragment of the object without retaining the second fragment in memory for subsequent processing; andcausing transmission of a third fragment of the object without retaining the third fragment in memory for subsequent ...
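The latency-driven segmentation above can be sketched as follows: each fragment kept in a faster tier is sized to keep a consumer busy at a given consumption rate while the next, slower tier is still fetching. The `rate * latency` sizing rule is an assumption for illustration; the claims only say the size is "based on" the next tier's retrieval latency.

```python
# Sketch of latency-driven segmentation: the initialization fragment must
# cover the latency of retrieving the second fragment, and the second must
# cover the latency of retrieving the third (slower-tier) fragment.

def segment(obj, rate, latency_tier1, latency_tier2):
    first_size = int(rate * latency_tier1)    # hides tier-1 fetch latency
    second_size = int(rate * latency_tier2)   # hides tier-2 fetch latency
    first = obj[:first_size]
    second = obj[first_size:first_size + second_size]
    third = obj[first_size + second_size:]
    return first, second, third

obj = bytes(1000)
# e.g. consumer drains 100 bytes/s; tier 2 takes 1 s, tier 3 takes 3 s
f1, f2, f3 = segment(obj, rate=100, latency_tier1=1.0, latency_tier2=3.0)
```

Only `f1` would be retained in memory; `f2` and `f3` are streamed through without being cached, matching the "without retaining ... for subsequent processing" language.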

20-03-2014 publication date

STORAGE APPARATUS AND METHOD FOR CONTROLLING INTERNAL PROCESS

Number: US20140082276A1
Assignee: FUJITSU LIMITED

According to an aspect of the present invention, provided is a storage apparatus including a plurality of solid state drives (SSDs) and a processor. The SSDs store data in a redundant manner. The processor controls a reading process of reading data from an SSD and a writing process of writing data into an SSD. The processor controls an internal process, which is performed during the writing process, to be performed in each of the SSDs when any one of the SSDs satisfies a predetermined condition. 1. A storage apparatus comprising: a plurality of solid state drives (SSDs) to store data in a redundant manner; and a processor to control a reading process of reading data from an SSD and a writing process of writing data into an SSD, and control an internal process, which is performed during the writing process, to be performed in each of the SSDs when any one of the SSDs satisfies a predetermined condition. 3. The storage apparatus according to claim 1, wherein the predetermined condition is that an amount of data which has been written into any one of the SSDs reaches or exceeds a start amount, the start amount being an estimated amount of data to be written by a time when an internal process is started. 4. The storage apparatus according to claim 1, wherein the predetermined condition is that a waiting time in any one of the SSDs exceeds a threshold, the waiting time being a time length from a time at which a write command is issued to an SSD to a time at which a response is made, the threshold being determined based on a waiting time in a state where no internal process is performed. 5. The storage apparatus according to claim 1, wherein the plurality of SSDs are arranged in a Redundant Array of Inexpensive Disks (RAID) configuration. 6. A method for controlling an internal process of a plurality of solid state drives (SSDs) configured to store data in a redundant manner, the method comprising: controlling, by a storage apparatus, a reading process of ...
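The coordination rule can be sketched compactly: when any one SSD in the redundant group satisfies the trigger condition, the internal process (e.g. garbage collection) runs on every SSD, so the drives stall together rather than one after another. The class and the byte-count trigger are invented stand-ins for the patent's "start amount" condition.

```python
# Sketch of the controller rule: trigger the internal process on ALL SSDs
# in the redundant group as soon as ANY one of them meets the condition.

class Ssd:
    def __init__(self):
        self.bytes_written = 0
        self.gc_runs = 0

    def write(self, n):
        self.bytes_written += n

    def run_internal_process(self):       # e.g. garbage collection
        self.gc_runs += 1
        self.bytes_written = 0

def write_striped(ssds, n, start_amount):
    for ssd in ssds:                      # redundant write to every drive
        ssd.write(n)
    if any(s.bytes_written >= start_amount for s in ssds):
        for s in ssds:                    # condition on ANY -> run on ALL
            s.run_internal_process()

group = [Ssd(), Ssd()]
write_striped(group, 50, start_amount=100)   # 50 < 100: no internal process
write_striped(group, 60, start_amount=100)   # 110 >= 100: runs everywhere
```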

20-03-2014 publication date

EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS

Number: US20140082277A1

For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. A total number of I/O operations to be awoken at each of an iterated instance of the waking is limited. 1. A method for cache management by a processor device in a computing storage environment , the method comprising:for a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, initiating a process, separate from a process responsible for the data assembly into the complete data tracks, for waking a predetermined number of the waiting I/O operations, wherein a total number of I/O operations to be awoken at each of an iterated instance of the waking is limited.2. The method of claim 1 , further including performing the waking process for a first iteration subsequent to the data assembly process building at least one complete data track.3. The method of claim 2 , further including claim 2 , pursuant to the waking process claim 2 , removing claim 2 , by a first I/O waiter claim 2 , the at least one complete data track off of a free list.4. The method of claim 3 , further including claim 3 , pursuant to the waking process claim 3 , if additional complete data tracks are available on the free list claim 3 , waking at least a second I/O waiter to remove the additional complete data tracks off the free list.5. The method of claim 4 , further including iterating through at least one additional waking process corresponding to a predetermined wake up depth.6. The method of claim 1 , further including setting the predetermined number of waiting I/O operations to be awoken according to the waking process. This application is a Continuation of U.S. patent application Ser. No. 13/616,902, filed on Sep. 14, 2012.The present invention relates in ...
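The bounded wake-up can be sketched as a queue operation: a process separate from track assembly wakes at most a fixed number of waiting I/Os per iteration, each taking one completed track off the free list. The function and names are illustrative, not the patent's implementation.

```python
# Sketch of the bounded wake-up: wake at most `wake_depth` waiting I/O
# operations per iteration, one per complete track available on the free
# list, leaving the rest asleep for the next iteration.

from collections import deque

def wake_waiters(waiters, free_list, wake_depth):
    woken = []
    while waiters and free_list and len(woken) < wake_depth:
        waiter = waiters.popleft()
        track = free_list.popleft()   # the waiter removes a track off the free list
        woken.append((waiter, track))
    return woken

waiters = deque(["io1", "io2", "io3", "io4"])
free_list = deque(["trackA", "trackB", "trackC"])
woken = wake_waiters(waiters, free_list, wake_depth=2)  # limit per iteration
```

Even though three tracks are free, only two waiters wake this iteration; the limit caps the burst of work done per pass.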

20-03-2014 publication date

CACHE MEMORY PREFETCHING

Number: US20140082287A1

According to exemplary embodiments, a computer program product, system, and method for prefetching in memory include determining a missed access request for a first line in a first cache level and accessing an entry in a prefetch table, wherein the entry corresponds to a memory block, wherein the entry includes segments of the memory block. Further, the embodiment includes determining a demand segment of the segments in the entry, the demand segment corresponding to a segment of the memory block that includes the first line, reading a first field in the demand segment to determine if a second line in the demand segment is spatially related with respect to accesses of the demand segment and reading a second field in the demand segment to determine if a second segment in the entry is temporally related to the demand segment. 1. A computer program product for prefetching in memory , the computer program product comprising:a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:determining a missed access request for a first line in a first cache level;accessing an entry in a prefetch table, the entry corresponding to a memory block and comprising segments of the memory block;determining a demand segment of the segments in the entry, the demand segment corresponding to a segment of the memory block that includes the first line;reading, by a cache controller, a first field in the demand segment to determine if a second line in the demand segment is spatially related with respect to accesses of the demand segment; andreading, by the cache controller, a second field in the demand segment to determine if a second segment in the entry is temporally related to the demand segment.2. 
The computer program product of claim 1 , comprising reading a first field in the second segment to determine if a third line in the second segment is spatially related with respect to accesses of ...
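The lookup described above can be sketched as a dictionary walk: on a miss, read the demand segment's entry and emit prefetch candidates from its spatial field (a related line in the same segment) and its temporal field (a related segment in the same block). The field names and table shape are assumptions for illustration.

```python
# Sketch of the prefetch-table lookup: each memory block's entry holds
# per-segment fields -- a spatial field (another line in the demand
# segment was accessed) and a temporal field (another segment tends to
# follow the demand segment).

def prefetch_candidates(table, block, segment):
    entry = table.get(block)
    if entry is None:
        return []
    demand = entry[segment]                        # the demand segment
    out = []
    if demand.get("spatial_line") is not None:     # spatially related line
        out.append(("line", demand["spatial_line"]))
    if demand.get("temporal_segment") is not None: # temporally related segment
        out.append(("segment", demand["temporal_segment"]))
    return out

table = {0x40: {0: {"spatial_line": 3, "temporal_segment": 1},
                1: {"spatial_line": None, "temporal_segment": None}}}
cands = prefetch_candidates(table, 0x40, 0)   # miss fell in segment 0
```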

27-03-2014 publication date

Caching Based on Spatial Distribution of Accesses to Data Storage Devices

Number: US20140089597A1
Author: Pruthi Arvind
Assignee: Marvell International Ltd.

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for quantifying a spatial distribution of accesses to storage systems and for determining spatial locality of references to storage addresses in the storage systems, are described. In one aspect, a method includes determining a measure of spatial distribution of accesses to a data storage system based on multiple distinct groups of accesses to the data storage system, and adjusting a caching policy used for the data storage system based on the determined measure of spatial distribution. 1-20. (canceled) 21. A method performed by data processing apparatus, the method comprising: identifying distinct groups of accesses to a data storage system, wherein the identifying comprises skipping at least some accesses to the data storage system based on a random number; determining a measure of spatial distribution of accesses to the data storage system based on the distinct groups of accesses to the data storage system; and adjusting a caching policy used for the data storage system based on the determined measure of spatial distribution. 22. The method of claim 21, wherein the random number comprises a number of accesses to the data storage system that are skipped. 23. The method of claim 21, wherein the random number comprises a random time interval between accesses to the data storage system. 24. The method of claim 21, wherein determining the measure of spatial distribution comprises combining a measure determined based on a currently identified group of the distinct groups with a previously determined measure. 25. The method of claim 24, wherein said combining comprises averaging a weighted value of the measure determined based on the currently identified group and a weighted value of the previously determined measure. 26.
The method of claim 21 , wherein identifying the distinct groups of accesses to the data storage system comprises identifying two distinct groups of accesses ...
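The sampling scheme in the claims can be sketched as follows: group sampled accesses, measure how widely each group's addresses spread, skip a random number of accesses between groups, and fold each new measure into a running weighted average. The max-minus-min spread metric and the weights are assumptions for illustration; the claims leave the measure itself open.

```python
# Sketch of the claimed sampling: distinct access groups separated by a
# random number of skipped accesses, with each group's spread blended
# into a weighted running measure (claims 24-25 style combining).

import random

def spatial_spread(group):
    return max(group) - min(group)           # crude spatial-distribution measure

def measure(accesses, group_size, weight, seed=0):
    rng = random.Random(seed)
    avg, i = 0.0, 0
    while i + group_size <= len(accesses):
        group = accesses[i:i + group_size]
        avg = weight * spatial_spread(group) + (1 - weight) * avg
        i += group_size + rng.randint(0, 4)  # skip a random number of accesses
    return avg

trace = [10, 12, 11, 500, 504, 502, 20, 21, 19, 23]
m = measure(trace, group_size=3, weight=0.5)
```

A small resulting measure suggests tightly clustered (spatially local) accesses, which would justify a more aggressive read-ahead caching policy; a large measure suggests scattered accesses.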

10-04-2014 publication date

CACHE MANAGEMENT

Number: US20140101389A1
Assignee: FUSION-IO

A system includes a data store and a memory cache subsystem. A method for pre-fetching data from the data store for the cache includes determining a performance characteristic of a data store. The method also includes identifying a pre-fetch policy configured to utilize the determined performance characteristic of the data store. The method also includes pre-fetching data stored in the data store by copying data from the data store to the cache according to the pre-fetch policy identified to utilize the determined performance characteristic of the data store. 1. A method for pre-fetching data for a cache , the method comprising:determining a performance characteristic of a data store;identifying a pre-fetch policy configured to utilize the determined performance characteristic of the data store; andpre-fetching data stored in the data store by copying data from the data store to the cache according to the pre-fetch policy identified to utilize the determined performance characteristic of the data store.2. The method of claim 1 , further comprising:receiving a read operation to read data from the data store, wherein the read operation specifies at least one block of data;reading the block of data and storing the block in the cache; andpre-fetching at least one additional block of data to store in the cache.3. The method of claim 1 , further comprising:receiving a pre-fetch request from an external application, wherein the pre-fetch request identifies a sequence of blocks on the data store; andpre-fetching data from the sequence of blocks identified in the pre-fetch request, and storing the pre-fetched data in the cache.4. The method of claim 3 , further comprising designating the sequence of blocks identified in the pre-fetch request as high priority for populating the cache from an initial state following startup of the cache.5. The method of claim 3 , further comprising designating the sequence of blocks identified in the pre-fetch request as low priority for ...
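The policy-selection step can be sketched minimally: measured performance characteristics of the data store decide how aggressively to read ahead. The thresholds, policy names, and `prefetch` helper are invented for illustration; the method only requires that the chosen policy "utilize" the determined characteristic.

```python
# Sketch of pre-fetch policy selection driven by a data store's measured
# performance characteristics (sequential throughput vs. random IOPS).

def choose_prefetch_policy(seq_mbps, rand_iops):
    # If sequential reads are comparatively cheap (disk-like device),
    # read ahead in large windows; otherwise fetch on demand.
    if seq_mbps >= 10 * rand_iops / 1000:
        return {"policy": "readahead", "window_blocks": 64}
    return {"policy": "on-demand", "window_blocks": 1}

def prefetch(store, start, policy):
    end = min(len(store), start + policy["window_blocks"])
    return {i: store[i] for i in range(start, end)}  # copy blocks into the cache

store = list(range(100))
hdd_policy = choose_prefetch_policy(seq_mbps=150, rand_iops=200)
cache = prefetch(store, 10, hdd_policy)
```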

05-01-2017 publication date

CACHE MANAGEMENT METHOD FOR OPTIMIZING READ PERFORMANCE OF DISTRIBUTED FILE SYSTEM

Number: US20170004086A1
Assignee:

A cache management method for optimizing read performance in a distributed file system is provided. The cache management method includes: acquiring metadata of a file system; generating a list regarding data blocks based on the metadata; and pre-loading data blocks into a cache with reference to the list. Accordingly, read performance in analyzing big data in a Hadoop distributed file system environment can be optimized in comparison to a related-art method. 1. A cache management method comprising:acquiring metadata of a file system;generating a list regarding data blocks based on the metadata; andpre-loading data blocks into a cache with reference to the list.2. The cache management method of claim 1 , wherein the pre-loading comprises pre-loading data blocks requested by a client into the cache.3. The cache management method of claim 2 , wherein the pre-loading comprises pre-loading other data blocks into the cache while a data block is being processed by the client.4. The cache management method of claim 1 , wherein the pre-loading comprises pre-loading claim 1 , into the cache claim 1 , data blocks which are requested by the client claim 1 , and data blocks which are referred to with the data blocks more than a reference number of times.5. The cache management method of claim 1 , wherein the file system is a Hadoop distributed file system claim 1 , andwherein the cache is implemented by using an SSD.6. A server comprising:a cache; anda processor configured to acquire metadata of a file system, generate a list regarding data blocks based on the metadata, and order to pre-load data blocks into the cache with reference to the list. The present application claims the benefit under 35 U.S.C. §119(a) to a Korean patent application filed in the Korean Intellectual Property Office on Jun. 30, 2015, and assigned Serial No. 10-2015-0092735, the entire disclosure of which is hereby incorporated by reference.The present invention relates generally to a cache management ...
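The three steps of the method -- acquire metadata, derive a block list, pre-load the blocks into the cache -- can be sketched against an invented in-memory stand-in for the distributed file system; all names here are illustrative.

```python
# Sketch of the claimed method: metadata -> block list -> cache pre-load,
# so the blocks are already warm when the client starts reading.

def build_block_list(metadata, path):
    return metadata[path]["blocks"]           # list derived from file metadata

def preload(cache, datanode, block_list):
    for block_id in block_list:
        cache[block_id] = datanode[block_id]  # warm the cache before client reads

metadata = {"/logs/a.txt": {"blocks": ["b1", "b2"]}}
datanode = {"b1": b"part1", "b2": b"part2"}
cache = {}
preload(cache, datanode, build_block_list(metadata, "/logs/a.txt"))
```

In the claimed refinement, `preload` would run for the remaining blocks while the client is still processing the first one, overlapping transfer with computation.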

05-01-2017 publication date

ADAPTIVE CACHE MANAGEMENT METHOD ACCORDING TO ACCESS CHARACTERISTICS OF USER APPLICATION IN DISTRIBUTED ENVIRONMENT

Number: US20170004087A1
Assignee:

An adaptive cache management method according to access characteristic of a user application in a distributed environment is provided. The adaptive cache management method includes: determining an access pattern of a user application; and determining a cache write policy based on the access pattern. Accordingly, a delay in speed which may occur in an application can be minimized by efficiently using resources established in a distributed environment and using an adaptive policy. 1. An adaptive cache management method comprising:determining an access pattern of a user application; anddetermining a cache write policy based on the access pattern.2. The adaptive cache management method of claim 1 , wherein the determining the cache write policy comprises claim 1 , when the access pattern indicates that recently referred data is referred to again claim 1 , determining a cache write policy of storing data recorded on a cache in a storage medium afterward.3. The adaptive cache management method of claim 1 , wherein the determining the cache write policy comprises claim 1 , when the access pattern indicates that referred data is referred to again after a predetermined interval claim 1 , determining a cache write policy of immediately storing data recorded on a cache in a storage medium.4. The adaptive cache management method of claim 1 , wherein the determining the cache write policy comprises claim 1 , when the access pattern indicates that referred data is not referred to again claim 1 , determining a cache write policy of immediately storing data in a storage medium without recording on a cache.5. The adaptive cache management method of claim 1 , further comprising:selecting data which is most likely to be referred to based on the access pattern; andloading the selected data into a cache.6. A storage server comprising:a cache; anda processor configured to determine an access pattern of a user application and determine a cache write policy based on the access pattern. 
The ...
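The three pattern-to-policy cases in the claims map onto the classic write-back, write-through, and cache-bypass policies; this can be sketched as a simple dispatch. The pattern labels are invented for illustration.

```python
# Sketch of the claimed mapping from observed access pattern to cache
# write policy (claims 2-4 correspond to the three branches below).

def choose_write_policy(access_pattern):
    if access_pattern == "recent-reuse":      # recently referred data reused soon
        return "write-back"                   # store to the medium afterward
    if access_pattern == "interval-reuse":    # reused after a predetermined interval
        return "write-through"                # store to the medium immediately
    return "bypass"                           # never reused: skip the cache entirely

policies = [choose_write_policy(p)
            for p in ("recent-reuse", "interval-reuse", "no-reuse")]
```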

05-01-2017 publication date

System and Method for Cache Monitoring in Storage Systems

Number: US20170004093A1
Assignee:

A system and method of cache monitoring in storage systems includes storing storage blocks in a cache memory. Each of the storage blocks is associated with status indicators. As requests are received at the cache memory, the requests are processed and the status indicators associated with the storage blocks are updated in response to the processing of the requests. One or more storage blocks are selected for eviction when a storage block limit is reached. As ones of the selected one or more storage blocks are evicted from the cache memory, the block counters are updated based on the status indicators associated with the evicted storage blocks. Each of the block counters is associated with a corresponding combination of the status indicators. Caching statistics are periodically updated based on the block counters. 1. A method comprising: storing, by a cache controller, storage blocks in a cache memory, each of the storage blocks being associated with its own respective status indicators and a respective tenant identifier; as requests are received at the cache controller: processing the requests; and updating the respective status indicators associated with the storage blocks in response to the processing of the requests; selecting one of the storage blocks for eviction when a storage block limit is reached; as the selected storage block is evicted from the cache memory, determining an eviction state of the selected storage block based on values of the status indicators associated with the selected storage block and updating a selected one of a plurality of block counters, the selected one of the block counters corresponding to the eviction state and the respective tenant identifier associated with the selected storage block; and periodically updating caching statistics based on the block counters. 2.
The method of claim 1, wherein: the respective status indicators include a respective written status indicator; and the method further comprises updating each respective ...
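Counting evictions per status-indicator combination can be sketched directly: each evicted block contributes to a counter keyed by its tenant and indicator values, and those counters feed the periodic statistics. The `(tenant, written, read)` key scheme is an assumption for illustration.

```python
# Sketch of eviction-state accounting: one counter per combination of
# tenant identifier and status-indicator values, bumped as blocks are
# evicted from the cache.

from collections import Counter

def evict(cache, counters, block_id):
    block = cache.pop(block_id)
    state = (block["tenant"], block["written"], block["read"])
    counters[state] += 1              # one counter per eviction state
    return block

cache = {1: {"tenant": "t1", "written": True,  "read": False},
         2: {"tenant": "t1", "written": True,  "read": False},
         3: {"tenant": "t2", "written": False, "read": True}}
counters = Counter()
for b in (1, 2, 3):
    evict(cache, counters, b)
```

Periodically reading `counters` (e.g. "blocks written but never read" per tenant) yields the caching statistics without touching the hot request path.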

07-01-2016 publication date

NVRAM CACHING AND LOGGING IN A STORAGE SYSTEM

Number: US20160004637A1
Author: Kimmel Jeffrey S.
Assignee:

In one embodiment, a node coupled to solid state drives (SSDs) of a plurality of storage arrays executes a storage input/output (I/O) stack having a plurality of layers. The node includes a non-volatile random access memory (NVRAM). A first portion of the NVRAM is configured as a write-back cache to store write data associated with a write request and a second portion of the NVRAM is configured as one or more non-volatile logs (NVLogs) to record metadata associated with the write request. The write data is passed from the write-back cache over a first path of the storage I/O stack for storage on a first storage array and the metadata is passed from the one or more NVLogs over a second path of the storage I/O stack for storage on a second storage array, wherein the first path is different from the second path. 1. A system comprising:a central processing unit (CPU) of a node of a cluster coupled to solid state drives (SSDs) of a plurality of storage arrays;a memory coupled to the CPU and configured to store a storage input/output (I/O) stack having a plurality of layers executable by the CPU; anda non-volatile random access memory (NVRAM) coupled to the CPU, a first portion of the NVRAM configured as a write-back cache to store write data associated with a write request and a second portion of the NVRAM configured as one or more non-volatile logs (NVLogs) to record metadata associated with the write request, the write data passed from the write-back cache over a first path of the storage I/O stack for storage on a first storage array and the metadata passed from the one or more NVLogs over a second path of the storage I/O stack for storage on a second storage array, wherein the first path is different from the second path.2. The system of wherein the write data is preserved in the write-back cache until successfully stored on the first storage array and the metadata is preserved in the one or more NVLogs until successfully stored on the second storage array.3. The ...
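The NVRAM split described above can be sketched as two partitions with two flush paths: write data lands in a write-back cache portion, metadata in an NVLog portion, and each drains to a different storage array. All names are illustrative stand-ins, not the storage I/O stack's actual interfaces.

```python
# Sketch of the split-NVRAM idea: one portion as a write-back cache for
# data, one as a non-volatile log for metadata, flushed down different
# paths to different storage arrays.

class Nvram:
    def __init__(self):
        self.write_back_cache = {}   # first NVRAM portion: write data
        self.nvlog = []              # second NVRAM portion: metadata records

def handle_write(nvram, volume_offset, data):
    nvram.write_back_cache[volume_offset] = data
    nvram.nvlog.append({"op": "write", "offset": volume_offset, "len": len(data)})

def flush(nvram, data_array, metadata_array):
    data_array.update(nvram.write_back_cache)   # first path: data array
    metadata_array.extend(nvram.nvlog)          # second path: metadata array
    nvram.write_back_cache.clear()              # safe only after both succeed
    nvram.nvlog.clear()

nv, data_array, md_array = Nvram(), {}, []
handle_write(nv, 4096, b"abc")
flush(nv, data_array, md_array)
```

Clearing only after the flush succeeds mirrors the document's point that data and metadata are preserved in NVRAM until durably stored on their respective arrays.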

07-01-2021 publication date

PREFETCHING DATA BLOCKS FROM A PRIMARY STORAGE TO A SECONDARY STORAGE SYSTEM WHILE DATA IS BEING SYNCHRONIZED BETWEEN THE PRIMARY STORAGE AND SECONDARY STORAGE

Number: US20210004160A1
Assignee:

Provided are a computer program product, system, and method for prefetching data blocks from a primary storage to a secondary storage system while data is being synchronized between the primary storage and secondary storage. A determination is made of data blocks to prefetch from the primary storage to the secondary controller not yet synchronized from the primary storage to the secondary storage in anticipation of future access requests for the data blocks to the secondary controller while data blocks are being synchronized between the primary storage and the secondary storage over the network. A prefetch command is sent to prefetch the determined data blocks to copy from the primary storage to the secondary controller to make available to future access requests received at the secondary controller for the determined data blocks. 1. A computer program product for managing data synchronized between a primary storage managed by a primary controller and a secondary storage managed by a secondary controller , wherein the primary controller and the secondary controller communicate over a network , the computer program product comprises a computer readable storage medium having program instructions embodied therewith , the program instructions executable by a processor to cause operations , the operations comprising:determining data blocks to prefetch from the primary storage to the secondary controller not yet synchronized from the primary storage to the secondary storage in anticipation of future access requests for the data blocks to the secondary controller while data blocks are being synchronized between the primary storage and the secondary storage over the network; andsending a prefetch command to prefetch the determined data blocks to copy from the primary storage to the secondary controller to make available to future access requests received at the secondary controller for the determined data blocks.2. The computer program product of claim 1 , wherein the ...
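The prefetch decision can be sketched as follows: while blocks synchronize in order, the system anticipates future reads of blocks the background sync has not reached yet and copies just those ahead of the cursor. The sync-cursor heuristic and names are invented for illustration.

```python
# Sketch of prefetch-during-sync: pick anticipated read targets that lie
# beyond the synchronization cursor (i.e. not yet on the secondary) and
# copy them to the secondary ahead of the background sync.

def blocks_to_prefetch(anticipated_reads, sync_cursor):
    # blocks at or before the cursor are already synchronized
    return sorted(b for b in anticipated_reads if b > sync_cursor)

def prefetch(primary, secondary_cache, blocks):
    for b in blocks:
        secondary_cache[b] = primary[b]   # copy ahead of the background sync

primary = {i: f"data{i}" for i in range(10)}
secondary_cache = {}
wanted = blocks_to_prefetch(anticipated_reads={2, 7, 9}, sync_cursor=4)
prefetch(primary, secondary_cache, wanted)
```

Block 2 is skipped because the sync already copied it; blocks 7 and 9 arrive early so a read at the secondary need not wait for the sync to reach them.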

02-01-2020 publication date

CACHE EFFICIENT READING OF RESULT VALUES IN A COLUMN STORE DATABASE

Number: US20200004517A1
Author: Legler Thomas
Assignee:

A system for cache efficient reading of column values in a database is provided. In some aspects, the system performs operations including pre-fetching, asynchronously and in response to a request for data in a column store database system, a plurality of first values associated with the requested data. The request may identify a row of the column store database system associated with the requested data. The plurality of first values may be located in the row. The operations may further include storing the plurality of first values in a cache memory. The operations may further include pre-fetching, asynchronously and based on the plurality of first values, a plurality of second values. The operations may further include storing the plurality of second values in the cache memory. The operations may further include reading, in response to the storing the plurality of second values, the requested data from the cache memory. 1. A system, comprising: at least one data processor; and at least one memory storing instructions which, when executed by the at least one data processor, cause operations comprising: pre-fetching, in response to a request for data in a column store database system, a plurality of first values associated with the requested data, the request identifying a row of the column store database system associated with the requested data, the plurality of first values located in the row; storing the plurality of first values in a cache memory; pre-fetching, based on the plurality of first values, a plurality of second values; storing the plurality of second values in the cache memory; and reading, in response to the storing the plurality of second values, the requested data from the cache memory. 2. The system of claim 1, the operations further comprising: reading the plurality of first values, wherein the plurality of second values are associated with the plurality of first values in a dictionary. 3.
The system of claim 1 , the operations further ...
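The two-stage pre-fetch for a dictionary-encoded column can be sketched as follows: first fetch the row's value IDs (the first values), then fetch the dictionary entries those IDs point to (the second values), so the final read is served from the cache. The data structures are invented stand-ins for the column store.

```python
# Sketch of the two-stage pre-fetch: value IDs first, then the dictionary
# values they reference, with both stages landing in the cache before the
# final read.

def read_row(row, columns, dictionaries, cache):
    value_ids = [col[row] for col in columns]          # first values (IDs in the row)
    cache["ids", row] = value_ids                      # stage 1: cache the IDs
    values = [d[v] for d, v in zip(dictionaries, value_ids)]
    cache["values", row] = values                      # stage 2: cache decoded values
    return cache["values", row]                        # read served from the cache

columns = [[0, 1], [1, 0]]                   # per-column value IDs, indexed by row
dictionaries = [["red", "blue"], ["s", "m"]] # per-column dictionaries
cache = {}
row_values = read_row(1, columns, dictionaries, cache)
```

Done asynchronously, as the abstract describes, the dictionary fetch in stage 2 overlaps with other work instead of stalling on a cache miss per value.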

02-01-2020 publication date

System and method for prediction of multiple read commands directed to non-sequential data

Number: US20200004540A1
Assignee: Western Digital Technologies Inc

Systems and methods for predicting read commands and pre-fetching data when a memory device is receiving random read commands to non-sequentially addressed data locations are disclosed. A limited length search sequence of prior read commands is generated and that search sequence is then converted into an index value in a predetermined set of index values. A history pattern match table having entries indexed to that predetermined set of index values contains a plurality of read commands that have previously followed the search sequence represented by the index value. The index value is obtained via application of a many-to-one algorithm to the search sequence. The index value obtained from the search sequence may be used to find, and pre-fetch data for, a plurality of next read commands in the table that previously followed a search sequence having that index value.
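The predictor can be sketched end-to-end: hash a short search sequence of recent read addresses (a many-to-one mapping) into a fixed-size history pattern match table, record which read followed that sequence, and later look the sequence up to pre-fetch the predicted next reads. The table size, depth, and hash choice are illustrative.

```python
# Sketch of the history-pattern-match predictor: a limited-length search
# sequence of prior reads is reduced (many-to-one) to an index into a
# fixed-size table whose entries list the reads that previously followed.

TABLE_SIZE = 1024

def index_of(search_sequence):
    return hash(tuple(search_sequence)) % TABLE_SIZE  # many-to-one mapping

def record(table, history, next_read, depth=3):
    if len(history) >= depth:
        table.setdefault(index_of(history[-depth:]), []).append(next_read)

def predict(table, history, depth=3):
    return table.get(index_of(history[-depth:]), [])

table, history = {}, []
for addr in (5, 9, 2, 7, 5, 9, 2):        # the pattern 5, 9, 2 was followed by 7
    record(table, history, addr)
    history.append(addr)
predicted = predict(table, history)       # history again ends ... 5, 9, 2
```

Because the mapping is many-to-one, unrelated sequences can collide on an index; a wrong prediction costs only a wasted pre-fetch, so collisions are tolerated in exchange for a bounded table.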

04-01-2018 publication date

PREFETCH BANDWIDTH THROTTLING BY DYNAMICALLY ADJUSTING MISS BUFFER PREFETCH-DROPPING THRESHOLDS

Number: US20180004670A1
Author: Chou Yuan C., SUDHIR Suraj
Assignee: ORACLE INTERNATIONAL CORPORATION

The disclosed embodiments relate to a method for controlling prefetching in a processor to prevent over-saturation of interfaces in the memory hierarchy of the processor. While the processor is executing, the method determines a bandwidth utilization of an interface from a cache in the processor to a lower level of the memory hierarchy. Next, the method selectively adjusts a prefetch-dropping high-water mark for occupancy of a miss buffer associated with the cache based on the determined bandwidth utilization, wherein the miss buffer stores entries for outstanding demand requests and prefetches that missed in the cache and are waiting for corresponding data to be returned from the lower level of the memory hierarchy, and wherein when the occupancy of the miss buffer exceeds the prefetch-dropping high-water mark, subsequent prefetches that cause a cache miss are dropped. 1. A method for controlling prefetching to prevent over-saturation of interfaces in a memory hierarchy of a processor , comprising:while the processor is executing, determining a bandwidth utilization of an interface from a cache in the processor to a lower level of the memory hierarchy; andselectively adjusting a prefetch-dropping high-water mark for occupancy of a miss buffer associated with the cache based on the determined bandwidth utilization;wherein the miss buffer stores entries for outstanding demand requests and prefetches that missed in the cache and are waiting for corresponding data to be returned from the lower level of the memory hierarchy; andwherein when an occupancy of the miss buffer exceeds the prefetch-dropping high-water mark, subsequent prefetches that cause a cache miss are dropped.2. The method of claim 1 , wherein selectively adjusting the prefetch-dropping high-water mark based on the determined bandwidth utilization comprises:selecting a lower prefetch-dropping high-water mark when the determined bandwidth utilization indicates that the interface from the cache to the ...
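The throttle can be sketched in a few lines: the measured bandwidth utilization of the cache-to-memory interface moves the miss buffer's prefetch-dropping high-water mark, and a prefetch that misses while occupancy is above the mark is dropped. The utilization thresholds and mark fractions are invented for illustration.

```python
# Sketch of prefetch-dropping: higher interface utilization -> lower
# high-water mark -> prefetch misses are dropped earlier, leaving miss
# buffer entries (and bandwidth) for demand requests.

def high_water_mark(utilization, buffer_size):
    if utilization > 0.9:                 # interface near saturation
        return buffer_size // 4           # drop prefetches early
    if utilization > 0.6:
        return buffer_size // 2
    return buffer_size                    # plenty of headroom: never drop

def admit_prefetch(miss_buffer_occupancy, utilization, buffer_size=64):
    return miss_buffer_occupancy < high_water_mark(utilization, buffer_size)

busy = admit_prefetch(miss_buffer_occupancy=20, utilization=0.95)  # dropped
idle = admit_prefetch(miss_buffer_occupancy=20, utilization=0.3)   # admitted
```

Demand misses are never subject to the mark; only speculative prefetches are sacrificed, which is why the scheme prevents over-saturation without hurting correctness.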

Publication date: 04-01-2018

ACCESSING PHYSICAL MEMORY FROM A CPU OR PROCESSING ELEMENT IN A HIGH PERFORMANCE MANNER

Number: US20180004671A1
Assignee:

A method and apparatus are described herein for accessing a physical memory location referenced by a physical address with a processor. The processor fetches/receives instructions with references to virtual memory addresses and/or references to physical addresses. Translation logic translates the virtual memory addresses to physical addresses and provides the physical addresses to a common interface. Physical addressing logic decodes references to physical addresses and provides the physical addresses to a common interface based on a memory type stored by the physical addressing logic. 1. A method comprising: receiving a first instruction with a microprocessor to read a first element from a first virtual memory address in a virtual memory, wherein the first instruction is generated by a first virtual machine; translating the first virtual memory address to a first physical address; fetching the first element from a first location referenced by the first physical address; receiving a second instruction with the microprocessor to store a second element at a second physical address, wherein the second instruction is generated by a second virtual machine; and storing the second element in a second location referenced by the second physical address without disabling paging of the virtual memory. 2. The method of claim 1, wherein the first element is a data operand. 3. The method of claim 1, wherein the second element is the first element. 4. The method of claim 2, further comprising: operating on the first element with the microprocessor to obtain a first result, wherein the second element is based on the first result. 5. The method of claim 2, wherein translating the first virtual memory address to a first physical address is done with a translation look-aside buffer (TLB). 6. The method of claim 5, wherein the first and second locations are in a system memory. 7. The method of claim 1, wherein the first and second virtual machines are the same virtual machine. 8.
The ...
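The two instruction classes in claim 1 can be contrasted with a toy model: a load goes through TLB translation, while the physically addressed store bypasses translation entirely. This is a minimal sketch under simplifying assumptions (a dictionary-backed TLB that never misses, a flat dictionary as physical memory); the page size and mappings are illustrative.

```python
# Minimal sketch: virtually addressed load via a TLB vs. a store that
# addresses physical memory directly. The TLB contents and 4 KiB page
# size are illustrative assumptions.

PAGE_SIZE = 4096
tlb = {0x1000 // PAGE_SIZE: 0x8000 // PAGE_SIZE}   # virtual page -> physical frame
physical_memory = {}                               # physical address -> value

def translate(virtual_address: int) -> int:
    """Translate a virtual address via the TLB (no miss handling shown;
    a real processor would walk page tables on a TLB miss)."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    return tlb[page] * PAGE_SIZE + offset

def load(virtual_address: int):
    """First instruction class: read through translation."""
    return physical_memory.get(translate(virtual_address))

def store_physical(physical_address: int, value) -> None:
    """Second instruction class: store at a physical address with
    paging still enabled -- no translation step."""
    physical_memory[physical_address] = value
```

Storing at physical `0x8004` and then loading virtual `0x1004` returns the same value, since page `0x1` maps to frame `0x8`.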

Publication date: 04-01-2018

Cache unit and processor

Number: US20180004672A1
Author: Seiji Maeda
Assignee: Toshiba Memory Corp

According to an embodiment, a cache unit includes: a first memory configured to temporarily hold data and an address of the data, a second memory configured to temporarily hold an address of particular data set in advance, and a controller configured to, when an instruction to load the data is made for a first specified address, search for a storage destination of the first specified address, output the data of the first specified address if the storage destination is the first memory, and output the particular data if the storage destination is the second memory.
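The load path described here — search for the storage destination, return data from the first memory on a hit, or return the pre-set particular data when the address is registered in the second memory — can be sketched as follows. The class shape and the idea that the "particular data" is a single constant registered in advance are assumptions for illustration.

```python
# Sketch of the described cache unit: a first memory holds (address,
# data) pairs, a second memory holds only addresses of pre-set
# "particular data". Names and structure are illustrative.

class CacheUnit:
    def __init__(self, particular_data):
        self.first_memory = {}       # address -> cached data
        self.second_memory = set()   # addresses of the particular data
        self.particular_data = particular_data

    def register_particular(self, address):
        """Record in advance that this address holds the particular data."""
        self.second_memory.add(address)

    def fill(self, address, data):
        self.first_memory[address] = data

    def load(self, address):
        """Search for the storage destination of the specified address."""
        if address in self.first_memory:
            return self.first_memory[address]
        if address in self.second_memory:
            return self.particular_data   # served without a first-memory entry
        return None                       # miss: would go to lower memory
```

A plausible use is zero-filled pages: registering their addresses in the second memory serves them without occupying data storage in the first memory.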

Publication date: 02-01-2020

APPARATUS, METHOD, AND SYSTEM FOR ENHANCED DATA PREFETCHING BASED ON NON-UNIFORM MEMORY ACCESS (NUMA) CHARACTERISTICS

Number: US20200004684A1
Assignee:

Apparatus, method, and system for enhancing data prefetching based on non-uniform memory access (NUMA) characteristics are described herein. An apparatus embodiment includes a system memory, a cache, and a prefetcher. The system memory includes multiple memory regions, at least some of which are associated with different NUMA characteristics (access latency, bandwidth, etc.) than others. Each region is associated with its own set of prefetch parameters that are set in accordance with its respective NUMA characteristics. The prefetcher monitors data accesses to the cache and generates one or more prefetch requests to fetch data from the system memory to the cache based on the monitored data accesses and the set of prefetch parameters associated with the memory region from which data is to be fetched. The set of prefetcher parameters may include prefetch distance, training-to-stable threshold, and throttle threshold. 1. An apparatus comprising: a cache to store data received from a system memory, the system memory comprising a plurality of memory regions, each of the plurality of memory regions associated with a respective set of prefetch parameters, at least one of the plurality of memory regions having a prefetch parameter value that is different than a corresponding prefetch parameter value of another one of the plurality of memory regions; and a prefetcher to monitor data accesses to the cache and to generate one or more prefetch requests to fetch data from the system memory to the cache, wherein the one or more prefetch requests are generated based on the monitored data accesses and the set of prefetch parameters associated with the memory region from which data is to be fetched. 2. The apparatus of claim 1, wherein the plurality of memory regions includes at least a first memory region comprising a first memory type and a second memory region comprising a second memory type that is different than the first memory type. 3. The apparatus of claim 1, wherein the ...
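Per-region parameterization can be illustrated with a region table keyed by address range: a near (low-latency) region gets a deep prefetch distance, a far (remote-node) one a shallow distance. The region boundaries, parameter names, and values below are illustrative assumptions, not figures from the patent.

```python
# Sketch: each memory region carries its own prefetch parameters
# (distance, training-to-stable threshold, throttle threshold, as
# named in the abstract); the prefetcher looks them up by the region
# containing the miss address. All boundaries/values are made up.

REGIONS = [
    # (start, end, parameters)
    (0x0000_0000, 0x4000_0000, {"distance": 8, "train_to_stable": 2, "throttle": 4}),  # near DRAM
    (0x4000_0000, 0x8000_0000, {"distance": 2, "train_to_stable": 4, "throttle": 2}),  # remote node
]

def params_for(address: int) -> dict:
    for start, end, params in REGIONS:
        if start <= address < end:
            return params
    raise ValueError("address outside known regions")

def prefetch_candidates(miss_address: int, stride: int) -> list:
    """Issue up to the region's `distance` stride-ahead requests."""
    distance = params_for(miss_address)["distance"]
    return [miss_address + stride * i for i in range(1, distance + 1)]
```

The same stride stream thus yields eight prefetches in the near region but only two in the high-latency one, keeping the remote link from being flooded.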

Publication date: 02-01-2020

PROACTIVE DATA PREFETCH WITH APPLIED QUALITY OF SERVICE

Number: US20200004685A1
Assignee:

Examples described herein relate to prefetching content from a remote memory device to a memory tier local to a higher level cache or memory. An application or device can indicate a time availability for data to be available in a higher level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device in a data path from the remote memory device to the memory tier local to the higher level cache. Memory access bandwidth, egress bandwidth, and memory space in any intermediary network device can be allocated for prefetch of content. In some examples, proactive prefetch can occur for content expected to be prefetched but not requested to be prefetched. 1. A network interface comprising: a memory; an interface to a communications medium; and a prefetcher communicatively coupled to the interface and to receive a command to perform a prefetch of content from a remote memory with associated information, wherein the associated information comprises a time limit and the prefetcher is to determine whether a resource allocation is available to complete at least a portion of the prefetch within the time limit based on the command and associated information. 2. The network interface of claim 1, wherein the associated information includes one or more of: (1) base virtual address to be fetched from remote memory, (2) amount of content to be fetched from remote memory, (3) the remote memory storing a region to be fetched, (4) priority of prefetch, (5) indication if resources in an end-to-end path are to be reserved for a response, or (6) a length of time of validity of the prefetch and unit of time. 3.
The network interface of claim 1, wherein the prefetcher is to cause copying of content from the remote memory to one or more memory tiers including level 1 cache, level 2 cache, last level cache, local memory, persistent memory, or memory of ...
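The time-limit check in claim 1 amounts to a deadline-feasibility test: given the path's latency and bandwidth, can the requested bytes reach the local tier before the caller's deadline? A minimal sketch, with all bandwidth and latency figures as illustrative assumptions:

```python
# Sketch of the QoS admission check: estimate end-to-end transfer time
# over the data path and compare it to the caller's time limit.
# Units and figures are illustrative, not from the patent.

def can_complete_within(time_limit_us: float, size_bytes: int,
                        path_bandwidth_bytes_per_us: float,
                        path_latency_us: float) -> bool:
    """Return True if the prefetch fits inside the deadline."""
    transfer_us = path_latency_us + size_bytes / path_bandwidth_bytes_per_us
    return transfer_us <= time_limit_us
```

With a 1 GB/s path (1000 bytes/µs) and 10 µs latency, a 64 KB prefetch takes about 74 µs, so it is admitted against a 100 µs limit but rejected against a 50 µs one; on rejection the prefetcher could decline or reserve resources elsewhere.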

Publication date: 13-01-2022

LAST-LEVEL COLLECTIVE HARDWARE PREFETCHING

Number: US20220012178A1
Assignee:

A last-level collective hardware prefetcher (LLCHP) is described. The LLCHP is to detect a first off-chip memory access request by a first processor core of a plurality of processor cores. The LLCHP is further to determine, based on the first off-chip memory access request, that first data associated with the first off-chip memory access request is associated with second data of a second processor core of the plurality of processor cores. The LLCHP is further to prefetch the first data and the second data based on the determination. 1. A multi-core computer processor comprising: a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture; and a hardware prefetcher operatively coupled to the plurality of processor cores and to the cache, wherein the hardware prefetcher is to: detect a first off-chip memory access request by a first processor core of the plurality of processor cores; determine, based on the first off-chip memory access request, that first data associated with the first off-chip memory access request is associated with second data of a second processor core of the plurality of processor cores; and prefetch the first data and the second data based on the determination. 2. The multi-core computer processor of claim 1, further comprising a last-level cache operatively coupled to the plurality of processor cores and to the hardware prefetcher, wherein to prefetch the first data and the second data the hardware prefetcher is to store the first data and the second data in the last-level cache. 3. The multi-core computer processor of claim 1, wherein to determine that the first data associated with the first off-chip memory access request is associated with the second data of the second processor core of the plurality of processor cores, the hardware prefetcher is to: determine that a stride entry exists for the first off-chip memory access request; and determine that a group exists for the stride entry. 4.
The ...
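One plausible reading of the stride-entry/group structure in claim 3 is: keep a stride entry per core, group cores whose streams share a stride, and let one member's access trigger a prefetch ahead for every group member. The sketch below is illustrative (per-core tables, one-stride-ahead depth are assumptions), not the patented design.

```python
# Sketch of collective stride prefetching: cores sharing a stride form
# a group, and an access by any member prefetches one stride ahead for
# all members. Structures and depth are illustrative assumptions.

stride_entries = {}   # core_id -> (last_address, stride)
groups = {}           # stride -> set of core_ids

def observe(core_id: int, address: int) -> list:
    """Train on an off-chip access; return collective prefetch targets."""
    last, _ = stride_entries.get(core_id, (None, None))
    new_stride = None if last is None else address - last
    stride_entries[core_id] = (address, new_stride)
    if new_stride is None:
        return []          # no stride entry yet for this stream
    groups.setdefault(new_stride, set()).add(core_id)
    # Prefetch one stride ahead for every core in the same group.
    return [stride_entries[c][0] + new_stride for c in sorted(groups[new_stride])]
```

Once cores 0 and 1 both exhibit a 64-byte stride they land in the same group, and a single access by core 1 yields prefetch targets for both streams.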

Publication date: 13-01-2022

TECHNIQUES AND TECHNOLOGIES TO ADDRESS MALICIOUS SINGLE-STEPPING AND ZERO-STEPPING OF TRUSTED EXECUTION ENVIRONMENTS

Number: US20220012369A1
Assignee: Intel Corporation

In one embodiment, an apparatus comprises a processing circuitry to detect an occurrence of at least one of a single-stepping event or a zero-stepping event in an execution thread on an architecturally protected enclave and, in response to the occurrence, implement at least one mitigation process to inhibit further occurrences of the at least one of a single-stepping event or a zero-stepping event in the architecturally protected enclave. 1. An apparatus comprising a processing circuitry to: detect an occurrence of at least one of a single-stepping event or a zero-stepping event in an execution thread on an architecturally protected enclave; and in response to the occurrence, implement at least one mitigation process to inhibit further occurrences of the at least one of a single-stepping event or a zero-stepping event in the architecturally protected enclave. 2. The apparatus of claim 1, comprising circuitry to: implement a counter to monitor forward progress of the compute process which is to execute in the architecturally protected enclave; and generate an error signal when the counter indicates that the forward progress is less than a threshold. 3. The apparatus of claim 1, comprising circuitry to: monitor a frequency of fault events in the execution thread on the architecturally protected enclave; monitor a number of instructions that execute between occurrences of fault events in the execution thread on the architecturally protected enclave; and generate an error signal when a frequency of the fault events is greater than a threshold. 4. The apparatus of claim 1, comprising circuitry to: detect a page fault within a locked region of a computer-readable memory in the architecturally protected enclave; and in response to the page fault, generate an error signal. 5.
The apparatus of claim 1, comprising circuitry to: implement a counter to monitor a number of asynchronous enclave exit (AEX) events that occur in the architecturally protected enclave; and generate an ...
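The detection idea in claims 2–3 — forward progress per fault/exit event falling below a threshold signals single-stepping — can be modeled in software. The threshold and the averaging approach below are illustrative assumptions; the patent describes hardware counters, not this exact formula.

```python
# Sketch of single-stepping detection: a single-stepped enclave retires
# roughly one instruction per AEX/fault event, so very low forward
# progress per event raises an error signal. Threshold is an assumption.

MIN_INSTRUCTIONS_PER_EVENT = 2

def detect_single_stepping(instructions_between_events: list) -> bool:
    """Return True (raise the error signal) if average forward progress
    per fault/exit event is suspiciously low."""
    if not instructions_between_events:
        return False
    avg = sum(instructions_between_events) / len(instructions_between_events)
    return avg < MIN_INSTRUCTIONS_PER_EVENT
```

A trace of one instruction per event trips the detector, while normal execution with hundreds of instructions between events does not.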

Publication date: 03-01-2019

PROFILING ASYNCHRONOUS EVENTS RESULTING FROM THE EXECUTION OF SOFTWARE AT CODE REGION GRANULARITY

Number: US20190004916A1
Assignee:

A combination of hardware and software collects profile data for asynchronous events, at code region granularity. An exemplary embodiment is directed to collecting metrics for prefetching events, which are asynchronous in nature. Instructions that belong to a code region are identified using one of several alternative techniques, causing a profile bit to be set for the instruction, as a marker. Each line of a data block that is prefetched is similarly marked. Events corresponding to the profile data being collected and resulting from instructions within the code region are then identified. Each time that one of the different types of events is identified, a corresponding counter is incremented. Following execution of the instructions within the code region, the profile data accumulated in the counters are collected, and the counters are reset for use with a new code region. 1. A processor comprising: (a) a first logic to indicate whether an instruction that has been fetched by the processor is within a code region for which profile information will be collected; (b) a second logic to detect an asynchronous event related to the profile information being collected in response to performing the instruction that is within the code region and to produce a first signal in response thereto; (c) a third logic to cause a record to be generated for each asynchronous event in response to the first signal, wherein the record comprises the profile information; and (d) a fourth logic to store the profile information. 2. The processor of claim 1, wherein the first logic is to compare an address for each instruction fetched to a low address and a high address to determine if the address for the instruction is within a range bounded by the low address and high address, and if so, determines that the instruction is within the code region, but if not, determines that the instruction is not within the code region.
The processor of claim 1 , wherein ...
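The range check of claim 2 plus the per-event counters and the collect-and-reset step can be combined into a small model: events are counted only when the triggering instruction's address falls in `[low, high]`, and the counters are read out and cleared at region end. The bounds and event names are illustrative assumptions.

```python
# Sketch of code-region profiling: a low/high address range marks the
# region, events from in-region instructions bump counters, and the
# counters are collected and reset per region. Names are illustrative.

LOW, HIGH = 0x4000, 0x4FFF
counters = {"prefetch_issued": 0, "prefetch_used": 0}

def in_region(address: int) -> bool:
    """Claim 2's range check: low <= address <= high."""
    return LOW <= address <= HIGH

def record_event(instruction_address: int, event: str) -> None:
    if in_region(instruction_address):
        counters[event] += 1

def collect_and_reset() -> dict:
    """Read out the accumulated profile data and clear the counters."""
    snapshot = dict(counters)
    for k in counters:
        counters[k] = 0
    return snapshot
```

An event from address `0x4100` is counted; one from `0x9000` is ignored, matching the in-region/out-of-region split in the claim.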

Publication date: 03-01-2019

APPLICATION AND PROCESSOR GUIDED MEMORY PREFETCHING

Number: US20190004954A1
Assignee: Intel Corporation

Devices and systems having memory-side adaptive prefetch decision-making, including associated methods, are disclosed and described. Adaptive information can be provided to memory-side controller and prefetch components that allow such memory-side components to prefetch data in a manner that is adaptive with respect to a particular read memory request or to a thread performing read memory requests. 1. A device, comprising: a nonvolatile memory (NVM) configured as main memory; a memory-side (MS) cache communicatively coupled to the NVM and operable to store a cached subset of the NVM, the MS cache including volatile memory; and a MS controller communicatively coupled to the NVM and to the MS cache, the MS controller including circuitry configured to: retrieve read data from a memory address in the NVM to fill a read request; check for a correlation between the read request and other data in the NVM; retrieve prefetch data having the correlation with the read request; and store the prefetch data in the MS cache. 2. The device of claim 1, wherein the MS controller circuitry is further configured to discard any prefetch data retrieved with the read data if a correlation is not found. 3. The device of claim 1, wherein the MS controller circuitry further comprises a prefetch engine that, to retrieve the prefetch data having the correlation with the read request, is further configured to: determine a prefetch pattern from received adaptive information; identify the prefetch data from the prefetch pattern; and retrieve the prefetch data from the NVM according to the prefetch pattern. 4. The device of claim 3, wherein the adaptive information comprises a thread identification (TID) of a hardware thread sending the read request. 5. The device of claim 4, wherein the adaptive information further comprises a prefetch hint. 6.
The device of claim 5, wherein the prefetch hint includes a function that correlates the prefetch data with the read data, ...
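The memory-side flow — fill the demand read, then use adaptive information (here reduced to a thread ID mapped to a stride hint) to pull correlated lines into the MS cache, discarding the prefetch when no correlation is known — can be sketched as follows. The NVM layout, hint table, and depth are illustrative assumptions.

```python
# Sketch of memory-side adaptive prefetch: the controller serves the
# read, then prefetches stride-correlated lines into the MS cache
# based on the requesting thread's hint. All names are assumptions.

nvm = {addr: f"line@{addr:#x}" for addr in range(0, 0x1000, 64)}  # 64 B lines
ms_cache = {}
stride_hints = {7: 64}   # thread id -> stride hint (adaptive information)

def read(address: int, thread_id: int, depth: int = 2):
    """Fill the demand request, then prefetch correlated lines."""
    data = nvm[address]
    stride = stride_hints.get(thread_id)
    if stride is None:
        return data            # no correlation found: nothing is cached
    for i in range(1, depth + 1):
        candidate = address + stride * i
        if candidate in nvm:
            ms_cache[candidate] = nvm[candidate]
    return data
```

A read by thread 7 at `0x100` pulls `0x140` and `0x180` into the MS cache, while a thread with no hint caches nothing beyond its own read.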

Publication date: 03-01-2019

PROCESSORS, METHODS, AND SYSTEMS FOR A CONFIGURABLE SPATIAL ACCELERATOR WITH MEMORY SYSTEM PERFORMANCE, POWER REDUCTION, AND ATOMICS SUPPORT FEATURES

Number: US20190004955A1
Assignee:

Systems, methods, and apparatuses relating to a configurable spatial accelerator are described. In one embodiment, a processor includes a plurality of processing elements, and an interconnect network between the plurality of processing elements to receive an input of a dataflow graph comprising a plurality of nodes, wherein the dataflow graph is to be overlaid into the interconnect network and the plurality of processing elements with each node represented as a dataflow operator in the plurality of processing elements, and the plurality of processing elements is to perform an operation when an incoming operand set arrives at the plurality of processing elements. The processor also includes a streamer element to prefetch the incoming operand set from two or more levels of a memory system. 1. A processor comprising: a plurality of processing elements; an interconnect network between the plurality of processing elements to receive an input of a dataflow graph comprising a plurality of nodes, wherein the dataflow graph is to be overlaid into the interconnect network and the plurality of processing elements with each node represented as a dataflow operator in the plurality of processing elements, and the plurality of processing elements is to perform an operation when an incoming operand set arrives at the plurality of processing elements; and a streamer element to prefetch the incoming operand set from two or more levels of a memory system. 2. The processor of claim 1, wherein the streamer element is to prefetch based on a programmable memory access pattern. 3. The processor of claim 2, wherein the streamer element includes a plurality of tracking registers to fetch ahead of a demand stream. 4. The processor of claim 3, wherein the plurality of tracking registers includes an x-dimension register to fetch ahead in a first dimension of a multidimensional streaming fetch pattern. 5.
The processor of claim 4 , wherein the plurality of tracking registers includes a y-dimension ...
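One way to read the x- and y-dimension tracking registers of claims 4–5 is as nested counters over a programmable 2D access pattern: the x register walks a row ahead of demand, and the y register advances to the next row when the x extent is exhausted. The sketch below is an illustrative interpretation; all layout parameters are assumptions.

```python
# Sketch of a two-dimensional streaming fetch pattern: an inner
# x-dimension counter and an outer y-dimension counter generate the
# fetch-ahead addresses row by row. Parameters are illustrative.

def stream_addresses(base: int, x_count: int, x_stride: int,
                     y_count: int, y_stride: int) -> list:
    """Generate the multidimensional fetch-ahead address pattern."""
    addresses = []
    for y in range(y_count):          # y-dimension tracking register
        for x in range(x_count):      # x-dimension tracking register
            addresses.append(base + y * y_stride + x * x_stride)
    return addresses
```

For a 3-wide row of 8-byte elements and a 100-byte row pitch, two rows yield the pattern `[0, 8, 16, 100, 108, 116]` — the kind of strided tile a streamer could fetch ahead of the dataflow operators' demand.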
