Total found: 22398. Displayed: 100.
19-01-2012 publication date

Sharing memory spaces for access by hardware and software in a virtual machine environment

Number: US20120017029A1
Assignee: Hewlett Packard Development Co LP

Example methods, apparatus, and articles of manufacture to share memory spaces for access by hardware and software in a virtual machine environment are disclosed. A disclosed example method involves enabling a sharing of a memory page of a source domain executing on a first virtual machine with a destination domain executing on a second virtual machine. The example method also involves mapping the memory page to an address space of the destination domain and adding an address translation entry for the memory page in a table. In addition, the example method involves sharing the memory page with a hardware device for direct memory access of the memory page by the hardware device.

16-02-2012 publication date

Scatter-Gather Intelligent Memory Architecture For Unstructured Streaming Data On Multiprocessor Systems

Number: US20120042121A1
Assignee: Individual

A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.

16-02-2012 publication date

Intelligent cache management

Number: US20120042123A1
Author: Curt Kolovson
Assignee: Curt Kolovson

An exemplary storage network, storage controller, and methods of operation are disclosed. In one embodiment, a method of managing cache memory in a storage controller comprises receiving, at the storage controller, a cache hint generated by an application executing on a remote processor, wherein the cache hint identifies a memory block managed by the storage controller, and managing a cache memory operation for data associated with the memory block in response to the cache hint received by the storage controller.

01-03-2012 publication date

Method and apparatus for fuzzy stride prefetch

Number: US20120054449A1
Author: Shiliang Hu, Youfeng Wu
Assignee: Intel Corp

In one embodiment, the present invention includes a prefetching engine to detect when data access strides in a memory fall into a range, to compute a predicted next stride, to selectively prefetch a cache line using the predicted next stride, and to dynamically control prefetching. Other embodiments are also described and claimed.
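
The range-based matching lends itself to a compact illustration. Below is a minimal C sketch of a stride detector with a tolerance band; the names (fuzzy_prefetcher, FUZZ, THRESHOLD) and the exact policy are assumptions drawn from the abstract, not Intel's implementation:

    /* "Fuzzy" stride detector: strides that fall within +/-FUZZ bytes of
       the trained stride count as a match. */
    #include <stdint.h>
    #include <stdlib.h>

    #define FUZZ      64   /* tolerance band around the trained stride     */
    #define THRESHOLD 3    /* matching strides required before prefetching */

    typedef struct {
        uint64_t last_addr;
        int64_t  last_stride;
        int      confidence;
    } fuzzy_prefetcher;

    /* Returns an address to prefetch, or 0 while confidence is too low. */
    uint64_t on_access(fuzzy_prefetcher *p, uint64_t addr)
    {
        int64_t stride = (int64_t)(addr - p->last_addr);
        if (llabs(stride - p->last_stride) <= FUZZ) {
            p->confidence++;            /* stride fell into the range */
        } else {
            p->confidence  = 0;         /* pattern broke: retrain     */
            p->last_stride = stride;
        }
        p->last_addr = addr;
        return p->confidence >= THRESHOLD ? addr + p->last_stride : 0;
    }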

15-03-2012 publication date

Scheduling of I/O writes in a storage environment

Number: US20120066435A1
Assignee: Pure Storage Inc

A system and method for effectively scheduling read and write operations among a plurality of solid-state storage devices. A computer system comprises client computers and data storage arrays coupled to one another via a network. A data storage array utilizes solid-state drives and Flash memory cells for data storage. A storage controller within a data storage array comprises an I/O scheduler. The data storage controller is configured to receive requests targeted to the data storage medium, said requests including a first type of operation and a second type of operation. The controller is further configured to schedule requests of the first type for immediate processing by said plurality of storage devices, and queue requests of the second type for later processing by the plurality of storage devices. Operations of the first type may correspond to operations with an expected relatively low latency, and operations of the second type may correspond to operations with an expected relatively high latency.
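
A C sketch of the two-class policy: low-latency operations are issued immediately, high-latency ones are queued. All names are illustrative assumptions, not Pure Storage code:

    #include <stdio.h>

    enum op_class { LOW_LATENCY, HIGH_LATENCY };
    struct io_request { enum op_class cls; int id; };

    static void dispatch_now(struct io_request *r)     { printf("issue %d\n", r->id); }
    static void enqueue_deferred(struct io_request *r) { printf("queue %d\n", r->id); }

    void schedule(struct io_request *r)
    {
        if (r->cls == LOW_LATENCY)
            dispatch_now(r);       /* e.g., reads: expected to finish quickly */
        else
            enqueue_deferred(r);   /* e.g., writes/erases: batched for later  */
    }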

29-03-2012 publication date

Cache with Multiple Access Pipelines

Number: US20120079204A1
Assignee: Texas Instruments Inc

Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags.

05-04-2012 publication date

Circuit and method for determining memory access, cache controller, and electronic device

Number: US20120084513A1
Author: Kazuhiko Okada
Assignee: Fujitsu Semiconductor Ltd

A memory access determination circuit includes a counter that switches between a first reference value and a second reference value in accordance with a control signal to generate a count value based on the first reference value or the second reference value. A controller performs a cache determination based on an address that corresponds to the count value and outputs the control signal in accordance with the cache determination. A changing unit changes the second reference value in accordance with the cache determination.

19-04-2012 publication date

Cache memory device, cache memory control method, program and integrated circuit

Number: US20120096213A1
Author: Kazuomi Kato
Assignee: Panasonic Corp

A cache memory device performs a line size determination process for determining a refill size in advance of the refill process that is performed at cache-miss time. In the line size determination process, the number of reads and writes of each management-target line belonging to a set is acquired (S51); when the read counts completely match one another and the write counts completely match one another (S52: Yes), the refill size is determined to be large (S54). Otherwise (S52: No), the refill size is determined to be small (S55).
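
In C, the S52 decision might look like the following sketch, assuming one set of NWAYS tracked lines; the structure and field names are illustrative:

    #include <stdbool.h>

    #define NWAYS 4
    struct line_stats { unsigned reads, writes; };

    bool refill_large(const struct line_stats set[NWAYS])
    {
        for (int i = 1; i < NWAYS; i++)          /* S51: gather the counts  */
            if (set[i].reads  != set[0].reads ||
                set[i].writes != set[0].writes)
                return false;                    /* S52: No  -> small (S55) */
        return true;                             /* S52: Yes -> large (S54) */
    }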

26-04-2012 publication date

Multiplexing Users and Enabling Virtualization on a Hybrid System

Number: US20120102138A1
Assignee: International Business Machines Corp

A method, hybrid server system, and computer program product, support multiple users in an out-of-core processing environment. At least one accelerator system in a plurality of accelerator systems is partitioned into a plurality of virtualized accelerator systems. A private client cache is configured on each virtualized accelerator system in the plurality of virtualized accelerator systems. The private client cache of each virtualized accelerator system stores data that is one of accessible by only the private client cache and accessible by other private client caches associated with a common data set. Each user in a plurality of users is assigned to a virtualized accelerator system from the plurality of virtualized accelerator systems.

10-05-2012 publication date

Hybrid Server with Heterogeneous Memory

Number: US20120117312A1
Assignee: International Business Machines Corp

A method, hybrid server system, and computer program product, for managing access to data stored on the hybrid server system. A memory system residing at a server is partitioned into a first set of memory managed by the server and a second set of memory managed by a set of accelerator systems. The set of accelerator systems are communicatively coupled to the server. The memory system comprises heterogeneous memory types. A data set stored within at least one of the first set of memory and the second set of memory that is associated with at least one accelerator system in the set of accelerator systems is identified. The data set is transformed from a first format to a second format, wherein the second format is a format required by the at least one accelerator system.

10-05-2012 publication date

Invalidating a Range of Two or More Translation Table Entries and Instruction Therefore

Number: US20120117356A1
Assignee: International Business Machines Corp

An instruction is provided to perform invalidation of an instruction specified range of segment table entries or region table entries. The instruction can be implemented by software emulation, hardware, firmware or some combination thereof.

24-05-2012 publication date

Signal processing system, integrated circuit comprising buffer control logic and method therefor

Number: US20120131241A1
Assignee: FREESCALE SEMICONDUCTOR INC

A signal processing system comprising buffer control logic arranged to allocate a plurality of buffers for the storage of information fetched from at least one memory element. Upon receipt of fetched information to be buffered, the buffer control logic is arranged to categorise the information to be buffered according to at least one of: a first category associated with sequential flow and a second category associated with change of flow, and to prioritise respective buffers from the plurality of buffers storing information relating to the first category associated with sequential flow ahead of buffers storing information relating to the second category associated with change of flow when allocating a buffer for the storage of the fetched information to be buffered.

24-05-2012 publication date

Correlation-based instruction prefetching

Number: US20120131311A1
Author: Yuan C. Chou
Assignee: Oracle International Corp

The disclosed embodiments provide a system that facilitates prefetching an instruction cache line in a processor. During execution of the processor, the system performs a current instruction cache access which is directed to a current cache line. If the current instruction cache access causes a cache miss or is a first demand fetch for a previously prefetched cache line, the system determines whether the current instruction cache access is discontinuous with a preceding instruction cache access. If so, the system completes the current instruction cache access by performing a cache access to service the cache miss or the first demand fetch, and also prefetching a predicted cache line associated with a discontinuous instruction cache access which is predicted to follow the current instruction cache access.

31-05-2012 publication date

Method and apparatus for selectively performing explicit and implicit data line reads

Number: US20120136857A1
Author: Greggory D. Donley
Assignee: Advanced Micro Devices Inc

A method and apparatus are described for selectively performing explicit and implicit data line reads. When a data line request is received, a determination is made as to whether there are currently sufficient data resources to perform an implicit data line read. If there are not currently sufficient data resources to perform an implicit data line read, a time period (number of clock cycles) before sufficient data resources will become available to perform an implicit data line read is estimated. A determination is then made as to whether the estimated time period exceeds a threshold. An explicit tag request is generated if the estimated time period exceeds the threshold. If the estimated time period does not exceed the threshold, the generation of a tag request is delayed until sufficient data resources become available. An implicit tag request is then generated.
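
The selection policy reduces to a small decision function. A C sketch, with assumed names and cycle-based units:

    enum tag_req { IMPLICIT_TAG, EXPLICIT_TAG, WAIT };

    enum tag_req choose(int free_data_slots, int est_cycles_until_free,
                        int threshold)
    {
        if (free_data_slots > 0)
            return IMPLICIT_TAG;           /* resources available now         */
        if (est_cycles_until_free > threshold)
            return EXPLICIT_TAG;           /* too long to wait: tag-only read */
        return WAIT;                       /* delay until resources free up   */
    }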

14-06-2012 publication date

Systems and methods for background destaging storage tracks

Number: US20120151148A1
Assignee: International Business Machines Corp

Systems and methods for background destaging storage tracks from cache when one or more hosts are idle are provided. One system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. Also provided are physical computer storage mediums including a computer program product for performing the above method.
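
A minimal C sketch of the idle check that gates destaging; the idle window and the hooks now_ms() and destage_one_track() are assumptions for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    #define IDLE_WINDOW_MS 100

    struct write_cache { uint64_t last_write_ms; /* ... cached tracks ... */ };

    extern uint64_t now_ms(void);                           /* assumed clock hook   */
    extern void destage_one_track(struct write_cache *wc);  /* assumed destage hook */

    void background_destager(struct write_cache *wc)
    {
        /* Hosts count as idle when no write has arrived within the window. */
        bool idle = (now_ms() - wc->last_write_ms) > IDLE_WINDOW_MS;
        if (idle)
            destage_one_track(wc);  /* drain tracks only while the hosts are idle */
    }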

14-06-2012 publication date

Memory stacks management

Number: US20120151179A1
Author: Mark Gaertner, Mark Heath
Assignee: SEAGATE TECHNOLOGY LLC

A method for managing a memory stack provides mapping a part of the memory stack to a span of fast memory and a part of the memory stack to a span of slow memory, wherein the fast memory provides access speed substantially higher than the access speed provided by the slow memory.

21-06-2012 publication date

Memory Module With Reduced Access Granularity

Number: US20120159061A1
Assignee: RAMBUS INC

A memory module having reduced access granularity. The memory module includes a substrate having signal lines thereon that form a control path and first and second data paths, and further includes first and second memory devices coupled in common to the control path and coupled respectively to the first and second data paths. The first and second memory devices include control circuitry to receive respective first and second memory access commands via the control path and to effect concurrent data transfer on the first and second data paths in response to the first and second memory access commands.

28-06-2012 publication date

Executing a Perform Frame Management Instruction

Number: US20120166758A1
Assignee: International Business Machines Corp

What is disclosed is a frame management function defined for a machine architecture of a computer system. In one embodiment, a frame management instruction is obtained which identifies a first and a second general register. The first general register contains a frame management field having a key field with access-protection bits and a block-size indication. If the block-size indication indicates a large block, then an operand address of a large block of data is obtained from the second general register. The large block of data has a plurality of small blocks, each of which is associated with a corresponding storage key having a plurality of storage key access-protection bits. If the block-size indication indicates a large block, the storage key access-protection bits of each corresponding storage key of each small block within the large block are set with the access-protection bits of the key field.

12-07-2012 publication date

Remapping of data addresses for large capacity low-latency random read memory

Number: US20120179890A1
Assignee: Individual

Described herein are method and apparatus for using an LLRRM device as a storage device in a storage system. At least three levels of data structures may be used to remap storage system addresses to LLRRM addresses for read requests, whereby a first-level data structure is used to locate a second-level data structure corresponding to the storage system address, which is used to locate a third-level data structure corresponding to the storage system address. An LLRRM address may comprise a segment number determined from the second-level data structure and a page number determined from the third-level data structure. Update logs may be produced and stored for each new remapping caused by a write request. An update log may specify a change to be made to a particular data structure. The stored update logs may be performed on the data structures upon the occurrence of a predetermined event.
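
A C sketch of the three-level lookup. Field widths and names are invented; the abstract specifies only that level two yields the segment number and level three the page number:

    #include <stdint.h>

    #define L1_BITS 10
    #define L2_BITS 10
    #define PG_BITS 12

    typedef struct { uint32_t page[1 << L2_BITS]; }    level3_t;
    typedef struct { uint32_t segment; level3_t *l3; } level2_t;
    typedef struct { level2_t *l2[1 << L1_BITS]; }     level1_t;

    /* Remap a storage-system address to an LLRRM (segment, page) pair. */
    void remap(const level1_t *l1, uint64_t addr,
               uint32_t *segment, uint32_t *page)
    {
        uint64_t i1 = (addr >> (PG_BITS + L2_BITS)) & ((1u << L1_BITS) - 1);
        uint64_t i2 = (addr >> PG_BITS)             & ((1u << L2_BITS) - 1);
        const level2_t *l2 = l1->l2[i1];   /* level 1 locates level 2 */
        *segment = l2->segment;            /* segment number: level 2 */
        *page    = l2->l3->page[i2];       /* page number:    level 3 */
    }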

26-07-2012 publication date

Managing Access to a Cache Memory

Number: US20120191917A1
Assignee: International Business Machines Corp

Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries, and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area.
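
The scheme maps naturally onto per-area mutexes. A C sketch with illustrative names:

    #include <pthread.h>

    #define NAREAS 16

    struct cache_area { pthread_mutex_t lock; /* ... entries ... */ };
    static struct cache_area areas[NAREAS];

    void init_areas(void)
    {
        for (int i = 0; i < NAREAS; i++)
            pthread_mutex_init(&areas[i].lock, NULL);
    }

    void update_entry(unsigned key /* , new value ... */)
    {
        struct cache_area *a = &areas[key % NAREAS];  /* pick the area       */
        pthread_mutex_lock(&a->lock);                 /* take only its lock  */
        /* ... update the entry inside this area ... */
        pthread_mutex_unlock(&a->lock);
    }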

02-08-2012 publication date

Guest to native block address mappings and management of native code storage

Number: US20120198122A1
Author: Mohammad Abdallah
Assignee: Soft Machines Inc

A method for managing mappings of storage on a code cache for a processor. The method includes storing a plurality of guest address to native address mappings as entries in a conversion look aside buffer, wherein the entries indicate guest addresses that have corresponding converted native addresses stored within a code cache memory, and receiving a subsequent request for a guest address at the conversion look aside buffer. The conversion look aside buffer is indexed to determine whether there exists an entry that corresponds to the index, wherein the index comprises a tag and an offset that is used to identify the entry that corresponds to the index. Upon a hit on the tag, the corresponding entry is accessed to retrieve a pointer to the corresponding block of converted native instructions in the code cache memory. The corresponding block of converted native instructions is fetched from the code cache memory for execution.

02-08-2012 publication date

Memory Attribute Sharing Between Differing Cache Levels of Multilevel Cache

Number: US20120198166A1
Assignee: Texas Instruments Inc

The level one memory controller maintains a local copy of the cacheability bit of each memory attribute register. The level two memory controller is the initiator of all configuration read/write requests from the CPU. Whenever a configuration write is made to a memory attribute register, the level one memory controller updates its local copy of the memory attribute register.

06-09-2012 publication date

Method, apparatus, and system for speculative execution event counter checkpointing and restoring

Number: US20120227045A1
Assignee: Intel Corp

An apparatus, method, and system are described herein for providing programmable control of performance/event counters. An event counter is programmable to track different events, as well as to be checkpointed when speculative code regions are encountered. So when a speculative code region is aborted, the event counter is able to be restored to its pre-speculation value. Moreover, the difference between a cumulative event count of committed and uncommitted execution and the committed execution represents an event count/contribution for uncommitted execution. From information on the uncommitted execution, hardware/software may be tuned to enhance future execution to avoid wasted execution cycles.

20-09-2012 publication date

Flash storage device with read disturb mitigation

Number: US20120239990A1
Assignee: Stec Inc

A method for managing a flash storage device includes initiating a read request and reading requested data from a first storage block of a plurality of storage blocks in the flash storage device based on the read request. The method further includes incrementing a read count for the first storage block and moving the data in the first storage block to an available storage block of the plurality of storage blocks when the read count reaches a first threshold value.
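
A C sketch of the per-block read counter and relocation trigger; the threshold value and helper hooks are assumptions:

    #include <stdint.h>

    #define READ_DISTURB_LIMIT 100000

    struct block { uint32_t read_count; /* ... pages ... */ };

    extern struct block *find_free_block(void);                  /* assumed hook */
    extern void move_data(struct block *from, struct block *to); /* assumed hook */

    void on_read(struct block *b)
    {
        if (++b->read_count >= READ_DISTURB_LIMIT) {
            /* Neighboring cells have been disturbed enough: relocate. */
            move_data(b, find_free_block());
            b->read_count = 0;
        }
    }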

04-10-2012 publication date

Method for giving read commands and reading data, and controller and storage system using the same

Number: US20120254522A1
Author: Chih-Kang Yeh
Assignee: Phison Electronics Corp

A method for giving a read command to a flash memory chip to read data to be accessed by a host system is provided. The method includes receiving a host read command; determining whether the received host read command follows a last host read command; if yes, giving a cache read command to read data from the flash memory chip; and if no, giving a general read command and the cache read command to read data from the flash memory chip. Accordingly, the method can effectively reduce time needed for executing the host read commands by using the cache read command to combine the host read commands which access continuous physical addresses and pre-read data stored in a next physical address.

11-10-2012 publication date

Data Storage and Data Sharing in a Network of Heterogeneous Computers

Number: US20120259953A1
Author: Ilya Gertner
Assignee: Network Disk Inc

A network of PCs includes an I/O channel adapter and network adapter, and is configured for management of a distributed cache memory stored in the plurality of PCs interconnected by the network. The use of standard PCs reduces the cost of the data storage system. The use of the network of PCs permits building large, high-performance, data storage systems.

25-10-2012 publication date

Efficient data prefetching in the presence of load hits

Number: US20120272004A1
Assignee: Via Technologies Inc

A memory subsystem in a microprocessor includes a first-level cache, a second-level cache, and a prefetch cache configured to speculatively prefetch cache lines from a memory external to the microprocessor. The second-level cache and the prefetch cache are configured to allow the same cache line to be simultaneously present in both. If a request by the first-level cache for a cache line hits in both the second-level cache and in the prefetch cache, the prefetch cache invalidates its copy of the cache line and the second-level cache provides the cache line to the first-level cache.

01-11-2012 publication date

Distributed shared memory

Number: US20120278392A1
Author: Lior Aronovich, Ron Asher
Assignee: International Business Machines Corp

Systems and methods are provided for implementing a distributed shared memory (DSM) in a computer cluster in which an unreliable underlying message passing technology is used, such that the DSM efficiently maintains coherency and reliability. DSM agents residing on different nodes of the cluster process access permission requests of local and remote users on specified data segments via handling procedures, which provide for recovery of lost ownership of a data segment while ensuring exclusive ownership of a data segment among the DSM agents, detection and resolution of a no-owner messaging deadlock, pruning of obsolete messages, and recovery of the latest contents of a data segment whose ownership has been lost.

01-11-2012 publication date

Increasing granularity of dirty bit information

Number: US20120278525A1
Assignee: VMware LLC

One or more unused bits of a virtual address range are allocated for aliasing so that multiple virtually addressed sub-pages can be mapped to a common memory page. When one bit is allocated for aliasing, dirty bit information can be provided at a granularity that is one-half of a memory page. When M bits are allocated for aliasing, dirty bit information can be provided at a granularity that is 1/(2^M)-th of a memory page.
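
A small C sketch of the aliasing arithmetic; the bit positions (PAGE_SHIFT, ALIAS_SHIFT) are assumptions for illustration:

    #include <stdint.h>

    #define PAGE_SHIFT  12   /* 4 KB pages                           */
    #define ALIAS_SHIFT 48   /* unused VA bits borrowed for aliasing */
    #define M           1    /* 1 alias bit => half-page granularity */

    /* Virtual address of the k-th alias (k in [0, 2^M)) of va; all 2^M
       aliases map to the same physical page. */
    uint64_t alias_va(uint64_t va, unsigned k)
    {
        return va | ((uint64_t)k << ALIAS_SHIFT);
    }

    /* Each alias covers PAGE_SIZE / 2^M bytes of dirty-bit granularity. */
    uint64_t slice_size(void)
    {
        return (1ull << PAGE_SHIFT) >> M;   /* 4096 >> 1 = 2048 bytes */
    }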

08-11-2012 publication date

Method and apparatus for saving power by efficiently disabling ways for a set-associative cache

Number: US20120284462A1
Assignee: Individual

A method and apparatus for disabling ways of a cache memory in response to history-based usage patterns is herein described. Way-predicting logic keeps track of cache accesses to the ways and determines whether some of the ways are to be disabled to save power, based upon way power signals having a logical state representing a predicted miss to the way. One or more counters associated with the ways count accesses, wherein a power signal is set to the logical state representing a predicted miss when one of said one or more counters reaches a saturation value. Control logic adjusts said one or more counters associated with the ways according to the accesses.
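
One plausible reading, sketched in C: a saturating per-way counter drives that way's power signal. The counting-misses interpretation, the saturation value, and all names are assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    #define NWAYS    8
    #define SATURATE 255

    static uint8_t miss_ctr[NWAYS];
    static bool    way_off[NWAYS];   /* power signal: predicted miss */

    void on_way_access(unsigned way, bool hit)
    {
        if (hit) {
            miss_ctr[way] = 0;       /* recent use: keep the way powered    */
            way_off[way]  = false;
        } else if (miss_ctr[way] < SATURATE && ++miss_ctr[way] == SATURATE) {
            way_off[way] = true;     /* counter saturated: gate the way off */
        }
    }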

15-11-2012 publication date

Managing Bandwidth Allocation in a Processing Node Using Distributed Arbitration

Number: US20120290756A1
Assignee: Texas Instruments Inc

Management of access to shared resources within a system comprising a plurality of requesters and a plurality of target resources is provided. A separate arbitration point is associated with each target resource. An access priority value is assigned to each requester. An arbitration contest is performed for access to a first target resource by requests from two or more of the requesters using a first arbitration point associated with the first target resource to determine a winning requester. The request from the winning requester is forwarded to a second target resource. A second arbitration contest is performed for access to the second target resource by the forwarded request from the winning requester and requests from one or more of the plurality of requesters using a second arbitration point associated with the second target resource.

22-11-2012 publication date

Dynamic hierarchical memory cache awareness within a storage system

Number: US20120297142A1
Assignee: International Business Machines Corp

Described is a system and computer program product for implementing dynamic hierarchical memory cache (HMC) awareness within a storage system. Specifically, when performing dynamic read operations within a storage system, a data module evaluates a data prefetch policy according to a strategy of determining if data exists in a hierarchical memory cache and thereafter amending the data prefetch policy, if warranted. The system then uses the data prefetch policy to perform a read operation from the storage device to minimize future data retrievals from the storage device. Further, in a distributed storage environment that includes multiple storage nodes cooperating to satisfy data retrieval requests, dynamic hierarchical memory cache awareness can be implemented for every storage node without degrading the overall performance of the distributed storage environment.

22-11-2012 publication date

Recovering transactions of failed nodes in a clustered file system

Number: US20120297247A1
Assignee: International Business Machines Corp

Systems, methods, and computer program products are provided for recovering transactions of failed nodes using a recovery procedure in a clustered file system (CFS). It is determined that a data segment should be copied to a final storage location by validating, via a distributed shared memory (DSM) agent, that ownership of the data segment is not associated with any other operational node. The ownership of the data segment is then set to a local DSM agent.

29-11-2012 publication date

Concurrent transactional checkpoints in a clustered file system

Number: US20120303683A1
Assignee: International Business Machines Corp

Systems, methods, and computer program products are provided for performing concurrent checkpoints from file system agents residing on different nodes within a clustered file system (CFS). Responsibility to checkpoint a modified and committed data segment to a final storage location is assigned to one of the file system agents: the file system agent whose associated distributed shared memory (DSM) agent is the owner of the data segment.

17-01-2013 publication date

Multi-core processor system, memory controller control method, and computer product

Number: US20130019069A1
Assignee: Fujitsu Ltd

A multi-core processor system includes a memory controller that includes multiple ports and shared memory whose physical address spaces are divided among the ports. A CPU acquires, from a parallel degree information table, the number of CPUs to which software to be executed by the multi-core processor system is to be assigned. After this acquisition, the CPU determines the CPUs to which the software is to be assigned and sets, for each CPU, the physical address spaces corresponding to the logical address spaces defined by the software. After this setting, the CPU notifies an address converter of the addresses and notifies the software of the start of execution.

24-01-2013 publication date

Method and apparatus for adaptive cache frame locking and unlocking

Number: US20130024620A1
Assignee: Agere Systems LLC

Most recently accessed frames are locked in a cache memory. The most recently accessed frames are likely to be accessed by a task again in the near future and may be locked at the beginning of a task switch or interrupt to improve cache performance. The list of most recently used frames is updated as a task executes and may be embodied as a list of frame addresses or a flag associated with each frame. The list of most recently used frames may be separately maintained for each task if multiple tasks may interrupt each other. An adaptive frame unlocking mechanism is also disclosed that automatically unlocks frames that may cause a significant performance degradation for a task. The adaptive frame unlocking mechanism monitors a number of times a task experiences a frame miss and unlocks a given frame if the number of frame misses exceeds a predefined threshold.
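
A C sketch of the adaptive unlock rule: count frame misses per task and unlock a locked frame once the count crosses a threshold. The threshold and names are illustrative:

    #include <stdbool.h>

    #define MISS_LIMIT 32

    struct task  { unsigned frame_misses; };
    struct frame { bool locked; };

    void on_frame_miss(struct task *t, struct frame *victim)
    {
        if (++t->frame_misses > MISS_LIMIT && victim->locked) {
            victim->locked  = false;   /* locking now hurts: release the frame */
            t->frame_misses = 0;
        }
    }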

14-02-2013 publication date

Data hazard handling for copending data access requests

Number: US20130042077A1
Assignee: ARM LTD

A data processing system that manages data hazards at a coherency controller, and not at an initiator device, is disclosed. The data processing system processes write requests in a two-part form: the first part is transmitted, and when the coherency controller has space to accept data it responds to the first part, after which the data and the state of the data prior to the write are sent as the second part of the write request. When there are copending reads and writes to the same address, the writes are stalled by the coherency controller by not responding to the first part of the write, and the initiator device proceeds to process any snoop requests received for the address of the write regardless of the fact that the write is pending. When the pending read has completed, the coherency controller responds to the first part of the write and the initiator device completes the write by sending the data and an indicator of the state of the data following the snoop. The coherency controller can then avoid any potential data hazard by using this information to update memory as required.

21-02-2013 publication date

Mechanisms To Accelerate Transactions Using Buffered Stores

Number: US20130046925A1
Assignee: Individual

In one embodiment, the present invention includes a method for executing a transactional memory (TM) transaction in a first thread, buffering a block of data in a first buffer of a cache memory of a processor, and acquiring a write monitor on the block to obtain ownership of the block at an encounter time in which data at a location of the block in the first buffer is updated. Other embodiments are described and claimed.

23-05-2013 publication date

PCI Express enhancements and extensions

Number: US20130132622A1
Assignee: Individual

A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access/ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations to ensure data integrity and efficient memory access. Active power sub-states and the setting thereof are included to allow for more efficient power management. And caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses.

30-05-2013 publication date

Efficient Memory and Resource Management

Number: US20130138840A1

The present system enables passing a pointer, associated with accessing data in a memory, to an input/output (I/O) device via an input/output memory management unit (IOMMU). The I/O device accesses the data in the memory via the IOMMU without copying the data into a local I/O device memory. The I/O device can perform an operation on the data in the memory based on the pointer, such that the I/O device accesses the memory without expensive copies.

27-06-2013 publication date

Apparatus, System, and Method for Storing Metadata

Number: US20130166831A1
Assignee: Fusion IO LLC

Apparatuses, systems, and methods are disclosed for storing metadata. A mapping module is configured to maintain a mapping structure for logical addresses of a non-volatile device. A metadata module is configured to store membership metadata for the logical addresses with logical-to-physical mappings for the logical addresses in the mapping structure.

04-07-2013 publication date

Instruction fetch translation lookaside buffer management to support host and guest O/S translations

Number: US20130173882A1
Assignee: Advanced Micro Devices Inc

A translation lookaside buffer (TLB) configured for use in a multiple operating system environment includes a plurality of storage locations, each storage location being configured to store a page translation entry configured to relate a virtual address range to a physical address range, each page translation entry having an address space identifier (ASID) associated with an operating system. The TLB also includes flush logic configured to receive a TLB flush request from an operating system having an operating system ASID and flush only TLB page translation entries having a stored ASID that matches the operating system ASID.
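
A C sketch of the ASID-matched flush over a software-modeled TLB array; the entry layout and TLB size are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 64

    struct tlb_entry { bool valid; uint16_t asid; uint64_t vpn, pfn; };
    static struct tlb_entry tlb[TLB_ENTRIES];

    void tlb_flush_asid(uint16_t asid)
    {
        for (int i = 0; i < TLB_ENTRIES; i++)
            if (tlb[i].valid && tlb[i].asid == asid)
                tlb[i].valid = false;   /* other OSes' entries survive */
    }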

04-07-2013 publication date

Application processor and a computing system having the same

Number: US20130173883A1
Author: Il-Ho Lee, Kyong-Ho Cho
Assignee: SAMSUNG ELECTRONICS CO LTD

An application processor includes a system memory unit, peripheral devices, a control unit and a central processing unit (CPU). The system memory unit includes one page table. The peripheral devices share the page table and perform a DMA (Direct Memory Access) operation on the system memory unit using the page table, where each of the peripheral devices includes a memory management unit having a translation lookaside buffer. The control unit divides a total virtual address space corresponding to the page table into sub virtual address spaces, assigns the sub virtual address spaces to the peripheral devices, respectively, allocates and releases a DMA buffer in the system memory unit, and updates the page table, where at least two of the sub virtual address spaces have different sizes from each other. The CPU controls the peripheral devices and the control unit. The application processor reduces memory consumption.

11-07-2013 publication date

Streaming Translation in Display Pipe

Number: US20130179638A1
Assignee: Apple Inc

In an embodiment, a display pipe includes one or more translation units corresponding to images that the display pipe is reading for display. Each translation unit may be configured to prefetch translations ahead of the image data fetches, which may prevent translation misses in the display pipe (at least in most cases). The translation units may maintain translations in first-in, first-out (FIFO) fashion, and the display pipe fetch hardware may inform the translation unit when a given translation or translations are no longer needed. The translation unit may invalidate the identified translations and prefetch additional translations for virtual pages that are contiguous with the most recently prefetched virtual page.

18-07-2013 publication date

Writing adjacent tracks to a stride, based on a comparison of a destaging of tracks to a defragmentation of the stride

Number: US20130185507A1
Author: Lokesh M. Gupta
Assignee: International Business Machines Corp

Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.

18-07-2013 publication date

Use of Loop and Addressing Mode Instruction Set Semantics to Direct Hardware Prefetching

Number: US20130185516A1
Assignee: Qualcomm Inc

Systems and methods for prefetching cache lines into a cache coupled to a processor. A hardware prefetcher is configured to recognize a memory access instruction as an auto-increment-address (AIA) memory access instruction, infer a stride value from an increment field of the AIA instruction, and prefetch lines into the cache based on the stride value. Additionally or alternatively, the hardware prefetcher is configured to recognize that prefetched cache lines are part of a hardware loop, determine a maximum loop count of the hardware loop, and a remaining loop count as a difference between the maximum loop count and a number of loop iterations that have been completed, select a number of cache lines to prefetch, and truncate an actual number of cache lines to prefetch to be less than or equal to the remaining loop count, when the remaining loop count is less than the selected number of cache lines.
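
The truncation rule is simple arithmetic. A C sketch with assumed parameter names:

    /* Never prefetch past the end of the hardware loop. */
    unsigned lines_to_prefetch(unsigned preferred, unsigned max_loop_count,
                               unsigned completed_iters)
    {
        unsigned remaining = max_loop_count - completed_iters;
        return (remaining < preferred) ? remaining : preferred;
    }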

18-07-2013 publication date

Managing global cache coherency in a distributed shared caching for clustered file systems

Number: US20130185519A1
Assignee: International Business Machines Corp

Systems, methods, and computer program products are provided for managing global cache coherency in distributed shared caching for clustered file systems (CFS). The CFS manages access permissions to an entire space of data segments by using the DSM module. In response to receiving a request to access one of the data segments, a calculation operation is performed to obtain the most recent contents of that data segment. The calculation operation performs one of: providing the most recent contents via communication with a remote DSM module, which obtains the data segment from an associated external cache memory; instructing, by the DSM module, a read of the data segment from storage; and determining that any existing contents of the data segment in the local external cache are the most recent contents.

08-08-2013 publication date

System and method for execution of a secured environment initialization instruction

Number: US20130205127A1
Assignee: Individual

A method and apparatus for initiating secure operations in a microprocessor system is described. In one embodiment, one initiating logical processor initiates the process by halting the execution of the other logical processors, and then loading initialization and secure virtual machine monitor software into memory. The initiating processor then loads the initialization software into secure memory for authentication and execution. The initialization software then authenticates and registers the secure virtual machine monitor software prior to secure system operations.

15-08-2013 publication date

Technique to share information among different cache coherency domains

Number: US20130207987A1
Assignee: Individual

A technique to enable information sharing among agents within different cache coherency domains. In one embodiment, a graphics device may use one or more caches used by one or more processing cores to store or read information, which may be accessed by one or more processing cores in a manner that does not affect programming and coherency rules pertaining to the graphics device.

05-09-2013 publication date

Method and Apparatus of Accessing Data of Virtual Machine

Number: US20130232303A1
Author: Xiao Fei Quan
Assignee: Alibaba Group Holding Ltd

Methods and a device for accessing virtual machine (VM) data are described. A computing device for accessing VM data comprises an access request process module, a data transfer proxy module and a virtual disk. The access request process module receives a data access request sent by a VM and adds the data access request to a request array. The data transfer proxy module obtains the data access request from the request array, maps the obtained data access request to a corresponding virtual storage unit, and maps the virtual storage unit to a corresponding physical storage unit of a distributed storage system. A corresponding data access operation may be performed based on a type of the data access request.

12-09-2013 publication date

Systems and methods for accessing a unified translation lookaside buffer

Number: US20130238874A1
Assignee: Soft Machines Inc

Systems and methods for accessing a unified translation lookaside buffer (TLB) are disclosed. A method includes receiving an indicator of a level one translation lookaside buffer (L1 TLB) miss corresponding to a request for a virtual address to physical address translation, searching a cache that includes virtual addresses and page sizes that correspond to translation table entries (TTEs) that have been evicted from the L1 TLB, where a page size is identified, and searching a second level TLB and identifying a physical address that is contained in the second level TLB. Access is provided to the identified physical address.

12-09-2013 publication date

Multiple page size memory management unit

Number: US20130238875A1
Assignee: FREESCALE SEMICONDUCTOR INC

A memory management unit can receive an address associated with a page size that is unknown to the MMU. The MMU can concurrently determine whether a translation lookaside buffer data array stores a physical address associated with the address based on different portions of the address, where each of the different portions is associated with a different possible page size. This provides for efficient translation lookaside buffer data array access when different programs, employing different page sizes, are concurrently executed at a data processing device.
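
A C sketch of probing candidate page sizes for one virtual address. The sizes, names, and the serial loop (hardware would check the portions concurrently) are illustrative assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    static const unsigned page_shift[] = { 12, 16, 24 };  /* 4KB, 64KB, 16MB */
    #define NSIZES (sizeof page_shift / sizeof page_shift[0])

    /* Assumed lookup hook: probes the TLB data array for one page size. */
    extern bool tlb_probe(uint64_t vpn, unsigned shift, uint64_t *pa);

    bool lookup_any_size(uint64_t va, uint64_t *pa)
    {
        for (unsigned i = 0; i < NSIZES; i++) {
            uint64_t vpn = va >> page_shift[i];  /* address portion for this size */
            if (tlb_probe(vpn, page_shift[i], pa))
                return true;
        }
        return false;
    }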

19-09-2013 publication date

Integer and Half Clock Step Division Digital Variable Clock Divider

Number: US20130243148A1
Assignee: TEXAS INSTRUMENTS INCORPORATED

A clock divider is provided that is configured to divide a high speed input clock signal by an odd, even or fractional divide ratio. The input clock may have a clock cycle frequency of 1 GHz or higher, for example. The input clock signal is divided to produce an output clock signal by first receiving a divide factor value F representative of a divide ratio N, wherein N may be an odd or an even integer. A fractional indicator indicates the divide ratio is N.5 when the fractional indicator is one and indicates the divide ratio is N when the fractional indicator is zero. F is set to 2(N.5)/2 for a fractional divide ratio and F is set to N/2 for an integer divide ratio. A count indicator is asserted every N/2 input clock cycles when N is even. The count indicator is asserted alternately after N/2 input clock cycles and then after 1+N/2 input clock cycles when N is odd. One period of an output clock signal is synthesized in response to each assertion of the count indicator when the fractional indicator indicates the divide ratio is N.5. One period of the output clock signal is synthesized in response to two assertions of the count indicator when the fractional indicator indicates the divide ratio is an integer.

03-10-2013 publication date

Translation lookaside buffer for multiple context compute engine

Number: US20130262816A1
Assignee: Intel Corp

Some implementations disclosed herein provide techniques and arrangements for a specialized logic engine that includes a translation lookaside buffer to support multiple threads executing on multiple cores. The translation lookaside buffer enables the specialized logic engine to directly access a virtual address of a thread executing on one of the plurality of processing cores. For example, an acceleration compute engine may receive one or more instructions from a thread executed by a processing core. The acceleration compute engine may retrieve, based on an address space identifier associated with the one or more instructions, a physical address associated with the one or more instructions from the translation lookaside buffer to execute the one or more instructions using the physical address.

10-10-2013 publication date

Apparatus and method for implementing a multi-level memory hierarchy having different operating modes

Number: US20130268728A1
Assignee: Individual

A system and method are described for integrating a memory and storage hierarchy, including a non-volatile memory tier, within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.” In one embodiment, the “near memory” is configured to operate in a plurality of different modes of operation including (but not limited to) a first mode in which the near memory operates as a memory cache for the far memory and a second mode in which the near memory is allocated a first address range of a system address space with the far memory being allocated a second address range of the system address space, wherein the first range and second range represent the entire system address space.

17-10-2013 publication date

Remote memory management when switching optically-connected memory

Number: US20130275705A1
Assignee: International Business Machines Corp

A remote memory superpage is retained in the remote memory of the memory blade when the superpage is read into a local memory.

17-10-2013 publication date

Program execution device and compiler system

Number: US20130275716A1
Author: Yoshitaka Nishida
Assignee: Panasonic Corp

A program execution device includes a program loader reading a machine language program including a machine language code and access frequency information; an address conversion table creator creating an address conversion table including entries, each of which indicates a relation between a logical address range and a physical address range; and a TLB register registering, in a TLB, an entry of the address conversion table storing a logical address range accessed according to the machine language code. When determining that the frequency of access to a logical address range is high based on the access frequency information, the address conversion table creator adjusts the size of an entry storing this logical address range to an appropriate size.

24-10-2013 publication date

Systems and methods for backing up storage volumes in a storage system

Number: US20130282975A1
Assignee: International Business Machines Corp

Systems and methods for backing up storage volumes are provided. One system includes a primary side, a secondary side, and a network coupling the primary and secondary sides. The secondary side includes first and second VTS, each including a cache and storage tape. The first VTS is configured to store a first portion of a group of storage volumes in its cache and migrate the remaining portion to its storage tape. The second VTS is configured to store the remaining portion of the storage volumes in its cache and migrate the first portion to its storage tape. One method includes receiving multiple storage volumes from a primary side, storing the storage volumes in the cache of the first and second VTS, migrating a portion of the storage volumes from the cache to storage tape in the first VTS, and migrating a remaining portion of the storage volumes from the cache to storage tape in the second VTS.

31-10-2013 publication date

Method and apparatus for adjustable virtual addressing for data storage

Number: US20130290668A1
Author: Se Wook Na
Assignee: SEAGATE TECHNOLOGY LLC

Methods and apparatuses for adjusting the size of a virtual band or virtual zone of a storage medium are provided. In one embodiment, an apparatus may comprise a data storage device including a data storage medium having a physical zone; and a processor configured to receive a virtual addressing adjustment command, and adjust a number of virtual addresses in a virtual band mapped to the physical zone based on the virtual addressing adjustment command. In another embodiment, a method may comprise providing a data storage device configured to implement virtual addresses associated with a virtual band mapped to a physical zone of a data storage medium of the data storage device, receiving at the data storage device a virtual addressing adjustment command, and adjusting a number of virtual addresses in a virtual band based on the virtual addressing adjustment command.

31-10-2013 publication date

Memory range preferred sizes and out-of-bounds counts

Number: US20130290670A1
Assignee: Oracle International Corp

A system that includes a memory, a tilelet data structure entry, a first tile freelist, and an allocation subsystem. The memory includes a first tilelet on a first tile. The tilelet data structure entry includes a first tilelet preferred pagesize assigned to a first value. The first tile freelist for the first tile includes a first tile in-bounds page freelist, and a first tile out-of-bounds page freelist. The allocation subsystem is configured to detect that a first physical page is freed, store, in the first tile in-bounds page freelist, a first page data structure, detect that a second physical page is freed, store, in the first tile out-of-bounds page freelist, a second page data structure, and coalesce the memory using the second page and at least one of the physical pages associated with the plurality of out-of-bounds page data structures into a third physical page.

28-11-2013 publication date

Apparatus and method for accelerating operations in a processor which uses shared virtual memory

Number: US20130318323A1
Assignee: Intel Corp

An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.

28-11-2013 publication date

Methods for managing failure of a solid state device in a caching storage

Number: US20130318391A1
Assignee: Stec Inc

Techniques for managing caching use of a solid state device are disclosed. In some embodiments, the techniques may be realized as a method for managing caching use of a solid state device. Management of the caching use may include receiving, at a host device, notification of failure of a solid state device. In response to the notification a cache mode may be set to uncached. In uncached mode input/output (I/O) requests may be directed to uncached storage (e.g., disk).

19-12-2013 publication date

Configurable and scalable storage system

Number: US20130339602A1
Author: James A. Tucci
Assignee: ARCHION Inc

The system utilizes a plurality of layers to provide a robust storage solution. One layer is the RAID engine that provides parity RAID protection, disk management and striping for the RAID sets. The second layer is called the virtualization layer and it separates the physical disks and storage capacity into virtual disks that mirror the drives that a target system requires. A third layer is a LUN (logical unit number) layer that is disposed between the virtual disks and the host. By using this approach, the system can be used to represent any number, size, or capacity of disks that a host system requires while using any configuration of physical RAID storage.

19-12-2013 publication date

Cache memory prefetching

Number: US20130339625A1
Assignee: International Business Machines Corp

According to exemplary embodiments, a computer program product, system, and method for prefetching in memory include determining a missed access request for a first line in a first cache level and accessing an entry in a prefetch table, wherein the entry corresponds to a memory block, wherein the entry includes segments of the memory block. Further, the embodiment includes determining a demand segment of the segments in the entry, the demand segment corresponding to a segment of the memory block that includes the first line, reading a first field in the demand segment to determine if a second line in the demand segment is spatially related with respect to accesses of the demand segment and reading a second field in the demand segment to determine if a second segment in the entry is temporally related to the demand segment.

19-12-2013 publication date

Next Instruction Access Intent Instruction

Number: US20130339672A1
Assignee: International Business Machines Corp

Executing a Next Instruction Access Intent instruction by a computer. The processor obtains an access intent instruction indicating an access intent. The access intent is associated with an operand of a next sequential instruction. The access intent indicates usage of the operand by one or more instructions subsequent to the next sequential instruction. The computer executes the access intent instruction. The computer obtains the next sequential instruction. The computer executes the next sequential instruction, which comprises based on the access intent, adjusting one or more cache behaviors for the operand of the next sequential instruction.

19-12-2013 publication date

VM inter-process communication

Number: US20130339953A1
Assignee: VMware LLC

A method for enabling inter-process communication between a first application and a second application, the first application running within a first context and the second application running within a second context of a virtualization system is described. The method includes receiving a request to attach a shared region of memory to a memory allocation, identifying a list of one or more physical memory pages defining the shared region that corresponds to the handle, and mapping guest memory pages corresponding to the allocation to the physical memory pages. The request is received by a framework from the second application and includes a handle that uniquely identifies the shared region of memory as well as an identification of at least one guest memory page corresponding to the memory allocation. The framework is a component of a virtualization software, which executes in a context distinct from the context of the first application.

02-01-2014 publication date

Method and Apparatus For Bus Lock Assistance

Number: US20140006661A1
Assignee: Intel Corp

A method is described that includes detecting that an instruction of a thread is a locked instruction. The method also includes determining that execution of said instruction includes imposing a bus lock. The method also includes executing a bus lock assistance function in response to said determining, said bus lock assistance function including a function associated with said bus lock other than implementation of a bus lock protocol.

Publication date: 09-01-2014

Computer system, cache control method and computer program

Number: US20140012936A1
Assignee: HITACHI LTD

The first application program and/or the second application program send(s) an access request to the second cache management module. The second cache management module receives the access request from the first application program and/or the second application program, and references the second cache management table to identify the storage location of the access-target data conforming to the access request. When the access-target data exists in the first cache area, the second cache management module sends a data transfer request to the first cache management module storing the access-target data; when the access-target data does not exist in the first cache area, it acquires the access-target data from the second storage device. When the access-target data is in the first cache area, the first cache management module acquires the access-target data conforming to the data transfer request from the relevant first cache area, and sends the access-target data to the second cache management module.
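
Reduced to a pseudocode-like Python sketch (names invented), the lookup path amounts to a location check followed by either a peer transfer or a storage read:

def handle_access(key, location_table, first_cache, second_storage):
    # location_table plays the role of the second cache management table.
    if location_table.get(key) == "first-cache":
        return first_cache[key]      # served via a data transfer request
    return second_storage[key]       # fall back to the second storage device

# handle_access("blk7", {"blk7": "first-cache"}, {"blk7": b"hot"}, {}) -> b"hot"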

Publication date: 16-01-2014

Methods of cache preloading on a partition or a context switch

Number: US20140019689A1
Assignee: International Business Machines Corp

A scheme referred to as a “Region-based cache restoration prefetcher” (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the “cold” cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown.
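
A rough sketch of the region bookkeeping RECAP implies, with an assumed region granularity and invented interfaces; the compression step is omitted:

REGION_SHIFT = 12  # group cache blocks into 4 KiB-aligned regions (assumed size)

def record_access(useful_regions, vm_id, block_addr):
    # Remember which coarse-grain regions held useful blocks for this VM.
    useful_regions.setdefault(vm_id, set()).add(block_addr >> REGION_SHIFT)

def on_context_switch_in(useful_regions, vm_id, prefetch_region):
    # Warm the "cold" cache when the virtual machine executes again.
    for region in useful_regions.get(vm_id, ()):
        prefetch_region(region << REGION_SHIFT)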

Publication date: 16-01-2014

Processor, information processing apparatus, and control method of processor

Number: US20140019690A1
Author: Mikio Hondo, Toru Hikichi
Assignee: Fujitsu Ltd

A request storing unit in a PF port stores an expanded request. A PF port entry selecting unit controls the two pre-fetch requests expanded from the expanded request so that they are input consecutively to an L2-pipe. When only one of the two expanded pre-fetch requests is aborted, the PF port entry selecting unit further controls the requests such that the aborted pre-fetch request is input to the L2-pipe as the highest-priority request. Further, the PF port entry selecting unit receives the number of available resources from a resource managing unit in order to select a pre-fetch request to be input to a pipe inputting unit based on the number of available resources.

Publication date: 30-01-2014

Techniques to request stored data from a memory

Number: US20140028693A1
Author: Jianyu Li, Jun Ye, Kebing Wang
Assignee: Jianyu Li, Jun Ye, Kebing Wang

Techniques are described to configure a cache line structure based on attributes of a draw call and access direction of a texture. Attributes of textures (e.g., texture format and filter type), samplers, and shaders used by the draw call can be considered to determine the line size of a cache. Access direction can be considered to reduce the number of lines that are used to store texels required by a sample request.

Publication date: 30-01-2014

Providing a hybrid memory

Number: US20140032818A1
Assignee: Hewlett Packard Development Co LP

A hybrid memory has a volatile memory and a non-volatile memory. The volatile memory is dynamically configurable to have a first portion that is part of a memory partition, and a second portion that provides a cache for the non-volatile memory.

Publication date: 30-01-2014

Systems and methods for supporting a plurality of load and store accesses of a cache

Number: US20140032846A1
Assignee: Soft Machines Inc

Systems and methods for supporting a plurality of load and store accesses of a cache are disclosed. Responsive to a request of a plurality of requests to access a block of a plurality of blocks of a load cache, the block of the load cache and a logically and physically paired block of a store coalescing cache are accessed in parallel. The data that is accessed from the block of the load cache is overwritten by the data that is accessed from the block of the store coalescing cache by merging on a per byte basis. Access is provided to the merged data.
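
The per-byte merge can be shown concretely; here store-coalescing data overwrites load-cache data byte by byte, with the valid-byte mask being an assumption about how fresh store bytes are tracked:

def merge_blocks(load_block: bytes, store_block: bytes, store_valid: list) -> bytes:
    merged = bytearray(load_block)
    for i, valid in enumerate(store_valid):
        if valid:                      # store data wins, one byte at a time
            merged[i] = store_block[i]
    return bytes(merged)

# merge_blocks(b"\x00" * 4, b"\xAA\xBB\xCC\xDD", [True, False, True, False])
# -> b"\xaa\x00\xcc\x00"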

Publication date: 30-01-2014

Systems and methods for maintaining the coherency of a store coalescing cache and a load cache

Number: US20140032856A1
Assignee: Soft Machines Inc

A method for maintaining the coherency of a store coalescing cache and a load cache is disclosed. As a part of the method, responsive to a write-back of an entry from a level one store coalescing cache to a level two cache, the entry is written into the level two cache and into the level one load cache. The writing of the entry into the level two cache and into the level one load cache is executed at the speed of access of the level two cache.

Publication date: 13-02-2014

Memory-access-resource management

Number: US20140047201A1
Author: Bhavesh Mehta
Assignee: VMware LLC

The present application is directed to a memory-access-multiplexing memory controller that can multiplex memory accesses from multiple hardware threads, cores, and processors according to externally specified policies or parameters, including policies or parameters set by management layers within a virtualized computer system. A memory-access-multiplexing memory controller provides, at the physical-hardware level, a basis for ensuring rational and policy-driven sharing of the memory-access resource among multiple hardware threads, cores, and/or processors.

Publication date: 20-02-2014

Shared virtual memory

Number: US20140049551A1
Assignee: Intel Corp

A method and system for shared virtual memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a system memory. A CPU virtual address space may be created, and the surface may be mapped to the CPU virtual address space within a CPU page table. The method also includes creating a GPU virtual address space equivalent to the CPU virtual address space, mapping the surface to the GPU virtual address space within a GPU page table, and pinning the surface.
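
In outline (invented helpers, dictionary page tables standing in for the real structures), the flow maps one surface at the same virtual address on both sides and pins it:

def share_surface(surface_id, vaddr, cpu_page_table, gpu_page_table, pinned):
    cpu_page_table[vaddr] = surface_id    # map the surface into CPU virtual space
    gpu_page_table[vaddr] = surface_id    # equivalent virtual address on the GPU side
    pinned.add(surface_id)                # pin: the surface may not be paged out
    return vaddr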

Publication date: 20-02-2014

Data cache prefetch hints

Number: US20140052927A1
Assignee: Advanced Micro Devices Inc

The present invention provides a method and apparatus for using prefetch hints. One embodiment of the method includes bypassing, at a first prefetcher associated with a first cache, issuing requests to prefetch data from a number of memory addresses in a sequence of memory addresses determined by the first prefetcher. The number is indicated in a request received from a second prefetcher associated with a second cache. This embodiment of the method also includes issuing, from the first prefetcher, a request to prefetch data from a memory address subsequent to the bypassed memory addresses.
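
A simplified model of the hint: the receiving prefetcher skips the number of sequence addresses the hint says were covered elsewhere and issues the next one. Interfaces are invented:

def issue_with_hint(start_addr, stride, bypass_count, issue_prefetch):
    # Addresses 0 .. bypass_count-1 in the stride sequence were handled by
    # the other cache's prefetcher, so they are bypassed here.
    next_addr = start_addr + stride * bypass_count
    issue_prefetch(next_addr)   # prefetch the address subsequent to the bypassed ones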

Publication date: 27-02-2014

Method, apparatus, and system for speculative abort control mechanisms

Number: US20140059333A1
Assignee: Intel Corp

An apparatus and method are described herein for providing robust speculative code section abort control mechanisms. Hardware is able to track speculative code region abort events, conditions, and/or scenarios, such as an explicit abort instruction, a data conflict, a speculative timer expiration, a disallowed instruction attribute or type, etc. Hardware, firmware, software, or a combination thereof then makes an abort determination based on the tracked abort events. As an example, hardware may make an initial abort determination based on one or more predefined events or choose to pass the event information up to a firmware or software handler to make such an abort determination. Upon determining that an abort of a speculative code region is to be performed, hardware, firmware, software, or a combination thereof performs the abort, which may include following a fallback path specified by hardware or software. To enable testing of such a fallback path, in one implementation, hardware provides software a mechanism to always abort speculative code regions.

Publication date: 06-03-2014

Systems, methods, and interfaces for adaptive cache persistence

Number: US20140068197A1
Assignee: Fusion IO LLC

A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration.

Publication date: 06-03-2014

Free space collection in log structured storage systems

Number: US20140068219A1
Author: Bruce Mcnutt
Assignee: International Business Machines Corp

A mechanism is provided for optimizing free space collection in a storage system having a plurality of segments. A collection score value is calculated for at least one of the plurality of segments. The collection score value is calculated by determining a sum, across tracks in the segment, of the amount of time over a predetermined period of time during which the track has been invalid due to a more recent copy being written in a different segment. Segments are chosen for free space collection based on the determined collection score value.
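
The scoring rule translates almost directly into code; below, each track record is assumed to carry the time at which it was invalidated (None if still valid):

def collection_score(tracks, now, window):
    start = now - window
    score = 0
    for invalidated_at in tracks:            # one invalidation timestamp per track
        if invalidated_at is not None:       # a newer copy exists in another segment
            # Count only the invalid time falling inside the window.
            score += now - max(invalidated_at, start)
    return score

def pick_segments(segments, now, window, k):
    # Segments with the most accumulated invalid time are collected first.
    return sorted(segments, key=lambda s: collection_score(s, now, window),
                  reverse=True)[:k]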

Publication date: 13-03-2014

Hybrid active memory processor system

Number: US20140075119A1
Author: Moon J. Kim
Assignee: IP Cube Partners (ICP) Co Ltd

In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system. The system includes receiving a data request, selecting an operational mode based on the data request and a predefined selection algorithm, and processing the data request based on the selected operational mode. The present invention is further configured to enable processing core and memory utilization by external systems through virtualization.

Publication date: 13-03-2014

Concurrent Control For A Page Miss Handler

Number: US20140075123A1
Assignee: Intel Corp

In an embodiment, a page miss handler includes paging caches and a first walker to receive a first linear address portion and to obtain a corresponding portion of a physical address from a paging structure, a second walker to operate concurrently with the first walker, and a logic to prevent the first walker from storing the obtained physical address portion in a paging cache responsive to the first linear address portion matching a corresponding linear address portion of a concurrent paging structure access by the second walker. Other embodiments are described and claimed.

Publication date: 27-03-2014

Application-assisted handling of page faults in I/O operations

Number: US20140089451A1
Assignee: MELLANOX TECHNOLOGIES LTD

A method for data transfer includes receiving in an operating system of a host computer an instruction initiated by a user application running on the host processor identifying a page of virtual memory of the host computer that is to be used in receiving data in a message that is to be transmitted over a network to the host computer but has not yet been received by the host computer. In response to the instruction, the page is loaded into the memory, and upon receiving the message, the data are written to the loaded page.

Publication date: 06-01-2022

HOST MANAGED HOTNESS DATA UTILIZED FOR CACHE EVICTIONS AND/OR INSERTIONS

Number: US20220004495A1
Assignee: Intel Corporation

Systems, apparatuses, and methods provide for a memory controller to manage cache evictions and/or insertions in a data server environment based at least in part on host managed hotness data. For example, a memory controller includes logic to receive a plurality of read and write requests from a host, where the plurality of read and write requests include an associated hotness data. A valid unit count of operational memory cells is maintained on a block-by-block basis for a plurality of memory blocks. A hotness index count is also maintained based at least in part on the hotness data on a block-by-block basis for the plurality of memory blocks. One or more memory blocks of the plurality of memory blocks are selected for eviction from a single level cell region to an x-level cell region based at least in part on the valid unit count and the hotness index count.
1. A semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable or fixed-functionality hardware logic, the logic to: receive, via a memory controller of a memory device, a plurality of read and write requests from a host, wherein the plurality of read and write requests include an associated hotness data; maintain, via the memory controller, a valid unit count of operational memory cells on a block-by-block basis for a plurality of memory blocks; maintain, via the memory controller, a hotness index count based at least in part on the hotness data, wherein the hotness index count is maintained on a block-by-block basis for the plurality of memory blocks; and select, via the memory controller, one or more memory blocks of the plurality of memory blocks for eviction based at least in part on the valid unit count and the hotness index count, wherein the eviction is from a single level cell region to an x-level cell region.
2. The semiconductor apparatus of claim 1, wherein the ...
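
A hedged sketch of the selection step in claim 1; the text does not specify how the two counts are weighted, so ranking cold, sparsely valid blocks first is an assumption:

from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    valid_unit_count: int      # operational units still valid in the block
    hotness_index_count: int   # derived from host-reported hotness data

def select_for_eviction(slc_blocks, count):
    # Prefer cold blocks with little valid data; the weighting is assumed.
    ranked = sorted(slc_blocks,
                    key=lambda b: (b.hotness_index_count, b.valid_unit_count))
    return ranked[:count]      # candidates to move from SLC to the x-level cell region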

Publication date: 06-01-2022

MEMORY SYSTEM, MEMORY CONTROLLER, AND METHOD OF OPERATING MEMORY SYSTEM

Number: US20220004496A1
Author: HA Chan Ho
Assignee:

Disclosed are a memory system, a memory controller, and a method of operating a memory system. The memory system may control the memory device to store data into zones of memory blocks in the memory device by assigning each data to be written with an address subsequent to a most recently written address in a zone, store journal information including mapping information between a logical address and a physical address for one of the one or more zones in a journal cache, search for journal information corresponding to a target zone targeted to write data when mapping information for the target zone among the one or more zones is updated, and replace the journal information corresponding to the target zone with journal information including the updated mapping information.
1. A memory system comprising: a memory device including memory cells for storing data and operable to perform an operation on one or more memory cells, including a read operation for reading data stored in one or more memory cells, a program operation for writing new data into one or more memory cells, or an erase operation for deleting stored data in one or more memory cells; and a memory controller in communication with the memory device and configured to control the memory device to perform an operation, wherein the memory controller is further configured to: control the memory device to store data into zones of memory blocks in the memory device by assigning each data to be written with an address subsequent to a most recently written address in a zone, wherein the zones of memory blocks are split from a namespace in the memory device; store, in a journal cache, journal information comprising mapping information between a logical address and a physical address for one of the one or more zones; search, in the journal cache, for journal information corresponding to a target zone targeted to write data, when mapping information for the target zone among the one or more zones is updated; and replace the ...

Publication date: 06-01-2022

FLOW CACHE MANAGEMENT

Number: US20220006737A1
Assignee: NOKIA SOLUTIONS AND NETWORKS OY

Packet-processing circuitry including one or more flow caches whose contents are managed using a cache-entry replacement policy that is implemented based on one or more updatable counters maintained for each of the cache entries. In an example embodiment, the implemented policy enables the flow cache to effectively catch and keep elephant flows by giving to the caught elephant flows appropriate preference in terms of the cache dwell time, which can beneficially improve the overall cache-hit ratio and/or packet-processing throughput. Some embodiments can be used to implement an Open Virtual Switch (OVS). Some embodiments are advantageously capable of implementing the cache-entry replacement policy with very limited additional memory allocation.
1. An apparatus comprising a network device that comprises packet-processing circuitry configured to apply sets of flow-specific actions to received packets based on identification of a respective flow for each of the received packets; wherein the packet-processing circuitry comprises a first flow cache and an electronic cache controller, the first flow cache being configured to aid in the identification by storing therein a plurality of entries, each of the entries pointing to a respective one of the sets, the electronic cache controller being configured to replace at least some of the entries based on corresponding first updatable counters; wherein, in response to a cache hit, the packet-processing circuitry is configured to increment the first updatable counter corresponding to a hit entry; and wherein the packet-processing circuitry comprises a plurality of flow caches configured to be accessed in a defined sequence in response to one or more cache misses, said plurality including the first flow cache.
2. The apparatus of claim 1, wherein, in response to a cache miss, the packet-processing circuitry is configured to: decrement the first updatable counter for a corresponding existing one of the entries; or add ...
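
One plausible reading of the counter-driven replacement policy, sketched with invented types: hits increment an entry's counter, misses decrement the victim candidate's counter, and replacement happens only once the counter is exhausted, which is what lets elephant flows dwell longer:

from dataclasses import dataclass

@dataclass
class FlowEntry:
    actions: str        # points to the set of flow-specific actions
    counter: int = 0    # the updatable counter maintained per entry

def on_hit(entry: FlowEntry):
    entry.counter += 1                      # reward flows that keep hitting

def on_miss(cache: dict, key, actions, victim_key):
    victim = cache[victim_key]
    if victim.counter > 0:
        victim.counter -= 1                 # elephants decay slowly, earning dwell time
    else:
        del cache[victim_key]               # replace only an exhausted entry
        cache[key] = FlowEntry(actions)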

Publication date: 05-01-2017

SYSTEMS AND METHODS FOR STORAGE PARALLELISM

Number: US20170003902A1
Assignee:

One method includes streaming a data segment to a write buffer corresponding to a virtual page including at least two physical pages. Each physical page is defined within a respective solid-state storage element. The method also includes programming contents of the write buffer to the virtual page, such that a first portion of the data segment is programmed to a first one of the physical pages, and a second portion of the data segment is programmed to a second one of the physical pages.
1. An apparatus comprising: a storage division selection module configured to select a storage division of a solid-state storage medium for recovery, wherein the solid-state storage medium comprises a plurality of storage divisions, and wherein each storage division comprises a plurality of storage locations; an erase module configured to erase the selected storage division; and a storage division recovery module configured to store a sequence indicator in the erased storage division, wherein the sequence indicator is indicative of an ordered sequence of the plurality of storage divisions.
2. The apparatus of claim 1, further comprising a storage module configured to store a data segment in a physical location of the erased storage division in response to a storage request.
3. The apparatus of claim 2, wherein the erased storage division is selected for writing in accordance with the stored sequence indicator.
4. The apparatus of claim 2, further comprising an index module configured to create an entry in an index, the entry mapping a logical identifier associated with the data segment to the physical location of the erased storage division.
5. The apparatus of claim 2, further comprising an index reconstruction module configured to identify a most recent data segment associated with a logical identifier from a plurality of data segments associated with the logical identifier, wherein the most recent data segment is identified in accordance with the sequence indicator ...

Publication date: 05-01-2017

MEMORY SYSTEM FOR CONTROLLING SEMICONDUCTOR MEMORY DEVICES THROUGH PLURALITY OF CHANNELS

Number: US20170003909A1
Author: CHO Sung Yeob
Assignee:

A memory system includes a plurality of channels; a plurality of semiconductor memory devices connected to the channels; and a controller that controls the semiconductor memory devices through the channels, wherein the controller writes program data in a first semiconductor memory device of the plurality of semiconductor memory devices, and wherein, when the writing of the program data fails, the program data is temporarily stored in a page buffer unit of a second semiconductor memory device of the plurality of semiconductor memory devices connected to a channel other than the channel corresponding to the first semiconductor memory device.
1. A memory system, comprising: a plurality of channels; a plurality of semiconductor memory devices connected to the channels; and a controller that controls the semiconductor memory devices through the channels, wherein the controller writes program data in a first semiconductor memory device of the plurality of semiconductor memory devices, and wherein, when the writing of the program data fails, the program data is temporarily stored in a page buffer unit of a second semiconductor memory device of the plurality of semiconductor memory devices connected to a channel other than the channel corresponding to the first semiconductor memory device.
2. The memory system of claim 1, wherein the controller retrieves the program data from the page buffer unit of the second semiconductor memory device, and re-writes the program data in one of the semiconductor memory devices.
3. The memory system of claim 1, wherein the first semiconductor memory device is connected to a first channel among the channels, the second semiconductor memory device is connected to a second channel among the channels, and when the writing of the program data fails, the controller retrieves the program data from the first semiconductor memory device through the first channel, and temporarily stores the program data in the page buffer unit of ...

Publication date: 05-01-2017

SYSTEM RESOURCE BALANCE ADJUSTMENT METHOD AND DEVICE

Number: US20170003912A1
Author: Li Guining
Assignee:

The disclosure provides a system resource balance adjustment method and device. The method comprises: categorizing high speed buffer memory resources according to the type of disk accessed, and respectively determining, according to service requirements, a target value for configuration parameters of the high speed buffer memory resources of each category; and, when the system is in operation, periodically detecting whether the high speed buffer memory resources are balanced and, when it is determined that the high speed buffer memory resources need to be adjusted, adjusting the front-end page allocation and/or back-end resources corresponding to the imbalanced high speed buffer memory resource categories according to the target value. The technical solution of the disclosure allows each type of service to occupy shared resources more reasonably, and adjusts the performance of the entire system to a required mode.
1. A system resource balance adjustment method, comprising: categorizing high speed buffer memory resources according to a type of disk accessed, and respectively determining, according to service requirements, a target value for configuration parameters of the high speed buffer memory resources of each category; and when a system is in operation, periodically detecting whether the high speed buffer memory resources are balanced, and when it is determined that the high speed buffer memory resources are required to be adjusted, adjusting front-end page allocation and/or back-end resources corresponding to imbalanced high speed buffer memory resource categories according to the target value.
2. The method as claimed in claim 1, wherein the configuration parameters comprise at least one of the following: a page resource required to be reserved by a service, input/output operations per second (IOPS) corresponding to the service, a bandwidth occupied by the service input output (IO), and an IO response time.
3. The method as claimed in claim 1, ...

Publication date: 05-01-2017

TECHNIQUES FOR HANDLING MEMORY ACCESSES BY PROCESSOR-INDEPENDENT EXECUTABLE CODE IN A MULTI-PROCESSOR ENVIRONMENT

Number: US20170003988A9
Author: Shatz Leonid
Assignee: RAVELLO SYSTEMS LTD.

A method and apparatus for virtual address mapping are provided. The method includes determining an offset value respective of at least a first portion of code stored on a code memory unit; generating a first virtual code respective of the first portion of code and a second virtual code respective of a second portion of code stored on the code memory unit; mapping the first virtual code to a first virtual code address and the second virtual code to a second virtual code address; generating a first virtual data respective of the first portion of data and a second virtual data respective of the second portion of data; and mapping the first virtual data to a first virtual data address and the second virtual data to a second virtual data address.
1. An apparatus for virtual address mapping, comprising: a first memory unit including a plurality of code portions mapped to a plurality of respective code virtual address starting points, wherein each code virtual address starting point of the plurality of respective code virtual address starting points is set apart from at least one other code virtual address starting point of the plurality of respective code virtual address starting points by an offset of a plurality of offsets; a second memory unit including a plurality of data portions, each data portion respective of a code portion of the plurality of code portions, mapped to a plurality of respective data virtual address starting points, wherein each data virtual address starting point of the plurality of respective data virtual address starting points is set apart from at least one other data virtual address starting point of the plurality of respective data virtual address starting points by the offset of the plurality of offsets used to set apart a code virtual address of the respective code portion; and a memory management unit configured to map each code portion of the plurality of code portions to a first memory unit address of the first memory unit, wherein the ...

Publication date: 05-01-2017

TRANSACTIONAL STORAGE ACCESSES SUPPORTING DIFFERING PRIORITY LEVELS

Number: US20170004004A1
Assignee:

In at least some embodiments, a cache memory of a data processing system receives a transactional memory access request including a target address and a priority of the requesting memory transaction. In response, transactional memory logic detects a conflict for the target address with a transaction footprint of an existing memory transaction and accesses a priority of the existing memory transaction. In response to detecting the conflict, the transactional memory logic resolves the conflict by causing the cache memory to fail the requesting or existing memory transaction based at least in part on their relative priorities. Resolving the conflict includes at least causing the cache memory to fail the existing memory transaction when the requesting memory transaction has a higher priority than the existing memory transaction, the transactional memory access request is a transactional load request, and the target address is within a store footprint of the existing memory transaction.
1. A method of data processing in a data processing system, the method comprising: at a cache memory of a data processing system, receiving a transactional memory access request generated by execution by a processor core of a transactional memory access instruction within a requesting memory transaction, wherein the transactional memory access request includes a target address of data to be accessed and indicates a priority of the requesting memory transaction; in response to the transactional memory access request, transactional memory logic detecting a conflict for the target address between the transactional memory access request and a transaction footprint of an existing memory transaction; the transactional memory logic accessing a priority of the existing memory transaction; and causing the cache memory to fail the existing memory transaction when the requesting memory transaction has a higher priority than the existing memory transaction, the transactional memory access request is a ...

Publication date: 05-01-2017

DYNAMIC MEMORY EXPANSION BY DATA COMPRESSION

Number: US20170004069A1

Dynamic memory expansion based on data compression is described. Data represented in at least one page to be written to a main memory of a computing device is received. The data is compressed in the at least one page to generate at least one compressed physical page and a metadata entry corresponding to each page of the at least one compressed physical page. The metadata entry is cached in a metadata cache including metadata entries and pointers to the uncompressed region of the at least one compressed physical page.
1. A method for dynamic memory expansion by data compression, the method comprising: receiving data represented in at least one page to be written to a main memory of a computing device, wherein the main memory stores data in blocks represented as a plurality of physical pages, and wherein each physical page from amongst the plurality of physical pages comprises at least one cache line; compressing the data in the at least one page to generate at least one compressed physical page, wherein each cache line within the at least one page is compressed to be stored in one of a compressed region and an uncompressed region of the at least one compressed physical page; generating a metadata entry corresponding to each page of the at least one compressed physical page, wherein the metadata entry comprises information corresponding to one or more metadata parameters associated with each page of the at least one compressed physical page; and caching the metadata entry in a two level metadata cache, wherein the two level metadata cache comprises metadata entries and pointers to the uncompressed region of the at least one compressed physical page.
2. The method as claimed in claim 1, wherein the method further comprises segregating the main memory into a data section and a metadata section.
3. The method as claimed in claim 2, wherein the method further comprises storing the metadata entry onto the metadata section of the main memory.
4. The method as claimed in claim ...

Publication date: 05-01-2017

METHODS FOR HOST-SIDE CACHING AND APPLICATION CONSISTENT WRITEBACK RESTORE AND DEVICES THEREOF

Number: US20170004082A1
Author: Basu Sourav, Sehgal Priya
Assignee:

A method, non-transitory computer readable medium, and device that assist with file-based host-side caching and application consistent write back include receiving a write operation on a file from a client computing device. It is determined whether the file for which the write operation has been received is present within a cache of the storage management computing device. An acknowledgement is sent back to the client computing device indicating the acceptance of the write operation when the file for which the write operation has been received is determined to be present within the cache. The write-back operation is completed for data present in the cache of the storage management computing device to one of the plurality of servers upon sending the acknowledgement.
1. A method for file-based host-side caching and application consistent write back, the method comprising: receiving, by a storage management computing device, a write operation on a file from a client computing device; determining, by the storage management computing device, when the file for which the write operation has been received is present within a cache of the storage management computing device; sending, by the storage management computing device, an acknowledgement indicating the acceptance of the write operation back to the client computing device when the file for which the write operation has been received is determined to be present within the cache; and completing, by the storage management computing device, a write-back operation for data present in the cache of the storage management computing device to one of the plurality of servers upon sending the acknowledgement.
2. The method as set forth in claim 1, wherein the determining further comprises obtaining and caching, by the storage management computing device, the file to the cache of the storage management computing device when the file for which the write operation has been received is not determined to be present within the cache.
3. The method as set ...

Publication date: 05-01-2017

System, method and mechanism to efficiently coordinate cache sharing between cluster nodes operating on the same regions of a file or the file system blocks shared among multiple files

Number: US20170004083A1
Assignee: Veritas Technologies LLC

Various systems, methods and apparatuses for coordinating the sharing of cache data between cluster nodes operating on the same data objects. One embodiment involves a first node in a cluster receiving a request for a data object, querying a global lock manager to determine if a second node in the cluster is the lock owner of the data object, receiving an indication identifying the second node as the lock owner and indicating that the data object is available in the second node's local cache, requesting the data object from the second node, and then receiving the data object from the second node's local cache. Other embodiments include determining whether the lock is a shared lock or an exclusive lock, and either pulling the data object from the local node of the second cache or receiving the data object that is pushed from the second node, as appropriate.
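
The embodiment's request path condenses to a few steps; the dictionaries below stand in for the global lock manager, peer caches, and shared storage:

def read_object(obj_id, lock_owner, peer_caches, shared_storage, local_node):
    owner = lock_owner.get(obj_id)          # query the global lock manager
    if owner is not None and owner != local_node:
        peer = peer_caches.get(owner, {})
        if obj_id in peer:
            return peer[obj_id]             # pull the object from the peer's local cache
    return shared_storage[obj_id]           # otherwise read from shared storage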

Publication date: 05-01-2017

Cache coherent system including master-side filter and data processing system including same

Number: US20170004084A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An application processor is provided. The application processor includes a cache coherent interconnect, a first master device connected to the cache coherent interconnect, a second master device, and a master-side filter connected between the cache coherent interconnect and the second master device. The master-side filter receives a snoop request from the first master device through the cache coherent interconnect, compares a second security attribute of the second master device with a first security attribute of the first master device which is included in the snoop request, and determines whether to transmit an address included in the snoop request to the second master device according to a comparison result.

Publication date: 05-01-2017

TRANSACTIONAL STORAGE ACCESSES SUPPORTING DIFFERING PRIORITY LEVELS

Number: US20170004085A1
Assignee:

In at least some embodiments, a cache memory of a data processing system receives a transactional memory access request including a target address and a priority of the requesting memory transaction. In response, transactional memory logic detects a conflict for the target address with a transaction footprint of an existing memory transaction and accesses a priority of the existing memory transaction. In response to detecting the conflict, the transactional memory logic resolves the conflict by causing the cache memory to fail the requesting or existing memory transaction based at least in part on their relative priorities. Resolving the conflict includes at least causing the cache memory to fail the existing memory transaction when the requesting memory transaction has a higher priority than the existing memory transaction, the transactional memory access request is a transactional load request, and the target address is within a store footprint of the existing memory transaction.
1.-6. (canceled)
7. A processing unit, comprising: a processor core; a cache memory, coupled to the processor core, that receives a transactional memory access request generated by execution of a transactional memory access instruction within a requesting memory transaction, wherein the transactional memory access request includes a target address of data to be accessed and indicates a priority of the requesting memory transaction; and transactional memory logic to: responsive to the transactional memory access request, detect a conflict for the target address between the transactional memory access request and a transaction footprint of an existing memory transaction; access a priority of the existing memory transaction; and causing the cache memory to fail the existing memory transaction when the requesting memory transaction has a higher priority than the existing memory transaction, the transactional memory access request is a transactional load request, and the target address is within a store footprint of the ...

Publication date: 05-01-2017

CACHE MANAGEMENT METHOD FOR OPTIMIZING READ PERFORMANCE OF DISTRIBUTED FILE SYSTEM

Number: US20170004086A1
Assignee:

A cache management method for optimizing read performance in a distributed file system is provided. The cache management method includes: acquiring metadata of a file system; generating a list regarding data blocks based on the metadata; and pre-loading data blocks into a cache with reference to the list. Accordingly, read performance in analyzing big data in a Hadoop distributed file system environment can be optimized in comparison to a related-art method.
1. A cache management method comprising: acquiring metadata of a file system; generating a list regarding data blocks based on the metadata; and pre-loading data blocks into a cache with reference to the list.
2. The cache management method of claim 1, wherein the pre-loading comprises pre-loading data blocks requested by a client into the cache.
3. The cache management method of claim 2, wherein the pre-loading comprises pre-loading other data blocks into the cache while a data block is being processed by the client.
4. The cache management method of claim 1, wherein the pre-loading comprises pre-loading, into the cache, data blocks which are requested by the client, and data blocks which are referred to with the data blocks more than a reference number of times.
5. The cache management method of claim 1, wherein the file system is a Hadoop distributed file system, and wherein the cache is implemented by using an SSD.
6. A server comprising: a cache; and a processor configured to acquire metadata of a file system, generate a list regarding data blocks based on the metadata, and pre-load data blocks into the cache with reference to the list.
The present application claims the benefit under 35 U.S.C. §119(a) to a Korean patent application filed in the Korean Intellectual Property Office on Jun. 30, 2015, and assigned Serial No. 10-2015-0092735, the entire disclosure of which is hereby incorporated by reference. The present invention relates generally to a cache management ...
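
A minimal sketch of the pre-loading loop described in the claims (claim 3's overlap of pre-loading with client processing is not modeled); names are illustrative:

def preload(block_lists, cache, fetch_block):
    # block_lists: per-file block IDs taken from the file-system metadata.
    for blocks in block_lists:
        for block_id in blocks:
            if block_id not in cache:
                cache[block_id] = fetch_block(block_id)   # stage into the (e.g., SSD) cache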

Publication date: 05-01-2017

ADAPTIVE CACHE MANAGEMENT METHOD ACCORDING TO ACCESS CHARACTERISTICS OF USER APPLICATION IN DISTRIBUTED ENVIRONMENT

Number: US20170004087A1
Assignee:

An adaptive cache management method according to access characteristics of a user application in a distributed environment is provided. The adaptive cache management method includes: determining an access pattern of a user application; and determining a cache write policy based on the access pattern. Accordingly, a delay in speed which may occur in an application can be minimized by efficiently using resources established in a distributed environment and using an adaptive policy.
1. An adaptive cache management method comprising: determining an access pattern of a user application; and determining a cache write policy based on the access pattern.
2. The adaptive cache management method of claim 1, wherein the determining the cache write policy comprises, when the access pattern indicates that recently referred data is referred to again, determining a cache write policy of storing data recorded on a cache in a storage medium afterward.
3. The adaptive cache management method of claim 1, wherein the determining the cache write policy comprises, when the access pattern indicates that referred data is referred to again after a predetermined interval, determining a cache write policy of immediately storing data recorded on a cache in a storage medium.
4. The adaptive cache management method of claim 1, wherein the determining the cache write policy comprises, when the access pattern indicates that referred data is not referred to again, determining a cache write policy of immediately storing data in a storage medium without recording on a cache.
5. The adaptive cache management method of claim 1, further comprising: selecting data which is most likely to be referred to based on the access pattern; and loading the selected data into a cache.
6. A storage server comprising: a cache; and a processor configured to determine an access pattern of a user application and determine a cache write policy based on the access pattern.
The ...
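
Mapping the three claimed cases to conventional policy names is an interpretation, but the decision itself is a simple dispatch:

def choose_write_policy(access_pattern):
    if access_pattern == "recent-reuse":
        return "write-back"      # store cached data to the medium afterward (claim 2)
    if access_pattern == "interval-reuse":
        return "write-through"   # store to the medium immediately (claim 3)
    return "bypass"              # write straight to the medium, no caching (claim 4)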

Publication date: 05-01-2017

Multi-Host Configuration for Virtual Machine Caching

Number: US20170004090A1
Assignee:

Systems and methods disclosed herein are used to efficiently configure a plurality of memory caches. In one aspect, a method includes a server receiving or accessing a storage policy including a first caching mode for a first set of one or more virtual machine elements and a second caching mode for a second set of one or more virtual machine elements. If a virtual machine element requires configuration, the server determines whether the virtual machine element is a virtual machine element of the first set or the second set. If the virtual machine element is a virtual machine element of the first set, the server applies the first caching mode to a section of a logical solid state drive. If the virtual machine element is a virtual machine element of the second set, the server applies the second caching mode to the section of the logical solid state drive.
1. A method, performed by a server computing device, for configuring a plurality of memory caches, the method comprising: receiving or accessing a storage policy including a first caching mode for a first set of one or more virtual machine elements and a second caching mode for a second set of one or more virtual machine elements, wherein the one or more virtual machine elements of the first set are different from the one or more virtual machine elements of the second set; determining that a virtual machine element, hosted by a first host computing device, requires configuration; in response to determining that the virtual machine element requires configuration, determining whether the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements or the second set of one or more virtual machine elements; and in response to determining that the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements, applying the first caching mode to a section of a logical solid state drive associated with the virtual ...

Publication date: 05-01-2017

TRANSLATION BUFFER UNIT MANAGEMENT

Number: US20170004091A1
Assignee:

A data processing system incorporates a translation buffer unit and a translation control unit. The translation buffer unit responds to receipt of a memory access transaction for which translation data is unavailable in that translation buffer unit by issuing a request to the translation control unit to provide translation data for the memory access transaction. The translation control unit is responsive to disabling or enabling of address translation for a given type of memory access transaction to issue an invalidate command to all translation buffer units which may be holding translation data for that given type of memory access transaction. When the translation control unit receives a request for translation from the translation buffer unit for a memory access of the given type for which memory address translation is disabled, the translation control unit responds by returning global translation data to be used by the translation buffer unit for all memory access translations of that given type.
1. Apparatus for processing data comprising: a translation buffer unit to store translation data to translate an input address of a memory access transaction to an output address; and a translation control unit to provide said translation data to said translation buffer unit, wherein said translation buffer unit is responsive to receipt of a memory access transaction for which translation data is unavailable in said translation buffer unit to issue a request to said translation control unit to provide translation data for said memory access transaction; said translation control unit is responsive to a change in enablement of address translation for a given type of memory access transaction to issue an invalidate command to said translation buffer unit to invalidate any translation data for said given type of memory access transaction stored in said translation buffer unit; and said translation control unit is responsive to receipt of a request for translation data from said ...
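
A hedged sketch of the disabled-translation path described above, with invented structures; the "global" record is a single translation the buffer unit may reuse for every transaction of that type:

GLOBAL_IDENTITY = ("identity", None)   # placeholder global translation record

def on_enablement_change(tbus, txn_type):
    for tbu in tbus:
        tbu.invalidate(txn_type)       # drop any stale translation data held for this type

def handle_request(txn_type, addr, translation_enabled, walk_tables):
    if not translation_enabled[txn_type]:
        return GLOBAL_IDENTITY         # one record covers all accesses of this type
    return walk_tables(txn_type, addr) # normal path: fetch per-address translation data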
