Total found: 6139. Showing: 100.

Publication date: 09-02-2012

Apparatus and methods to concurrently perform per-thread as well as per-tag memory access scheduling within a thread and across two or more threads

Number: US20120036509A1
Assignee: Sonics Inc

A method, apparatus, and system in which an integrated circuit comprises an initiator Intellectual Property (IP) core, a target IP core, an interconnect, and tag and thread logic. The target IP core may include a memory coupled to the initiator IP core. Additionally, the interconnect can allow the integrated circuit to communicate transactions between one or more initiator Intellectual Property (IP) cores and one or more target IP cores coupled to the interconnect. The tag and thread logic can be configured to concurrently perform per-thread and per-tag memory access scheduling within a thread and across multiple threads such that the tag and thread logic manages tags and threads to allow for per-tag and per-thread scheduling of memory access requests from the initiator IP core out of order from an initial issue order of the memory access requests from the initiator IP core.

Publication date: 29-03-2012

Method and apparatus for reducing processor cache pollution caused by aggressive prefetching

Number: US20120079205A1
Author: Patrick Conway
Assignee: Advanced Micro Devices Inc

A method and apparatus for controlling a first and second cache is provided. A cache entry is received in the first cache, and the entry is identified as having an untouched status. Thereafter, the status of the cache entry is updated to accessed in response to receiving a request for at least a portion of the cache entry, and the cache entry is subsequently cast out according to a preselected cache line replacement algorithm. The cast out cache entry is stored in the second cache according to the status of the cast out cache entry.
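The untouched/accessed filtering above can be sketched as follows. This is a toy Python model, not the patent's implementation: LRU stands in for the "preselected cache line replacement algorithm", and the cache sizes and names are illustrative assumptions.

```python
from collections import OrderedDict

UNTOUCHED, ACCESSED = "untouched", "accessed"

class TwoLevelCache:
    """Sketch: the first cache marks prefetched lines as untouched; a
    demand hit flips the status to accessed. Cast-out (evicted) lines are
    admitted to the second cache only if they were accessed, so prefetches
    that were never used do not pollute the second cache."""
    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()   # addr -> status, kept in LRU order
        self.l2 = OrderedDict()   # victim cache for accessed lines only
        self.l1_size, self.l2_size = l1_size, l2_size

    def prefetch(self, addr):
        self._insert_l1(addr, UNTOUCHED)

    def access(self, addr):
        if addr in self.l1:
            self.l1[addr] = ACCESSED      # status: untouched -> accessed
            self.l1.move_to_end(addr)
        else:
            self._insert_l1(addr, ACCESSED)

    def _insert_l1(self, addr, status):
        self.l1[addr] = status
        self.l1.move_to_end(addr)
        if len(self.l1) > self.l1_size:
            victim, vstatus = self.l1.popitem(last=False)  # LRU cast-out
            if vstatus == ACCESSED:                        # filter pollution
                self.l2[victim] = True
                if len(self.l2) > self.l2_size:
                    self.l2.popitem(last=False)
```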

Publication date: 03-05-2012

Storage device cache

Number: US20120110258A1
Author: Jack Lakey, Ron WATTS
Assignee: SEAGATE TECHNOLOGY LLC

Implementations described and claimed herein provide a method and system for comparing a storage location related to a new write command on a storage device with storage locations of a predetermined number of write commands stored in a first table to determine frequency of write commands to the storage location. If the frequency is determined to be higher than a first threshold, the data related to the write command is stored in a write cache.
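A minimal sketch of that frequency test in Python; the table size, threshold, and class name are illustrative assumptions, not values from the patent.

```python
from collections import deque

class WriteFrequencyFilter:
    """Sketch: keep the storage locations of the last `table_size` write
    commands in a first table; a new write is admitted to the write cache
    only when its location already appears in that window more than
    `threshold` times (i.e., it is a frequently written location)."""
    def __init__(self, table_size=8, threshold=1):
        self.recent = deque(maxlen=table_size)  # table of recent writes
        self.threshold = threshold
        self.write_cache = set()

    def write(self, location):
        freq = sum(1 for loc in self.recent if loc == location)
        if freq > self.threshold:
            self.write_cache.add(location)      # hot location: cache it
        self.recent.append(location)
```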

Publication date: 17-05-2012

Secondary Cache Memory With A Counter For Determining Whether to Replace Cached Data

Number: US20120124291A1
Assignee: International Business Machines Corp

A selective cache includes a set configured to receive data evicted from a number of primary sets of a primary cache. The selective cache also includes a counter associated with the set. The counter is configured to indicate a frequency of access to data within the set. A decision whether to replace data in the set with data from one of the primary sets is based on a value of the counter.
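One plausible reading of that counter-based decision, sketched in Python. The keep limit, the aging step, and all names are assumptions; the patent only says the replacement decision is based on the counter's value.

```python
class SelectiveSet:
    """Sketch: a selective-cache set that receives primary-cache victims.
    A per-set counter tracks hits to the resident data; a victim replaces
    the resident data only while the counter is low, so frequently
    accessed secondary data is retained."""
    def __init__(self, keep_limit=2):
        self.data = None
        self.counter = 0          # frequency of access to data in the set
        self.keep_limit = keep_limit

    def hit(self, addr):
        if self.data == addr:
            self.counter += 1
            return True
        return False

    def offer_victim(self, addr):
        if self.data is None or self.counter < self.keep_limit:
            self.data = addr      # resident data was cold: replace it
            self.counter = 0
            return True
        self.counter -= 1         # resident data is hot: age it instead
        return False
```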

Publication date: 07-06-2012

Recommendation based caching of content items

Number: US20120144117A1
Assignee: Microsoft Corp

Content item recommendations are generated for users based on metadata associated with the content items and a history of content item usage associated with the users. Each content item recommendation identifies a user and a content item and includes a score that indicates how likely the user is to view the content item. Based on the content item recommendations, and constraints of one or more caches, the content items are selected for storage in one or more caches. The constraints may include users that are associated with each cache, the geographical location of each cache, the size of each cache, and/or costs associated with each cache such as bandwidth costs. The content items stored in a cache are recommended to users associated with the cache.

Publication date: 19-07-2012

Method and system for cache endurance management

Number: US20120185638A1
Assignee: Sandisk IL Ltd

A system and method for cache endurance management is disclosed. The method may include the steps of querying a storage device with a host to acquire information relevant to a predicted remaining lifetime of the storage device, determining a download policy modification for the host in view of the predicted remaining lifetime of the storage device and updating the download policy database of a download manager in accordance with the determined download policy modification.

Publication date: 22-11-2012

Optimized flash based cache memory

Number: US20120297113A1
Assignee: International Business Machines Corp

Embodiments of the invention relate to throttling accesses to a flash memory device. The flash memory device is part of a storage system that includes the flash memory device and a second memory device. The throttling is performed by logic that is external to the flash memory device and includes calculating a throttling factor responsive to an estimated remaining lifespan of the flash memory device. It is determined whether the throttling factor exceeds a threshold. Data is written to the flash memory device in response to determining that the throttling factor does not exceed the threshold. Data is written to the second memory device in response to determining that the throttling factor exceeds the threshold.
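The routing decision can be sketched as below. The patent does not give the throttling-factor formula, so the ratio of target lifespan to estimated remaining lifespan used here is purely an assumed stand-in, as are the function and parameter names.

```python
def route_write(estimated_remaining_days, target_remaining_days,
                threshold=1.0):
    """Sketch: derive a throttling factor from how far the flash device's
    estimated remaining lifespan has fallen below a target (assumed
    formula), then write to flash while the factor stays at or below the
    threshold and to the second memory device once it exceeds it."""
    factor = target_remaining_days / max(estimated_remaining_days, 1)
    device = "flash" if factor <= threshold else "second_device"
    return device, factor
```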

Publication date: 06-12-2012

Cache locking control

Number: US20120311380A1
Author: William C. Moyer
Assignee: FREESCALE SEMICONDUCTOR INC

Each cache line of a cache has a lockout state that indicates whether an error has been detected for data accessed at the cache line, and also has a data validity state, which indicates whether the data stored at the cache line is representative of the current value of data stored at a corresponding memory location. The lockout state of a cache line is indicated by a set of one or more lockout bits associate with the cache line. In response to a cache invalidation event, the state of the lockout indicators for each cache line can be maintained so that locked out cache lines remain in the locked out state even after a cache invalidation. This allows memory error management software executing at the data processing device to robustly manage the state of the lockout indicators.
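The key point of the abstract, that invalidation clears validity but preserves lockout state, can be modeled directly. A toy Python sketch with assumed names:

```python
class CacheLine:
    def __init__(self):
        self.valid = False       # data validity state
        self.locked_out = False  # an error was detected at this line

class LockoutCache:
    """Sketch: a cache invalidation event clears every line's validity
    but deliberately does not touch the lockout indicators, so lines
    disabled after an error stay disabled until error-management
    software explicitly clears them."""
    def __init__(self, nlines):
        self.lines = [CacheLine() for _ in range(nlines)]

    def record_error(self, i):
        self.lines[i].locked_out = True
        self.lines[i].valid = False

    def invalidate_all(self):
        for line in self.lines:
            line.valid = False   # lockout state is NOT modified here

    def can_allocate(self, i):
        return not self.lines[i].locked_out
```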

Publication date: 17-01-2013

Systems and methods for memory region descriptor attribute override

Number: US20130019081A1
Author: William C. Moyer
Assignee: Individual

A memory protection unit (MPU) is configured to store a plurality of region descriptor entries, each region descriptor entry defining an address region of a memory, an attribute corresponding to the region, and an attribute override control corresponding to the attribute. A memory access request to a memory address is received and determined to be within a first address region defined by a first region descriptor entry and within a second address region defined by a second region descriptor entry. When the attribute override control of the first region descriptor entry indicates that override is to be performed, the value of the attribute of the first region descriptor entry is applied for the memory access. When the attribute override control of the second region descriptor entry indicates that override is to be performed, the value of the attribute of the second region descriptor entry is applied for the memory access.

Publication date: 21-02-2013

Processor with memory delayed bit line precharging

Number: US20130044555A1
Assignee: MARVELL WORLD TRADE LTD

A processor includes an array of memory cells, a control module, a precharge circuit, and an amplifier module. The control module generates a clock signal at a first rate, reduces the first rate to a second rate for a predetermined period, and adjusts the second rate back to the first rate at an end of the predetermined period. The precharge circuit: based on the first rate, precharges first bit lines connected to memory cells in a first row of the array of memory cells; based on the second rate, refrains from precharging the first bit lines; and precharges the first bit lines subsequent to the end of the predetermined period. The amplifier module: based on the first rate, accesses first instructions stored in the first row; and based on the second rate, accesses second instructions stored in the first row or a second row of the array.

Publication date: 02-05-2013

Dynamically adjusted threshold for population of secondary cache

Number: US20130111133A1
Assignee: International Business Machines Corp

The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
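The adjustment loop above maps almost directly to code. A minimal Python sketch; the initial threshold, step size of 1, and names are illustrative assumptions.

```python
class HeatThresholdAdmitter:
    """Sketch of the admission policy described above: candidates whose
    heat metric is below the current threshold are rejected. The
    threshold drifts down when recently inserted data draws more hits
    than recently evicted data (admission is paying off), and drifts up
    in the opposite case."""
    def __init__(self, threshold=4):
        self.threshold = threshold

    def adjust(self, inserted_hits, evicted_hits):
        if inserted_hits > evicted_hits:
            self.threshold = max(0, self.threshold - 1)  # admit more
        elif inserted_hits < evicted_hits:
            self.threshold += 1                          # admit less

    def admit(self, heat):
        return heat >= self.threshold
```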

Publication date: 30-05-2013

Systems, methods, and devices for running multiple cache processes in parallel

Number: US20130138865A1
Assignee: SEAGATE TECHNOLOGY LLC

Certain embodiments of the present disclosure relate to systems, methods, and devices for increasing data access speeds. In certain embodiments, a method includes running multiple cache retrieval processes in parallel, in response to a read command. In certain embodiments, a method includes initiating a first cache retrieval process and a second cache retrieval process to run in parallel, in response to a single read command.

Publication date: 18-07-2013

Systems and methods for cache profiling

Number: US20130185475A1
Assignee: Fusion IO LLC

A cache module leverages a logical address space and storage metadata of a storage module (e.g., virtual storage module) to cache data of a backing store. The cache module maintains access metadata to track access characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not currently in the cache. The access metadata may be separate from the storage metadata maintained by the storage module. The cache module may calculate a performance metric of the cache based on profiling metadata, which may include portions of the access metadata. The cache module may determine predictive performance metrics of different cache configurations. An optimal cache configuration may be identified based on the predictive performance metrics.

Publication date: 12-09-2013

Enhancing data retrieval performance in deduplication systems

Number: US20130238571A1
Assignee: International Business Machines Corp

Various embodiments for processing data in a data deduplication system are provided. In one embodiment, a method for processing such data is disclosed. For data segments previously deduplicated by the data deduplication system, a supplemental hot-read link is established for those of the data segments determined to be read on at least one of a frequent and recently used basis. Other system and computer program product embodiments are disclosed and provide related advantages.

Publication date: 19-09-2013

Methods And Apparatuses For Efficient Load Processing Using Buffers

Number: US20130246712A1
Assignee:

Various embodiments of the invention concern methods and apparatuses for power- and time-efficient load handling. A compiler may identify producer loads, consumer reuse loads, consumer forwarded loads, and producer/consumer hybrid loads. Based on this identification, performance of the load may be efficiently directed to a load value buffer, store buffer, data cache, or elsewhere. Consequently, accesses to cache are reduced through direct loading from load value buffers and store buffers, thereby efficiently processing the loads.

1. An apparatus comprising: a processor, including first and second memories and a cache, to: (i) determine whether a new load operation instruction is one of a producer (P) load, consumer forwarded (F) load, and a consumer reuse (C) load; (ii) direct the new load to the cache and store a value related to the new load in the first memory when the new load is determined to be a P load; (iii) direct the new load to the first memory and bypass the cache when the new load is determined to be a C load; and (iv) direct the new load to the second memory and bypass the cache when the new load is determined to be an F load.
2. The apparatus of claim 1, wherein the processor is to: determine the new load is a hybrid (Q) load; and initially direct the new load to the first memory, bypassing the cache, and subsequently direct data corresponding to the new load to the cache when the new load is not satisfied by the first memory.
3. The apparatus of claim 1, wherein the processor is to determine whether the new load is a C load based on an identifier corresponding to the new load, the identifier having been previously determined based on determining another load was a P load.
4. The apparatus of claim 1, wherein the processor is to determine whether the new load is a C load based on an identifier corresponding to the new load, the identifier having been previously determined in response to emulations of the first memory, which includes a load value buffer.
5. ...

Publication date: 19-09-2013

Conditional write processing for a cache structure of a coupling facility

Number: US20130246713A1
Assignee: International Business Machines Corp

A method for managing a cache structure of a coupling facility includes receiving a conditional write command from a computing system and determining whether data associated with the conditional write command is part of a working set of data of the cache structure. If the data associated with the conditional write command is part of the working set of data of the cache structure the conditional write command is processed as an unconditional write command. If the data associated with the conditional write command is not part of the working set of data of the cache structure a conditional write failure notification is transmitted to the computing system.
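That conditional-write rule is simple enough to state as code. A Python sketch with an assumed dict-based working set; the return values are illustrative, not the coupling facility's actual response codes.

```python
class CouplingFacilityCache:
    """Sketch: a conditional write succeeds, and is then processed like an
    ordinary unconditional write, only when its data is already part of
    the cache structure's working set; otherwise a failure notification
    is returned instead of growing the working set."""
    def __init__(self):
        self.working_set = {}

    def unconditional_write(self, key, value):
        self.working_set[key] = value

    def conditional_write(self, key, value):
        if key in self.working_set:
            self.unconditional_write(key, value)  # promote to unconditional
            return "ok"
        return "conditional-write-failure"        # notify the computing system
```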

Publication date: 03-10-2013

Apparatus and Method for Fast Cache Shutdown

Number: US20130262780A1
Assignee: Advanced Micro Devices Inc

An apparatus and method to enable a fast cache shutdown is disclosed. In one embodiment, a cache subsystem includes a cache memory and a cache controller coupled to the cache memory. The cache controller is configured to, upon restoring power to the cache subsystem, inhibit writing of modified data exclusively into the cache memory.

Publication date: 10-10-2013

Apparatus and method for implementing a multi-level memory hierarchy having different operating modes

Number: US20130268728A1
Assignee: Individual

A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as "far memory." Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as "near memory." In one embodiment, the "near memory" is configured to operate in a plurality of different modes of operation including (but not limited to) a first mode in which the near memory operates as a memory cache for the far memory and a second mode in which the near memory is allocated a first address range of a system address space with the far memory being allocated a second address range of the system address space, wherein the first range and second range represent the entire system address space.

Publication date: 24-10-2013

STORAGE SYSTEM, STORAGE MEDIUM, AND CACHE CONTROL METHOD

Number: US20130282952A1
Author: MIYAMAE Takeshi
Assignee: FUJITSU LIMITED

A storage system includes a storage that stores a file; and a plurality of access control devices that control access to the storage and include a cache memory in which the file is stored in blocks, wherein when receiving an update request of a prescribed block and latest data of the prescribed block is not stored in the cache memory of a first access control device, the first access control device among the plurality of access control devices obtains a version number added to the latest data from a second access control device, in which the latest data is stored, among the plurality of access control devices, and wherein the first access control device stores update data that updates the prescribed block in the cache memory of the first access control device and adds a new version number to the update data based on the version number.

1. A storage system comprising: a storage that stores a file; and a plurality of access control devices that control access to the storage and include a cache memory in which the file to be stored in the storage is stored in blocks, wherein when receiving an update request of a prescribed block and latest data of the prescribed block is not stored in the cache memory of a first access control device, the first access control device among the plurality of access control devices obtains a version number added to the latest data from a second access control device, in which the latest data is stored in the cache memory thereof, among the plurality of access control devices, and wherein the first access control device stores update data that updates the prescribed block in the cache memory of the first access control device and adds a new version number to the update data based on the version number.
2. The storage system according to claim 1, wherein the second access control device reports the version number added to the latest data to the first access control device and permits the first access control device to update the prescribed ...

Publication date: 30-01-2014

Techniques to request stored data from a memory

Number: US20140028693A1
Author: Jianyu Li, Jun Ye, Kebing Wang
Assignee: Jianyu Li, Jun Ye, Kebing Wang

Techniques are described to configure a cache line structure based on attributes of a draw call and access direction of a texture. Attributes of textures (e.g., texture format and filter type), samplers, and shaders used by the draw call can be considered to determine the line size of a cache. Access direction can be considered to reduce the number of lines that are used to store texels required by a sample request.

Publication date: 13-02-2014

System and method of caching information

Number: US20140047191A1
Assignee: Google LLC

A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache.
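The duration comparison described above can be sketched as a small reuse-distance cache. This is a toy Python model under assumed simplifications: time is passed in explicitly, the cache holds one duration per item, and the item with the longest duration is the eviction candidate.

```python
class ReuseDistanceCache:
    """Sketch: an item is cached only on a repeat request, and then only
    if the time since its previous request is shorter than the
    corresponding time recorded for some item already in the cache
    (which is evicted to make room)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.last_seen = {}  # item -> time of its most recent request
        self.cache = {}      # item -> duration between its last two requests

    def request(self, item, now):
        prev = self.last_seen.get(item)
        self.last_seen[item] = now
        if prev is None:
            return False                      # never requested before: skip
        duration = now - prev
        if len(self.cache) < self.capacity:
            self.cache[item] = duration
            return True
        worst = max(self.cache, key=self.cache.get)
        if duration < self.cache[worst]:      # re-requested sooner than worst
            del self.cache[worst]
            self.cache[item] = duration
            return True
        return False
```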

Publication date: 06-03-2014

Systems, methods, and interfaces for adaptive cache persistence

Number: US20140068197A1
Assignee: Fusion IO LLC

A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration.

Publication date: 07-01-2016

Detecting cache conflicts by utilizing logical address comparisons in a transactional memory

Number: US20160004643A1
Assignee: International Business Machines Corp

A processor in a multi-processor configuration is configured to perform dynamic address translation from logical addresses to real addresses and to detect memory conflicts for shared logical memory in transactional memory based on logical (virtual) address comparisons.

Publication date: 04-01-2018

ACCESSING PHYSICAL MEMORY FROM A CPU OR PROCESSING ELEMENT IN A HIGH PERFORMANCE MANNER

Number: US20180004671A1
Assignee:

A method and apparatus are described herein for accessing a physical memory location referenced by a physical address with a processor. The processor fetches/receives instructions with references to virtual memory addresses and/or references to physical addresses. Translation logic translates the virtual memory addresses to physical addresses and provides the physical addresses to a common interface. Physical addressing logic decodes references to physical addresses and provides the physical addresses to a common interface based on a memory type stored by the physical addressing logic.

1. A method comprising: receiving a first instruction with a microprocessor to read a first element from a first virtual memory address in a virtual memory, wherein the first instruction is generated by a first virtual machine; translating the first virtual memory address to a first physical address; fetching the first element from a first location referenced by the first physical address; receiving a second instruction with the microprocessor to store a second element at a second physical address, wherein the second instruction is generated by a second virtual machine; and storing the second element in a second location referenced by the second physical address without disabling paging of the virtual memory.
2. The method of claim 1, wherein the first element is a data operand.
3. The method of claim 1, wherein the second element is the first element.
4. The method of claim 2, further comprising: operating on the first element with the microprocessor to obtain a first result, wherein the second element is based on the first result.
5. The method of claim 2, wherein translating the first virtual memory address to a first physical address is done with a translation look-aside buffer (TLB).
6. The method of claim 5, wherein the first and second locations are in a system memory.
7. The method of claim 1, wherein the first and second virtual machines are the same virtual machine.
8. The ...

Publication date: 07-01-2021

Cache Filtering

Number: US20210004331A1
Assignee:

Techniques are disclosed relating to filtering cache accesses. In some embodiments, a control unit is configured to, in response to a request to process a set of data, determine a size of a portion of the set of data to be handled using a cache. In some embodiments, the control unit is configured to determine filtering parameters indicative of a set of addresses corresponding to the determined size. In some embodiments, the control unit is configured to process one or more access requests for the set of data based on the determined filter parameters, including: using the cache to process one or more access requests having addresses in the set of addresses and bypassing the cache to access a backing memory directly for access requests having addresses that are not in the set of addresses. The disclosed techniques may reduce average memory bandwidth or peak memory bandwidth.

1. An apparatus, comprising: cache circuitry; one or more graphics processors configured to cache graphics surface data in the cache circuitry; and control circuitry configured to: determine a portion of the graphics surface to be handled using the cache circuitry; determine filter parameters corresponding to the portion, wherein the filter parameters are indicative of a set of addresses of data chunks to be processed using the cache circuitry, wherein addresses in the set of addresses are distributed across the graphics surface, based on the filter parameters, according to one or more threshold distances between addresses in the set; and process one or more access requests for the graphics surface based on the determined filter parameters, including to: use the cache circuitry to process one or more access requests having addresses in the set of addresses; and bypass the cache circuitry to access a backing memory directly for access requests having addresses that are not in the set of addresses.
2. The apparatus of claim 1, wherein the graphics surface is a texture.
3. The apparatus of ...

Publication date: 02-01-2020

Virtual Memory Management

Number: US20200004688A1
Assignee: Imagination Technologies Ltd

A method of managing access to a physical memory formed of n memory page frames using a set of virtual address spaces having n virtual address spaces each formed of a plurality p of contiguous memory pages. The method includes receiving a write request to write a block of data to a virtual address within a virtual address space i of the n virtual address spaces, the virtual address defined by the virtual address space i, a memory page j within that virtual address space i and an offset from the start of that memory page j; translating the virtual address to an address of the physical memory using a virtual memory table having n by p entries specifying mappings between memory pages of the virtual address spaces and memory page frames of the physical memory, wherein the physical memory address is defined by: (i) the memory page frame mapped to the memory page j as specified by the virtual memory table, and (ii) the offset of the virtual address; and writing the block of data to the physical memory address.
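The translation step above is a single table lookup plus an offset. A minimal Python sketch, assuming a flattened n-by-p table and a 4 KiB page size (the abstract does not specify the page size):

```python
PAGE_SIZE = 4096  # assumed page/frame size

def translate(virtual_memory_table, space_i, page_j, offset, p):
    """Sketch of the translation described above: the n-by-p virtual
    memory table (flattened here as a list of frame numbers) maps memory
    page j of virtual address space i to a physical page frame; the
    physical address is that frame's base plus the offset carried over
    unchanged from the virtual address."""
    frame = virtual_memory_table[space_i * p + page_j]
    return frame * PAGE_SIZE + offset
```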

Publication date: 13-01-2022

ASYMMETRIC LLR GENERATION USING ASSIST-READ

Number: US20220012124A1
Author: Bhatia Aman, Zhang Fan
Assignee:

A method of operating a storage system is provided. The storage system includes memory cells and a memory controller, wherein each memory cell is an m-bit multi-level cell (MLC), where m is an integer, and the memory cells are arranged in m pages. The method includes determining initial LLR (log likelihood ratio) values for each of the m pages, comparing bit error rates in the m pages, identifying a programmed state in one of the m pages that has a high bit error rate (BER), and selecting an assist-read threshold voltage of the identified page. The method also includes performing an assist-read operation on the identified page using the assist-read threshold voltage, determining revised LLR values for the identified page based on results from the assist-read operation, and performing soft decoding using the revised LLR values for the identified page and the initial LLR values for other pages.

1. A method of operating a storage system, the storage system including memory cells and a memory controller coupled to the memory cells for controlling operations of the memory cells, wherein each memory cell is a 3-bit tri-level cell (TLC), wherein the memory cells are arranged in LSB (least significant bit) pages, CSB (center significant bit) pages, and MSB (most significant bit) pages, wherein each of the memory cells comprises eight programmed voltage (PV) levels (PV0-PV7), wherein PV0 is an erased state, the method comprising: performing a read operation on the memory cells in response to a read command from a host, wherein performing the read operation comprises reading the memory cells using seven read threshold values (Vr1-Vr7) to determine the programmed voltages of the memory cells, including using threshold values Vr1 and Vr5 for MSB, using threshold values Vr2, Vr4, and Vr6 for CSB, and using threshold values Vr3 and Vr7 for LSB; determining initial LLR (log likelihood ratio) values for the memory cells based ...

Publication date: 13-01-2022

METHODS AND SYSTEMS FOR TRANSLATING VIRTUAL ADDRESSES IN A VIRTUAL MEMORY BASED SYSTEM

Number: US20220012183A1
Assignee:

An information handling system and method for translating virtual addresses to real addresses including a processor for processing data; memory devices for storing the data; and a memory controller configured to control accesses to the memory devices, where the processor is configured, in response to a request to translate a first virtual address to a second physical address, to send from the processor to the memory controller a page directory base and a plurality of memory offsets. The memory controller is configured to: read from the memory devices a first level page directory table using the page directory base and a first level memory offset; combine the first level page directory table with a second level memory offset; and read from the memory devices a second level page directory table using the first level page directory table and the second level memory offset.

1. A method of translating in a computing system a virtual address to a second address, comprising: obtaining a page directory base; obtaining a first level memory offset from the virtual address; obtaining a first level page directory using the page directory base and the first level memory offset; obtaining a second level memory offset from the virtual address; and obtaining a second level page directory table using the first level page directory and the second level memory offset.
2. The method according to claim 1, wherein all the obtaining steps are performed by a memory controller not local to a processor.
3. The method according to claim 1, further comprising obtaining a memory line that contains the address of a page table entry (PTE), and extracting, from the memory line containing the address of the page table entry (PTE), the page table entry (PTE), wherein the PTE contains the translation of the virtual address to the second address.
4. The method according to claim 1, further comprising determining that all of a plurality of memory offsets have been used with a ...
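The two-level lookup can be sketched in a few lines of Python. The field widths (10 bits per level) and the dict-based memory model are assumptions for illustration; the patent does not specify them.

```python
def walk_two_levels(memory, page_directory_base, virtual_address,
                    l1_bits=10, l2_bits=10):
    """Sketch of the two-level walk described above: the first-level
    memory offset (extracted from the virtual address) indexes a
    directory at the page directory base, yielding the base of a
    second-level table, whose entry is then selected with the
    second-level offset. `memory` is modeled as a dict keyed by word
    address."""
    l1_off = (virtual_address >> l2_bits) & ((1 << l1_bits) - 1)
    l2_table = memory[page_directory_base + l1_off]   # first-level read
    l2_off = virtual_address & ((1 << l2_bits) - 1)
    return memory[l2_table + l2_off]                  # second-level read
```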

Publication date: 07-01-2016

Independently addressable memory array address spaces

Number: US20160005447A1
Author: Troy A. Manning
Assignee: Micron Technology Inc

Examples of the present disclosure provide devices and methods for accessing a memory array address space. An example memory array comprises a first address space comprising memory cells coupled to a first number of select lines and to a number of sense lines, and a second address space comprising memory cells coupled to a second number of select lines and to the number of sense lines. The first address space is independently addressable relative to the second address space.

Publication date: 03-01-2019

Method and device for cache management

Number: US20190004946A1
Assignee: EMC IP Holding Co LLC

Embodiments of the present disclosure relate to a method and device for cache management. The method includes: receiving an I/O request associated with a processor kernel; in response to first data targeted by the I/O request being missed in a cache, determining whether a first target address of the first data is recorded in one of a plurality of cache history lists; in response to the first target address not being recorded in the plurality of cache history lists, storing, in a first node of a first free cache history list, the first target address and an initial access count of the first target address, the first free cache history list being determined in association with the processor kernel in advance; and adding the first node to a first cache history list, of the plurality of cache history lists, associated with the I/O request.
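The miss path above can be sketched roughly as follows. The per-core free-node pools, their sizes, and the initial access count of 1 are assumptions for illustration; the patent only requires the free list to be pre-associated with the core:

```python
class CacheHistory:
    INITIAL_COUNT = 1  # assumed initial access count for a new node

    def __init__(self, num_cores, nodes_per_core=4):
        # One history list per core, plus a pre-assigned pool of free
        # nodes per core (sizes are illustrative).
        self.history = [{} for _ in range(num_cores)]
        self.free_nodes = [nodes_per_core] * num_cores

    def on_cache_miss(self, core, addr):
        # If the address is already recorded in one of the history
        # lists, just bump its access count.
        for h in self.history:
            if addr in h:
                h[addr] += 1
                return h[addr]
        # Otherwise take a node from this core's pre-associated free
        # list and record the address with its initial access count.
        if self.free_nodes[core] == 0:
            return None  # no free node available (eviction not sketched)
        self.free_nodes[core] -= 1
        self.history[core][addr] = self.INITIAL_COUNT
        return self.INITIAL_COUNT
```

Pre-associating free lists with cores is what lets each core take a node on a miss without cross-core locking.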

Publication date: 07-01-2021

SEAMLESS ONE-WAY ACCESS TO PROTECTED MEMORY USING ACCESSOR KEY IDENTIFIER

Number: US20210006395A1
Assignee: Intel Corporation

An apparatus including a processor comprising at least one core to execute instructions of a plurality of virtual machines and a virtual machine monitor; and a cryptographic engine comprising circuitry to protect data associated with the plurality of virtual machines through use of a plurality of private keys and an accessor key, wherein each of the plurality of private keys are to protect a respective virtual machine and the accessor key is to protect management structures of the plurality of virtual machines; and wherein the processor is to provide, to the virtual machine monitor, direct read access to the management structures of the plurality of virtual machines through the accessor key and indirect write access to the management structures of the plurality of virtual machines through a secure software module. 1. An apparatus comprising: a processor comprising: at least one core to execute instructions of a plurality of virtual machines and a virtual machine monitor; and a cryptographic engine comprising circuitry to protect data associated with the plurality of virtual machines through use of a plurality of private keys and an accessor key, wherein each of the plurality of private keys are to protect a respective virtual machine and the accessor key is to protect management structures of the plurality of virtual machines; wherein the processor is to provide, to the virtual machine monitor, direct read access to the management structures of the plurality of virtual machines through the accessor key and indirect write access to the management structures of the plurality of virtual machines through a secure software module. 2. The apparatus of claim 1, wherein the management structures comprise page tables mapping guest physical addresses to physical addresses of a memory. 3. The apparatus of claim 1, wherein the cryptographic engine is to provide, through the accessor key, integrity protection of the management structures of the plurality ...

Publication date: 03-01-2019

Request management for hierarchical cache

Number: US20190007515A1
Assignee: Amazon Technologies Inc

A computer implemented cache management system and method is provided for use with a service provider configured to communicate with one or more client devices and with a content provider. The system includes a cache hierarchy comprising multiple cache levels that maintain at least some resources for the content provider, and one or more request managers for processing client requests for resources and retrieving the resources from the cache hierarchy. In response to a resource request, the request manager selects a cache level from the cache hierarchy based on a popularity associated with the requested resource, and attempts to retrieve the resource from the selected cache level while bypassing cache level(s) inferior to the selected level.
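The level-selection policy above can be sketched as follows. The popularity thresholds, the three-level hierarchy, and the fill-on-miss behavior are invented for the example; the patent only specifies that the level is chosen from popularity and that levels inferior to it are bypassed:

```python
LEVELS = ["edge", "mid", "root"]  # ordered from client-facing to origin-facing

def select_level(popularity):
    """Pick a starting cache level from a popularity score in [0, 1]."""
    if popularity >= 0.7:
        return 0          # very popular: expect it near the client
    if popularity >= 0.3:
        return 1
    return 2              # unpopular: start close to the origin

def fetch(resource, popularity, caches, origin):
    """Probe from the selected level toward the origin; levels in front
    of the selected one (the 'inferior' levels) are never probed."""
    start = select_level(popularity)
    for level in range(start, len(LEVELS)):
        if resource in caches[level]:
            return caches[level][resource], LEVELS[level]
    value = origin[resource]
    caches[start][resource] = value  # fill the selected level (assumed policy)
    return value, "origin"
```

Skipping the inferior levels avoids wasting probes (and cache fills) on levels where an unpopular resource is unlikely to live.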

Publication date: 27-01-2022

CACHE OPERATIONS IN A HYBRID DUAL IN-LINE MEMORY MODULE

Number: US20220027271A1
Assignee:

A system includes a first memory device of a first memory type, a second memory device of a second memory type, and a third memory device of a third memory type. The system further includes a processing device to retrieve one or more sections of data from the first memory device comprising a first memory type, and retrieve one or more remaining sections of data from the second memory device comprising a second memory type, wherein the one or more remaining sections of data from the second memory device are associated with the one or more sections of data from the first memory device. The processing device is further to combine the one or more sections of data from the first memory device comprising the first memory type with the one or more remaining sections of each of data from the second memory device comprising the second memory type into a contiguous page, and copy the contiguous page to a third memory device comprising a third memory type. 1. A system comprising: a first memory device comprising a first memory type; a second memory device comprising a second memory type coupled to the first memory device; a third memory device comprising a third memory type coupled to the first memory device and the second memory device, wherein the third memory type has a higher access latency than the first memory device, and wherein the second memory device has a higher access latency than the first and third memory devices; and retrieving one or more sections of data from the first memory device comprising the first memory type; retrieving one or more remaining sections of data from the second memory device comprising the second memory type, wherein the one or more remaining sections of data from the second memory device are associated with the one or more sections of data from the first memory device; combining the one or more sections of data from the first memory device comprising the first memory type with the one or more remaining sections of each of data from the ...

Publication date: 27-01-2022

Memory pipeline control in a hierarchical memory system

Number: US20220027275A1
Assignee: Texas Instruments Inc

In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
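The bypass decision above can be sketched like this. The "same address prevents passing" rule and the dict-based transaction format are assumptions for illustration; the patent only says certain in-pipeline transactions prevent passing:

```python
def route_write(txn, pipeline):
    """Return 'bypass' if `txn` may skip the higher controller's memory
    pipeline, else enqueue it in the pipeline and return 'pipeline'."""
    if txn.get("bypass_write"):
        # A transaction already in the memory pipeline prevents passing
        # if it touches the same address (assumed ordering rule).
        blocked = any(p["addr"] == txn["addr"] for p in pipeline)
        if not blocked:
            return "bypass"  # straight to the lower-level controller
    pipeline.append(txn)
    return "pipeline"
```

The check preserves ordering: a bypass write may only overtake the pipeline when nothing in flight could observe or conflict with it.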

Publication date: 27-01-2022

Flash memory controller mechanism capable of generating host-based cache information or flash-memory-based cache information to build and optimize binary tree with fewer nodes when cache stores data from host

Number: US20220027282A1
Author: Kuan-Hui Li
Assignee: Silicon Motion Inc

A flash memory controller includes a processor and a cache. When the processor receives a specific write command and specific data from a host, the processor stores the specific data into a region of the cache and generates host-based cache information or flash-memory-based cache information to build or update/optimize a binary tree with a smaller number of nodes. This improves the searching speed of the binary tree, reduces the computation overhead of the multiple cores in the flash memory controller, and minimizes the number of cache accesses to reduce the total latency. The host-based cache information may indicate a dynamic data length, while the flash-memory-based cache information indicates the data length of one writing unit, such as one page in a flash memory chip.
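A loose sketch of the node-saving idea: one node summarizes a whole cached write (start LBA plus a data length, e.g. one page), so lookups need far fewer nodes than per-sector bookkeeping. The plain unbalanced BST and the field names are assumptions, not the controller's actual structure:

```python
class Node:
    def __init__(self, lba, length, buf_off):
        self.lba, self.length, self.buf_off = lba, length, buf_off
        self.left = self.right = None

def insert(root, lba, length, buf_off):
    """Insert a cache-info node covering `length` sectors starting at `lba`."""
    if root is None:
        return Node(lba, length, buf_off)
    if lba < root.lba:
        root.left = insert(root.left, lba, length, buf_off)
    else:
        root.right = insert(root.right, lba, length, buf_off)
    return root

def lookup(root, lba):
    """Return the cache-buffer offset holding `lba`, or None on a miss."""
    while root:
        if root.lba <= lba < root.lba + root.length:
            return root.buf_off + (lba - root.lba)
        root = root.left if lba < root.lba else root.right
    return None
```

Because a node covers a range, a sequential host write of one flash page costs one insertion and one node, not one per sector.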

Publication date: 27-01-2022

Hybrid Column Store Providing Both Paged and Memory-Resident Configurations

Number: US20220027354A1
Assignee:

Disclosed herein are system, method, and computer-program product embodiments for generating a paged and in-memory representation of a database object. An embodiment operates by maintaining in-memory and paged form primitives unique to the database object or a substructure thereof in a database such that the in-memory and paged form primitives are capable of providing the in-memory and paged representations of the database objects, respectively. Thereafter, a load configuration for the database object is determined. Based on the load configuration, the in-memory and/or paged representations of the database object are generated using the in-memory form primitive or the paged form primitive unique to the database object, respectively. Subsequently, the in-memory and/or paged representations of the database object are stored in the database. 1. A database system, comprising: a hybrid column store, comprising: an in-memory store configured to store a representation of a database object; an on-disk store comprising an in-memory primitive store and a paged primitive store, and configured to store a primitive for the representation of the database object, wherein the primitive is saved as an in-memory form primitive in the in-memory primitive store or a paged form primitive in the paged primitive store, wherein the in-memory form primitive and the paged form primitive are a byte-compatible representation of the database object to provide a unified persistence format for the database object. 2. The database system of claim 1, wherein the representation of the database object includes the database object or a substructure of the database object. 3. The database system of claim 2, wherein the substructure of the database object includes a dictionary, a data vector, an index, one or more values, or one or more value identifiers corresponding to the one or more values. 4. The database system of claim 1, wherein the database object ...

Publication date: 14-01-2016

Storage device and control method thereof

Number: US20160011983A1
Author: Hiroaki Inoue
Assignee: Toshiba Corp

A storage device includes a magnetic storage unit storing data, a semiconductor storage unit, and a controller configured to determine whether or not to control the semiconductor storage unit to store a portion of the data, based on history of access to the data, and control the semiconductor storage unit to store the portion of the data according to the determination.

Publication date: 11-01-2018

SUPPORTING FAULT INFORMATION DELIVERY

Number: US20180011793A1
Assignee:

A processor implementing techniques to supporting fault information delivery is disclosed. In one embodiment, the processor includes a memory controller unit to access an enclave page cache (EPC) and a processor core coupled to the memory controller unit. The processor core to detect a fault associated with accessing the EPC and generate an error code associated with the fault. The error code reflects an EPC-related fault cause. The processor core is further to encode the error code into a data structure associated with the processor core. The data structure is for monitoring a hardware state related to the processor core. 1. A processor comprising: a memory handler circuit to: identify an error code associated with an access of an enclave page cache (EPC); determine a source of a fault related to the access based on the error code; encode the error code into a data structure for monitoring a hardware state related to the processor; and provide information from the data structure indicating the source of the fault. 2. The processor of claim 1, wherein the error code reflects an EPC-related fault cause. 3. The processor of claim 3, wherein the information comprises a resolution of the EPC-related fault cause. 4. The processor of claim 1, wherein the information comprises several bit indicators indicating at least one of a memory page of the EPC that is accessed at an incorrect type, that the memory page access violated the EPC access permissions, and that the memory page of the EPC is write protected. 5. The processor of claim 1, wherein the memory handler circuit is further to: check a data source associated with a kernel executed by the processor; and determine the source of the fault based on at least the data source associated with the kernel and the information of the data structure. 6. The processor of claim 1, wherein the memory handler circuit is further to transmit an alert comprising the information of the data structure to an ...

Publication date: 11-01-2018

METHOD AND SYSTEM FOR EFFICIENT COMMUNICATION AND COMMAND SYSTEM FOR DEFERRED OPERATION

Number: US20180011794A1
Author: Kipp Timothy James
Assignee:

A method and system for efficiently executing a delegate of a program by a processor coupled to an external memory. A payload including state data or command data is bound with a program delegate. The payload is mapped with the delegate via the payload identifier. The payload is pushed to a repository buffer in the external memory. The payload is flushed by reading the payload identifier and loading the payload from the repository buffer. The delegate is executed using the loaded payload. 1. A processing system for efficient execution of program functions, the system comprising: a processor unit including a processor and cache memory; an external memory coupled to the processing unit, the external memory including a payload repository including one repository buffer; a direct command module to load a payload in the payload repository and bind the payload with a program delegate to flush the payload from the cache memory when the associated program delegate is to be executed by the processing unit, bypassing accesses to the cache memory. 2-18. (canceled) A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. The present invention relates generally to an efficient processor core operation, and more particularly, to a command system to efficiently use memory cache of a processor unit. Current processing systems have multiple processing cores to provide parallel processing of computational tasks, which increases the speed of completing such tasks. In multi-core systems, it is desirable to perform multi-threading in order to accomplish parallel processing of programs. Multi-threading is a widespread programming and execution model that allows multiple software threads to exist within the ...

Publication date: 10-01-2019

Buffer Management in a Data Storage Device

Number: US20190012114A1
Assignee: SEAGATE TECHNOLOGY LLC

Method and apparatus for managing data buffers in a data storage device. In some embodiments, a write manager circuit stores user data blocks in a write cache pending transfer to a non-volatile memory (NVM). The write manager circuit sets a write cache bit value in a forward map describing the NVM to a first value upon storage of the user data blocks in the write cache, and subsequently sets the write cache bit value to a second value upon transfer of the user data blocks to the NVM. A read manager circuit accesses the write cache bit value in response to a read command for the user data blocks. The read manager circuit searches the write cache for the user data blocks responsive to the first value, and retrieves the requested user data blocks from the NVM without searching the write cache responsive to the second value.
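The read path gated by the forward-map bit can be sketched as follows; the plain dicts stand in for the device's real forward map, write cache, and NVM, and the method names are assumptions:

```python
class HybridStore:
    def __init__(self):
        self.forward_map = {}   # lba -> {"wc_bit": bool}
        self.write_cache = {}   # lba -> data, pending transfer to NVM
        self.nvm = {}           # lba -> data

    def write(self, lba, data):
        # Stage the block in the write cache; set the write-cache bit
        # to the first value (here: True).
        self.write_cache[lba] = data
        self.forward_map[lba] = {"wc_bit": True}

    def flush(self, lba):
        # Transfer to NVM and set the bit to the second value so later
        # reads skip the write-cache search entirely.
        self.nvm[lba] = self.write_cache.pop(lba)
        self.forward_map[lba]["wc_bit"] = False

    def read(self, lba):
        # Search the write cache only when the bit says the data may be there.
        if self.forward_map[lba]["wc_bit"]:
            return self.write_cache[lba]
        return self.nvm[lba]
```

The bit turns the common case (data already flushed) into a single map lookup plus an NVM read, with no cache search at all.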

Publication date: 14-01-2021

Cache and memory content management

Number: US20210014324A1
Assignee: Intel Corp

Examples described herein relate to a network interface apparatus that includes an interface; circuitry to determine whether to store content of a received packet into a cache or into a memory, at least during a configuration of the network interface to store content directly into the cache, based at least in part on a fill level of a region of the cache allocated to receive copies of packet content directly from the network interface; and circuitry to store content of the received packet into the cache or the memory based on the determination, wherein the cache is external to the network interface. In some examples, the network interface is to determine to store content of the received packet into the memory based at least in part on a fill level of the region of the cache being identified as full or determine to store content of the received packet into the cache based at least in part on a fill level of the region of the cache being identified as not filled. In some examples, the network interface is to indicate a complexity level of content of the received packet to cause adjustment of a power usage level of a processor that is to process the content of the received packet.
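The placement decision can be sketched like this: while direct-to-cache delivery is configured, the network interface checks the fill level of the cache region reserved for packet content and spills to memory when that region is full. The region capacity and the full-detection rule are assumptions for the example:

```python
def place_packet(payload, cache_region, memory, region_capacity=4):
    """Store `payload` into the allocated cache region unless its fill
    level says it is full; otherwise store it into memory."""
    if len(cache_region) < region_capacity:   # region identified as not filled
        cache_region.append(payload)
        return "cache"
    memory.append(payload)                    # region identified as full
    return "memory"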

Publication date: 03-02-2022

MAINTAINING AND RECOMPUTING REFERENCE COUNTS IN A PERSISTENT MEMORY FILE SYSTEM

Number: US20220035717A1
Assignee:

Techniques are provided for maintaining and recomputing reference counts in a persistent memory file system of a node. Primary reference counts are maintained for pages within persistent memory of the node. In response to receiving a first operation to link a page into a persistent memory file system of the persistent memory, a primary reference count of the page is incremented before linking the page into the persistent memory file system. In response to receiving a second operation to unlink the page from the persistent memory file system, the page is unlinked from the persistent memory file system before the primary reference count is decremented. Upon the node recovering from a crash, the persistent memory file system is traversed in order to update shadow reference counts for the pages with correct reference count values, which are used to overwrite the primary reference counts with the correct reference count values. 1. A method comprising: maintaining primary reference counts for pages within a persistent memory of a node; in response to receiving a first operation to link a page into a persistent memory file system of the persistent memory, incrementing a primary reference count of the page before linking the page into the persistent memory file system; and in response to receiving a second operation to unlink the page from the persistent memory file system, unlinking the page from the persistent memory file system before decrementing the primary reference count. 2. The method of claim 1, comprising: maintaining shadow reference counts for the pages, wherein the primary reference count and a shadow reference count are maintained for the page. 3. The method of claim 1, comprising: in response to the node recovering from a crash, traversing the persistent memory file system to modify shadow reference counts of the pages. 4. The method of claim 3, comprising: in response to encountering the page during the traversal, incrementing shadow reference counts of children ...
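The crash-safe ordering described above can be sketched minimally: the count is raised before the link and dropped only after the unlink, so a crash between the two steps can leave the count too high (fixable by the recomputation pass) but never too low (a dangling reference). The dict-based structures are illustrative, not the file system's real layout:

```python
def link_page(fs_links, refcounts, parent, page):
    refcounts[page] = refcounts.get(page, 0) + 1   # 1) bump the count first
    fs_links.setdefault(parent, []).append(page)   # 2) then link the page

def unlink_page(fs_links, refcounts, parent, page):
    fs_links[parent].remove(page)                  # 1) unlink first
    refcounts[page] -= 1                           # 2) then drop the count
```

This is the same "err on the side of over-counting" discipline the shadow-count traversal then corrects after a crash.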

Publication date: 03-02-2022

TECHNIQUES FOR CROSS-VALIDATING METADATA PAGES

Number: US20220035785A1
Assignee:

A method of validating metadata pages that map to user data in a data storage system is provided. The method includes (a) obtaining first information stored for a first metadata page and second information stored for a second metadata page, the first and second metadata pages having a relationship to each other within a hierarchy of metadata pages for accessing user data; (b) performing a consistency check between the first information and the second information, the consistency check producing a first result in response to the relationship being verified and a second result otherwise; and (c) in response to the consistency check yielding the second result, performing a corrective action to restore consistency between the first and second information. An apparatus, system, and computer program product for performing a similar method are also provided. 1. A method of validating metadata pages that map to user data in a data storage system, the method comprising: obtaining first information stored for a first metadata page and second information stored for a second metadata page, the first and second metadata pages having a relationship to each other within a hierarchy of metadata pages for accessing user data; performing a consistency check between the first information and the second information, the consistency check producing a first result in response to the relationship being verified and a second result otherwise; and in response to the consistency check yielding the second result, performing a corrective action to restore consistency between the first and second information. 2. The method of claim 1, wherein performing the consistency check includes determining whether the first information and the second information both include a same group identifier that identifies the first and second metadata pages as belonging to a same group. 3. The method of wherein the first metadata page is a parent node to the second metadata page within a B-tree. 4. The method of claim 3, ...

Publication date: 19-01-2017

Data access control apparatus

Number: US20170017573A1
Author: Seiji Maeda
Assignee: Toshiba Corp

A data access control apparatus of an embodiment includes an update region management apparatus including an update region management unit configured to record, in response to a writing request for data from an input apparatus, management information of a first address region in which the data is stored, a reading request management unit configured to record a second address specified in a reading request from a storage apparatus and a control unit configured to receive the writing request and the reading request, and control processing of the reading request and updating of the update region management unit and the reading request management unit.

Publication date: 19-01-2017

Apparatus and Method of Performing Agentless Remote IO Caching Analysis, Prediction, Automation, and Recommendation in a Computer Environment

Number: US20170017575A1
Author: Razin Sergey A, To Yokuki
Assignee: SIOS Technology Corporation

A host device includes a controller configured to receive input/output (IO) access information associated with an IO workload, the IO access information identifying at least one of a read action and a write action associated with the IO workload over a period of time. Based upon the received IO access information associated with the storage element, the controller is configured to derive a predicted cache access ratio associated with the IO workload and relating a predicted number of cache accesses associated with the IO workload with at least one of a total number of read actions and a total number of write actions associated with the IO workload. When the predicted cache access ratio reaches a threshold cache access ratio value, the controller is configured to identify the IO workload as an IO workload candidate for caching by the host device. 1. In a host device, a method for identifying input/output (IO) workload patterns within a computer infrastructure, comprising: receiving, by the host device, input/output (IO) access information associated with an IO workload, the IO access information identifying at least one of a read action and a write action associated with the IO workload over a period of time; based upon the received IO access information associated with the IO workload, deriving, by the host device, a predicted cache access ratio associated with the IO workload, the predicted cache access ratio relating a predicted number of cache accesses associated with the IO workload with at least one of a total number of read actions and a total number of write actions associated with the IO workload; when the predicted cache access ratio reaches a threshold cache access ratio value, identifying, by the host device, the IO workload as an IO workload candidate for caching by the host device. 2. The method of claim 1, wherein the predicted number of cache accesses associated with the IO workload relates to a difference between the total number of read actions ...
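The candidate test can be sketched as follows. Claim 2 hints that the predicted cache accesses relate to a difference between reads and writes, so the formula below (reads minus writes, floored at zero, over total actions) and the 0.5 threshold are assumptions, not the patent's exact definition:

```python
def is_caching_candidate(total_reads, total_writes, threshold=0.5):
    """Decide whether an IO workload is a caching candidate from its
    predicted cache access ratio."""
    total = total_reads + total_writes
    if total == 0:
        return False
    # Assumed model: reads not invalidated by writes are cacheable.
    predicted_cache_accesses = max(total_reads - total_writes, 0)
    ratio = predicted_cache_accesses / total
    return ratio >= threshold
```

Intuitively, a read-heavy workload scores high (many reads could be served from cache), while a write-heavy one scores low and is not worth caching.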

Publication date: 15-01-2015

MEMORY DEVICE WITH PAGE EMULATION MODE

Number: US20150019806A1
Author: Alam Syed M., Andre Thomas
Assignee: Everspin Technologies, Inc.

In some examples, a memory device is configured to load multiple pages of an internal page size into a cache in response to receiving an activate command and to write multiple pages of the internal page size into a memory array in response to receiving a precharge command. In some implementations, the memory array is arranged to store multiple pages of the internal page size in a single physical row. 1. A method comprising: receiving, at a memory device, an activate command from an external source, the activate command to cause the memory device to load a page of an external page size into cache bits; performing a first set of activate operations to load a first set of data bits associated with a first page of an internal page size from a memory array into the cache bits, the external page size is greater than the internal page size; and performing a second set of activate operations to load a second set of data bits associated with a second page of the internal page size from the memory array into the cache bits. 2. The method as recited in claim 1, wherein the external page size is twice the internal page size. 3. The method as recited in claim 1, further comprising: receiving, at the memory device, a precharge command from the external source; performing a first set of precharge operations to write the first set of data bits associated with the first page to the memory array; and performing a second set of precharge operations to write the second set of data bits associated with the second page to the memory array. 4. The method as recited in claim 3, wherein performing the second set of activate operations and the second set of precharge operations are in response to the memory device being in a page emulation mode. 5. The method as recited in claim 1, further comprising: receiving, at the memory device, a first write command from the external source, the first write command to edit the cache bits associated with the first set of data bits; performing a first set of ...

Publication date: 18-01-2018

METHOD AND APPARATUS FOR ERASING DATA IN FLASH MEMORY

Number: US20180018126A1
Author: Li Yan
Assignee:

A data erasing method and apparatus applied to a flash memory. The method includes receiving a data erasing instruction, where the data erasing instruction instructs to erase data or at least one data section of data sections corresponding to data; when the data erasing instruction instructs to erase the data, searching for recorded storage addresses of all the data sections corresponding to the data, and erasing all the data sections corresponding to the data according to the storage addresses that are found; and when the data erasing instruction instructs to erase the at least one data section of the data sections corresponding to the data, searching for a recorded storage address of the at least one data section, and erasing the at least one data section according to the storage address that is found. 1. A data erasing method applied to a flash memory, comprising: receiving a data erasing instruction, wherein the data erasing instruction includes a logical block address corresponding to latest version data stored in a latest data section and to historical version data stored in a historical data section, wherein the latest version data is obtained by modifying one of one or more historical version data including the historical data section; searching in a mapping table for a physical storage address of the latest data section based on the logical block address; searching in a trace log for a physical storage address of the historical data section based on the logical block address; and erasing all data in the latest data section identified by the physical storage address of the latest data section and in the historical data section identified by the physical storage address of the historical data section. 2. The method according to claim 1, further comprising: storing the historical version data to the physical storage address of the historical data section; recording the physical storage address of the historical data section in the mapping table; storing the latest ...
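The erase flow in claim 1 can be sketched with plain dicts standing in for the on-flash mapping table, trace log, and storage; the `None`-means-erased model and the function name are assumptions:

```python
def erase_all_versions(lba, mapping_table, trace_log, flash):
    """Erase the latest data section and every historical data section
    recorded for logical block address `lba`."""
    addresses = [mapping_table[lba]]        # latest section, via mapping table
    addresses += trace_log.get(lba, [])     # historical sections, via trace log
    for addr in addresses:
        flash[addr] = None                  # erased (simplified model)
    trace_log.pop(lba, None)
    del mapping_table[lba]
```

Keeping historical sections in a trace log is what makes a full erase possible: without it, only the latest mapping would be found and stale versions of the data would survive on the medium.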

Publication date: 18-01-2018

APPARATUS, SYSTEM, AND METHOD OF BYTE ADDRESSABLE AND BLOCK ADDRESSABLE STORAGE AND RETRIEVAL OF DATA TO AND FROM NON-VOLATILE STORAGE MEMORY

Number: US20180018171A1
Assignee:

A hybrid memory system provides rapid, persistent byte-addressable and block-addressable memory access to a host computer system by providing direct access to a both a volatile byte-addressable memory and a volatile block-addressable memory via the same parallel memory interface. The hybrid memory system also has at least a non-volatile block-addressable memory that allows the system to persist data even through a power-loss state. The hybrid memory system can copy and move data between any of the memories using local memory controllers to free up host system resources for other tasks. 1. A hybrid memory apparatus, comprising: a volatile memory logically divided into a volatile byte-addressable memory and a volatile block-addressable memory; a non-volatile block-addressable memory; a host parallel memory interface that receives commands from a host system bus to exchange data between each of (a) the host system bus and the volatile byte-addressable memory, (b) the host system bus and the volatile block-addressable memory, (c) the volatile byte-addressable memory and the volatile block-addressable memory, and (d) the volatile block-addressable memory and the non-volatile block-addressable memory; and a traffic controller that manages data traffic as a function of a host address received by the host parallel memory interface. 2. The hybrid memory apparatus of claim 1, wherein the host parallel memory interface routes the host address to the traffic controller when the host address refers to a byte-addressable address and routes the host address to an address translation circuit when the host address refers to a block-addressable address. 3. The hybrid memory apparatus of claim 2, wherein the traffic controller routes the host address to the volatile byte-addressable memory as a physical byte-addressable address when the host address refers to a byte-addressable address. 4. The hybrid memory apparatus of claim 2, wherein the address translation circuit routes ...

18-01-2018 publication date

Speculative reads in buffered memory

Number: US20180018267A1
Assignee: Intel Corp

A speculative read request is received from a host device over a buffered memory access link for data associated with a particular address. A read request is sent for the data to a memory device. The data is received from the memory device in response to the read request and the received data is sent to the host device as a response to a demand read request received subsequent to the speculative read request.
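The speculative-read handshake described above (fetch on a hint, respond only to the later demand read) can be sketched as follows; class and method names are hypothetical illustrations, not from the patent:

```python
# Minimal sketch of the speculative-read flow (illustrative only; names are
# hypothetical, not from the patent).

class BufferedMemoryController:
    def __init__(self, memory):
        self.memory = memory                 # dict: address -> data
        self.speculative_buffer = {}         # address -> prefetched data

    def on_speculative_read(self, address):
        # No response is sent yet; the data is fetched and held.
        self.speculative_buffer[address] = self.memory.get(address)

    def on_demand_read(self, address):
        # If a speculative read already fetched this address, answer
        # immediately from the buffer; otherwise read the memory now.
        if address in self.speculative_buffer:
            return self.speculative_buffer.pop(address)
        return self.memory.get(address)

mem = {0x1000: b"hello"}
ctrl = BufferedMemoryController(mem)
ctrl.on_speculative_read(0x1000)     # host hints it will need 0x1000
print(ctrl.on_demand_read(0x1000))   # served from the speculative buffer
```

The buffer entry is consumed by the demand read, so a repeated demand read for the same address falls through to the memory device.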

18-01-2018 publication date

LIMITING ACCESS OPERATIONS IN A DATA STORAGE DEVICE

Number: US20180018269A1
Assignee:

A hybrid data storage device disclosed herein includes a main data store, one or more data storage caches, and a data storage cache management sub-system. The hybrid data storage device is configured to limit write operations on the one or more data storage caches to less than an endurance value for the data storage cache. In one implementation, the data storage cache management sub-system limits or denies requests for promotion of data from the main data store to the one or more data storage caches. In another implementation, the data storage cache management sub-system limits garbage collection operations on the data storage cache.

1. A method comprising: dividing a remaining lifetime of a data storage cache into a series of time periods; receiving a write request from a storage controller to write one or more clusters to the data storage cache during a first time period in the series of time periods; determining a maximum number of write operations allowable on the data storage cache during the first time period; and declining the write request if allowing the write request would exceed the maximum number of write operations allowable on the data storage cache during the first time period.
2. The method of claim 1, further comprising allowing the write request if allowing the write request would not exceed the maximum number of write operations permitted on the storage cache during the first time period.
3. The method of claim 1, wherein the determining operation comprises dividing a number of remaining allowable write operations over the remaining lifetime of the storage cache by a number of periods remaining in the series of time periods.
4.
The method of claim 3 , wherein the number of remaining allowable write operations over the remaining lifetime of the storage cache is determined by the difference between an expected lifetime write operations endurance of the storage device minus a number of write operations already performed on the storage device over the life ...
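The per-period budgeting in claims 1 and 3 can be sketched like this (a minimal illustration with hypothetical names; the period's allowance is the remaining endurance writes divided by the remaining periods):

```python
# Sketch of the per-period write budgeting of claims 1 and 3 (illustrative;
# all names are hypothetical).

class CacheWriteLimiter:
    def __init__(self, endurance_writes, writes_done, periods_remaining):
        # Remaining allowable writes = endurance minus writes already performed.
        self.remaining_writes = endurance_writes - writes_done
        self.periods_remaining = periods_remaining
        self.written_this_period = 0

    def max_writes_this_period(self):
        return self.remaining_writes // self.periods_remaining

    def allow_write(self, clusters):
        # Decline the request if it would exceed this period's budget.
        if self.written_this_period + clusters > self.max_writes_this_period():
            return False
        self.written_this_period += clusters
        return True

limiter = CacheWriteLimiter(endurance_writes=1000, writes_done=400,
                            periods_remaining=10)
limiter.allow_write(50)   # True: 50 <= budget of 60 per period
limiter.allow_write(20)   # False: 70 would exceed the 60-write budget
```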

17-01-2019 publication date

DATA STORAGE DEVICE AND OPERATING METHOD THEREOF

Number: US20190018767A1
Author: Jin Yong, KOO Duck Hoi
Assignee:

A data storage device includes a first nonvolatile memory device including first LSB, CSB and MSB pages; a second nonvolatile memory device including second LSB, CSB and MSB pages; a data cache memory configured to store data write-requested from a host device; and a control unit suitable for configuring the first and second LSB pages as an LSB super page, configuring the first and second CSB pages as a CSB super page, and configuring the first and second MSB pages as an MSB super page. The control unit is configured to one-shot program the data stored in the data cache memory in the first LSB, CSB and MSB pages in a data stability mode, and to one-shot program data stored in the data cache memory in the LSB, CSB and MSB super pages in a performance-improving mode.

1. A data storage device comprising: a first nonvolatile memory device including a first least significant bit (LSB) page, a first central significant bit (CSB) page and a first most significant bit (MSB) page; a second nonvolatile memory device including a second LSB page, a second CSB page and a second MSB page; a data cache memory configured to store data write-requested from a host device; and a control unit suitable for configuring the first LSB page and the second LSB page as an LSB super page, configuring the first CSB page and the second CSB page as a CSB super page, and configuring the first MSB page and the second MSB page as an MSB super page, wherein the control unit is configured to one-shot program the data stored in the data cache memory in the first LSB page, the first CSB page and the first MSB page in a data stability mode, and is configured to one-shot program data stored in the data cache memory in the LSB super page, the CSB super page and the MSB super page in a performance-improving mode.
2. The data storage device according to claim 1, wherein the data stability mode is enabled according to a force unit access (FUA) command or a ...

17-01-2019 publication date

FILTERING OF REDUNDANTLY SCHEDULED WRITE PASSES

Number: US20190018779A1
Assignee:

Improving access to a cache by a processing unit. One or more previous requests to access data from a cache are stored. A current request to access data from the cache is retrieved. A determination is made whether the current request is seeking the same data from the cache as at least one of the one or more previous requests. A further determination is made whether the at least one of the one or more previous requests seeking the same data was successful in arbitrating access to a processing unit when seeking access. A next cache write access is suppressed if the at least one of the previous requests seeking the same data was successful in arbitrating access to the processing unit.

1. A method of improving access to a cache by a processing unit, the method comprising: storing one or more previous requests to access data from a cache; retrieving a current request to access data from the cache; determining whether the current request is seeking the same data from the cache as at least one of the one or more previous requests; determining whether the at least one of the one or more previous requests seeking the same data was successful in arbitrating access to a processing unit when seeking access; and suppressing a next cache write access if the at least one of the one or more previous requests seeking the same data was successful in arbitrating access to the processing unit.
2. The method of claim 1, wherein when determining whether the current request is seeking the same data as at least one of the one or more previous requests, the processing unit determines whether the current request and at least one of the one or more previous requests are each seeking to access a same logical address of a cache directory.
3. The method of claim 1, wherein the cache is selected from a group consisting of a first level cache, a second level cache, a third level cache, and a fourth level cache.
4. The method of claim 1, wherein the next cache write ...
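The filtering method in claim 1 amounts to remembering prior requests and suppressing a write pass when an earlier request for the same logical address already won arbitration; a minimal sketch with hypothetical names:

```python
# Illustrative sketch of the write-pass filter of claim 1 (hypothetical names).

class WritePassFilter:
    def __init__(self):
        self.previous = []   # list of (logical_address, won_arbitration)

    def record(self, logical_address, won_arbitration):
        self.previous.append((logical_address, won_arbitration))

    def should_suppress(self, logical_address):
        # Suppress the next cache write access only if some earlier request
        # for the same logical address succeeded in arbitration.
        return any(addr == logical_address and won
                   for addr, won in self.previous)

f = WritePassFilter()
f.record(0x40, won_arbitration=True)
f.record(0x80, won_arbitration=False)
f.should_suppress(0x40)   # True: the earlier request for 0x40 won arbitration
f.should_suppress(0x80)   # False: that request lost arbitration
```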

16-01-2020 publication date

MEMORY SYSTEM AND OPERATING METHOD THEREOF

Number: US20200019497A1
Author: LEE Jong-Min, NA Hyeong-Ju
Assignee:

A memory system includes: a memory device; a candidate logical block address (LBA) sensor suitable for detecting a start LBA of a sequential workload as a candidate LBA, and, when a ratio of the number of update blocks to a total sum of valid page decrease amounts is less than a first threshold value, caching the candidate LBA in a loop cache; and a garbage collector suitable for performing a garbage collection operation on a victim block, when the number of free blocks in the memory device is less than a second threshold value and greater than or equal to a third threshold value and a start LBA of a subsequent sequential workload is not the same as the cached candidate LBA. 1. A memory system , comprising:a memory device;a candidate logical block address (LBA) sensor suitable for detecting a start LBA of a sequential workload as a candidate LBA, and, when a ratio of the number of update blocks to a total sum of valid page decrease amounts is less than a first threshold value, caching the candidate LBA in a loop cache; anda garbage collector suitable for performing a garbage collection operation on a victim block, when the number of free blocks in the memory device is less than a second threshold value and greater than or equal to a third threshold value and a start LBA of a subsequent sequential workload is not the same as the cached candidate LBA.2. The memory system of claim 1 , further comprising:a valid page counter suitable for counting the number of valid pages of each closed memory block in the memory device before and after a map update operation.3. The memory system of claim 1 , wherein the total sum of the valid page decrease amounts is obtained by summing all valid page decrease amounts calculated for each closed memory block.4. The memory system of claim 3 , wherein each of the valid page decrease amount for each closed memory block represents a difference between the number of valid pages of the corresponding closed memory block counted after a map ...
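The two gating conditions described above, caching the candidate start LBA and deciding when to collect a victim block, can be sketched as pure predicates (illustrative only; names and threshold values are hypothetical):

```python
# Sketch of the candidate-LBA caching and GC gating of claim 1 (illustrative;
# names and thresholds are hypothetical stand-ins for the patent's terms).

def should_cache_candidate(update_blocks, total_valid_page_decrease, t1):
    # Cache the sequential workload's start LBA when relatively few blocks
    # are being updated per unit of valid-page decrease.
    return update_blocks / total_valid_page_decrease < t1

def should_collect(free_blocks, t2, t3, next_start_lba, cached_candidate_lba):
    # Run GC only inside the free-block window [t3, t2) and only when the
    # next sequential workload does not repeat the cached loop.
    return t3 <= free_blocks < t2 and next_start_lba != cached_candidate_lba

should_cache_candidate(update_blocks=2, total_valid_page_decrease=100,
                       t1=0.05)                                   # True
should_collect(free_blocks=8, t2=10, t3=5, next_start_lba=0x2000,
               cached_candidate_lba=0x1000)                       # True
```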

16-01-2020 publication date

MEMORY DEVICE

Number: US20200019508A1
Assignee:

A memory device includes a plurality of bit lines extending in a first direction and arranged in a second direction perpendicular to the first direction; a page buffer circuit including a plurality of page buffers which are electrically coupled to the plurality of bit lines; and a cache circuit including a plurality of caches which are electrically coupled to the plurality of page buffers, wherein the page buffer circuit is divided into a plurality of page buffer regions and is laid out at both sides of the cache circuit in the first direction. 1. A memory device comprising:a plurality of bit lines extending in a first direction and arranged in a second direction perpendicular to the first direction;a page buffer circuit including a plurality of page buffers which are electrically coupled to the plurality of bit lines; anda cache circuit including a plurality of caches which are electrically coupled to the plurality of page buffers,wherein the page buffer circuit is divided into a plurality of page buffer regions and is disposed at both sides of the cache circuit in the first direction.2. The memory device according to claim 1 ,wherein the cache circuit is divided into at least two cache regions, andwherein the page buffer regions are disposed at both sides of each of the cache regions in the first direction.3. The memory device according to claim 2 ,wherein the page buffer circuit is divided into page buffer regions the number of which is two times the number of the cache regions, andwherein a pair of page buffer regions are respectively disposed at both sides of one cache region in the first direction.4. The memory device according to claim 3 , wherein the cache circuit is divided into two cache regions claim 3 , and the page buffer circuit is divided into four page buffer regions.5. 
The memory device according to claim 1 ,wherein a page buffer respectively corresponds to a cache, andwherein each page buffer is coupled with a corresponding cache through a separate ...

16-01-2020 publication date

CLIENT-SIDE CACHING FOR DEDUPLICATION DATA PROTECTION AND STORAGE SYSTEMS

Number: US20200019514A1
Author: Desai Keyur
Assignee:

A system performing client-side caching of data in a deduplication backup system by maintaining an Adaptive Replacement Cache (ARC) to pre-populate cached data and flush incrementals of the cached data in a client coupled to a backup server in the system. The system maintains cache consistency among clients by a time-to-live (TTL) measurement associated with each entry in a respective client cache, and a retry on stale entry mechanism to signal from the server when a cache entry is stale in a client due to change of a corresponding cache entry in another client. The ARC cache keeps track of both frequently used and recently used pages, and a recent eviction history for both the frequently used and recently used pages. 1. A computer-implemented method of performing client-side caching of data in a deduplication backup system , comprising:maintaining an Adaptive Replacement Cache (ARC) to pre-populate cached data and to flush incrementals of the cached data in a client coupled to a backup server in the system;maintaining cache consistency among clients in the system by at least one of a time-to-live (TTL) measurement associated with each entry in a respective client cache, and a retry on stale entry mechanism to signal from the server when a cache entry is stale in a client due to change of a corresponding cache entry in another client.2. The method of wherein the ARC caches most frequently used (MFU) data blocks and most recently used (MRU) data blocks used by the client.3. The method of further comprising:storing fingerprints of the MFU and MRU data blocks in respective ghost lists associated with each of the MFU and MRU data to form GMFU and GMRU data;pre-populating the ARC with the MFU and MRU data and associated GMFU and GMRU data after a flush operation by a first client upon request to use a common server file by a second client; andsending incremental data comprising only changed data as determined by the deduplication backup system from the ARC during a ...
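The TTL side of the cache-consistency scheme can be sketched as follows (a minimal illustration with hypothetical names; a full client would also honor the server's retry-on-stale signal and the ARC ghost lists):

```python
import time

# Minimal sketch of TTL-based client cache consistency (illustrative only;
# names are hypothetical, not from the patent).

class ClientCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}            # key -> (value, inserted_at)

    def put(self, key, value):
        self.entries[key] = (value, time.monotonic())

    def get(self, key):
        if key not in self.entries:
            return None
        value, inserted_at = self.entries[key]
        if time.monotonic() - inserted_at > self.ttl:
            del self.entries[key]    # expired: force a fresh fetch
            return None
        return value

cache = ClientCache(ttl_seconds=30)
cache.put("fp:abc123", b"chunk-data")
cache.get("fp:abc123")   # b"chunk-data" while the entry is still fresh
```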

28-01-2016 publication date

Selective mirroring in caches for logical volumes

Number: US20160026575A1

Methods and structure for selective cache mirroring. One embodiment includes a control unit and a memory. The memory is able to store indexing information for a multi-device cache for a logical volume. The control unit is able to receive an Input/Output (I/O) request from a host directed to a Logical Block Address (LBA) of the logical volume, to consult the indexing information to identify a cache line for storing the I/O request, and to store the I/O request at the cache line on a first device of the cache. The control unit is further able to mirror the I/O request to another device of the cache if the I/O request is a write request, and to complete the I/O request without mirroring the I/O request to another device of the cache if the I/O request is a read request.
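The read/write asymmetry in the abstract, mirror dirty write data but skip mirroring for reads, can be sketched like this (hypothetical names; device selection and cache-line indexing are simplified to dictionaries):

```python
# Illustrative sketch of selective cache mirroring (names are hypothetical).

class MirroredCache:
    def __init__(self):
        self.primary = {}   # cache lines on the first cache device
        self.mirror = {}    # redundant copies on another cache device

    def handle_io(self, lba, data, is_write):
        self.primary[lba] = data           # store the I/O at its cache line
        if is_write:
            # Dirty write data must survive a cache-device failure: mirror it.
            self.mirror[lba] = data
        # Read data can be re-fetched from the logical volume, so the
        # request completes without a mirror copy.

cache = MirroredCache()
cache.handle_io(7, b"W", is_write=True)    # mirrored
cache.handle_io(9, b"R", is_write=False)   # cached but not mirrored
```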

25-01-2018 publication date

IMAGE FORMING APPARATUS

Number: US20180024491A1
Assignee: KONICA MINOLTA, INC.

An image forming device that executes an image forming program for real-time mechanical control and another program, using a single cache memory for the image forming program and said another program. The image forming device includes: a cache lockdown unit that executes a cache lockdown to lock down a storage area of the cache memory that stores at least a portion of the image forming program necessary for image formation processing; and a print unit that executes the image formation processing while the storage area is locked down. 1. An image forming device that executes an image forming program for real-time mechanical control and another program , using a single cache memory for the image forming program and said another program , the image forming device comprising:a cache lockdown unit that executes a cache lockdown to lock down a storage area of the cache memory that stores at least a portion of the image forming program necessary for image formation processing; anda print unit that executes the image formation processing while the storage area is locked down.2. The image forming device of claim 1 , further comprising:a simulated execution unit that, prior to the cache lockdown, executes the image formation processing while suppressing access to and timing control of a target of the mechanical control, causing storing of the portion of the image forming program necessary for image formation processing in the cache memory, whereinthe cache lockdown unit executes the cache lockdown after completion of execution of the image formation processing by the simulated execution unit.3. The image forming device of claim 2 , further comprising:a print preparation unit that, prior to the image formation processing, executes preparation processing required for the image formation processing, whereinthe simulated execution unit executes the image formation processing during the preparation processing.4. The image forming device of claim 3 , further comprising:a fixing ...

10-02-2022 publication date

ASYNCHRONOUS POWER LOSS RECOVERY FOR MEMORY DEVICES

Number: US20220043746A1
Assignee:

An example memory sub-system includes a memory device and a processing device, operatively coupled to the memory device. The processing device is configured to maintain a logical-to-physical (L2P) table, wherein a region of the L2P table is cached in a volatile memory; maintain a write count reflecting a number of bytes written to the memory device; maintain a cache miss count reflecting a number of cache misses with respect to a cache of the L2P table; responsive to determining that a value of a predetermined function of the write count and the cache miss count exceeds a threshold value, copy the region of the L2P table to a non-volatile memory.

1. A system comprising: a memory device; and a processing device, operatively coupled to the memory device, the processing device to: maintain a logical-to-physical (L2P) table, wherein a region of the L2P table is cached in a volatile memory; maintain a write count reflecting a number of bytes written to the memory device; maintain a cache miss count reflecting a number of cache misses with respect to a cache of the region of the L2P table; responsive to determining that the write count exceeds a first threshold value and the cache miss count exceeds a second threshold value, copy the region of the L2P table to a non-volatile memory.
2. The system of claim 1, wherein the processing device is further to: store, in the non-volatile memory, a metadata page associated with the region of the L2P table.
3. The system of claim 1, wherein the write count is a sum of a first number of bytes written by a host to the memory device and a second number of bytes written by a garbage collector (GC) process to the memory device.
4. The system of claim 1, wherein the processing device is further to: reconstruct the L2P table after an asynchronous power loss (APL) event.
5. The system of claim 1, wherein the processing device is further to: maintain an L2P journal comprising a plurality of L2P journal entries, wherein each L2P journal ...
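The dual-threshold flush trigger of claim 1 can be sketched as a small counter object (illustrative; threshold values and names are hypothetical):

```python
# Sketch of the L2P-region flush trigger of claim 1 (illustrative names).

class L2PRegionTracker:
    def __init__(self, write_threshold, miss_threshold):
        self.write_threshold = write_threshold
        self.miss_threshold = miss_threshold
        self.bytes_written = 0        # host writes + garbage-collector writes
        self.cache_misses = 0

    def record_write(self, nbytes):
        self.bytes_written += nbytes

    def record_cache_miss(self):
        self.cache_misses += 1

    def should_flush(self):
        # Persist the cached region once BOTH counters cross their thresholds.
        return (self.bytes_written > self.write_threshold
                and self.cache_misses > self.miss_threshold)

t = L2PRegionTracker(write_threshold=4096, miss_threshold=2)
t.record_write(8192)
t.should_flush()          # False: miss count not yet over its threshold
for _ in range(3):
    t.record_cache_miss()
t.should_flush()          # True: both thresholds exceeded
```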

10-02-2022 publication date

EXECUTABLE MEMORY PAGE VALIDATION SYSTEM AND METHOD

Number: US20220043754A1
Assignee:

An executable memory page validation system for validating one or more executable memory pages on a given endpoint, the executable memory page validation system comprising at least one processing resource configured to: obtain a plurality of vectors, each vector of the vectors being a bitmask indicative of valid hash values calculated for a plurality of executable memory pages available on the endpoint, the valid hash values being calculated using a respective distinct hash function; calculate one or more validation hash values for a given executable memory page to be loaded to a computerized memory of the endpoint for execution thereof, using one or more selected hash functions of the distinct hash functions; and determine that the given executable memory page is invalid, upon one or more of the validation hash values not being indicated as valid in the corresponding one or more vectors. 1. An executable memory page validation system for validating one or more executable memory pages on a given endpoint , the executable memory page validation system comprising at least one processing resource configured to:obtain a plurality of vectors, each vector of the vectors being a bitmask indicative of valid hash values calculated for a plurality of executable memory pages available on the endpoint, the valid hash values being calculated using a respective distinct hash function;calculate one or more validation hash values for a given executable memory page to be loaded to a computerized memory of the endpoint for execution thereof, using one or more selected hash functions of the distinct hash functions; anddetermine that the given executable memory page is invalid, upon one or more of the validation hash values not being indicated as valid in the corresponding one or more vectors.2. The executable memory page validation system of claim 1 , wherein the given executable memory page is to be loaded from a non-volatile memory or from a cache memory.3. (canceled)4. The ...
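The per-hash-function bitmask vectors described above behave like a Bloom filter: a page is accepted only if every hash value is marked valid in its vector. A minimal sketch (vector size, hash construction, and names are hypothetical):

```python
import hashlib

# Illustrative sketch of multi-hash bitmask page validation (vector size and
# hash derivation are arbitrary choices, not from the patent).

VECTOR_BITS = 1 << 16

def page_hash(page, salt):
    # One "distinct hash function" per salt, reduced to a bit position.
    digest = hashlib.sha256(salt + page).digest()
    return int.from_bytes(digest[:4], "big") % VECTOR_BITS

def build_vectors(valid_pages, salts):
    # One bitmask vector per hash function, with a bit set for each
    # valid hash value.
    vectors = [0] * len(salts)
    for i, salt in enumerate(salts):
        for page in valid_pages:
            vectors[i] |= 1 << page_hash(page, salt)
    return vectors

def page_is_valid(page, vectors, salts):
    # Invalid as soon as any validation hash value is not marked in
    # the corresponding vector.
    return all(vectors[i] >> page_hash(page, salt) & 1
               for i, salt in enumerate(salts))

salts = [b"h0", b"h1", b"h2"]
vectors = build_vectors([b"known-good page"], salts)
page_is_valid(b"known-good page", vectors, salts)   # True
page_is_valid(b"tampered page", vectors, salts)     # False (with high probability)
```

As with a Bloom filter, a failed check proves the page is invalid, while a passing check is probabilistic; more hash functions shrink the false-accept rate.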

24-01-2019 publication date

Data writing method, memory control circuit unit and memory storage device

Number: US20190026227A1
Author: Chih-Kang Yeh
Assignee: Phison Electronics Corp

A data writing method, a memory control circuit unit, and a memory storage device are provided. The method includes: transmitting a first data request command to a host system to obtain a plurality of data, wherein the plurality of data are arranged in a sequence order in the host system; obtaining first data among the plurality of data from the host system according to the first data request command and obtaining second data among the plurality of data from the host system after obtaining the first data; writing the first data to a corresponding physical page on a first word line among a plurality of word lines; and writing the second data to another corresponding physical page on a second word line among the plurality of word lines, wherein the first word line belongs to a first memory sub-module in a plurality of memory sub-modules, the second word line belongs to a second memory sub-module in the plurality of memory sub-modules, and the first data and the second data are discontinuously arranged in the sequence order.

24-01-2019 publication date

PRIVATE CACHING FOR THREAD LOCAL STORAGE DATA ACCESS

Number: US20190026228A1
Author: Jiang Xiaowei
Assignee:

A multi-core CPU includes a Last-Level Cache (LLC) interconnected with a plurality of cores. The LLC may include a shared portion and a private portion. The shared portion may be shared by the plurality of cores. The private portion may be connected to a first core of the plurality of cores and may be exclusively assigned to the first core. The first core may be configured to initiate a data access request to access data stored in the LLC and to determine whether the data access request is a Thread Local Storage (TLS) type of access request. The first core may route the data access request to the private portion based on the determination that the data access request is the TLS type of access request and route the data access request to the shared portion based on the determination that the data access request is not the TLS type of access request.

1. A central processing unit (CPU), comprising: a plurality of cores; and a Last-Level Cache (LLC) interconnected with the plurality of cores, the LLC including a shared portion and a private portion, wherein the shared portion is shared by the plurality of cores, and the private portion is connected to a first core of the plurality of cores and is exclusively assigned to the first core.
2. The CPU of claim 1, wherein the first core is configured to: initiate a data access request to access data stored in the LLC; determine whether the data access request is a Thread Local Storage (TLS) type of access request based on an annotation associated with the data access request; route the data access request to the private portion based on the determination that the data access request is the TLS type of access request; and route the data access request to the shared portion based on the determination that the data access request is not the TLS type of access request.
3. The CPU of claim 2, wherein the annotation includes an instruction annotation associated with a load instruction or a store instruction.
4. The CPU of claim 2, wherein the annotation includes a ...
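The annotation-based routing of claim 2 can be sketched as a single decision (illustrative; the request representation and all names are hypothetical):

```python
# Illustrative sketch of TLS-aware LLC routing (hypothetical names).

def route_access(request):
    # request: dict describing a load/store, optionally carrying the
    # annotation attached to the instruction.
    if request.get("annotation") == "TLS":
        return "private_llc_portion"   # thread-local data stays core-private
    return "shared_llc_portion"        # everything else uses the shared LLC

route_access({"addr": 0x100, "annotation": "TLS"})   # 'private_llc_portion'
route_access({"addr": 0x200})                        # 'shared_llc_portion'
```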

23-01-2020 publication date

METHODS AND APPARATUS FOR ACCELERATING VIRTUAL MACHINE MIGRATION

Number: US20200026556A1
Assignee: Intel Corporation

A server having a host processor coupled to a programmable coprocessor is provided. One or more virtual machines may run on the host processor. The coprocessor may be coupled to an auxiliary memory that stores virtual machine (VM) states. During live migration, the coprocessor may determine when to move the VM states from the auxiliary memory to a remote server node. The coprocessor may include a coherent protocol home agent and state tracking circuitry configured to track data modification at a cache line granularity. Whenever a particular cache line has been modified, only the data associated with that cache line will be moved to the remote server without having to copy over the entire page, thereby substantially reducing the amount of data that needs to be transferred during migration events.

1. An integrated circuit, comprising: a memory controller configured to access an external memory storing virtual machine (VM) state information, wherein the VM state information is organized into a plurality of pages each of which includes a plurality of cache lines; a coherency protocol circuit configured to expose the external memory as an operating system (OS) managed system memory to an external host processor coupled to the integrated circuit, to service transactions issued from the external host processor, and to monitor the state of individual cache lines in the plurality of pages at a cache line granularity; and a state tracker circuit configured to analyze the state of individual cache lines and to determine when it is appropriate to migrate individual cache lines to a remote server node to optimize total migration time.
2. The integrated circuit of claim 1, wherein the transactions received at the coherency protocol circuit are issued in accordance with a cache coherency protocol.
3. The integrated circuit of claim 1, further comprising a coherence memory controller coupled between the coherency protocol circuit on the integrated circuit and the external memory ...

23-01-2020 publication date

Method, apparatus and computer program product for managing address in storage system

Number: US20200026658A1
Assignee: EMC IP Holding Co LLC

Techniques manage addresses in a storage system. In such techniques, an address page of an address pointing to target data in the storage system is determined in response to receiving an access request for accessing data in the storage system. A transaction for managing the address page is generated on the basis of the address page, where the transaction at least comprises an indicator of the address page and a state of the transaction. A counter describing how many times the address page is referenced is set. The transaction is executed at a control node of the storage system on the basis of the counter. With such techniques, the access speed for addresses in the storage system can be accelerated, and the overall response speed of the storage system can be increased.
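The reference counter that gates transaction execution at the control node can be sketched as follows (an illustrative interpretation; all names are hypothetical):

```python
# Illustrative sketch of reference-counted address-page transactions
# (hypothetical names; one possible reading of the abstract).

class AddressPageManager:
    def __init__(self):
        self.ref_counts = {}   # address page -> number of active references

    def acquire(self, page):
        self.ref_counts[page] = self.ref_counts.get(page, 0) + 1

    def release(self, page):
        self.ref_counts[page] -= 1

    def can_execute(self, page):
        # The control node executes the transaction for this address page
        # once no other in-flight access still references it.
        return self.ref_counts.get(page, 0) == 0

mgr = AddressPageManager()
mgr.acquire("page-17")
mgr.can_execute("page-17")   # False: one reference outstanding
mgr.release("page-17")
mgr.can_execute("page-17")   # True
```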

28-01-2021 publication date

Memory controller and initialization method for use in data storage device

Number: US20210026718A1
Author: Sheng-Yuan Huang
Assignee: Silicon Motion Inc

A memory controller is provided. The memory controller is coupled to a flash memory that includes a plurality of physical blocks, and each physical block includes a plurality of physical pages, and some of the physical pages are defective physical pages. The memory controller includes a processor that is configured to set a total target initialization time for an initialization process of the flash memory. The processor sequentially selects a current physical block from among all the physical blocks to perform the initialization process, and it performs a read operation of the initialization process on the current physical block using a read-operation threshold. In response to the read operation of the current physical block being completed, the processor dynamically adjusts the read-operation threshold of the read operation of the physical blocks, so that the initialization process is completed within the total target initialization time.
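The time-budgeted initialization described above can be sketched with a simple proportional adjustment of the read-operation threshold after each block (illustrative only; the patent does not specify this particular policy, and all names are hypothetical):

```python
# Sketch of completing initialization within a total target time by
# dynamically lowering the per-block read-operation threshold (illustrative).

def initialize_blocks(num_blocks, total_target_time, time_per_page,
                      initial_threshold, min_threshold=1):
    # The threshold caps how many page reads are spent per block; it is
    # lowered whenever the remaining time budget is running short.
    threshold = initial_threshold
    elapsed = 0.0
    for block in range(num_blocks):
        elapsed += threshold * time_per_page        # cost of this block's reads
        remaining_blocks = num_blocks - block - 1
        if remaining_blocks:
            budget_left = total_target_time - elapsed
            affordable = int(budget_left / (remaining_blocks * time_per_page))
            threshold = max(min_threshold, min(threshold, affordable))
    return elapsed

initialize_blocks(num_blocks=4, total_target_time=10.0,
                  time_per_page=1.0, initial_threshold=4)
```

With these inputs the threshold drops from 4 to 2 after the first block, so the total elapsed time stays within the 10-unit target.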

28-01-2021 publication date

MEMORY SYSTEM, DATA PROCESSING SYSTEM AND OPERATION METHOD OF THE SAME

Number: US20210026733A1
Author: LEE Jong-Min
Assignee:

A memory system includes a memory device including a plurality of memory blocks, each block having a plurality of pages to store data; and a controller suitable for: selecting error-prone pages each having a number of errors, which exceeds a threshold, among the plurality of pages, based on the number of errors of each of the plurality of pages; ranking the error-prone pages based on the numbers of errors therein; and performing a test read operation on the error-prone pages based on the ranking. 1. A memory system , comprising:a memory device including a plurality of memory blocks, each block having a plurality of pages to store data; anda controller suitable for:selecting error-prone pages each having a number of errors, which exceeds a threshold, among the plurality of pages, based on the number of errors of each of the plurality of pages;ranking the error-prone pages based on the numbers of errors therein; andperforming a test read operation on the error-prone pages based on the ranking.2. The memory system of claim 1 , wherein the controller comprises:a read disturbance test component suitable for selecting specific memory blocks among the plurality of memory blocks, and acquiring the number of errors of each of the plurality of pages in each of the specific memory blocks;a buffer memory component suitable for storing the numbers of errors;an error management component suitable for performing the selecting and the ranking; anda test read component suitable for performing the test read operation.3. The memory system of claim 2 , wherein the read disturbance test component repeatedly performs a certain number of read disturbance test operations on the specific memory blocks claim 2 , and acquires the numbers of errors by accumulating the numbers of errors of the specific memory blocks with respect to the respective pages.4. The memory system of claim 3 , wherein the controller terminates an operation of the read disturbance test component when a number of the ...
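The selection and ranking of error-prone pages in claim 1 reduces to filtering by a threshold and sorting by error count; a minimal sketch with hypothetical names:

```python
# Illustrative sketch of error-prone page selection and ranking (claim 1).

def rank_error_prone_pages(error_counts, threshold):
    # error_counts: {page_index: accumulated error count from the
    # read disturbance test}. Keep only pages whose count exceeds
    # the threshold, highest count first; these are test-read first.
    prone = {p: e for p, e in error_counts.items() if e > threshold}
    return sorted(prone, key=lambda p: prone[p], reverse=True)

counts = {0: 3, 1: 12, 2: 7, 3: 1}
rank_error_prone_pages(counts, threshold=4)   # [1, 2]
```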

28-01-2021 publication date

CONTROLLER AND MEMORY SYSTEM INCLUDING THE SAME

Number: US20210026764A1
Assignee:

A controller and a memory system including the same are disclosed. The controller receives a write command for storing write data, which is stored in at least one among a plurality of memory regions included in a host memory, in a nonvolatile memory device, generates a host memory map table by mapping virtual addresses to host memory physical addresses corresponding to the at least one memory region, and transmits the write data stored in the at least one memory region to the nonvolatile memory device by converting the virtual addresses into the host memory physical addresses based on the host memory map table. 1. A memory system comprising:a nonvolatile memory device; anda controller configured to control the nonvolatile memory device,wherein the controller is further configured to:receive a write command for storing write data, currently stored in at least one among a plurality of memory regions in a host memory, in the nonvolatile memory, generate a host memory map table by mapping virtual addresses to host memory physical addresses corresponding to the at least one memory region, andtransmit the write data to the nonvolatile memory device from the host memory based on the host memory map table.2. The memory system of claim 1 , wherein the controller generates the host memory map table by mapping the virtual addresses to the host memory physical addresses corresponding to a plurality of sub memory regions within the at least one memory region claim 1 , each of the plurality of sub memory regions having a set size.3. The memory system of claim 2 , wherein the set size is a data size unit to be processed in the memory system.4. The memory system of claim 2 , wherein the nonvolatile memory device includes:a memory cell array including a plurality of data storage regions; anda page buffer configured to temporarily store the transmitted write data,wherein the set size is a data size to be stored in the page buffer.5. The memory system of claim 2 , wherein the ...

28-01-2021 publication date

Data processing system and operating method thereof

Number: US20210026774A1
Author: Min Soo LIM
Assignee: SK hynix Inc

A data processing system may include a memory apparatus and a controller configured to control the memory apparatus. The memory apparatus includes a plurality of pages and is accessible in units of the pages. The controller may include a mode control component configured to generate an activation mode control signal for setting the memory apparatus in a partial page activation mode based on a type of a processing task requested by a host and address information requested to be accessed, and wherein less than all of a page of the memory apparatus being accessed is activated when the memory apparatus is in the partial page activation mode.
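The mode-control decision in this abstract can be sketched as a toy activation mask: for access patterns that only need part of a page, activate just the sub-page unit holding the requested address. The unit count, task names, and mask encoding are all assumptions for illustration.

```python
PAGE_UNITS = 4   # a page split into independently activatable units (made-up granularity)

def activation_mask(task_type, unit):
    """Toy mode control: for a 'random' (small) access, activate only the
    sub-page unit holding the requested address; otherwise open the full page."""
    if task_type == "random":
        return 1 << unit                 # partial page activation mode
    return (1 << PAGE_UNITS) - 1         # normal mode: whole page activated

assert activation_mask("random", 2) == 0b0100
assert activation_mask("sequential", 0) == 0b1111
assert bin(activation_mask("random", 2)).count("1") == 1  # less than all of the page
```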

28-01-2021 publication date

ENFORCING CODE INTEGRITY USING A TRUSTED COMPUTING BASE

Number: US20210026785A1
Author: AMIT Nadav, Wei Michael
Assignee:

One or more kernel-modifying procedures are stored in a trusted computing base (TCB) when bringing up a guest operating system (OS) on a virtual machine (VM) on a virtualization platform. When the guest OS invokes an OS-level kernel-modifying procedure, a call is made to the hypervisor. If the hypervisor determines the TCB to be valid, the kernel-modifying procedure in the TCB that corresponds to the OS-level kernel-modifying procedure is invoked so that the kernel code can be modified. 1. A method comprising:invoking an operating system procedure that is directed to modifying memory locations in an operating system memory, the memory locations storing kernel code of an operating system (OS);in response to invoking the OS procedure, validating at least a portion of a protected memory that stores program code comprising a kernel-modifying procedure; andin response to a determination that the portion of the protected memory that stores the program code comprising the kernel-modifying procedure is deemed to be valid, causing the kernel-modifying procedure to execute, wherein execution of the kernel-modifying procedure includes modifying the kernel code stored in the memory locations in the OS memory.2. The method of claim 1 , further comprising storing the kernel-modifying procedure in the protected memory during bringing up of the OS.3. The method of claim 1 , wherein the kernel-modifying procedure is one procedure among a plurality of procedures that modify the kernel code.4. The method of claim 1 , wherein an address space of the kernel code is separate from an address space of the protected memory.5. The method of claim 1 , wherein the kernel code and the protected memory are on different computer systems.6. 
The method of claim 1 , wherein write operations on OS memory pages that contain the kernel code are disabled during initialization of the OS claim 1 , the method further comprising:enabling write operations on the OS memory pages prior to causing the kernel- ...
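The validate-before-invoke flow can be modeled in a few lines of Python: kernel-modifying procedures are installed into a protected store at bring-up, and each invocation first checks the stored code against a recorded digest. The class, the hash choice, and the dict-as-kernel model are all illustrative assumptions, not the patent's mechanism.

```python
import hashlib

class TrustedComputingBase:
    """Toy model: a protected store of kernel-modifying procedures,
    validated by hash before each invocation."""
    def __init__(self):
        self._store = {}     # name -> procedure code (bytes)
        self._digests = {}   # name -> expected SHA-256 at install time

    def install(self, name, code):           # done while bringing up the guest OS
        self._store[name] = code
        self._digests[name] = hashlib.sha256(code).hexdigest()

    def invoke(self, name, kernel):
        code = self._store[name]
        if hashlib.sha256(code).hexdigest() != self._digests[name]:
            raise PermissionError("TCB validation failed")
        exec(code, {"kernel": kernel})       # modify kernel state only if valid

tcb = TrustedComputingBase()
kernel = {"syscall_table": [0, 1, 2]}
tcb.install("patch", b"kernel['syscall_table'][0] = 99")
tcb.invoke("patch", kernel)
assert kernel["syscall_table"][0] == 99
```

A procedure tampered with after installation would fail the digest check and never run.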

28-01-2021 publication date

MEMORY DEVICE AND METHOD OF OPERATING THE MEMORY DEVICE

Number: US20210027849A1
Assignee: SK HYNIX INC.

A memory device according to an embodiment includes a memory cell block including a plurality of pages with each page corresponding to a word line of a plurality of word lines, a peripheral circuit configured to perform a program operation on the plurality of pages, and control logic configured to control the peripheral circuit to perform the program operation. The control logic changes and sets a bit line voltage applied to bit lines of the memory cell block during a program verify operation of the program operation according to a program order of each of the plurality of pages. 1. A memory device , comprising:a memory cell block including a plurality of pages, wherein each of the plurality of pages corresponds to a word line of a plurality of word lines;a peripheral circuit configured to perform a program operation on the plurality of pages; andcontrol logic configured to control the peripheral circuit to perform the program operation, the control logic changing and setting a bit line voltage applied to bit lines of the memory cell block during a program verify operation of the program operation according to a program order of each of the plurality of pages.2. The memory device of claim 1 , wherein the control logic controls the peripheral circuit to perform the program operation by sequentially selecting the plurality of pages.3. The memory device of claim 2 , wherein the control logic controls the peripheral circuit to perform the program operation according to a normal program order in which the pages corresponding to the each of the plurality of word lines are sequentially programmed from pages adjacent to a source line.4. The memory device of claim 2 , wherein the control logic controls the peripheral circuit to perform the program operation according to a reverse program order in which the pages corresponding to the each of the plurality of word lines are sequentially programmed from pages adjacent to a bit line.5. 
The memory device of claim 1 , wherein the ...

29-01-2015 publication date

IMPLEMENTING SELECTIVE CACHE INJECTION

Number: US20150032968A1

A method, system and memory controller for implementing memory hierarchy placement decisions in a memory system including direct routing of arriving data into a main memory system and selective injection of the data or computed results into a processor cache in a computer system. A memory controller, or a processing element in a memory system, selectively drives placement of data into other levels of the memory hierarchy. The decision to inject into the hierarchy can be triggered by the arrival of data from an input output (IO) device, from computation, or from a directive of an in-memory processing element. 1. A method for implementing memory hierarchy placement decisions in a memory system in a computer system comprising:routing arriving data directly into a memory system;selectively injecting the data into a processor cache; andusing one of a memory controller and a processing element in the memory system, selectively driving placement of the data into a level of the memory hierarchy.2. The method as recited in wherein selectively injecting the data into the processor cache includes holding the injected data until accessed or deallocated by a processor.3. The method as recited in includes providing a directory bit with the injected data to prevent eviction claim 2 , and clearing the directory bit with the injected data being accessed or deallocated by the processor.4. The method as recited in includes one of triggering a decision for selectively injecting the data or computed results into the processor cache based upon arrival of data from an IO device; triggering a decision for selectively injecting the data or computed results into the processor cache based upon a result of computation using the data; or triggering a decision for selectively injecting the data or computed results into the processor cache based upon a directive of an in-memory processing element.5. 
The method as recited in wherein selectively injecting the data into the processor cache includes ...
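The directory-bit behavior described here (injected data held until accessed or deallocated) can be sketched as a tiny cache model. Class and method names are invented for illustration.

```python
class InjectionCache:
    """Toy cache: lines injected from I/O carry a 'pinned' bit that blocks
    eviction until the processor accesses or deallocates them."""
    def __init__(self):
        self.lines = {}   # addr -> (data, pinned)

    def inject(self, addr, data):
        self.lines[addr] = (data, True)      # directory bit set on injection

    def read(self, addr):
        data, _ = self.lines[addr]
        self.lines[addr] = (data, False)     # bit cleared once the CPU touches it
        return data

    def evictable(self, addr):
        return not self.lines[addr][1]

c = InjectionCache()
c.inject(0x100, b"dma payload")
assert not c.evictable(0x100)   # held in cache until accessed
assert c.read(0x100) == b"dma payload"
assert c.evictable(0x100)       # bit cleared; the line may now be evicted
```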

02-02-2017 publication date

APPARATUS AND METHOD FOR IMPLEMENTING A MULTI-LEVEL MEMORY HIERARCHY HAVING DIFFERENT OPERATING MODES

Number: US20170031821A1
Assignee:

A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.” In one embodiment, the “near memory” is configured to operate in a plurality of different modes of operation including (but not limited to) a first mode in which the near memory operates as a memory cache for the far memory and a second mode in which the near memory is allocated a first address range of a system address space with the far memory being allocated a second address range of the system address space, wherein the first range and second range represent the entire system address space. 1.-34. (canceled) 35.
A multi-level memory system comprising:a processor having a plurality of cores to execute instructions and process data and one or more processor caches to cache instructions and data according to a first cache management policy;a first-level memory having a first set of characteristics associated therewith, the first set of characteristics including a first read access speed and a first write access speed; anda second-level memory having a second set of characteristics associated therewith, the second set of characteristics including second read and write access speeds at least one of which is relatively lower than either the first read access speed or first write access speed, respectively, non-volatility such that the second level memory is to maintain its content when power is removed, random access and memory subsystem addressability such that instructions or data stored therein may be accessed at a granularity equivalent to a memory subsystem of a computer system;a ...
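The two near-memory operating modes can be contrasted with a small routing sketch: in cache mode the far memory owns the whole address space, while in the partitioned ("flat") mode near and far memory each own a range. Sizes and mode names are invented for illustration.

```python
NEAR_SIZE = 4   # hypothetical near-memory size, in pages

def route(mode, page):
    """Decide which tier owns a system-address page in a toy two-level model."""
    if mode == "cache":                # near memory is only a cache for far memory
        return "far", page             # far memory backs the entire address space
    if mode == "flat":                 # near + far together form the address space
        if page < NEAR_SIZE:
            return "near", page
        return "far", page - NEAR_SIZE
    raise ValueError(mode)

assert route("cache", 10) == ("far", 10)
assert route("flat", 2) == ("near", 2)
assert route("flat", 10) == ("far", 6)
```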

02-02-2017 publication date

ADDRESS CACHING IN SWITCHES

Number: US20170031835A1
Author: SEREBRIN Benjamin C.
Assignee:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for storing an address in a memory of a switch. One of the systems includes a switch that receives packets from and delivers packets to devices connected to a bus without any components on the bus between the switch and each of the devices, a memory integrated into the switch to store a mapping of virtual addresses to physical addresses, and a storage medium integrated into the switch storing instructions executable by the switch to cause the switch to perform operations including receiving a response to an address translation request for a device connected to the switch by the bus, the response including a mapping of a virtual address to a physical address, and storing, in the memory, the mapping of the virtual address to the physical address in response to receiving the response. 1. A system comprising: a switch that receives packets from and delivers packets to one or more devices connected to a bus without any components on the bus between the switch and each of the devices; a memory integrated into the switch to store a mapping of virtual addresses to physical addresses; and a non-transitory computer readable storage medium integrated into the switch storing instructions executable by the switch and upon such execution cause the switch to perform operations comprising: receiving, by the switch, a response to an address translation request for a device connected to the switch by the bus, the response including a mapping of a virtual address to a physical address; and storing, in the memory, the mapping of the virtual address to the physical address in response to receiving the response to the address translation request for the device. 2.
The system of claim 1 , comprising:an input/output memory management unit (IOMMU) integrated into the switch and including an IOMMU memory, wherein:the memory comprises the IOMMU memory;receiving, by the switch, the response to the ...
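The switch behavior here (store a virtual-to-physical mapping when a translation response arrives, then answer later lookups locally) can be sketched as a small translation cache. Names are invented for illustration.

```python
class SwitchTranslationCache:
    """Toy model of a switch that caches virtual->physical mappings
    carried in address-translation responses."""
    def __init__(self):
        self.mapping = {}   # (device, vaddr) -> paddr

    def on_translation_response(self, device, vaddr, paddr):
        self.mapping[(device, vaddr)] = paddr     # store on response arrival

    def translate(self, device, vaddr):
        return self.mapping.get((device, vaddr))  # hit avoids another request

sw = SwitchTranslationCache()
assert sw.translate("nic0", 0x1000) is None          # miss: would ask upstream
sw.on_translation_response("nic0", 0x1000, 0x7F000)
assert sw.translate("nic0", 0x1000) == 0x7F000       # later packets hit the cache
```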

04-02-2016 publication date

ACCESS SUPPRESSION IN A MEMORY DEVICE

Number: US20160034403A1
Assignee:

A memory device and a method of operating the memory device are provided. The memory device comprises a plurality of storage units and access control circuitry. The access control is configured to receive an access request and in response to the access request to initiate an access procedure in each of the plurality of storage units. The access control circuitry is configured to receive an access kill signal after the access procedure has been initiated and, in response to the access kill signal, to initiate an access suppression to suppress the access procedure in at least one of the plurality of storage units. Hence, by initiating the access procedures in all storage units in response to the access request, e.g. without waiting for a further indication of a specific storage unit in which to carry out the access procedure, the overall access time for the memory device kept low, but by enabling at least one of the access procedures later to be suppressed in response to the access kill signal dynamic power consumption of the memory device can be reduced. 1. A memory device comprising:a plurality of storage units; andaccess control circuitry configured to receive an access request and in response to the access request to initiate an access procedure in each of the plurality of storage units,wherein the access control circuitry is configured to receive an access kill signal after the access procedure has been initiated,and the access control circuitry is configured, in response to the access kill signal, to initiate an access suppression to suppress the access procedure in at least one of the plurality of storage units.2. 
The memory device as claimed in claim 1 , wherein each of the plurality of storage units comprises wordline circuitry claim 1 , the wordline circuitry configured to activate a selected wordline in response to the access request as part of the access procedure claim 1 , and the memory device further comprises wordline suppression circuitry configured ...
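The latency-versus-power trade this abstract describes can be sketched as: start the access procedure in every storage unit immediately, then let a late kill signal suppress it everywhere except the unit that actually holds the data. Unit names and the function shape are illustrative assumptions.

```python
def access(units, target, kill_after_start=True):
    """Toy model: initiate the access in all units (no wait, low latency);
    a later kill signal suppresses the useless ones, saving dynamic power."""
    started = set(units)                  # access procedure initiated everywhere
    if kill_after_start:
        suppressed = started - {target}   # kill signal stops all but the target
    else:
        suppressed = set()                # no kill: every unit completes
    completed = started - suppressed
    return completed, suppressed

completed, suppressed = access({"u0", "u1", "u2", "u3"}, target="u2")
assert completed == {"u2"}
assert len(suppressed) == 3   # three units' dynamic power saved
```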

17-02-2022 publication date

DISASSOCIATING MEMORY UNITS WITH A HOST SYSTEM

Number: US20220050775A1
Assignee:

A command pertaining to a non-volatile memory device on a memory sub-system is received from a host system. A portion of the non-volatile memory device has an association with the host system. In response to determining that the command is a dissociate instruction to dissociate the portion of the non-volatile memory device on the memory sub-system with the host system, remove the association of the portion of the non-volatile memory device on the memory sub-system with the host system. 1. A method comprising:receiving, from a host system, a command pertaining to a non-volatile memory device on a memory sub-system, wherein a portion of the non-volatile memory device has an association with the host system;determining whether the command is a dissociate instruction to dissociate the portion of the non-volatile memory device on the memory sub-system with the host system; andin response to determining that the command is the dissociate instruction, removing the association of the portion of the non-volatile memory device on the memory sub-system with the host system.2. The method of claim 1 , further comprising:overwriting the portion of the non-volatile memory device with a default set of data.3. The method of claim 1 , further comprising:receiving, from the host system, a range of logical addresses associated with the host system, the range of logical addresses indicating logical addresses that the host system can access.4. The method of claim 3 , wherein determining whether the command is the dissociate instruction comprises:identifying a logical address specified in the command; andin response to determining that the logical address specified in the command is not within the range of logical addresses associated with the host system, determining that the command is the dissociate instruction.5. 
The method of claim 1 , wherein removing the association of the portion of the non-volatile memory device comprises:identifying a payload specified in the command;identifying ...
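Claim 4's detection rule (a command whose logical address falls outside the host's registered range is treated as a dissociate instruction) can be sketched directly. The address range, command shape, and association map are invented for illustration.

```python
HOST_RANGE = range(0x0000, 0x1000)   # logical addresses the host may access (hypothetical)

def classify(command):
    """Toy rule from claim 4: out-of-range logical address -> dissociate."""
    return "dissociate" if command["lba"] not in HOST_RANGE else "normal"

def handle(associations, command):
    if classify(command) == "dissociate":
        associations.pop(command["lba"], None)   # remove the host association
    return associations

assoc = {0x2000: "hostA"}
handle(assoc, {"lba": 0x2000})
assert assoc == {}                      # portion dissociated from the host
assert classify({"lba": 0x0500}) == "normal"
```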

17-02-2022 publication date

METHOD AND SYSTEM FOR LOGICAL TO PHYSICAL (L2P) MAPPING FOR DATA-STORAGE DEVICE COMPRISING NON-VOLATILE MEMORY

Number: US20220050784A1
Assignee:

The present disclosure provides a method of logical to physical mapping for a data-storage device comprising a non-volatile memory device. The method comprises maintaining a first type of information representing at least a part of a logical-to-physical address translation map. Further, the method comprises maintaining a second type of information pertaining to the logical-to-physical translation map as a part of a physical page. Further, the method comprises completing a logical-to-physical mapping based on the first and second type of information to thereby determine a physical location, within one or more of the physical pages, of the data stored in each logical page. 1. A method of logical to physical mapping for a data storage device comprising a non-volatile memory device , said method comprising:defining a first type of information within the non-volatile memory as representing at least partly a logical-to-physical (L2P) address translation map;defining a second type of information pertaining to the L2P translation map as a part of a physical page within the non-volatile memory; andmapping the first and second type of information to draw a logical to physical mapping and thereby determine a physical location, within one or more physical pages, of data stored in each logical page.2. The method as claimed in claim 1 , wherein the second type of information is stored within a spare area of the non-volatile memory to thereby enable an L2P table and the spare area at rendering the complete logical to physical map.3. 
The method as claimed in claim 1 , the method further comprises:writing data in a plurality of logical pages corresponding to a single physical page of the non-volatile memory, each of the plurality of logical pages being associated with a logical page number configured to enable a controller to logically reference data in one corresponding physical page;updating a spare area within the non-volatile memory to indicate logical page number;updating the ...
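The two kinds of information (a coarse L2P table plus per-physical-page records kept in the spare area) can be combined in a small sketch that completes the mapping down to a position within a physical page. The layout and class are illustrative assumptions, not the patent's on-media format.

```python
class L2P:
    """Toy two-part L2P: a coarse table maps each logical page to a physical
    page, and the physical page's spare area records which logical pages it
    holds, completing the logical-to-physical map."""
    def __init__(self):
        self.table = {}   # logical page -> physical page   (first type of information)
        self.spare = {}   # physical page -> [logical pages] (second type, spare area)

    def write(self, logical, physical):
        self.table[logical] = physical
        self.spare.setdefault(physical, []).append(logical)

    def locate(self, logical):
        phys = self.table[logical]
        slot = self.spare[phys].index(logical)   # position inside the physical page
        return phys, slot

m = L2P()
m.write(10, 3)   # two logical pages packed into physical page 3
m.write(11, 3)
assert m.locate(10) == (3, 0)
assert m.locate(11) == (3, 1)
```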

17-02-2022 publication date

SYSTEM PROBE AWARE LAST LEVEL CACHE INSERTION BYPASSING

Number: US20220050785A1
Assignee:

Systems, apparatuses, and methods for employing system probe filter aware last level cache insertion bypassing policies are disclosed. A system includes a plurality of processing nodes, a probe filter, and a shared cache. The probe filter monitors a rate of recall probes that are generated, and if the rate is greater than a first threshold, then the system initiates a cache partitioning and monitoring phase for the shared cache. Accordingly, the cache is partitioned into two portions. If the hit rate of a first portion is greater than a second threshold, then a second portion will have a non-bypass insertion policy since the cache is relatively useful in this scenario. However, if the hit rate of the first portion is less than or equal to the second threshold, then the second portion will have a bypass insertion policy since the cache is less useful in this case. 1. A system comprising: a probe filter; and a cache; wherein the system is configured to: monitor a recall probe rate of the probe filter; responsive to the recall probe rate being greater than a first threshold, partition the cache into two portions; apply a first insertion policy for a first portion of the cache and monitor a hit rate of the first portion; and apply a second insertion policy for a second portion of the cache, wherein the second insertion policy is selected based on a comparison of the hit rate of the first portion to a second threshold. 2. The system as recited in claim 1, wherein the second insertion policy is a bypass policy if the hit rate is less than the second threshold. 3. The system as recited in claim 1, wherein the second insertion policy is a non-bypass policy if the hit rate is greater than or equal to the second threshold. 4. The system as recited in claim 1, wherein the first insertion policy is a non-bypass policy. 5. The system as recited in claim 1, wherein a size of the first portion is less than a size of the second portion. 6.
The ...
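The decision flow in this abstract reduces to two threshold comparisons, which can be sketched directly. The threshold values here are invented; the patent leaves them unspecified.

```python
RECALL_PROBE_T, HIT_T = 0.10, 0.05   # hypothetical first and second thresholds

def choose_policies(recall_probe_rate, first_portion_hit_rate):
    """Select insertion policies for a partitioned shared cache, following
    the decision flow in the abstract."""
    if recall_probe_rate <= RECALL_PROBE_T:
        return None                                # no partitioning/monitoring phase
    first = "non-bypass"                           # sampled portion always inserts
    second = "non-bypass" if first_portion_hit_rate > HIT_T else "bypass"
    return first, second

assert choose_policies(0.05, 0.50) is None                      # low probe rate
assert choose_policies(0.20, 0.08) == ("non-bypass", "non-bypass")  # cache useful
assert choose_policies(0.20, 0.01) == ("non-bypass", "bypass")      # cache not useful
```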

31-01-2019 publication date

CACHE MANAGEMENT SYSTEM AND METHOD

Number: US20190034345A1
Assignee:

A method, computer program product, and computing system for identifying, at the computing device, one or more cache pages in a cache system. One or more cache pages may be refactored into one or more cache units within the one or more cache pages. A plurality of parallel IO requests may be executed on the one or more cache units within the one or more cache pages. 1. A computer-implemented method , executed on a computing device , comprising:identifying, at the computing device, one or more cache pages in a cache system;refactoring the one or more cache pages into one or more cache units within the one or more cache pages; andexecuting a plurality of parallel IO requests on the one or more cache units within the one or more cache pages.2. The computer-implemented method of claim 1 , wherein refactoring the one or more cache pages is based upon claim 1 , at least in part claim 1 , an alignment pattern of the plurality of parallel IO requests and a length of the one or more cache pages.3. The computer implemented method of claim 1 , wherein executing the plurality of parallel IO requests includes:locking at least one cache unit of the one or more cache units within the one or more cache pages during execution of the plurality of parallel IO requests on the at least one cache unit.4. The computer-implemented method of claim 1 , wherein refactoring the one or more cache pages includes:generating one or more bitmaps for the one or cache pages, wherein one or more bits in the one or more bitmaps represent the one or more cache units within the one or more cache pages.5. The computer-implemented method of claim 4 , wherein the one or more bits in the one or more bitmaps indicate which cache units within the one or more cache pages are valid.6. The computer-implemented method of claim 4 , wherein the one or more bits in the one or more bitmaps indicate which cache units within the one or more cache pages are dirty.7. The computer-implemented method of claim 6 , further ...
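The refactoring described here (a cache page split into units tracked by valid/dirty bitmaps, so parallel IOs can touch different units independently) can be sketched as a small class. Unit count and method names are illustrative assumptions.

```python
class CachePage:
    """A cache page refactored into fixed-size cache units, tracked by
    valid/dirty bitmaps so independent units can serve parallel IO."""
    def __init__(self, units=8):
        self.units = units
        self.valid = 0      # bit i set -> unit i holds valid data
        self.dirty = 0      # bit i set -> unit i modified since the last flush

    def write(self, unit, _data):
        self.valid |= 1 << unit
        self.dirty |= 1 << unit

    def flush(self):
        flushed = [i for i in range(self.units) if self.dirty >> i & 1]
        self.dirty = 0
        return flushed

page = CachePage()
page.write(1, b"a")   # two parallel IOs touch different units of one page
page.write(6, b"b")
assert page.flush() == [1, 6]
assert page.valid == (1 << 1) | (1 << 6)
```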

31-01-2019 publication date

System and method for negative feedback cache data flush in primary storage systems

Number: US20190034346A1
Author: Shuo Lv, Wenjun Wang
Assignee: EMC IP Holding Co LLC

A method, computer program product, and computer system for determining, by a computing device, a number of dirty pages capable of being generated per process on a backing device. It may be determined whether the number of dirty pages capable of being generated per process on the backing device exceeds a threshold set point of actual dirty pages currently generated per process on the backing device. A variable amount of time to sleep may be determined. Sleep may be executed for the variable amount of time, wherein generation of additional dirty pages is paused.
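The negative-feedback idea (the further a process overshoots its dirty-page budget, the longer it sleeps before generating more) can be sketched with a simple pacing function. The formula and constants are guesses for illustration; the patent does not specify them.

```python
def throttle_sleep(allowed_dirty_per_task, actual_dirty_per_task, base_sleep=0.1):
    """Negative-feedback pacing: sleep time grows with the dirty-page overshoot,
    pausing generation of additional dirty pages."""
    if actual_dirty_per_task <= allowed_dirty_per_task:
        return 0.0                      # under budget: no pause needed
    overshoot = actual_dirty_per_task / allowed_dirty_per_task
    return base_sleep * overshoot       # variable sleep, proportional to overshoot

assert throttle_sleep(100, 80) == 0.0
assert throttle_sleep(100, 200) == 0.2
assert throttle_sleep(100, 400) > throttle_sleep(100, 200)
```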

30-01-2020 publication date

Cascading pre-filter to improve caching efficiency

Number: US20200034305A1
Assignee: Cisco Technology Inc

Aspects of the subject technology relate to a system configured to receive a request for data associated with a key, identify the key in a first pre-filter in a set of cascading pre-filters for the cache memory, wherein the set of cascading pre-filters is arranged in an order, and store, in response to the key being in the first pre-filter, the key in a second pre-filter in the set of cascading pre-filters, wherein the second pre-filter is next in the order after the first pre-filter.
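The cascading pre-filter admission scheme can be sketched with two ordered stages: a key is promoted from the first pre-filter to the second on a repeat request, and only a key that has passed both is admitted to the cache. Using plain sets and a two-stage cascade is an illustrative simplification.

```python
class CascadingPrefilter:
    """Two cascading pre-filters in front of a cache: a key must be seen once
    per stage, in order, before its value is actually cached."""
    def __init__(self):
        self.stage1, self.stage2, self.cache = set(), set(), {}

    def get(self, key, load):
        if key in self.cache:
            return self.cache[key]
        value = load(key)
        if key in self.stage2:            # survived both filters: admit to cache
            self.cache[key] = value
        elif key in self.stage1:          # promote to the next filter in order
            self.stage2.add(key)
        else:                             # first sighting: enter the first filter
            self.stage1.add(key)
        return value

pf = CascadingPrefilter()
for _ in range(3):
    pf.get("k", lambda k: k.upper())
assert "k" in pf.cache            # cached only on the third request
pf.get("once", str.upper)
assert "once" not in pf.cache     # one-hit keys never pollute the cache
```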

31-01-2019 publication date

DYNAMIC RANDOM ACCESS MEMORY APPLIED TO AN EMBEDDED DISPLAY PORT

Number: US20190035440A1
Assignee:

A dynamic random access memory applied to an embedded display port includes a memory core unit, a peripheral circuit unit, and an input/output unit. The memory core unit is used for operating in a first predetermined voltage. The peripheral circuit unit is electrically connected to the memory core unit for operating in a second predetermined voltage, where the second predetermined voltage is lower than 1.1V. The input/output unit is electrically connected to the memory core unit and the peripheral circuit unit for operating in a third predetermined voltage, where the third predetermined voltage is lower than 1.1V. 1. A dynamic random access memory applied to an interface port , comprising:a memory core cell, wherein the memory core cell is supplied with a first voltage within a first voltage range to make the memory core cell operate at the first voltage, and the memory core cell is a volatile memory cell; anda peripheral circuit electrically connected to the memory core cell, wherein the peripheral circuit is supplied with a second voltage within a second voltage range to make the peripheral circuit operate at the second voltage, wherein the second voltage is lower than 1.1V, andwherein the memory core cell and the peripheral circuit are formed on a single chip, and the peripheral circuit is external to the memory core cell.2. The dynamic random access memory of claim 1 , wherein the first voltage is lower than 1.1V.3. The dynamic random access memory of claim 2 , further comprising:an input/output unit electrically connected to the peripheral circuit and the memory core cell for operating in a third voltage, wherein the third voltage is lower than 1.1V. This is a continuation application of U.S. patent application Ser. No. 13/922,242, filed on Jun. 19, 2013, which claims the benefit of U.S. Provisional Application No. 61/672,287, filed on Jul. 17, 2012 and entitled “Flexible Memory Power Supply Architecture,” and the benefit of U.S. Provisional Application No. 
61/ ...

05-02-2015 publication date

Data Bus Efficiency Via Cache Line Usurpation

Number: US20150039839A1
Assignee:

Embodiments of the current invention permit a user to allocate cache memory to main memory more efficiently. The processor or a user allocates the cache memory and associates the cache memory to the main memory location, but suppresses or bypasses reading the main memory data into the cache memory. Some embodiments of the present invention permit the user to specify how many cache lines are allocated at a given time. Further, embodiments of the present invention may initialize the cache memory to a specified pattern. The cache memory may be zeroed or set to some desired pattern, such as all ones. Alternatively, a user may determine the initialization pattern through the processor. 1.-14. (canceled) 15. A device comprising: a cache memory, the cache memory comprising one or more cache memory locations; a connection to a main memory, the main memory being external to the device and comprising one or more main memory locations; and a control unit operably coupled to the cache memory, wherein the control unit is operable to: determine whether a main memory location is already associated with a cache memory location, access data from the main memory location, determine whether the data from the main memory location is required for a future computation, write the data from the main memory location to the cache memory location, if the data is required for a future computation; and initialize the cache memory location, if the data is not required for a future computation. 16.-24. (canceled) 25. The device of claim 15, wherein the control unit is operable to initialize the cache memory location through hardware. 26. The device of claim 15, wherein the control unit is operable to initialize the cache memory location if the main memory location is already associated with cache memory. 27. The device of claim 15, wherein the control unit is operable to allocate a cache line between the cache memory and the external memory. 28.
The device of claim 15 , wherein the control ...
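The core saving, allocating a cache line for a main-memory address and initializing it to a pattern instead of fetching the stale contents, can be contrasted with a normal fill in a small sketch. The class, line size, and counter are illustrative assumptions.

```python
LINE = 16  # hypothetical cache-line size in bytes

class Cache:
    """Toy model of 'usurping' a cache line: associate it with a main-memory
    address and initialize it to a pattern, skipping the read a normal
    allocation would perform."""
    def __init__(self):
        self.lines = {}
        self.reads_from_memory = 0

    def allocate_normal(self, addr, memory):
        self.reads_from_memory += 1              # classic fill: fetch old contents
        self.lines[addr] = bytearray(memory[addr])

    def usurp(self, addr, pattern=0x00):
        self.lines[addr] = bytearray([pattern]) * LINE   # no memory read at all

mem = {0x40: bytes(range(LINE))}
c = Cache()
c.usurp(0x40, pattern=0xFF)                     # e.g. initialize to all ones
assert c.reads_from_memory == 0                 # bus traffic avoided
assert c.lines[0x40] == bytearray([0xFF] * LINE)
```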

04-02-2021 publication date

SYSTEMS AND METHODS FOR APPLYING CHECKPOINTS ON A SECONDARY COMPUTER IN PARALLEL WITH TRANSMISSION

Number: US20210034465A1
Assignee:

The disclosure relates to a method of checkpointing. The method may include determining, by the primary computer, when to initiate a checkpoint point operation; dividing, at the primary computer, checkpoint data into two or more groups, wherein each group includes one or more pages of memory; transmitting a first group to the secondary computer; upon receiving, by the secondary computer, the first group, correlating memory pages in the first group with pages in memory on the secondary computer; determining, at the secondary computer, which bytes of memory pages of the first group differ from the correlated pages stored in memory in the secondary computer; and applying data from the first group by swapping differences between the memory pages of the first group and the correlated memory pages stored in the secondary computer. Where at least some of these multiple operations are performed in parallel during a subset of the overall checkpoint operation. The simultaneous performance of various memory manage checkpoint operations is advantageous in various fault tolerant systems. The differences may be N-byte differences such as 8-byte differences. 1. 
A method of checkpointing in a system have a primary computer and a secondary computer , wherein each of the primary computer and the secondary computer comprise available memory and reserved memory , the method comprising:determining, by the primary computer, when to initiate a checkpoint point operation;dividing, at the primary computer, a set of checkpoint data into a plurality of subsets, wherein each subset includes one or more memory pages of checkpoint data;transmitting a subset of the plurality of subsets to the secondary computer;upon receiving, by the secondary computer, the subset, storing the subset of the plurality of subsets in reserved memory;correlating, by the secondary computer, the subset with pages in available memory on the secondary computer;determining, by the secondary computer, which bytes of the ...
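The secondary computer's per-page work, finding which N-byte (here 8-byte) words of an arriving checkpoint page differ from the correlated page, then applying only those differences, can be sketched as two small functions. The word size follows the abstract; everything else is illustrative.

```python
WORD = 8  # compare pages in 8-byte units, as in the abstract

def diff_page(new, old):
    """Return (offset, new_bytes) for each 8-byte word that differs."""
    return [(i, new[i:i + WORD]) for i in range(0, len(new), WORD)
            if new[i:i + WORD] != old[i:i + WORD]]

def apply_diff(page, diffs):
    """Apply the word-level differences to the correlated page."""
    buf = bytearray(page)
    for off, word in diffs:
        buf[off:off + WORD] = word
    return bytes(buf)

old = bytes(32)                       # correlated page on the secondary
new = bytearray(old)
new[8:16] = b"CHECKPT!"               # one word changed on the primary
diffs = diff_page(bytes(new), old)
assert diffs == [(8, b"CHECKPT!")]    # only the changed word is applied
assert apply_diff(old, diffs) == bytes(new)
```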

04-02-2021 publication date

Network Interface Device

Number: US20210034526A1
Assignee: Solarflare Communications, Inc.

A network interface device comprises a programmable interface configured to provide a device interface with at least one bus between the network interface device and a host device. The programmable interface is programmable to support a plurality of different types of a device interface. 1. A network interface device comprising:a programmable interface configured to provide a device interface with at least one bus between the network interface device and a host device, the programmable interface being programmable to support a plurality of different types of a device interface.2. The network interface device as claimed in claim 1 , wherein said programmable interface is configured to support at least two instances of device interfaces at the same time.3. The network interface device as claimed in claim 2 , wherein the programmable interface comprises a common descriptor cache claim 2 , said common descriptor cache configured to store respective entries for transactions for the plurality of device interface instances.4. The network interface device as claimed in claim 3 , wherein an entry in said common descriptor cache comprises one or more of:pointer information;adapter instance and/or opaque endpoint index; ormetadata.5. The network interface device as claimed in claim 4 , wherein said metadata comprises one or more of:an indication if said pointer is a pointer, is a pointer to a data location or to a further pointer;a size associated with at least a part of said entry;an indication of an adaptor associated with said entry;an indication of one or more queues; anda location in one or more queues.6. The network interface device as claimed in claim 3 , wherein the common descriptor cache is at least partly partitioned with different partitions being associated with different device interface instances.7. The network interface device as claimed in claim 3 , wherein the common descriptor cache is shared between different device interface instances.8. The network ...

04-02-2021 publication date

SYSTEM AND METHOD FOR DUAL NODE PARALLEL FLUSH

Number: US20210034534A1
Assignee:

A method, computer program product, and computer system for identifying a first node that has written a first page of a plurality of pages to be flushed. A second node that has written a second page of the plurality of pages to be flushed may be identified. It may be determined whether the first page of the plurality of pages is to be flushed by one of the first node and the second node and whether the second page of the plurality of pages is to be flushed by one of the first node and the second node based upon, at least in part, one or more factors. The first node may allocate the first page of the plurality of pages and the second page of the plurality of pages to be flushed in parallel by one of the first node and the second node based upon, at least in part, the one or more factors.

1. A computer-implemented method comprising: identifying a first node that has written a first page of a plurality of pages to be flushed; identifying a second node that has written a second page of the plurality of pages to be flushed; determining whether the first page of the plurality of pages is to be flushed by one of the first node and the second node and whether the second page of the plurality of pages is to be flushed by one of the first node and the second node based upon, at least in part, one or more factors; and allocating, by the first node, the first page of the plurality of pages and the second page of the plurality of pages to be flushed in parallel by one of the first node and the second node based upon, at least in part, the one or more factors.
2. The computer-implemented method of claim 1, further comprising sending, by the first node to the second node, a page ID and a log offset of the plurality of pages to be flushed.
3. The computer-implemented method of claim 1, further comprising receiving, by the first node from the second node, the page ID and the log offset of the plurality of pages that are flushed.
4. The computer-implemented method of ...

04-02-2021 publication date

STORAGE DEVICE, MEMORY SYSTEM COMPRISING THE SAME, AND OPERATING METHOD THEREOF

Number: US20210034536A1
Assignee:

A memory system includes a storage device including a nonvolatile memory device and a storage controller configured to control the nonvolatile memory device, and a host that accesses the storage device. The storage device transfers map data, in which a physical address of the nonvolatile memory device and a logical address provided from the host are mapped, to the host depending on a request of the host. The host stores and manages the transferred map data as map cache data. The map cache data are managed depending on a priority that is determined based on a corresponding area of the nonvolatile memory device. 1. A memory system , comprising:a storage device comprising a nonvolatile memory device and a storage controller configured to control the nonvolatile memory device; anda host configured to access the storage device,wherein the storage device transfers map data, in which a physical address of the nonvolatile memory device and a logical address provided from the host are mapped, to the host depending on a request of the host,wherein the host stores and manages the transferred map data as map cache data, andwherein the map cache data are managed depending on a priority that is determined based on a corresponding area of the nonvolatile memory device.2. The memory system of claim 1 , wherein the nonvolatile memory device comprises a user storage area and a turbo write buffer accessible at a higher speed than the user storage area claim 1 ,wherein the turbo write buffer comprises:a first buffer area in which first stored data are prohibited from moving to the user storage area; anda second buffer area in which second stored data are allowed to move to the user storage area.3. The memory system of claim 2 , wherein the request of the host comprises an access request claim 2 , andwherein the storage device transfers, to the host, first map data corresponding to the first buffer area, second map data corresponding to the second buffer area, and third map data ...

04-02-2021 publication date

VOLATILE READ CACHE IN A CONTENT ADDRESSABLE STORAGE SYSTEM

Number: US20210034538A1
Assignee: EMC IP Holding Company LLC

A distributed storage system comprises a first module and a second module. The first module processes read requests for an address range, to send to the second module. The first module receives an address associated with a read request for a data page stored on the second module. A method searches a table on the first module for a content-based signature of the data page based on the address and provides the data page from a first module read cache if the content-based signature is in the read cache, where content-based signatures in the table are associated with the address range. 1. A method for maintaining read caches in a distributed storage system , the distributed storage system comprising a first module and a second module , the first module processing read requests for an address range , to send to the second module , the method comprising:receiving, at the first module, an address associated with a read request for a data page stored on the second module, wherein the first module is selected based on an address associated with the data page;searching a table on the first module for a content-based signature of the data page based on the address; andproviding the data page from a first module read cache if the content-based signature is in the read cache, wherein content-based signatures in the table are associated with the address range.2. The method of further comprising:maintaining a plurality of read caches for the distributed storage system wherein each of the plurality of read caches maintains cache consistency for a respective address range associated with each of the plurality of read caches.3. The method of further comprising:transmitting the read request to the second module if the content-based signature is not in the read cache.4. 
The method of claim 1, wherein the first module read cache is added to the first module, wherein a plurality of modules comprises the first module, wherein each of the plurality of modules is associated with a ...
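Not part of the patent text — a minimal Python sketch of the read path this abstract describes: the first module maps an address to a content-based signature and serves the page from its own read cache on a hit, otherwise forwarding to the second module. All names (`read_page`, `addr_table`, `second_module`) and the use of SHA-1 as the signature are illustrative assumptions.

```python
import hashlib

# Hypothetical sketch: the first module maps an address to a content-based
# signature and serves the page from its own read cache on a hit; on a miss
# the read request is forwarded to the second module that owns the page.
def read_page(address, addr_table, read_cache, second_module):
    sig = addr_table.get(address)      # content-based signature for address
    if sig is not None and sig in read_cache:
        return read_cache[sig]         # hit: serve from first module's cache
    return second_module[address]      # miss: forward to the second module

page = b"example page"
sig = hashlib.sha1(page).hexdigest()   # content-based signature of the page
print(read_page(0x10, {0x10: sig}, {sig: page}, {}))            # b'example page'
print(read_page(0x20, {0x10: sig}, {sig: page}, {0x20: b"r"}))  # b'r'
```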

04-02-2021 publication date

Memory-aware pre-fetching and cache bypassing systems and methods

Number: US20210034539A1
Author: David Andrew Roberts
Assignee: Micron Technology Inc

Systems, apparatuses, and methods related to memory management are described. For example, these may include a first memory level including memory pages in a memory array, a second memory level including a cache, a pre-fetch buffer, or both, and a memory controller that determines state information associated with a memory page in the memory array targeted by a memory access request. The state information may include a first parameter indicative of a current activation state of the memory page and a second parameter indicative of statistical likelihood (e.g., confidence) that a subsequent memory access request will target the memory page. The memory controller may disable storage of data associated with the memory page in the second memory level when the first parameter associated with the memory page indicates that the memory page is activated and the second parameter associated with the memory page is greater than or equal to a threshold.
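Not from the patent itself — an illustrative sketch of the bypass rule the abstract describes: storage in the second memory level is disabled when the targeted page is already activated and the hit-likelihood parameter meets a threshold. `CONFIDENCE_THRESHOLD`, `PageState`, and `should_cache` are assumed names and values.

```python
from dataclasses import dataclass

# Assumed threshold for "a subsequent request is likely to target this page".
CONFIDENCE_THRESHOLD = 0.75

@dataclass
class PageState:
    activated: bool    # first parameter: current activation state of the page
    confidence: float  # second parameter: likelihood of a subsequent hit

def should_cache(state: PageState) -> bool:
    """Return False (disable storage in the cache/pre-fetch buffer) when the
    already-open page can keep serving requests from the first memory level."""
    return not (state.activated and state.confidence >= CONFIDENCE_THRESHOLD)

print(should_cache(PageState(activated=True, confidence=0.9)))   # False
print(should_cache(PageState(activated=False, confidence=0.9)))  # True
```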

04-02-2021 publication date

AVOID CACHE LOOKUP FOR COLD CACHE

Number: US20210034540A1
Assignee: Intel Corporation

Methods and apparatus relating to techniques for avoiding cache lookup for a cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive, in a read/modify/write (RMW) pipeline, a cache access request from a requestor, wherein the cache request comprises a cache set identifier associated with requested data in the cache set, determine whether the cache set associated with the cache set identifier is in an inaccessible or invalid state, and in response to a determination that the cache set is in an inaccessible state or an invalid state, to terminate the cache access request. Other embodiments are also disclosed and claimed.

1-20. (canceled)
21. An apparatus comprising: a processor to: receive, in a graphics pipeline, a cache access request from a requestor, wherein the cache access request comprises a cache set identifier associated with requested data in a cache set; determine whether the cache set associated with the cache set identifier is in an inaccessible state or an invalid state; and in response to a determination that the cache set is in an inaccessible state or an invalid state, terminate the cache access request.
22. The apparatus of claim 21, the processor to: retrieve a cache state tag associated with the cache set; and determine a value of the cache state tag.
23. The apparatus of claim 22, the processor to determine whether the cache set is in at least one of a cold state or an invalid state.
24. The apparatus of claim 21, the processor to: launch a memory access request for the requested data from a memory module coupled to the cache.
25. The apparatus of claim 24, the processor to: return the requested data to the requestor.
26. The apparatus of claim 21, the processor to: determine whether the graphics pipeline is empty; and in response to a determination that the graphics pipeline is empty, terminate the cache access request.
27. The apparatus of claim 21, the processor to: determine ...

11-02-2016 publication date

Cache Bypassing Policy Based on Prefetch Streams

Number: US20160041914A1
Author: Eckert Yasuko, Loh Gabriel
Assignee: Advanced Micro Devices, Inc.

Embodiments include methods, systems, and computer readable medium directed to cache bypassing based on prefetch streams. A first cache receives a memory access request. The request references data in the memory. The data comprises non-reuse data. After a determination of a miss in the first cache, the first cache forwards the memory access request to a cache control logic. The detection of the non-reuse data instructs the cache control logic to allocate a block only in a second cache and bypass allocating a block in the first cache. The first cache is closer to the memory than the second cache. 1. A method , comprising:receiving a memory access request by a first cache, wherein the request references data in a memory;detecting that the data comprises non-reuse data;forwarding the memory access request, by the first cache, responsive to a determination that the data does not exist in the first cache; andallocating, by a cache control logic, a block in a second cache based on the detecting of the non-reuse data to bypass allocating a second block in the first cache, wherein the first cache is closer to the memory than the second cache.2. The method of claim 1 , wherein the detecting further comprises:detecting that the request indicates that the data comprises non-reuse data.3. The method of claim 1 , further comprising:making a local note, by a cache-miss control logic associated with the first cache, that the data comprises the non-reuse data;instructing the first cache to bypass allocating a second block in the first cache based on the local note.4. The method of claim 1 , further comprising:copying the data in the memory to the block in the second cache.5. The method of claim 1 , further comprising:identifying that the data comprises streaming data.6. The method of claim 1 , wherein the memory access request comprises a prefetch request indicating that the data comprises the non-reuse data based on a criteria of a streaming data having sufficient length.7. 
The ...

24-02-2022 publication date

Memory controller and method of operating the same

Number: US20220057953A1
Author: Ji Hoon Lee
Assignee: SK hynix Inc

A memory controller includes a meta data memory configured to store mapping information of data stored in a plurality of memory blocks included in a memory device and valid data information indicating whether the data stored in the plurality of memory blocks is valid data, and a migration controller configured to control the memory device to perform a migration operation of moving a plurality of valid data stored in a source memory block among the plurality of memory blocks from the source memory block to a target memory block based on the mapping information and the valid data information.
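Not part of the patent text — a hypothetical sketch of the migration operation described above: only data marked valid in the source block's bitmap is moved to the target block, and the mapping information is updated to the new locations. The list/tuple structures and all names are illustrative.

```python
# Hypothetical sketch: move valid data from a source block to a target block
# using a per-offset validity bitmap, repointing logical-to-physical mapping
# entries from ("src", offset) to the new ("dst", offset) locations.
def migrate(source, valid_bitmap, mapping, target):
    for offset, (data, valid) in enumerate(zip(source, valid_bitmap)):
        if not valid:
            continue                       # invalid data is left behind
        target.append(data)                # move valid data to target block
        for lba, loc in mapping.items():   # update affected mapping entries
            if loc == ("src", offset):
                mapping[lba] = ("dst", len(target) - 1)
    return target

mapping = {0: ("src", 0), 1: ("src", 2)}
print(migrate(["A", "B", "C"], [1, 0, 1], mapping, []))  # ['A', 'C']
print(mapping)  # {0: ('dst', 0), 1: ('dst', 1)}
```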

24-02-2022 publication date

ENHANCED DATA RELIABILITY IN MULTI-LEVEL MEMORY CELLS

Number: US20220058124A1
Assignee:

Methods, systems, and devices for enhanced data reliability in multi-level memory cells are described. For a write operation, a host device may identify a first set of data to be stored by a set of memory cells at a memory device. Based on a quantity of bits within the first set of data being less than a storage capacity of the set of memory cells, the host device may generate a second set of data and transmit a write command including the first and second sets of data to the memory device. For a read operation, the host device may receive a first set of data from the memory device in response to transmitting a read command. The memory device may extract a second set of data from the first set of data and validate a portion of the first set of data using the second set of data.

1. A non-transitory computer-readable medium storing code comprising instructions executable by a processor to: identify a first set of data to store in a memory device using one or more memory cells that are each configured to store a first quantity of bits; identify that the first set of data is configured to cause each memory cell associated with the first set of data to store a second quantity of bits that is less than the first quantity of bits based at least in part on identifying the first set of data; generate a second set of data for storing with the first set of data in the memory device based at least in part on identifying that the first set of data is configured to cause each memory cell associated with the first set of data to store the second quantity of bits that is less than the first quantity of bits; and transmit, to the memory device, a write command that comprises the first set of data and the second set of data.
2. The non-transitory computer-readable medium of claim 1, wherein the first set of data and the second set of data are configured to cause the first quantity of bits to be stored in each memory cell.
3. The non-transitory computer-readable medium of claim 1, wherein ...

24-02-2022 publication date

MEMORY SYSTEM, MEMORY CONTROLLER AND METHOD FOR OPERATING MEMORY SYSTEM

Number: US20220058125A1
Assignee:

A memory system may transfer a reference write size for a memory device to a host, and, when receiving, from the host, a write request for first data having a size corresponding to a multiple of the reference write size, may directly write the first data to the memory device without caching the first data in a write cache. 1. A memory system comprising:a memory device including a plurality of memory blocks; anda memory controller configured to communicate with the memory device, and execute a firmware to control the memory device,wherein the memory controller transfers a reference write size for the memory device to a host, andwherein, when receiving, from the host, a write request for first data having a size corresponding to a multiple of the reference write size, the memory controller directly writes the first data to the memory device without caching the first data in a write cache.2. The memory system of claim 1 , wherein the reference write size is determined based on a page size corresponding to a first memory block to which user data is written claim 1 , among the plurality of memory blocks.3. The memory system of claim 2 , wherein a memory cell included in the first memory block is a TLC.4. The memory system of claim 1 , wherein claim 1 , when receiving claim 1 , from the host claim 1 , a write request for second data having a size not corresponding to a multiple of the reference write size claim 1 , the memory controller caches the second data in the write cache.5. The memory system of claim 1 ,wherein the memory controller transfers the reference write size to the host through a response message to a parameter command received from the host, andwherein the parameter command is a command which requests at least one parameter for the memory system.6. The memory system of claim 5 , wherein the response message includes a separate field that indicates the reference write size.7. 
The memory system of claim 1, wherein, after completing an operation of ...
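Not part of the patent text — an illustrative sketch of the routing rule this abstract describes: writes sized at an exact multiple of the reference write size reported to the host go straight to the memory device, while other writes are staged in the write cache. `REFERENCE_WRITE_SIZE` is an assumed value (e.g. one TLC page) and all names are illustrative.

```python
# Assumed reference write size reported to the host, e.g. one TLC page.
REFERENCE_WRITE_SIZE = 48 * 1024

def route_write(size_bytes: int) -> str:
    """Route a host write: an exact multiple of the reference write size
    bypasses the write cache and is written directly to the memory device."""
    if size_bytes > 0 and size_bytes % REFERENCE_WRITE_SIZE == 0:
        return "direct"        # written straight to the memory device
    return "write-cache"       # staged in the write cache first

print(route_write(96 * 1024))  # direct
print(route_write(50 * 1024))  # write-cache
```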

24-02-2022 publication date

MEMORY SYSTEM, OPERATION METHOD THEREOF, AND DATABASE SYSTEM INCLUDING THE MEMORY SYSTEM

Number: US20220058130A1
Author: OH Yong-Seok
Assignee:

A method for operating a multi-transaction memory system, the method includes: storing Logical Block Address (LBA) information changed in response to a request from a host and a transaction identification (ID) of the request into one page of a memory block; and performing a transaction commit in response to a transaction commit request including the transaction ID from the host, wherein the performing of the transaction commit includes: changing a valid block bitmap in a controller of the multi-transaction memory system based on the LBA information. 1. A multi-transaction memory system comprising:a memory block suitable for storing an i-node changed in response to a request from a host and a transaction identification (ID) of the request into one page of the memory block; anda controller suitable for committing a transaction, in response to a transaction commit request including the transaction ID from the host, by reading an existing i-node page data, generating a new i-node page data by merging the existing i-node page data with the changed i-node, storing the new i-node page data into the memory block, and updating mapping information of the new i-node page data.2. The multi-transaction memory system of claim 1 , wherein the one page of the memory block further includes position information of the changed i-node in the existing i-node page data.3. The multi-transaction memory system of claim 2 , wherein the one page of the memory block further includes a dummy data based on a size of the changed i-node and a size of the position information.4. The multi-transaction memory system of claim 1 , wherein the existing i-node page data is an i-node page data stored in the memory block due to a commit-completed transaction before the transaction commit request.5. The multi-transaction memory system of claim 1 , wherein the controller further provides the host with the updated mapping information.6. 
A method for operating a multi-transaction memory system, the ...

12-02-2015 publication date

Memory system and information processing device

Number: US20150046634A1
Assignee: Toshiba Corp

According to embodiments a memory system is connectable to a host which includes a host controller and a host memory including a first memory area and a second memory area. The memory system includes an interface unit, a non-volatile memory, and a controller unit. The interface unit receives a read command and a write command. The controller unit writes write-data to the non-volatile memory according to the write command. The controller unit determines whether read-data requested by the read command is in the first memory area. If the read-data is in the first memory area, the controller unit causes the host controller to copy the read-data from the first memory area to the second memory area. If the read-data is not in the first memory area, the controller unit reads the read-data from the non-volatile memory and causes the host controller to store the read-data in the second memory area.
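Not part of the patent text — a hypothetical sketch of the read flow in this abstract: when the requested data is already in the host's first memory area, the controller has the host controller copy it into the second area instead of reading the non-volatile memory. All names are illustrative.

```python
# Hypothetical sketch: serve a read from the host's first memory area by a
# host-side copy when possible; otherwise read the non-volatile memory and
# store the result in the second memory area.
def handle_read(tag, first_area, nonvolatile, second_area):
    if tag in first_area:
        second_area[tag] = first_area[tag]  # host-side copy, no flash read
        return "copied-from-host"
    second_area[tag] = nonvolatile[tag]     # fall back to non-volatile read
    return "read-from-flash"

second = {}
print(handle_read("a", {"a": 1}, {}, second))  # copied-from-host
print(handle_read("b", {}, {"b": 2}, second))  # read-from-flash
print(second)                                  # {'a': 1, 'b': 2}
```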

12-02-2015 publication date

Controlling a dynamically instantiated cache

Number: US20150046654A1
Assignee: NetApp Inc

A change in workload characteristics detected at one tier of a multi-tiered cache is communicated to another tier of the multi-tiered cache. Multiple caching elements exist at different tiers, and at least one tier includes a cache element that is dynamically resizable. The communicated change in workload characteristics causes the receiving tier to adjust at least one aspect of cache performance in the multi-tiered cache. In one aspect, at least one dynamically resizable element in the multi-tiered cache is resized responsive to the change in workload characteristics.
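Not part of the patent text — an illustrative sketch (the resize policy here is wholly assumed) of one tier communicating a workload change so the other tier resizes its dynamically instantiated cache element.

```python
# Illustrative sketch: one cache tier notifies its peer of a workload change,
# and the peer resizes its dynamically resizable cache element in response.
# The grow/shrink policy and all names are assumptions, not the patent's.
class Tier:
    def __init__(self, size):
        self.size = size   # current size of this tier's cache element
        self.peer = None   # the other tier of the multi-tiered cache

    def notify_workload_change(self, read_fraction):
        # Grow the peer's cache for read-heavy phases, shrink it otherwise.
        if self.peer is not None:
            factor = 2 if read_fraction > 0.8 else 0.5
            self.peer.size = int(self.peer.size * factor)

a, b = Tier(100), Tier(100)
a.peer = b
a.notify_workload_change(0.9)
print(b.size)  # 200
```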

07-02-2019 publication date

DYNAMICALLY PROGRAMMABLE MEMORY TEST TRAFFIC ROUTER

Number: US20190042131A1
Assignee:

In a computer system, a multilevel memory includes a near memory device and a far memory device, which are byte addressable. The multilevel memory includes a controller that receives a data request including original tag information. The controller includes routing hardware to selectively provide alternate tag information for the data request to cause a cache hit or a cache miss to selectively direct the request to the near memory device or to the far memory device, respectively. The controller can include selection circuitry to select between the original tag information and the alternate tag information to control where the data request is sent. 1. A controller device , comprising:a data pathway including signal lines to transfer a data request according to original tag information for the data request;routing hardware to selectively provide alternate tag information for the data request to cause a cache hit or a cache miss to selectively direct the request to a near memory device or to a far memory device separate from the near memory device, respectively; andselection circuitry to select between the original tag information and the alternate tag information.2. The controller device of claim 1 , wherein the routing hardware is to provide alternate tag information to cause a request directed to data stored in the near memory device to be routed for processing by the far memory device.3. The controller device of claim 1 , wherein the routing hardware is to provide alternate tag information to cause a request directed to data not stored in the near memory device to be routed for processing by the near memory device.4. The controller device of claim 1 , wherein the routing hardware is to modify a field to select a different way than a way identified in the original tag information.5. The controller device of claim 1 , wherein the routing hardware is to modify a field to select a different channel than a channel identified in the original tag information.6. The ...

07-02-2019 publication date

Circuitry with adaptive memory assistance capabilities

Number: US20190042306A1
Assignee: Individual

A system for running one or more applications is provided. Each application may require memory services that can be accelerated using configurable memory assistance circuits associated with different levels of a memory hierarchy. Integrated circuit design tools may be used to generate configuration data for programming the configurable memory assistance circuits. During compile time, the design tools may identify memory service patterns in a source code, match the identified memory service patterns to corresponding templates, parameterize the matching templates, and then synthesize the parameterized templates to produce the configuration data. During run time, a memory assistance scheduler may map the memory services required by each application to available memory assistance circuits in the system. The mapped memory assistance circuits are programmed by the configuration data to provide the desired memory service capability.

07-02-2019 publication date

INFORMATION PROCESSING APPARATUS AND METHOD

Number: US20190042426A1
Author: ARAI Masaki
Assignee: FUJITSU LIMITED

An information processing apparatus includes a first memory and a processor coupled to the first memory. The processor is configured to acquire a first address in the first memory, at which data indicating an instruction included in a target program is stored. The processor is configured to simulate access to a second memory corresponding to an access request for access to the first address on a basis of configuration information of the second memory. The processor is configured to generate first information that indicates whether the access to the second memory is successful regarding the instruction. 1. An information processing apparatus , comprising:a first memory; anda processor coupled to the first memory and the processor configured to:acquire a first address in the first memory, at which data indicating an instruction included in a target program is stored;simulate access to a second memory corresponding to an access request for access to the first address on a basis of configuration information of the second memory; andgenerate first information that indicates whether the access to the second memory is successful regarding the instruction.2. The information processing apparatus according to claim 1 , whereinthe processor is configured to:add additional information to the target program at a position of the instruction; andacquire an address corresponding to the additional information as the first address.3. The information processing apparatus according to claim 1 , whereinthe processor is configured to:acquire a number of cache misses for each of a plurality of pieces of arrangement information in which changes are made to the first address and an address of data utilized by the target program; andselect a piece of arrangement information corresponding to a case where the number of cache misses is smallest, as the first information.4. 
The information processing apparatus according to claim 1, wherein the processor is configured to: generate, in a case where ...

07-02-2019 publication date

CACHE FILTER

Number: US20190042450A1
Author: Walker Robert M.
Assignee:

The present disclosure includes apparatuses and methods related to a memory system including a filter. An example apparatus can include a filter to store a number of flags, wherein each of the number of flags corresponds to a cache entry and each of the number of flags identifies a portion of the memory device where data of a corresponding cache entry is stored in the memory device.

1. An apparatus, comprising: a cache controller; and a cache and a memory device coupled to the cache controller, wherein the cache controller includes a filter configured to store a number of flags, wherein each of the number of flags corresponds to a cache entry and each of the number of flags identifies a portion of the memory device where data of a corresponding cache entry is stored in the memory device.
2. The apparatus of claim 1, wherein the memory device is a non-volatile memory device.
3. The apparatus of claim 1, wherein the cache is a DRAM memory device.
4. The apparatus of claim 1, wherein the cache is configured to store a portion of the data stored in the memory device.
5. The apparatus of claim 1, wherein the number of flags indicate a portion of the memory device where data corresponding to a request is located.
6. The apparatus of claim 1, wherein a setting of the number of flags indicates whether the cache is storing valid data corresponding to a request.
7. The apparatus of claim 1, wherein each of the number of flags identifies at least a partial location of data stored in the memory device of a corresponding cache entry.
8. An apparatus, comprising: a cache controller; and a cache and a memory device coupled to the cache controller, wherein the cache controller includes a filter, wherein the filter includes a number of flags that correspond to a number of cache entries in the cache and wherein the cache controller is configured to determine whether to search the cache for data corresponding to a request based on the number of flags.
9. The apparatus of claim 8, ...
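Not part of the patent text — an illustrative sketch of the filter in claim 8: one flag per cache entry records which portion of the backing memory device the cached data belongs to, so the controller can skip searching the cache when no flag matches the region a request targets. All names are illustrative.

```python
# Illustrative sketch: the filter keeps one flag per cache entry identifying
# the memory-device portion its data came from; the cache is searched only
# if some flag matches the portion targeted by the incoming request.
def should_search_cache(flags, request_region):
    return request_region in flags.values()

flags = {0: "region-A", 1: "region-B"}  # cache entry index -> memory portion
print(should_search_cache(flags, "region-A"))  # True
print(should_search_cache(flags, "region-C"))  # False
```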

07-02-2019 publication date

EFFICIENT USAGE OF BANDWIDTH OF DEVICES IN CACHE APPLICATIONS

Number: US20190042451A1
Assignee:

A memory storage control apparatus, system, and method are described. An apparatus can include a memory controller configured to couple to a primary memory resource (PMR) and to a cache memory resource (CMR) and configured to receive a read or write data request associated with particular data. For a read data request, the memory controller is configured to perform a lookup of a cache table mapped to the CMR for a copy of the particular data, and determine, if the lookup returns a hit and the particular data is not altered compared to the copy of the particular data, whether the CMR is saturated. For a write data request, the memory controller is configured to determine whether the CMR is saturated with data requests. In accordance with a determination that the CMR is saturated with data requests, the memory controller is configured to bypass the CMR and send the data request to the PMR.

1. A memory storage control apparatus, comprising: a memory controller configured to communicatively couple to a primary memory resource and to a cache memory resource, the memory controller including circuitry configured to: receive a read data request associated with particular data; perform a lookup of a cache table mapped to the cache memory resource for a copy of the particular data; determine, if the lookup returns a hit and the particular data is not altered compared to the copy of the particular data, whether the cache memory resource is saturated with data requests; and in accordance with a determination that the cache memory resource is saturated with data requests, bypass the cache memory resource, and send the data request to the primary memory resource.
2. The apparatus of claim 1, wherein, where the lookup returns a hit and the particular data is altered compared to the copy of the particular data, the circuitry is further configured to send the data request to the cache memory resource.
3. The apparatus of claim 1, further comprising the primary memory resource and ...
