Total found: 8479. Displayed: 100.
19-01-2012 publication date

Caching using virtual memory

Number: US20120017039A1
Author: Julien MARGETTS
Assignee: PLX Technology Inc

In a first embodiment of the present invention, a method for caching in a processor system having virtual memory is provided, the method comprising: monitoring slow memory in the processor system to determine frequently accessed pages; for a frequently accessed page in slow memory: copy the frequently accessed page from slow memory to a location in fast memory; and update virtual address page tables to reflect the location of the frequently accessed page in fast memory.
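
The promotion flow described above can be pictured with a short sketch; the access threshold, the dictionary-shaped page table, and the helper name `touch` are assumptions made for the example, not details taken from the patent.

```python
# Hypothetical sketch of hot-page promotion from slow to fast memory.
PROMOTE_THRESHOLD = 8            # assumed number of accesses before a page counts as "hot"

page_table = {}                  # virtual page -> (tier, frame)
access_counts = {}               # virtual page -> accesses observed by the monitor
slow_mem, fast_mem = {}, {}      # frame -> page contents
free_fast_frames = [0, 1, 2, 3]  # frames still available in fast memory

def touch(vpage):
    """Count one access; copy the page to fast memory and remap it once it is hot."""
    access_counts[vpage] = access_counts.get(vpage, 0) + 1
    tier, frame = page_table[vpage]
    if tier == "slow" and access_counts[vpage] >= PROMOTE_THRESHOLD and free_fast_frames:
        new_frame = free_fast_frames.pop()
        fast_mem[new_frame] = slow_mem.pop(frame)   # copy the page into fast memory
        page_table[vpage] = ("fast", new_frame)     # update the virtual address page table

page_table[0x10] = ("slow", 42)
slow_mem[42] = b"page contents"
for _ in range(PROMOTE_THRESHOLD):
    touch(0x10)
assert page_table[0x10][0] == "fast"
```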

26-01-2012 publication date

Memory page management in a tiered memory system

Number: US20120023300A1
Assignee: International Business Machines Corp

Memory page management in a tiered memory system including a system that includes at least one page table for storing a plurality of entries, each entry associated with a page of memory and each entry including an address of the page and a memory tier of the page. The system also includes a control program configured for allocating pages associated with the entries to a software module, the allocated pages from at least two different memory tiers. The system further includes an agent of the control program capable of operating independently of the control program, the agent configured for receiving an authorization key to the allocated pages, and for migrating the allocated pages between the different memory tiers responsive to the authorization key.

23-02-2012 publication date

Virtualization with fortuitously sized shadow page tables

Number: US20120047348A1
Assignee: VMware LLC

One or more embodiments provide a shadow page table used by virtualization software, wherein at least a portion of the shadow page table shares computer memory with a guest page table used by a guest operating system (OS) and wherein the virtualization software provides a mapping of guest OS physical pages to machine pages.

08-03-2012 publication date

Hardware assistance for shadow page table coherence with guest page mappings

Number: US20120059973A1
Author: Keith Adams, Sahil Rihan
Assignee: VMware LLC

Some embodiments of the present invention include a memory management unit (MMU) configured to, in response to a write access targeting a guest page mapping of a guest virtual page number (GVPN) to a guest physical page number (GPPN) within a guest page table, identify a shadow page mapping that associates the GVPN with a physical page number (PPN). The MMU is also configured to determine whether a traced write indication is associated with the shadow page mapping and, if so, record update information identifying the targeted guest page mapping. The update information is used to reestablish coherence between the guest page mapping and the shadow page mapping. The MMU is further configured to perform the write access.

29-03-2012 publication date

Microprocessor with dual-level address translation

Number: US20120079164A1
Assignee: Individual

A processor includes a first translation look-aside buffer to support a guest operating mode. A second translation look-aside buffer supports a root operating mode. Hardware resources support the guest operating mode as controlled by guest mode control registers defining guest context. The guest context is used by the hardware resources to access the first translation look-aside buffer to translate a guest virtual address to a guest physical address. The hardware resources access the second translation look-aside buffer to translate the guest physical address to a physical address.
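
A toy two-step lookup in the spirit of the abstract, with a guest-context TLB feeding a root-context TLB; the 4 KB page size and the dictionary-based TLBs are assumptions made for the illustration.

```python
# Hypothetical two-level translation: guest virtual -> guest physical -> machine physical.
PAGE = 4096

guest_tlb = {}   # first TLB (guest mode): guest virtual page -> guest physical page
root_tlb = {}    # second TLB (root mode): guest physical page -> machine physical page

def translate(guest_virtual_addr):
    gvpn, offset = divmod(guest_virtual_addr, PAGE)
    gppn = guest_tlb[gvpn]        # lookup under guest context
    ppn = root_tlb[gppn]          # lookup under root context
    return ppn * PAGE + offset

guest_tlb[0x2] = 0x7
root_tlb[0x7] = 0x1A
assert translate(0x2010) == 0x1A * PAGE + 0x010
```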

10-05-2012 publication date

Invalidating a Range of Two or More Translation Table Entries and Instruction Therefore

Number: US20120117356A1
Assignee: International Business Machines Corp

An instruction is provided to perform invalidation of an instruction specified range of segment table entries or region table entries. The instruction can be implemented by software emulation, hardware, firmware or some combination thereof.

31-05-2012 publication date

Dynamic Address Translation With Translation Table Entry Format Control for Identifying Format of the Translation Table Entry

Number: US20120137106A1
Assignee: International Business Machines Corp

What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated and an initial origin address of a translation table of the hierarchy of translation tables are obtained. An index portion of the virtual address is used to reference an entry in the translation table. If a format control field contained in the translation table entry is enabled, the table entry contains a frame address of a large block of data of at least 1M byte in size. The frame address is then combined with an offset portion of the virtual address to form the translated address of a small 4K byte block of data in main storage or memory.
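
A compact sketch of the format-control decision; the 1 MB segment size, the field names, and the table shapes are assumptions chosen for illustration rather than the architected formats.

```python
# Hypothetical enhanced-DAT step: a set format-control bit means the entry holds a
# large-frame address, otherwise a normal 4 KB page-table walk finishes the translation.
SEGMENT_BITS = 20      # 1 MB segments (assumption)
PAGE_SIZE = 4096

def translate(vaddr, segment_table, page_tables):
    index = vaddr >> SEGMENT_BITS
    offset = vaddr & ((1 << SEGMENT_BITS) - 1)
    entry = segment_table[index]
    if entry["format_control"]:
        return entry["frame_address"] + offset          # large block: combine frame + offset
    page_index, page_offset = divmod(offset, PAGE_SIZE) # small block: one more table level
    return page_tables[entry["page_table_origin"]][page_index] + page_offset

segment_table = {0: {"format_control": True, "frame_address": 0x100000}}
assert translate(0x00012345, segment_table, {}) == 0x100000 + 0x12345
```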

07-06-2012 publication date

Memory address re-mapping of graphics data

Number: US20120139927A1
Assignee: Individual

A method and apparatus for creating, updating, and using guest physical address (GPA) to host physical address (HPA) shadow translation tables for translating GPAs of graphics data direct memory access (DMA) requests of a computing environment implementing a virtual machine monitor to support virtual machines. The requests may be sent through a render or display path of the computing environment from one or more virtual machines, transparently with respect to the virtual machine monitor. The creating, updating, and using may be performed by a memory controller detecting entries sent to existing global and page directory tables, forking off shadow table entries from the detected entries, and translating GPAs to HPAs for the shadow table entries.

12-07-2012 publication date

Remapping of data addresses for large capacity low-latency random read memory

Number: US20120179890A1
Assignee: Individual

Described herein are method and apparatus for using an LLRRM device as a storage device in a storage system. At least three levels of data structures may be used to remap storage system addresses to LLRRM addresses for read requests, whereby a first-level data structure is used to locate a second-level data structure corresponding to the storage system address, which is used to locate a third-level data structure corresponding to the storage system address. An LLRRM address may comprise a segment number determined from the second-level data structure and a page number determined from the third-level data structure. Update logs may be produced and stored for each new remapping caused by a write request. An update log may specify a change to be made to a particular data structure. The stored update logs may be performed on the data structures upon the occurrence of a predetermined event.
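
The three-level walk can be pictured roughly as below; the index widths and the way the segment and page numbers are packaged are assumptions made only for the sketch.

```python
# Hypothetical three-level remap of a storage-system address to an LLRRM address.
def remap(storage_addr, level1):
    i1 = storage_addr >> 20              # selects a second-level structure
    i2 = (storage_addr >> 10) & 0x3FF    # selects the segment entry
    i3 = storage_addr & 0x3FF            # selects the page entry
    segment, level3 = level1[i1][i2]     # second level yields the segment number
    page = level3[i3]                    # third level yields the page number
    return segment, page                 # together: the LLRRM address

level3 = {5: 17}
level1 = {0: {1: (9, level3)}}
assert remap((0 << 20) | (1 << 10) | 5, level1) == (9, 17)
```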

20-09-2012 publication date

Flash storage device with read disturb mitigation

Number: US20120239990A1
Assignee: Stec Inc

A method for managing a flash storage device includes initiating a read request and reading requested data from a first storage block of a plurality of storage blocks in the flash storage device based on the read request. The method further includes incrementing a read count for the first storage block and moving the data in the first storage block to an available storage block of the plurality of storage blocks when the read count reaches a first threshold value.
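
A minimal sketch of the read-count policy; the threshold value and the data-movement details are assumptions, the abstract only requires that data be moved once the per-block count reaches a threshold.

```python
# Hypothetical read-disturb mitigation: count reads per block, relocate hot blocks.
READ_THRESHOLD = 100_000     # assumed first threshold value

def read_block(blocks, read_counts, free_blocks, block_id):
    data = blocks[block_id]
    read_counts[block_id] = read_counts.get(block_id, 0) + 1
    if read_counts[block_id] >= READ_THRESHOLD and free_blocks:
        target = free_blocks.pop()
        blocks[target] = data            # move the data to an available block
        blocks[block_id] = None          # old block can now be erased and recycled
        read_counts[target] = 0
    return data
```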

04-10-2012 publication date

Program, control method, and control device

Number: US20120254499A1
Assignee: Ubiquitous Corp

Provided are a program, a control method, and a control device by which an activation time can be shortened. In a computer system which is equipped with a Memory Management Unit (MMU), with respect to a table of the MMU, page table entries are rewritten so that page faults occur at each page necessary for operation of software. At the time of activating, stored memory images are read page by page for the page faults which occurred in the RAM to be accessed. By reading as described above, reading of unnecessary pages is not performed, and thus, the activation time can be shortened. The present invention can be applied to a personal computer and an electronic device provided with an embedded computer.

02-05-2013 publication date

Cache Memory That Supports Tagless Addressing

Number: US20130111132A1
Assignee: RAMBUS INC

Embodiments related to a cache memory that supports tagless addressing are disclosed. Some embodiments receive a request to perform a memory access, wherein the request includes a virtual address. In response, the system performs an address-translation operation, which translates the virtual address into both a physical address and a cache address. Next, the system uses the physical address to access one or more levels of physically addressed cache memory, wherein accessing a given level of physically addressed cache memory involves performing a tag-checking operation based on the physical address. If the access to the one or more levels of physically addressed cache memory fails to hit on a cache line for the memory access, the system uses the cache address to directly index a cache memory, wherein directly indexing the cache memory does not involve performing a tag-checking operation and eliminates the tag storage overhead.
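
Roughly, the lookup path looks like the sketch below: one translation produces both addresses, the tag-checked levels are probed first, and the tagless cache is indexed directly on a miss. The 64-byte line size and the dictionary-based caches are assumptions for the example.

```python
# Hypothetical tagless-addressing lookup path.
LINE_MASK = ~0x3F   # 64-byte cache lines (assumption)

def lookup(vaddr, xlate, tagged_levels, tagless_cache):
    paddr, caddr = xlate(vaddr)              # one translation yields both addresses
    line = paddr & LINE_MASK
    for level in tagged_levels:              # physically addressed, tag-checked levels
        if line in level:
            return level[line]
    return tagless_cache[caddr]              # direct index, no tag check needed

xlate = lambda v: (0x8000 + (v & 0xFFF), 3)                 # toy translation
assert lookup(0x12345, xlate, [{}], {3: "line data"}) == "line data"
```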

16-05-2013 publication date

METHOD OF MANAGING COMPUTER MEMORY, CORRESPONDING COMPUTER PROGRAM PRODUCT, AND DATA STORAGE DEVICE THEREFOR

Number: US20130124821A1
Assignee: ALCATEL LUCENT

The invention concerns a method of managing computer memory, the method comprising the steps of maintaining a page table entry for mapping a virtual address to a physical address and a cache comprising a plurality of data blocks and, in response to a reference to the virtual address, translating the virtual address into the physical address by means of the page table entry and fetching data from the physical address into the cache, wherein the page table entry comprises a plurality of indicators, each data block corresponding to an indicator, and, upon fetching the data into the cache, the method comprises the further step of, in response to an indicator being set, zeroing the corresponding data block. The invention further concerns a computer program product and a device therefor.

1. A method of managing computer memory, the method comprising the steps of: maintaining a page table entry for mapping a virtual address to a physical address and a cache comprising a plurality of data blocks and, in response to a reference to the virtual address, translating the virtual address into the physical address by means of the page table entry and fetching data from the physical address into the cache, characterized in that the page table entry comprises a plurality of indicators, each data block corresponding to one of the plurality of indicators, and, once fetching the data into the cache has started, the method comprises the further step of, in response to an indicator, selected from said plurality of indicators, being set, zeroing the corresponding data block.
2. A method according to claim 1, characterized in that the page table entry comprises a bitmask and the indicators are bits contained in the bitmask.
3. A method according to claim 1, characterized in that the page table entry is associated with a memory page comprising the plurality of data blocks and the method comprises the intermediate steps of: receiving a data block, storing the data block in the memory ...

23-05-2013 publication date

Table lookup operation on masked data

Number: US20130132706A1
Assignee: SPANSION LLC

Processing of masked data using table lookups is described. A mask is applied to input data to generate masked input data. The mask and the masked input data are used in combination to locate an entry in a lookup table. The entry corresponds to a transformed version of the input data.
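
One way to picture the scheme: the table is built so that the mask together with the masked value selects the entry holding the transform of the original input, so the input never has to be unmasked. The 4-bit width and the stand-in transform are assumptions for the example.

```python
# Hypothetical masked table lookup.
WIDTH = 4
transform = lambda x: (7 * x + 3) % (1 << WIDTH)   # stand-in for the real transform

def build_table(mask):
    # Key = (mask, masked value); value = transform of the original (unmasked) input.
    return {(mask, x ^ mask): transform(x) for x in range(1 << WIDTH)}

mask = 0b1010
table = build_table(mask)
masked_input = 0b0110 ^ mask                        # masked input data
assert table[(mask, masked_input)] == transform(0b0110)
```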

30-05-2013 publication date

Efficient Memory and Resource Management

Number: US20130138840A1

The present system enables passing a pointer, associated with accessing data in a memory, to an input/output (I/O) device via an input/output memory management unit (IOMMU). The I/O device accesses the data in the memory via the IOMMU without copying the data into a local I/O device memory. The I/O device can perform an operation on the data in the memory based on the pointer, such that I/O device accesses the memory without expensive copies.

20-06-2013 publication date

MEMORY MANAGEMENT UNIT FOR A MICROPROCESSOR SYSTEM, MICROPROCESSOR SYSTEM AND METHOD FOR MANAGING MEMORY

Number: US20130159663A1
Author: Levenglick Dov
Assignee: Freescale Semiconductor, Inc.

The invention pertains to a memory management unit for a microprocessor system, the memory management unit being connected or connectable to at least one processor core of the microprocessor system and being connected or connectable to a physical memory of the microprocessor system. The memory management unit is adapted to selectively operate in a hypervisor mode or in a supervisor mode, the hypervisor mode and the supervisor mode having different privilege levels of access to hardware The memory management unit comprises a first register table indicating physical address information for mapping at least one logical physical address and at least one actual physical address onto each other; a second register table indicating an allowed address range of physical addresses accessible to a process running in or under supervisor mode; wherein the memory management unit is adapted to prevent write access to the second register table by a process not in hypervisor mode. The memory management unit is further adapted to allow write access to the first register table of a process running in or under supervisor mode to reconfigure the physical address information indicated in the first register table with memory mapping information relating to at least one physical address, if the at least one physical address is in the allowed address range, and to prevent write access to the first register table of the process running in or under supervisor mode if the at least one physical address is not in the allowed address range. The invention also pertains to a microprocessor system and a method for managing memory. 1. A memory management unit for a microprocessor system , the memory management unit being connected to at least one processor core of the microprocessor system and being connected to a physical memory of the microprocessor system , the memory management unit being adapted to selectively operate in a hypervisor mode or in a supervisor mode , the hypervisor mode and the ...

04-07-2013 publication date

Application processor and a computing system having the same

Number: US20130173883A1
Author: Il-Ho Lee, Kyong-Ho Cho
Assignee: SAMSUNG ELECTRONICS CO LTD

An application processor includes a system memory unit, peripheral devices, a control unit and a central processing unit (CPU). The system memory unit includes one page table. The peripheral devices share the page table and perform a DMA (Direct Memory Access) operation on the system memory unit using the page table, where each of the peripheral devices includes a memory management unit having a translation lookaside buffer. The control unit divides a total virtual address space corresponding to the page table into sub virtual address spaces, assigns the sub virtual address spaces to the peripheral devices, respectively, allocates and releases a DMA buffer in the system memory unit, and updates the page table, where at least two of the sub virtual address spaces have different sizes from each other. The CPU controls the peripheral devices and the control unit. The application processor reduces memory consumption.

25-07-2013 publication date

SUBSTITUTE VIRTUALIZED-MEMORY PAGE TABLES

Number: US20130191611A1
Assignee:

Embodiments of techniques and systems for using substitute virtualized-memory page tables are described. In embodiments, a virtual machine monitor (VMM) may determine that a virtualized memory access to be performed by an instruction executing on a guest software virtual machine is not allowed in accordance with a current virtualized-memory page table (VMPT). The VMM may select a substitute VMPT that permits the virtualized memory access, In scenarios where a data access length for the instruction is known, the substitute VMPT may include full execute, read, and write permissions for the entire guest software address space. In scenarios where a data access length for the instruction is not known, the substitute VMPT may include less than full execute, read, and write permissions for the entire guest software address space, and may be modified to allow the requested virtualized memory access. Other embodiments may be described and claimed. 1. One or more computer-readable storage media comprising first instructions that , in response to execution by a computing device , cause the computing device to:determine that a second instruction to be executed on a computer processor of the computing device attempts to perform an access to a virtualized memory address location, the access to the virtualized memory address location being not permitted in accordance with a current virtualized-memory page table; andexecute the second instruction on the computer processor in accordance with a substitute virtualized-memory page table, the substitute virtualized-memory page table configured to permit the access to the virtualized memory address location.2. The computer-readable media of claim 1 , wherein the first instructions claim 1 , in response to execution by the computer processor claim 1 , further cause the computing device to determine whether a total number of data bytes to be accessed by the second instruction can be determined prior to execution of the second instruction.3 ...

08-08-2013 publication date

SYSTEM AND METHOD TO PRIORITIZE LARGE MEMORY PAGE ALLOCATION IN VIRTUALIZED SYSTEMS

Number: US20130205062A1
Assignee: VMWARE, INC.

The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization. 1. A computer program product stored on a tangible computer readable medium and configured to perform a computer-implemented method of selectively mapping shadow memory pages as large in a system having a virtual memory system that utilizes both guest page tables and corresponding shadow page tables , the method comprising:during a translation of virtual memory pages to physical memory pages, determining that a particular guest page table is a lowest level guest page table, wherein the particular guest page table comprises a plurality of table entries in which each table entry includes an access field that is set as a response to accessing a memory address associated with the table entry;determining whether a particular guest memory page associated with the particular guest page table is mapped large;determining whether a count which is indicative of the access fields that are set within the particular guest page table exceeds a threshold; andif the particular guest memory page is mapped large and the count exceeds the threshold, mapping a shadow page as large within a shadow page table that corresponds to the particular guest page table.2. The computer program product of wherein determining the count includes ...

22-08-2013 publication date

Multi-Core Online Patching Method and Apparatus

Number: US20130219147A1
Assignee: Huawei Technologies Co., Ltd.

A multi-core online patching method and an apparatus for mapping patch data to a patch area of a shared memory are disclosed. A method of the embodiment of the present invention includes: separating shared global variables and private global variables defined in a patch; mapping the shared global variables to a shared data segment in a patch area by using a mapping mode of a direct memory address, and mapping the private global variables to private data segments in the patch area by using a mapping mode of a variable address specified by a user. The embodiments of the present invention may be used in a multi-core DSP system of telecom-grade software. 1. A multi-core online patching method comprising:separating shared global variables and private global variables defined in a patch;mapping the shared global variables to a shared data segment in a patch area by using a mapping mode of a direct memory address; andmapping the private global variables to private data segments in the patch area by using a mapping mode of a variable address specified by a user.2. The method according to claim 1 , wherein separating the shared global variables and the private global variables defined in the patch comprises:defining the private global variables in the patch as structure variables, wherein a size of a structure is a size of a private data segment of each core that is in the patch area;segmenting remaining space by using a magic number after defining member variables of the structure variables is completed, where data before the magic number is valid data; andplacing the shared global variables and the private global variables in the patch into different data segments when a patch file is written.3. The method according to claim 2 , wherein mapping the private global variables to the private data segments in the patch area by using the mapping mode of the variable address specified by the user comprises:extracting valid data of the private global variables; andmapping, ...

26-09-2013 publication date

Shared Virtual Memory Between A Host And Discrete Graphics Device In A Computing System

Number: US20130249925A1
Author: Ginzburg Boris
Assignee:

In one embodiment, the present invention includes a device that has a device processor and a device memory. The device can couple to a host with a host processor and host memory. Both of the memories can have page tables to map virtual addresses to physical addresses of the corresponding memory, and the two memories may appear to a user-level application as a single virtual memory space. Other embodiments are described and claimed. 1. An apparatus comprising:a multicore processor including a plurality of cores and to couple to a host memory; anda device coupled to the multicore processor and having a device processor and a device memory, the device and the multicore processor having a shared virtual address space, wherein on a page fault in the device, the device is to request a missing page from the host memory via a host page table that maps first virtual addresses to physical addresses of the host memory, the device having a device page table to map second virtual addresses to physical addresses of the device memory.2. The apparatus of claim 1 , wherein the host memory and the device memory appear to a user-level application as a single virtual memory space.3. The apparatus of claim 1 , wherein the device memory is to act as a page-based cache memory of the host memory.4. The apparatus of claim 3 , wherein coherency between the device memory and the host memory is maintained implicitly without programmer interaction.5. The apparatus of claim 1 , wherein the multicore processor is to provide the missing page from the host memory to the device if present therein claim 1 , and to set a not present indicator in the host memory for the corresponding page if the missing page is write enabled claim 1 , wherein when the not present indicator is set claim 1 , the multicore processor is prevented from accessing the corresponding page in the host memory.6. The apparatus of claim 1 , wherein the multicore processor is to provide the missing page from a mass storage coupled ...

26-09-2013 publication date

Memory management method and information processing device

Number: US20130254512A1
Author: Akira Takeda
Assignee: Toshiba Corp

According to one embodiment, a memory management method implemented by a computer includes managing each block of a memory region included in the computer based on a buddy allocation algorithm. The method includes managing a correspondence relation between a virtual address and a physical address of one block using one entry of a page table. Each block has a size of a super page. The method includes allocating an empty first block to a process so that the number of empty blocks does not exceed the number of empty entries of a translation look-aside buffer (TLB).

03-10-2013 publication date

Translation lookaside buffer for multiple context compute engine

Number: US20130262816A1
Assignee: Intel Corp

Some implementations disclosed herein provide techniques and arrangements for a specialized logic engine that includes a translation lookaside buffer to support multiple threads executing on multiple cores. The translation lookaside buffer enables the specialized logic engine to directly access a virtual address of a thread executing on one of the plurality of processing cores. For example, an acceleration compute engine may receive one or more instructions from a thread executed by a processing core. The acceleration compute engine may retrieve, based on an address space identifier associated with the one or more instructions, a physical address associated with the one or more instructions from the translation lookaside buffer to execute the one or more instructions using the physical address.
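
A sketch of the per-context lookup the abstract describes, keying the engine's TLB by an address space identifier plus virtual page; the page size and the dictionary-based TLB are assumptions for the example.

```python
# Hypothetical accelerator-side TLB keyed by (ASID, virtual page number).
PAGE = 4096
tlb = {}                                   # (asid, vpn) -> physical page number

def engine_access(asid, vaddr):
    vpn, offset = divmod(vaddr, PAGE)
    ppn = tlb[(asid, vpn)]                 # translation for the requesting thread's context
    return ppn * PAGE + offset             # physical address the engine uses

tlb[(1, 0x40)] = 0x900
assert engine_access(1, 0x40123) == 0x900 * PAGE + 0x123
```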

17-10-2013 publication date

Remote memory management when switching optically-connected memory

Number: US20130275705A1
Assignee: International Business Machines Corp

A remote memory superpage is retained in a remote memory of the memory blade when reading the remote memory super page of the remote memory into a local memory.

24-10-2013 publication date

Virtualization with Multiple Shadow Page Tables

Number: US20130283004A1
Assignee:

A computing system includes virtualization software including a guest operating system (OS). A method maintains, by the virtualization software layer, a first shadow page table for use in a kernel mode and a second shadow page table for use in a user mode. The virtualization software switches between using the first shadow page table and the second shadow page table when the guest OS switches between operating in the kernel mode and the user mode.

1. A method for operating in a computing system comprised of virtualization software including a guest operating system (OS), the method comprising: maintaining, by the virtualization software layer, a first shadow page table for use in a kernel mode; maintaining, by the virtualization software layer, a second shadow page table for use in a user mode; and switching, by the virtualization software, between using the first shadow page table and the second shadow page table when the guest OS switches between operating in the kernel mode and the user mode.
2. The method of claim 1, further comprising: receiving a kernel mode guest OS instruction; and using the first shadow page table to process the kernel mode guest OS instruction.
3. The method of claim 1, further comprising: receiving a user mode guest OS instruction; and using the second shadow page table to process the user mode guest OS instruction.
4. The method of claim 1, wherein the first shadow page table maintains entries only valid in the kernel mode and the second shadow page table maintains entries only valid in the user mode.
5. The method of claim 1, further comprising: marking guest OS pages having all user mode guest OS instructions as user mode pages; marking guest OS pages having kernel mode guest OS instructions as kernel mode pages; and marking all pages in virtualization software address space as kernel mode pages, wherein the first shadow page table is maintained for the kernel mode pages and the second shadow page table is maintained for the user mode pages.
6. The method of claim 1, further comprising: ...

21-11-2013 publication date

System and Method for Storing Data in a Virtualized Memory System With Destructive Reads

Number: US20130311748A1
Assignee: MEMOIR SYSTEMS, INC.

A system and method for providing high-speed memory operations is disclosed. The technique uses virtualization of memory space to map a virtual address space to a larger physical address space wherein no memory bank conflicts will occur. The larger physical address space is used to prevent memory bank conflicts from occurring by moving the virtualized memory addresses of data being written to memory to a different location in physical memory that will eliminate a memory bank conflict. To improve memory performance, destructive read operations are used when reading data, but the data is written back into the physical memory in a later cycle.

1. A method of handling memory access requests in a digital memory system, said method comprising: receiving a first memory access request, said first memory access request identifying a first virtualized memory address in a virtualized memory address space; translating said first virtualized memory address into a first physical memory address using a virtualized translation table wherein said physical memory address space is larger than said virtualized memory address space; handling said first memory access request with a physical memory system using said first physical memory address, said handling comprising destructively reading a first data word from said first physical memory address into a temporary register, responding to said first memory access request with said first data word; and storing said first data word back into said physical memory, said storing comprising writing said first data word back into a second physical memory address, and updating said virtualized translation table to associate said first virtualized memory address with said second physical memory address.
2. The method of handling memory access requests as set forth in claim 1 wherein translating said first virtualized memory address into said first physical memory address comprises: accessing said virtualized translation table using a first ...
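
A small sketch of the read path with destructive reads and deferred write-back to a different bank; the bank selection and the free-location bookkeeping are simplified assumptions, not the patented design.

```python
# Hypothetical virtualized memory with destructive reads and bank-avoiding write-back.
class VirtualizedMemory:
    def __init__(self, banks=4, rows=8):
        self.mem = [[None] * rows for _ in range(banks)]
        self.table = {}                                  # virtualized address -> (bank, row)
        self.free = [(b, r) for b in range(banks) for r in range(rows)]

    def write(self, vaddr, word):
        self.table[vaddr] = self.free.pop(0)
        bank, row = self.table[vaddr]
        self.mem[bank][row] = word

    def read(self, vaddr):
        bank, row = self.table[vaddr]
        word, self.mem[bank][row] = self.mem[bank][row], None   # destructive read
        self.free.append((bank, row))
        # Later cycle: write the word back, preferably into a different bank.
        new = next((loc for loc in self.free if loc[0] != bank), (bank, row))
        self.free.remove(new)
        self.mem[new[0]][new[1]] = word
        self.table[vaddr] = new                                  # update translation table
        return word

vm = VirtualizedMemory()
vm.write(0xA, "word")
assert vm.read(0xA) == "word" and vm.table[0xA][0] != 0
```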

21-11-2013 publication date

METHOD FOR DISTRIBUTING DATA IN A TIERED STORAGE SYSTEM

Number: US20130311749A1
Assignee:

A method for assigning data in a plurality of physical storage resources for an information handling system is disclosed. The plurality of physical storage resources includes a first tier and a second tier with a lower performance and cost relative to capacity than the first tier. A tier manager hosted on the information handling system and in electronic communication with the plurality of physical storage resources is configured to: determine a seek distance value, operation rate, operation size value, and elapsed time value for each page; and calculate a relative randomness value for each page using the seek distance value, operation rate, operation size value, and elapsed time value determined for each page. A classification module may assign a physical location for each page such that the relative randomness value for each page in the first tier is greater than the relative randomness value for each page in the second tier.

1. A system controller for an information handling system, the system controller comprising: a tier manager in electronic communication with a plurality of physical storage resources arranged in a first tier and a second tier; a combined logical address space of the plurality of physical storage resources divided into pages, each page occupying a predetermined and an equivalent portion of combined logical address space; the tier manager configured to perform operations including calculating a relative randomness value for each page, the relative randomness value being a measure of the fragmentation of data stored on each page, and comparing the relative randomness values for each page; and a classification module in electronic communication with ... assigning a physical location for each page such that the relative randomness value for each page in the first tier is greater than the relative randomness value for each page in the second tier, and automatically relocating the pages according to the assigned physical location for each page.

21-11-2013 publication date

TRANSACTION LOG RECOVERY

Number: US20130311750A1
Author: Jeddeloh Joseph M.
Assignee: MICRON TECHNOLOGY, INC.

The present disclosure includes methods for transaction log recovery in memory. One such method includes examining a number of entries saved in a transaction log to determine a write pattern, reading the memory based on the write pattern, updating the transaction log with information associated with data read from the memory based on the write pattern, and updating a logical address (LA) table using the transaction log.

1. A method for transaction log recovery, comprising: updating a transaction log with information read from pages, wherein the information is read from pages in an order based on a write pattern, and updating a logical address (LA) table with information missing from the LA table after a power interruption using the updated transaction log.
2. The method of claim 1, including storing a copy of the updated LA table in volatile memory.
3. The method of claim 2, including continually updating the updated LA table in the volatile memory and storing a copy of the continually updated LA table in non-volatile memory at least once every 300 seconds.
4. The method of claim 1, including determining the write pattern using information stored in the transaction log.
5. The method of claim 4, including determining where data would have been written next in the memory system using the write pattern.
6. A memory system, comprising: non-volatile memory configured to store a logical address (LA) table and a transaction log; and a controller configured to: read pages from a partially written block in the memory system, wherein the pages are read in an order based on a write pattern; update a transaction log with information associated with the pages read based on the write pattern; and update a logical address (LA) table in non-volatile memory using the updated transaction log.
7. The memory system of claim 6, wherein the controller is configured to, prior to a power interrupt, store the LA table in the non-volatile memory on a periodic ...
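
A simplified recovery sketch under the assumption that the write pattern inferred from the log is sequential; `read_page` is a hypothetical callback that returns the logical address recorded in a page's metadata, or None for an unwritten page.

```python
# Hypothetical transaction-log recovery: extend the log along the inferred write
# pattern, then rebuild the logical address (LA) table from the extended log.
def recover(saved_log, read_page, scan_limit=64):
    log = list(saved_log)                          # entries: (logical addr, physical page)
    next_page = log[-1][1] + 1 if log else 0       # sequential write pattern assumed
    for page in range(next_page, next_page + scan_limit):
        logical = read_page(page)                  # metadata of the page actually read
        if logical is None:                        # first unwritten page: recovery done
            break
        log.append((logical, page))                # update the log with what was found
    return {logical: page for logical, page in log}   # rebuilt LA table

pages = {0: 100, 1: 101, 2: 205}                   # physical page -> logical addr written
la_table = recover([(100, 0)], pages.get)
assert la_table == {100: 0, 101: 1, 205: 2}
```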

28-11-2013 publication date

Apparatus and method for accelerating operations in a processor which uses shared virtual memory

Number: US20130318323A1
Assignee: Intel Corp

An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.

12-12-2013 publication date

Information processing apparatus and method and program

Number: US20130332662A1
Assignee: Sony Corp

There is provided an information processing apparatus including a table saving unit configured to copy an address conversion table stored in a first storage area of a memory to a storage area other than the first storage area and save the copied address conversion table, a table recovery unit configured to recover the address conversion table of a saving time point by copying the saved address conversion table to the first storage area of the memory, and a rewrite control unit configured to, when there is a rewrite request for data of a virtual address associated with a physical address on the address conversion table after the address conversion table has been saved, change the physical address associated with the virtual address, and cause the rewritten data to be stored in a storage area corresponding to the changed physical address.

19-12-2013 publication date

Radix Table Translation of Memory

Number: US20130339654A1

A method includes receiving a request to access a desired block of memory. The request includes an effective address that includes an effective segment identifier (ESID) and a linear address, the linear address comprising a most significant portion and a byte index. Locating an entry, in a buffer, the entry including the ESID of the effective address. Based on the entry including a radix page table pointer (RPTP), performing, using the RPTP to locate a translation table of a hierarchy of translation tables, using the located translation table to translate the most significant portion of the linear address to obtain an address of a block of memory, and based on the obtained address, performing the requested access to the desired block of memory. 1. A computer implemented method for accessing memory locations , the method comprising:receiving a request to access a desired block of memory, the request comprising an effective address that includes an effective segment identifier (ESID) and a linear address, the linear address comprising a most significant portion and a byte index;locating, by a processor, an entry, in a buffer, the entry including the ESID of the effective address;based on the entry including a radix page table pointer (RPTP), performing;using the RPTP to locate a translation table of a hierarchy of translation tables;using the located translation table to translate the most significant portion of the linear address to obtain an address of a block of memory; andbased on the obtained address, performing the requested access to the desired block of memory.2. The method of claim 1 , wherein based on the entry including a VSID claim 1 , performing locating a page table entry of a group of translation table entries using a hash function to obtain an address of a block of memory.3. The method of claim 1 , wherein the using the obtained address comprises using the byte index of the linear address and the obtained address to form an address of the desired block ...

19-12-2013 publication date

MANAGING PAGE TABLE ENTRIES

Number: US20130339658A1

A method includes identifying, by a processor, a first page table entry (PTE) of a page table for translating virtual addresses to main storage addresses, the page table comprising a second page table entry contiguous with the second page table entry, determining with the processor whether the first PTE may be joined with the second PTE, the determining based on the respective pages of main storage being contiguous, and setting a marker in the page table for indicating that the main storage pages of identified by the first PTE and second PTEs are contiguous. 1. A computer implemented method for accessing memory locations , the method comprising:identifying, by a processor, a first page table entry (PTE) of a page table for translating virtual addresses to main storage addresses, the page table comprising a second page table entry contiguous with the second page table entry;determining, with the processor, whether the first PTE may be joined with the second PTE, the determining based on the respective pages of main storage being contiguous; andsetting a marker in the page table for indicating that the main storage pages of identified by the first PTE and second PTEs are contiguous.2. The method of claim 1 , wherein the method the method further comprises performing an address translation of a virtual address comprising:based on the virtual address, obtaining the first PTE; andbased on the marker, using the first PTE to translate virtual addresses to both the first page and the second page absent accessing the second PTE.3. The method of claim 1 , wherein the method further comprises executing a translation lookaside buffer (TLB) invalidate instruction for invalidating TLB entries associated with the first PTE and second PTE.4. The method of claim 1 , wherein the method further comprises starting a memory access routine for the first virtual address stored in the first page table entry (PTE) in the page table claim 1 , wherein the memory access routine performs: ...

16-01-2014 publication date

Methods of cache preloading on a partition or a context switch

Number: US20140019689A1
Assignee: International Business Machines Corp

A scheme referred to as a “Region-based cache restoration prefetcher” (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the “cold” cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown.

16-01-2014 publication date

DISPERSED STORAGE NETWORK VIRTUAL ADDRESS SPACE

Number: US20140019711A1
Assignee: CLEVERSAFE, INC.

A dispersed storage network utilizes a virtual address space to store data. The dispersed storage network includes a dispersed storage device for receiving a request relating to a data object stored in the dispersed storage network and determining a virtual memory address assigned to the data object. The virtual memory address is within a virtual memory address range of the virtual address space that is allocated to a vault associated with a user of the data object. The virtual memory address is further assigned to a data slice of a plurality of data slices of the data object. The dispersed storage device uses the virtual memory address to determine an identifier of a storage unit within the dispersed storage network that has the data slice stored therein.

1. A dispersed storage device for use within a dispersed storage network, comprising: an interface; a directory including a virtual address space; a storage unit table; and a processing module operable to: receive a request relating to a data object stored within the dispersed storage network via the interface, the request including an object name of the data object and a user identifier of a user associated with the data object; index into the directory using the object name and the user identifier to determine a virtual memory address assigned to the data object, the virtual memory address being within a virtual memory address range of the virtual address space allocated to a vault associated with the user, the virtual memory address further being assigned to a data slice of a plurality of data slices of the data object; and index into the storage unit table using the virtual memory address to determine an identifier of a storage unit within the dispersed storage network that has the data slice stored therein.
2. The dispersed storage device of claim 1, wherein the data object includes data segments, a number of data slices within each of the data segments corresponding to a number of pillars per ...

23-01-2014 publication date

MEMORY CONTROL METHOD UTILIZING MAIN MEMORY FOR ADDRESS MAPPING AND RELATED MEMORY CONTROL CIRCUIT

Number: US20140025921A1
Assignee: JMicron Technology Corp.

A memory control method, including: writing a write-in data which has a logical address into a write-in cache buffer; generating a write-in address mapping table which maps the logical address of the data to a physical address of a main memory, and writing the write-in address mapping table into a cached data mapping table write buffer; writing the write-in data into the main memory according to the write-in address mapping table; and when an available storage space of the cached data mapping table write buffer is reduced to reach a predetermined threshold, writing the address mapping table in the cached data mapping table write buffer into the main memory, and storing a corresponding main memory write-in address mapping table into a global mapping table buffer. 1. A memory control method , comprising:writing a write-in data which has a logical address into a write-in cache buffer;generating a write-in address mapping table which maps the logical address of the write-in data to a physical address of a main memory, and writing the write-in address mapping table into a cached data mapping table write buffer;writing the write-in data into the main memory according to the write-in address mapping table; andwhen an available storage space of the cached data mapping table write buffer is reduced to reach a predetermined threshold, writing the write-in address mapping table in the cached data mapping table write buffer into the main memory, and storing a corresponding main memory write-in address mapping table into a global mapping table buffer.2. The memory control method of claim 1 , wherein the main memory is a NAND flash memory.3. The memory control method of claim 1 , wherein the memory control method is a page-level memory control method.4. The memory control method of claim 1 , wherein the step of writing the write-in address mapping table in the cached data mapping table write buffer into the main memory comprises:writing the write-in address mapping table in the ...

23-01-2014 publication date

STORAGE SYSTEM INCLUDING MULTIPLE STORAGE APPARATUSES AND POOL VIRTUALIZATION METHOD

Number: US20140025924A1
Assignee:

There are a plurality of storage apparatuses including a first storage apparatus and a second storage apparatus. The first storage apparatus has a virtual volume composed of a plurality of virtual segments. At least the second storage apparatus has a pool composed of a plurality of real pages (real storage areas). The plurality of storage apparatuses each manage one or more pools including at least the pool in the second storage apparatus as one virtual pool. The virtual pool is composed of a plurality of virtual pages, and each virtual page corresponds to any of the real pages. The first storage apparatus receives a write command that specifies an address belonging to an unallocated virtual segment to which no virtual page is allocated, allocates a free virtual page from the virtual pool to the unallocated virtual segment, and writes data accompanying the write command to the real page corresponding to the allocated virtual page. 1. A storage system , comprising:a plurality of storage apparatuses including a first storage apparatus and a second storage apparatus,wherein said first storage apparatus is configured to have a first virtual volume composed of a plurality of virtual segments,at least said second storage apparatus is configured to have a pool composed of a plurality of real pages,said plurality of storage apparatuses are each configured to manage one or more pools including at least said pool in said second storage apparatus as one virtual pool,said virtual pool is composed of a plurality of virtual pages, each virtual page corresponding to any of the real pages, andsaid first storage apparatus is configured to receive a write command that specifies an address belonging to an unallocated virtual segment to which no virtual page is allocated, allocate a free virtual page from said virtual pool to said unallocated virtual segment, and write data accompanying said write command to the real page corresponding to the allocated virtual page.2. A storage system ...

30-01-2014 publication date

Computing device and virtual device control method for controlling virtual device by computing system

Number: US20140032874A1
Assignee: SAMSUNG ELECTRONICS CO LTD

A virtual device control method of a computing device which includes a nonvolatile memory is provided. The virtual device control method includes receiving a virtualization request; assigning a first part of the nonvolatile memory to a virtual memory; assigning a second part of the nonvolatile memory to a virtual storage; and generating a virtual device including the assigned virtual memory and virtual storage.

20-02-2014 publication date

Shared virtual memory

Number: US20140049551A1
Assignee: Intel Corp

A method and system for shared virtual memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a system memory. A CPU virtual address space may be created, and the surface may be mapped to the CPU virtual address space within a CPU page table. The method also includes creating a GPU virtual address space equivalent to the CPU virtual address space, mapping the surface to the GPU virtual address space within a GPU page table, and pinning the surface.

27-02-2014 publication date

Load Page Table Entry Address Instruction Execution Based on an Address Translation Format Control Field

Number: US20140059321A1

What is provided is a load page table entry address function defined for a machine architecture of a computer system. In one embodiment, a machine instruction is obtained which contains an opcode indicating that a load page table entry address function is to be performed. The machine instruction contains an M field, a first field identifying a first general register, and a second field identifying a second general register. Based on the contents of the M field, an initial origin address of a hierarchy of address translation tables having at least one segment table is obtained. Based on the obtained initial origin address, dynamic address translation is performed until a page table entry is obtained. The page table entry address is saved in the identified first general register. 1. A computer program product for performing a load page table entry address (LPTEA) function in a computer system of a machine architecture , said computer system configured to translate a virtual address into a translated address of a block of data in main storage , the computer system having a hierarchy of translation tables for translation of said virtual address , said load page table entry address function defined for said machine architecture , the computer program product comprising:a storage medium readable by said computer system, said computer readable medium storing instructions for performing:executing a machine instruction, said machine instruction comprising an opcode for a Load Page Table Entry Address (LPTEA) instruction, the executing comprising:obtaining an address to be translated, the translation accessing associated translation tables of the hierarchy of translation tables;determining, based on a format control field of a translation table entry, one associated translation table of said associated translation tables;obtaining an address field from an associated entry of the one associated translation table; andsaving the address field in a general register specified by ...

27-03-2014 publication date

VIRTUAL ADDRESSING

Number: US20140089630A1
Author: Pignatelli David J.
Assignee:

A method of relating the user logical block address (LBA) of a page of user data to the physical block address (PBA) where the data is stored in a RAIDed architecture reduces the size of the tables by constraining the location to which data of a plurality of LBAs may be written. Chunks of data from a plurality of LBAs may be stored in a common page of memory and the common memory page is described by a virtual block address (VBA) referencing the PBA, and each of the LBAs uses the same VBA to read the data.

1. A method of managing a RAIDed memory system, comprising: configuring a processor for maintaining a first lookup table representing a mapping of a user LBA to a virtual block (VBA) address; mapping the VBA to a RAID stripe wherein a chunk of data of the LBA is stored on a memory device of a plurality of memory devices; maintaining a second lookup table representing a mapping of the VBA to a physical block address (PBA) of a memory device of a plurality of memory devices; selecting chunks from a plurality of LBAs to form a page of data having a VBA value; and writing the data to the plurality of memory devices in a RAID stripe; wherein each LBA of the plurality of LBAs forming the page of data map to the VBA value of the page.
2. The method of claim 1, further comprising: reading the data corresponding to a user LBA by performing a lookup of the corresponding VBA address in the first lookup table; performing a lookup of the correspondence of the VBA and the PBA in the second lookup table; and reading the data from the PBA.
3. The method of claim 1, wherein the memory devices are managed by a controller and operations are scheduled so as to provide erase hiding.
4. The method of claim 1, wherein the VBA metadata includes a mapping of the chunk location to one of the plurality of LBAs referencing the second table.
5. The method of claim 1, wherein the first and the second lookup tables are maintained in volatile memory.
6. The method of claim 1, ...
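
The two-table read path reduces to a pair of lookups; the concrete table contents below are only an example of several LBAs whose chunks were packed into one page and therefore share a VBA.

```python
# Hypothetical LBA -> VBA -> PBA read path in a RAIDed flash array.
lba_to_vba = {100: 7, 101: 7, 102: 7}   # chunks of LBAs 100..102 were packed into page VBA 7
vba_to_pba = {7: 0x3F20}                # VBA 7 -> physical block address of that page

def read_lba(lba):
    vba = lba_to_vba[lba]               # first lookup table
    pba = vba_to_pba[vba]               # second lookup table
    return pba                          # caller extracts its chunk from the page at this PBA

assert read_lba(100) == read_lba(102) == 0x3F20
```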

03-04-2014 publication date

Solid state memory device logical and physical partitioning

Number: US20140095773A1
Assignee: International Business Machines Corp

Embodiments relate to solid state memory device including a storage array having a plurality of physical storage devices and the storage array includes a plurality of partitions. The solid state memory device also includes a controller comprising a plurality of mapping tables, wherein each of the plurality of mapping tables corresponds to one of the plurality of partitions. Each of the plurality of mapping tables is configured to store a physical location and a logical location of data stored in its corresponding partition.

10-04-2014 publication date

Adjunct component to provide full virtualization using paravirtualized hypervisors

Number: US20140101360A1
Author: Michael K. Gschwind
Assignee: International Business Machines Corp

A system configuration is provided with a paravirtualizing hypervisor that supports different types of guests, including those that use a single level of translation and those that use a nested level of translation. When an address translation fault occurs during a nested level of translation, an indication of the fault is received by an adjunct component. The adjunct component addresses the address translation fault, at least in part, on behalf of the guest.

10-04-2014 publication date

SYSTEM SUPPORTING MULTIPLE PARTITIONS WITH DIFFERING TRANSLATION FORMATS

Number: US20140101402A1
Author: Gschwind Michael K.

A system configuration is provided with multiple partitions that supports different types of address translation structure formats. The configuration may include partitions that use a single level of translation and those that use a nested level of translation. Further, differing types of translation structures may be used. The different partitions are supported by a single hypervisor. 1. A method of facilitating memory access , said method comprising:providing a first partition within a system configuration, the first partition configured to support an operation system (OS) designed for a first address translation architecture, the first partition not supporting an OS designed for a second address translation architecture; andproviding a second partition within the system configuration, the second partition configured to support the OS designed for the second address translation architecture, the second partition not supporting the OS designed for the first address translation architecture, wherein the first address translation architecture is structurally different from the second address translation architecture.2. The method of claim 1 , wherein the first address translation architecture uses a hash structure and the second address translation architecture uses a hierarchical table structure.3. The method of claim 1 , wherein the first partition is a paravirtualized partition in which a guest of the first partition assists in handling address translation faults corresponding to host translations claim 1 , and the second partition is a virtualized partition in which handling of address translation faults corresponding to host translations is independent of assistance from a guest of the second partition.4. The method of claim 1 , wherein the first partition uses a single level address translation mechanism for translating guest virtual addresses to host physical addresses claim 1 , and the second partition uses a nested level address translation mechanism for ...

More details
Publication date: 06-01-2022

MEMORY SYSTEM, MEMORY CONTROLLER, AND METHOD OF OPERATING MEMORY SYSTEM

Number: US20220004496A1
Author: HA Chan Ho
Assignee:

Disclosed are a memory system, a memory controller, and a method of operating a memory system. The memory system may control the memory device to store data into zones of memory blocks in the memory device by assigning each data to be written with an address subsequent to a most recently written address in a zone, store journal information including mapping information between a logical address and a physical address for one of the one or more zones in a journal cache, search for journal information corresponding to a target zone targeted to write data when mapping information for the target zone among the one or more zones is updated, and replace the journal information corresponding to the target zone with journal information including the updated mapping information.

1. A memory system comprising: a memory device including memory cells for storing data and operable to perform an operation on one or more memory cells including a read operation for reading data stored in one or more memory cells, a program operation for writing new data into one or more memory cells, or an erase operation for deleting stored data in one or more memory cells; and a memory controller in communication with the memory device and configured to control the memory device to perform an operation, wherein the memory controller is further configured to: control the memory device to store data into zones of memory blocks in the memory device by assigning each data to be written with an address subsequent to a most recently written address in a zone, wherein the zones of memory blocks are split from a namespace in the memory device; store, in a journal cache, journal information comprising mapping information between a logical address and a physical address for one of the one or more zones; search, in the journal cache, for journal information corresponding to a target zone targeted to write data, when mapping information for the target zone among the one or more zones is updated; and replace the ...
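A minimal sketch of the journal-cache behaviour described in the abstract — find the cached journal record for the zone whose mapping just changed and replace it, or claim a free slot — might look like the following C fragment. The cache size, structure layout, and eviction policy are assumptions for illustration, not details from the application.

#include <stdint.h>

#define JOURNAL_CACHE_SLOTS 8   /* assumed small cache, one slot per cached zone */

/* Hypothetical journal record: latest logical-to-physical mapping info for a zone. */
typedef struct {
    int      valid;
    uint32_t zone_id;
    uint64_t logical_addr;
    uint64_t physical_addr;
} zone_journal_t;

static zone_journal_t journal_cache[JOURNAL_CACHE_SLOTS];

/* When mapping info for the target zone changes, find its cached journal
 * entry (if any) and replace it with the updated mapping; otherwise take
 * the first free slot found during the scan. */
void journal_cache_update(uint32_t zone_id, uint64_t laddr, uint64_t paddr)
{
    zone_journal_t *slot = NULL;

    for (int i = 0; i < JOURNAL_CACHE_SLOTS; i++) {
        if (journal_cache[i].valid && journal_cache[i].zone_id == zone_id) {
            slot = &journal_cache[i];           /* existing entry: replace it */
            break;
        }
        if (!slot && !journal_cache[i].valid)
            slot = &journal_cache[i];           /* remember a free slot */
    }
    if (!slot)
        slot = &journal_cache[0];               /* trivial eviction policy */

    slot->valid         = 1;
    slot->zone_id       = zone_id;
    slot->logical_addr  = laddr;
    slot->physical_addr = paddr;
}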

More details
Publication date: 06-01-2022

Data processing method for improving access performance of memory device and data storage device utilizing the same

Number: US20220004498A1
Author: CHEN Yu-Ta
Assignee:

A data storage device includes a memory device including multiple memory blocks corresponding to multiple sub-regions and a memory controller. The memory controller accesses the memory device and updates content of a read count table in response to a read command with at least one designated logical address issued by a host device. Each field of the read count table records a read count associated with one sub-region and the content of the read count table is updated by increasing the read count associated with the sub-region that the designated logical address belongs to. The memory controller selects at least one sub-region to be rearranged according to the content of the read count table and performs a data rearrangement procedure to move data of logical addresses belonging to the selected at least one sub-region to a first memory space of the memory device having continuous physical addresses. 1. A data storage device , comprising:a memory device, comprising a plurality of memory blocks, wherein the memory blocks correspond to a plurality of logical units, each logical unit corresponds to a plurality of logical addresses, the logical addresses corresponding to each logical unit are divided into a plurality of regions and each region is further divided into a plurality of sub-regions; anda memory controller, coupled to the memory device and configured to access the memory device and update content of a read count table in response to a read command with at least one designated logical address issued by a host device,wherein the read count table comprises a plurality of fields, each field is configured to record a read count that is associated with one sub-region and the content of the read count table is updated by increasing the read count associated with the sub-region that the designated logical address belongs to, andwherein the memory controller is further configured to select at least one sub-region to be rearranged according to the content of the read ...
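The read-count bookkeeping and hot sub-region selection described above can be pictured with a short sketch. Everything below (sub-region count, logical addresses per sub-region, threshold, function names) is assumed for illustration; the application does not prescribe these values.

#include <stdint.h>

#define NUM_SUBREGIONS     1024   /* assumed layout for the sketch */
#define LBAS_PER_SUBREGION 4096
#define HOT_THRESHOLD      10000  /* assumed rearrangement trigger */

static uint32_t read_count[NUM_SUBREGIONS];   /* one field per sub-region */

/* On every host read, bump the counter of the sub-region that the
 * designated logical address belongs to. */
void account_read(uint64_t lba)
{
    uint32_t sub = (uint32_t)((lba / LBAS_PER_SUBREGION) % NUM_SUBREGIONS);
    read_count[sub]++;
}

/* Pick the most frequently read sub-region as the rearrangement candidate;
 * its data would then be moved to physically contiguous space. */
int select_subregion_to_rearrange(void)
{
    int best = -1;
    uint32_t best_count = HOT_THRESHOLD;

    for (int i = 0; i < NUM_SUBREGIONS; i++) {
        if (read_count[i] >= best_count) {
            best_count = read_count[i];
            best = i;
        }
    }
    return best;   /* -1 means nothing crossed the threshold */
}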

More details
Publication date: 06-01-2022

Logging Pages Accessed From I/O Devices

Number: US20220004503A1
Assignee: Google LLC

Systems and methods of tracking page state changes are provided. An input/output (I/O) device is communicatively coupled to a host having a memory. The I/O device receives a command from the host to monitor page state changes in a region of the memory allocated to a process. The I/O device, bypassing a CPU of the host, modifies data stored in the region based on a request, for example, received from a client device via a computer network. The I/O device records the modification to a bitmap by setting a bit in the bitmap that corresponds to a location of the data in the memory. The I/O device transfers contents of the bitmap to the CPU, wherein the CPU completes the live migration by copying sections of the first region indicated by the bitmap to a second region of memory. In some implementations, the process can be a virtual machine, a user space application, or a container.

1. A method of tracking page state changes, the method comprising: receiving, at an input/output (I/O) device communicatively coupled to a host having a physical memory, a command from the host to monitor page state changes in a first page of the physical memory allocated to a process executing on the host; modifying, by the I/O device, data stored in a first portion of the first page based on a request; recording, by the I/O device, the modification to a bitmap by setting a bit in the bitmap that corresponds to a location of the data in the physical memory; storing, by the I/O device, the bit in a first buffer in a general purpose memory of the host; and copying, by the I/O device or the host, the first portion of the first page indicated by the bitmap in the first buffer to a second portion of a second page of physical memory, wherein the second page of physical memory can be in the physical memory of the host, or in a second physical memory of a second host.
2. The method of claim 1, wherein the request includes an I/O virtual address indicating a location of the data in a virtual memory of the process ...
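A rough sketch of the dirty-page bitmap the I/O device maintains is shown below; the region size, page size, and function names are assumptions, and the transfer to the host CPU is modelled as a plain memory copy.

#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT    12
#define TRACKED_PAGES (1u << 20)            /* assumed monitored region size */

static uint64_t dirty_bitmap[TRACKED_PAGES / 64];

/* Record that the I/O device modified data at 'offset' within the monitored
 * region: set the bit whose index corresponds to the touched page. */
void mark_page_dirty(uint64_t offset)
{
    uint64_t page = offset >> PAGE_SHIFT;
    if (page >= TRACKED_PAGES)
        return;
    dirty_bitmap[page / 64] |= 1ull << (page % 64);
}

/* Hand the bitmap to the host CPU (here a plain copy into a host buffer) and
 * clear it so the next pass only reports pages dirtied after the transfer. */
void transfer_bitmap(uint64_t *host_buffer)
{
    memcpy(host_buffer, dirty_bitmap, sizeof(dirty_bitmap));
    memset(dirty_bitmap, 0, sizeof(dirty_bitmap));
}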

More details
Publication date: 01-01-2015

Shared Virtual Memory Between A Host And Discrete Graphics Device In A Computing System

Number: US20150002526A1
Author: Ginzburg Boris
Assignee:

In one embodiment, the present invention includes a device that has a device processor and a device memory. The device can couple to a host with a host processor and host memory. Both of the memories can have page tables to map virtual addresses to physical addresses of the corresponding memory, and the two memories may appear to a user-level application as a single virtual memory space. Other embodiments are described and claimed. 1. A system on chip (SoC) comprising:a plurality of cores;a host memory controller to couple to a host memory;a plurality of graphics units coupled to the plurality of cores; anda device memory controller to couple to a device memory, the plurality of graphics units and the plurality of cores having a shared virtual address space, wherein on a page fault in a first graphics unit, the first graphics unit is to request a missing page from the host memory via a host page table that maps first virtual addresses to physical addresses of the host memory, the first graphics unit having a device page table to map second virtual addresses to physical addresses of the device memory.2. The SoC of claim 1 , wherein the host memory and the device memory appear to a user-level application as a single virtual memory space.3. The SoC of claim 1 , wherein the device memory is to act as a page-based cache memory of the host memory.4. The SoC of claim 3 , wherein coherency between the device memory and the host memory is maintained implicitly without programmer interaction.5. The SoC of claim 1 , wherein one of the plurality of cores is to provide the missing page from the host memory to the device processor if present therein claim 1 , and to set a not present indicator in the host memory for the corresponding page if the missing page is write enabled claim 1 , wherein when the not present indicator is set claim 1 , the plurality of cores is prevented from accessing the corresponding page in the host memory.6. The SoC of claim 1 , wherein one of the ...

More details
Publication date: 05-01-2017

STORAGE DEVICE INCLUDING NONVOLATILE SEMICONDUCTOR MEMORIES WITH DIFFERENT CHARACTERISTICS, METHOD FOR CONTROLLING STORAGE DEVICE, AND COMPUTER-READABLE NONVOLATILE STORAGE MEDIUM FOR STORING PROGRAM

Number: US20170003892A1
Author: SEKIDO Kazunori
Assignee:

According to one embodiment, a storage device includes a storage, first data in which a sequence number indicating a write-completion order is associated with each erase unit area included in areas of the storage, second data indicating a relationship between each write interval and each write destination, a selection module which obtains the erase unit area corresponding to a logical address of target data to be written, calculates a write interval of the target data from a difference between the sequence number at an occurrence time of writing and the sequence number corresponding to the erase unit area of the first data, and selects the write destination corresponding to the write interval of the target data, and a write module which writes the target data to the selected write destination, and changes the sequence number when writing is completed for one erase unit area. 1. A storage device comprising:a storage module including a plurality of areas having different upper limits in the number of erases;a management storage module which stores first management data in which a sequence number indicating an order of write-completion is associated with each erase unit area included in the plurality of areas, second management data indicating a relationship between each write interval and each write destination area, and address translation data in which a logical address is associated with the erase unit area;a selection module which, when target data to be written is written to the storage module, obtains the erase unit area corresponding to the logical address of the target data to be written based on the address translation data, calculates a write interval of the target data to be written from a difference between the sequence number at a time of occurrence of writing the target data to be written and the sequence number corresponding to the erase unit area of the first management data based on the first management data, and selects the write destination area ...
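The write-interval calculation — current sequence number minus the sequence number recorded for the erase unit holding the overwritten data — maps naturally onto a few lines of C. The geometry, the number of write destinations, and the interval thresholds below are invented for the example.

#include <stdint.h>

#define NUM_ERASE_UNITS   4096          /* assumed geometry for the sketch */
#define NUM_DESTINATIONS  4             /* e.g. hot ... cold write streams */

static uint64_t global_seq;                       /* bumped per completed erase unit */
static uint64_t unit_seq[NUM_ERASE_UNITS];        /* first management data */
static uint64_t interval_limit[NUM_DESTINATIONS] = {  /* second management data */
    64, 512, 4096, UINT64_MAX
};

/* Write interval = difference between the sequence number at the time of the
 * write and the sequence number recorded for the erase unit that currently
 * holds the logical address being overwritten. */
int select_write_destination(uint32_t erase_unit)
{
    uint64_t interval = global_seq - unit_seq[erase_unit];

    for (int d = 0; d < NUM_DESTINATIONS; d++) {
        if (interval <= interval_limit[d])
            return d;                   /* smaller interval -> hotter data */
    }
    return NUM_DESTINATIONS - 1;
}

/* Called when writing completes for one erase unit: stamp it and advance the clock. */
void complete_erase_unit(uint32_t erase_unit)
{
    unit_seq[erase_unit] = ++global_seq;
}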

More details
Publication date: 07-01-2016

DATA-STORAGE DEVICE AND FLASH MEMORY CONTROL METHOD

Number: US20160004468A1
Author: CHENG Chang-Kai
Assignee:

A data-storage device having a flash memory allocated to provide data-storage space, a valid page count table, logical-to-physical address mapping information, and an invalid block record. The data-storage device further having a controller, allocating the data-storage space to store data issued from a host, and establishing and maintaining the valid page count table, the logical-to-physical address mapping information, and the invalid block record in the FLASH memory to manage the data-storage space. A FLASH memory control method is also provided. 1. A data-storage device , comprising:a FLASH memory, allocated to provide data-storage space, a valid page count table, logical-to-physical address mapping information, and an invalid block record; anda controller, allocating the data-storage space to store data issued from a host, and establishing and maintaining the valid page count table, the logical-to-physical address mapping information, and the invalid block record in the FLASH memory to manage the data-storage space, the controller updates the logical-to-physical address mapping information after an update of the valid page count table is completed; and', 'the controller maintains the invalid block record based on the valid page count table., 'wherein2. The data-storage device as claimed in claim 1 , wherein the controller further records an event record into the FLASH memory to record memory allocations occurring after the latest complete round of updates of the valid page count table claim 1 , the logical-to-physical address mapping information claim 1 , and the invalid block record.3. The data-storage device as claimed in claim 2 , wherein the controller updates the valid page count table based on a comparison between the event record and the logical-to-physical address mapping information.4. The data-storage device as claimed in claim 3 , wherein:every round of updating the valid page count table, the logical-to-physical address mapping information, and the ...

More details
Publication date: 07-01-2016

INTERNAL STORAGE, EXTERNAL STORAGE CAPABLE OF COMMUNICATING WITH THE SAME, AND DATA PROCESSING SYSTEM INCLUDING THE STORAGES

Number: US20160004634A1
Author: Kim Dong Min
Assignee:

A memory controller, a data processing system, and an electronic device are provided. The memory controller is configured to share a function of one of an internal storage and an external storage in a union mode in which the external storage and the internal storage are logically unified with each other. 1. A memory controller configured to share a function of one of an internal storage and an external storage in a union mode in which the external storage and the internal storage are logically unified with each other ,wherein the memory controller is configured to translate a logical address into a physical address based on a global mapping table which maps the logical address to the physical address of each of the internal storage and the external storage and is further configured to determine which of the internal storage and the external storage processes data transmitted from a host.2. The memory controller of claim 1 , wherein claim 1 , in the union mode claim 1 , the memory controller is configured to control all data of a file to be stored in either of the internal storage and the external storage according to control of the host.3. The memory controller of claim 1 , wherein the memory controller is configured to store data in the internal storage and the external storage in a distributed fashion at a write request of the host.4. The memory controller of claim 1 , wherein in response to the memory controller receiving a request from the host to read data from the internal storage while the external storage is performing a write operation claim 1 , the memory controller is configured to perform a read operation to read the data from the internal storage to be performed.5. The memory controller of claim 1 , wherein the memory controller is configured to collect feature information of the internal storage claim 1 , provide the feature information of the internal storage to the external storage claim 1 , and receive feature information of the external storage ...

More details
Publication date: 05-01-2017

MEMORY MAPPING FOR A GRAPHICS PROCESSING UNIT

Number: US20170004598A1
Assignee: Intel Corporation

An electronic device is described herein. The electronic device may include a page walker module to receive a page request of a graphics processing unit (GPU). The page walker module may detect a page fault associated with the page request. The electronic device may include a controller, at least partially comprising hardware logic. The controller is to monitor execution of the page request having the page fault. The controller determines whether to suspend execution of a work item at the GPU associated with the page request having the page fault, or to continue execution of the work item based on factors associated with the page request. 1. A system , comprising:a display device;a memory, wherein virtual memory addresses are dynamically mapped to physical memory addresses of the memory;a graphics processing unit (GPU) to generate a work item associated with rendering an image for display using the display device, wherein the work item is to indicate a page request;a page walker to receive the page request and to detect a page fault associated with the page request; anda controller at least partially comprising hardware logic, wherein the controller is to:monitor execution of the page request having the page fault; anddetermine whether to suspend the work item having the page fault from execution at the GPU, or to continue execution of the work item, the determination based on a factor of the page request in a context of other page requests, wherein the factor comprises a total number of page requests pending and a time for which a page request associated with the work item has been pending.2. The system of claim 1 , wherein the page walker is to detect page faults based on attributes of the page request claim 1 , the attributes comprising:existence of a paging entry associated with the page request;read/write attributes;privilege levels;execution properties; orany combination thereof.3. The system of claim 1 , wherein the page request indicates a virtual address ...

More details
Publication date: 05-01-2017

Rendering graphics data on demand

Number: US20170004647A1
Author: Mark Grossman
Assignee: Microsoft Technology Licensing LLC

Methods and systems for rendering graphics data on demand are described herein. One or more page tables are stored that map virtual memory addresses to physical memory addresses and task IDs. A page fault is experienced when a task running on a GPU accesses, using a virtual memory address, a page of memory that has not been written to by the GPU. Context switching is performed in response to the page fault, which frees up the GPU. GPU threads are identified and executed in dependence on the task ID associated with the virtual memory address being used when the page fault occurred to thereby cause the GPU to write to the page of memory associated with the page fault. Further context switching is performed to retrieve and return the state of the task that was running on the GPU when the page fault occurred, and the task is resumed.

More details
Publication date: 04-01-2018

MEMORY ALLOCATION TECHNIQUES AT PARTIALLY-OFFLOADED VIRTUALIZATION MANAGERS

Number: US20180004539A1
Assignee: Amazon Technologies, Inc.

An offloaded virtualization management component of a virtualization host receives an indication from a hypervisor of a portion of main memory of the host for which memory allocation decisions are not to be performed by the hypervisor. The offloaded virtualization management component assigns a subset of the portion to a particular guest virtual machine and provides an indication of the subset to the hypervisor. 1. A system , comprising:one or more processors of a virtualization host;a main memory of the virtualization host; andone or more offloaded virtualization manager components including a first offloaded virtualization manager component, wherein the first offloaded virtualization manager component is accessible from the one or more processors via a peripheral interconnect; designate a first portion of the main memory for a first page table for memory pages of a first size, wherein at least some memory pages of the first size are allocated on behalf of one or more subcomponents of the hypervisor;', 'reserve at least a second portion of the main memory for an executable object to be used for a live update of the hypervisor;', 'in response to a query from the first offloaded virtualization manager component, provide an indication of at least a third portion of the main memory which is available for one or more guest virtual machines; and, 'wherein the memory comprises program instructions that when executed on the one or more processors implement a hypervisor configured to assign a subset of the third portion of main memory to a first guest virtual machine to be instantiated at the virtualization host; and', 'transmit, to the hypervisor, paging metadata indicating at least a location of a second page table for memory pages of a second size, wherein the second page table is to be set up by the hypervisor and used on behalf of the first guest virtual machine., 'wherein the first offloaded virtualization manager component is configured to2. The system as recited in ...

More details
Publication date: 04-01-2018

Unified Paging Scheme for Dense and Sparse Translation Tables on Flash Storage Systems

Number: US20180004650A1
Assignee:

A system comprising a processor and a memory storing instructions that, when executed, cause the system to receive a first translation table entry for a logical block, map the first translation table entry to a first dump unit, the first dump unit included in an array of dump units, identify a second translation table entry for the logical block in the first dump unit, the second translation table entry also being stored in a storage device, and generate a linked list in the storage device from the second translation table entry associated with the first dump unit, the linked list identifying previous translation table entries associated with the logical block. 1. A method comprising:receiving a first translation table entry for a logical block;mapping the first translation table entry to a first dump unit, the first dump unit included in an array of dump units;identifying a second translation table entry for the logical block in the first dump unit, the second translation table entry being stored in a storage device; andgenerating a linked list in the storage device from the second translation table entry associated with the first dump unit, the linked list identifying previous translation table entries associated with the logical block.2. The method of claim 1 , wherein mapping of the first translation table entry is based on a hash function claim 1 , the mapping being a hash of on a logical block number of the logical block of the first translation table.3. The method of claim 1 , further comprising:allocating a space in memory for the array of dump units.4. The method of claim 1 , wherein mapping includes storing the first translation table entry in the first dump unit.5. The method of claim 1 , further comprising:storing the first translation table entry in a logical space in memory.6. The method of claim 1 , wherein the linked list includes updates to the logical block in a reverse chronological order claim 1 , a most recent update being at the front of the ...
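One way to picture the dump-unit array and the reverse-chronological list of earlier entries is the sketch below. It hashes the logical block number to a dump unit and links the previous head entry behind the new one; a real implementation, per the description, would link only entries for the same logical block, so this is a simplification with invented names.

#include <stdint.h>
#include <stdlib.h>

#define NUM_DUMP_UNITS 256      /* assumed size of the dump-unit array */

/* Translation-table entry plus a link to an older entry in the same dump
 * unit, giving a list in reverse chronological order. */
typedef struct tt_entry {
    uint64_t logical_block;
    uint64_t physical_addr;
    struct tt_entry *prev;      /* older entry, if any */
} tt_entry_t;

static tt_entry_t *dump_unit[NUM_DUMP_UNITS];   /* head entry per dump unit */

/* Map a logical block to its dump unit with a simple hash of the logical
 * block number. */
static unsigned dump_unit_of(uint64_t lbn)
{
    return (unsigned)(lbn % NUM_DUMP_UNITS);
}

/* Record a new translation for 'lbn': the entry previously at the head of
 * the dump unit is linked behind the new one instead of being discarded. */
int record_translation(uint64_t lbn, uint64_t paddr)
{
    unsigned u = dump_unit_of(lbn);
    tt_entry_t *e = malloc(sizeof(*e));
    if (!e)
        return -1;
    e->logical_block = lbn;
    e->physical_addr = paddr;
    e->prev = dump_unit[u];     /* older mapping stays reachable */
    dump_unit[u] = e;
    return 0;
}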

More details
Publication date: 04-01-2018

Checkpoint Based Technique for Bootstrapping Forward Map Under Constrained Memory for Flash Devices

Number: US20180004651A1
Assignee:

A system comprising a processor and a memory storing instructions that, when executed, cause the system to determine a first value of a first checkpoint associated with a first snapshot, receive a second value of a second checkpoint associated with a translation table entry from an additional source, determine whether the second value of the second checkpoint is after the first value of the first checkpoint, in response to determining that the second value of the second checkpoint is after the first value of the first checkpoint, retrieve the translation table entry associated with the second checkpoint from the additional source, and reconstruct the translation table using the translation table entry associated with the second checkpoint.

1. A method for reconstructing a translation table in a memory comprising: determining a first value of a first checkpoint associated with a first snapshot; receiving a second value of a second checkpoint associated with a translation table entry from an additional source; determining whether the second value of the second checkpoint is after the first value of the first checkpoint; in response to determining that the second value of the second checkpoint is after the first value of the first checkpoint, retrieving the translation table entry associated with the second checkpoint from the additional source; and reconstructing the translation table using the translation table entry associated with the second checkpoint.
2. The method of claim 1, wherein the first snapshot includes a free running counter denoting a timestamp of sufficient granularity.
3. The method of claim 1, wherein the first snapshot includes a counter associated with an update of a reverse translation map, wherein the counter is incremented each time the reverse translation map is persisted.
4. The method of claim 1, wherein the first snapshot includes a counter associated with a meta-log entry, wherein the counter is incremented each time a new ...
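The checkpoint comparison at the heart of the rebuild — apply an entry from an additional source only if its checkpoint is after the snapshot's checkpoint — is captured by the small helper below; the types and the direct indexing by logical block number are assumptions for the sketch.

#include <stdint.h>
#include <stddef.h>

/* A checkpoint is modelled as a monotonically increasing counter value. */
typedef uint64_t checkpoint_t;

typedef struct {
    uint64_t     logical_block;
    uint64_t     physical_addr;
    checkpoint_t cp;            /* checkpoint recorded with this entry */
} tt_entry_t;

/* Decide, entry by entry, whether an entry found in an additional source
 * (e.g. a meta-log scanned after loading the snapshot) supersedes what the
 * snapshot already gave us: only entries stamped after the snapshot
 * checkpoint are applied to the table being reconstructed. */
void apply_if_newer(tt_entry_t *table, size_t table_size,
                    checkpoint_t snapshot_cp, const tt_entry_t *candidate)
{
    if (candidate->logical_block >= table_size)
        return;
    if (candidate->cp > snapshot_cp)            /* "after" the first checkpoint */
        table[candidate->logical_block] = *candidate;
}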

More details
Publication date: 04-01-2018

Translation Lookup and Garbage Collection Optimizations on Storage System with Paged Translation Table

Number: US20180004652A1
Assignee:

A system comprising a processor and a memory storing instructions that, when executed, cause the system to receive a request for garbage collection, identify a range of physical blocks in a storage device, query a bitmap, the bitmap having a bit for each physical block in the range of physical blocks, determine a status associated with a first bit from the bitmap, in response to determining the status associated with the first bit is a first state, add a first physical block associated with the first bit to a list of physical blocks for relocation, and relocate the list of physical blocks. 1. A method comprising:receiving a request for garbage collection;identifying a range of physical blocks in a storage device;querying a bitmap, the bitmap having a bit for each physical block in the range of physical blocks;determining a status associated with a first bit from the bitmap;in response to determining the status associated with the first bit is a first state, adding a first physical block associated with the first bit to a list of physical blocks for relocation; andrelocating the list of physical blocks.2. The method of claim 1 , wherein a size of the bitmap corresponds to a size of the storage device.3. The method of claim 1 , wherein the first state indicates an active mapping associated with the first physical block.4. The method of claim 1 , further comprising:receiving a request to pre-fetch a translation table entry;in response to receiving the request to pre-fetch, marking the translation table entry in the memory; andgenerating a non-zero reference count for the translation table entry.5. The method of claim 3 , wherein the marked translation table entry is associated with an expiration timeout.6. The method of claim 1 , further comprising:receiving a write request for a first logical block;mapping the first logical block to a second physical block;allocating a second bit associated with the second physical block;assigning the first state to the second bit ...
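A compact view of the bitmap query during garbage collection: walk the candidate block range and collect every block whose bit is in the "still mapped" state into the relocation list. The helper names and the one-bit-per-block layout are assumptions for this sketch.

#include <stdint.h>
#include <stddef.h>

/* One bit per physical block; a set bit (the "first state" of the text)
 * means the block still carries an active mapping and must be relocated
 * before the range can be erased. */
static inline int bit_is_set(const uint64_t *bitmap, size_t block)
{
    return (int)((bitmap[block / 64] >> (block % 64)) & 1u);
}

/* Walk the candidate range, query the bitmap, and collect the blocks that
 * need relocation. Returns how many block numbers were written to 'out'. */
size_t collect_blocks_for_relocation(const uint64_t *bitmap,
                                     size_t first_block, size_t last_block,
                                     size_t *out, size_t out_cap)
{
    size_t n = 0;
    for (size_t b = first_block; b <= last_block && n < out_cap; b++) {
        if (bit_is_set(bitmap, b))
            out[n++] = b;       /* still mapped: add to the relocation list */
    }
    return n;
}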

More details
Publication date: 04-01-2018

Efficient Management of Paged Translation Maps In Memory and Flash

Number: US20180004656A1
Assignee: Western Digital Technologies Inc

A system comprising a processor and a memory storing instructions that, when executed, cause the system to receive a request to select translation table entries to store in a storage device, determine a plurality of translation table entries associated with a dump unit, allocate the plurality of translation table entries into a first group of translation table entries associated with a first node and a second group of translation table entries associated with a second node, the first group of translation table entries being frequently accessed and the second group of translation table entries being rarely accessed. determine a first status associated with a first recent access bit for a first translation table entry, the first translation table entry being included in the first group of translation table entries, and add the first translation table entry to the second group of translation table entries.
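The hot/cold split driven by a recent-access bit can be sketched as a periodic demotion pass; the field names and the two-group encoding below are illustrative assumptions rather than the patented structure.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint64_t logical_block;
    uint64_t physical_addr;
    uint8_t  recently_accessed;   /* set on lookup, cleared each pass */
    uint8_t  group;               /* 0 = frequently accessed, 1 = rarely accessed */
} tt_entry_t;

enum { GROUP_HOT = 0, GROUP_COLD = 1 };

/* Periodic pass over the entries of one dump unit: anything whose
 * recent-access bit is clear is moved from the frequently accessed group to
 * the rarely accessed group, which is the group preferred when entries must
 * be written out to flash. */
void demote_unreferenced(tt_entry_t *entries, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        if (entries[i].group == GROUP_HOT && !entries[i].recently_accessed)
            entries[i].group = GROUP_COLD;
        entries[i].recently_accessed = 0;   /* start a fresh observation window */
    }
}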

More details
Publication date: 04-01-2018

Direct store to coherence point

Number: US20180004660A1
Assignee: Microsoft Technology Licensing LLC

A system that uses a write-invalidate protocol has at least two types of stores. A first type of store operation uses a write-back policy resulting in snoops for copies of the cache line at lower cache levels. A second type of store operation writes, using a coherent write-through policy, directly to the last-level cache without snooping the lower cache levels. By storing directly to the coherence point, where cache coherence is enforced, for the coherent write-through operations, snoop transactions and responses are not exchanged with the other caches. A memory order buffer at the last-level cache ensures proper ordering of stores/loads sent directly to the last-level cache.

More details
Publication date: 04-01-2018

COMPUTER SYSTEM INCLUDING SYNCHRONOUS INPUT/OUTPUT AND HARDWARE ASSISTED PURGE OF ADDRESS TRANSLATION CACHE ENTRIES OF SYNCHRONOUS INPUT/OUTPUT TRANSACTIONS

Number: US20180004664A1
Assignee:

A synchronous input/output (I/O) computing system includes a processor and a memory unit that stores program instructions. The system purges one or more address translation entries in response to the processor executing the program instructions to issue, via an operating system running on the synchronous I/O computing system, a synchronous I/O command indicating a request to perform a transaction. The program instructions further command the operating system to select a device table entry from a device table, load the entry into the DTC, request required address translation entries, install the required address translation entries in the address translation cache, and transfer data packets corresponding to the transaction. The program instructions further command the operating system to automatically purge the address translation cache entries associated with a transaction in response to detect that the transaction is completed. 1. A method of purging one or more address translation cache entries included in a synchronous input/output (I/O) computing system , the method comprising:issuing, via an operating system running on the synchronous I/O computing system, a synchronous I/O command indicating a request to perform a transaction;selecting a device table entry from a device table, loading the entry into a device table cache (DTC), requesting required address translation entries, installing the required address translation entries in the address translation cache and transferring data packets corresponding to the transaction using the selected device table entry and the required address translation entries; andautomatically purging the address translation cache entries associated with a transaction in response to detecting that the transaction is completed.2. The method of claim 1 , further comprising automatically purging claim 1 , via a host bridge claim 1 , the at least one stale address translation cache entry from an address translation cache without receiving ...

More details
Publication date: 04-01-2018

APPLICATION EXECUTION ENCLAVE MEMORY METHOD AND APPARATUS

Number: US20180004675A1
Assignee:

Apparatuses, methods and storage medium associated with application execution enclave memory page cache management, are disclosed herein. In embodiments, an apparatus may include a processor with processor supports for application execution enclaves; memory organized into a plurality of host physical memory pages; and a virtual machine monitor to be operated by the processor to manage operation of virtual machines. Management of operation of the virtual machines may include facilitation of mapping of virtual machine-physical memory pages of the virtual machines to the host physical memory pages, including maintenance of an unallocated subset of the host physical memory pages to receive increased security protection for selective allocation to the virtual machines, for virtualization and selective allocation to application execution enclaves of applications of the virtual machines. Other embodiments may be described and/or claimed. 1. An apparatus for computing , comprising:a physical processor with processor supports for application execution enclaves;memory organized into a plurality of host physical memory pages; anda virtual machine monitor to be operated by the physical processor to manage operation of virtual machines formed from virtualization of the physical processor and the memory, wherein management of operation of the virtual machines includes facilitation of mapping of virtual machine-physical memory pages of the virtual machines to the host physical memory pages, including maintenance of an unallocated subset of the host physical memory pages to receive increased security protection for selective allocation to the virtual machines as virtual machine-physical memory pages of the virtual machines, for virtualization into one or more virtual pages of the virtual machines for selective allocation to application execution enclaves of applications of the virtual machines.2. The apparatus of claim 1 , wherein the virtual machine monitor is to allocate a host ...

More details
Publication date: 04-01-2018

MEMORY SYSTEM AND METHOD FOR OPERATING THE SAME

Number: US20180004677A1
Assignee:

A memory system includes a memory device including a memory block, the memory block including a plurality of memory cell groups, an address translator that maps a logical address of a data to a physical address of the memory block, and a controller configured to divide the plurality of memory cell groups into a plurality of first memory cell groups and at least one second memory cell group, and control the address translator so that the address translator maps a logical address of a data to a physical address of the first memory cell groups of the memory block and not in the at least one second memory cell group and switches the at least one second memory cell group with a selected first memory cell group among the plurality of the first memory cell groups when a predetermined period of time elapses. 1. A memory system , comprising:a memory device including a memory block, the memory block including a plurality of memory cell groups;an address translator that maps a logical address of a data to a physical address of the memory block; anda controller configured to:divide the plurality of memory cell groups into a plurality of first memory cell groups and at least one second memory cell group, andcontrol the address translator so that the address translator maps a logical address of a data to a physical address of the first memory cell groups of the memory block and not in the at least one second memory cell group and switches the at least one second memory cell group with a selected first memory cell group among the plurality of the first memory cell groups when a predetermined period of time elapses.2. The memory system of claim 1 , wherein the selected first memory cell group is disposed adjacent to the at least second memory cell group in a first direction.3. The memory system of claim wherein switching the at least one second memory cell group with a selected first memory cell group among the plurality of the first memory cell groups includes re-mapping a logical ...

More details
Publication date: 04-01-2018

APPARATUS AND METHOD FOR PERFORMING ADDRESS TRANSLATION

Number: US20180004678A1
Assignee:

An apparatus, system, and method for address translation are provided. Physical address information corresponding to virtual addresses is prefetched and stored, where at least some sequences of the virtual addresses are in a predefined order. The physical address information is prefetched based on identification information provided by a data processing activity, comprising at least a segment identifier and a portion of a virtual address to be translated. The storage has segments of entries, wherein each segment stores physical address information which corresponds to virtual addresses in a predefined order. This predefined order means that it is not necessary to store virtual addresses in the storage. Storage capacity and response speed are therefore gained. 1. An apparatus comprising:address translation storage to store physical address information in a plurality of entries, each entry being contained within a segment of entries, and each segment containing entries which correspond to virtual addresses in a predefined order;address translation circuitry responsive to identification information specified by a data processing activity to generate a corresponding physical address from the stored physical address information,wherein the identification information comprises a segment identifier and a portion of a virtual address, wherein the portion of the virtual address specifies a selected entry within the segment of entries specified by the segment identifier,wherein the address translation storage is capable of storing a number of segments which is at least equal to an expected number of segments which the data processing activity will require within a time period given by an expected latency for the retrieval of a unit of physical address information from a memory; andaddress translation prefetch circuitry responsive to the segment identifier specified by the data processing activity to initiate retrieval of physical address information corresponding to a ...

More details
Publication date: 04-01-2018

Systems, Apparatuses, and Methods for Platform Security

Number: US20180004681A1
Author: David A. Koufaty
Assignee: Intel Corp

Systems, methods, and apparatuses for platform security are described. For example, in an embodiment, an apparatus includes address translation circuitry to translation a virtual address to a physical address and to provide a first protection domain, at least one protection range register, the at least one protection range register to store a range of virtual addresses to protect as part of a protection domain, and comparison circuitry to compare the virtual address to the range of virtual addresses of the at least one protection range register and to output a second protection domain upon a match in of the virtual address and the range of virtual addresses of the at least one protection register.

More details
Publication date: 02-01-2020

HARDWARE-ASSISTED PAGING MECHANISMS

Number: US20200004677A1
Assignee:

Processing circuitry for computer memory management includes memory reduction circuitry to implement a memory reduction technique; and reference count information collection circuitry to: access a memory region, the memory region subject to the memory reduction technique; obtain an indication of memory reduction of the memory region; calculate metrics based on the indication of memory reduction of cache lines associated with the memory region; and provide the metrics to a system software component for use in memory management mechanisms. 1. Processing circuitry for computer memory management , the processing circuitry comprising:memory reduction circuitry to implement a memory reduction technique; and access a memory region of a memory device, the memory region subject to the memory reduction technique;', 'obtain an indication of memory reduction of the memory region;', 'based on the indication of memory reduction, calculate metrics of cache lines associated with the memory region; and', 'provide the metrics to a system software component for use in memory management mechanisms,, 'reference count information collection circuitry towherein the memory reduction technique includes memory deduplication,wherein the indication of memory reduction is a reference count, the reference count enumerating a number of logical addresses that point to a physical address of a cache line of the cache lines associated with the memory region, andwherein to calculate metrics based on the indication of memory reduction of the cache lines associated with the memory region, the reference count information collection circuitry is to calculate a number of cache lines with a reference count value of one.2. The processing circuitry of claim 1 , wherein the memory region is a memory page.34.-. (canceled)5. The processing circuitry of claim 1 , wherein to obtain the indication of memory reduction of cache lines claim 1 , the reference count information collection circuitry is to:access a ...

More details
Publication date: 02-01-2020

Virtual Memory Management

Number: US20200004688A1
Assignee: Imagination Technologies Ltd

A method of managing access to a physical memory formed of n memory page frames using a set of virtual address spaces having n virtual address spaces each formed of a plurality p of contiguous memory pages. The method includes receiving a write request to write a block of data to a virtual address within a virtual address space i of the n virtual address spaces, the virtual address defined by the virtual address space i, a memory page j within that virtual address space i and an offset from the start of that memory page j; translating the virtual address to an address of the physical memory using a virtual memory table having n by p entries specifying mappings between memory pages of the virtual address spaces and memory page frames of the physical memory, wherein the physical memory address is defined by: (i) the memory page frame mapped to the memory page j as specified by the virtual memory table, and (ii) the offset of the virtual address; and writing the block of data to the physical memory address.
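The translation rule stated above — physical address = frame mapped to page j of space i, plus the offset — is a one-line computation once the n-by-p table is laid out; the sketch below assumes a flat row-major array and a 4 KiB page size, neither of which is mandated by the application.

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096u     /* assumed page/frame size for the sketch */

/* Virtual memory table with n*p entries: entry (i, j) names the physical
 * page frame backing memory page j of virtual address space i, or -1 if
 * that page is currently unmapped. */
typedef struct {
    size_t   n;         /* number of virtual address spaces */
    size_t   p;         /* contiguous memory pages per address space */
    int64_t *frame;     /* n * p entries, row-major by address space */
} vm_table_t;

/* Translate (space i, page j, offset) to a physical byte address. */
int64_t translate(const vm_table_t *t, size_t i, size_t j, uint32_t offset)
{
    if (i >= t->n || j >= t->p || offset >= PAGE_SIZE)
        return -1;
    int64_t f = t->frame[i * t->p + j];
    if (f < 0)
        return -1;                          /* page not mapped to a frame */
    return f * (int64_t)PAGE_SIZE + offset; /* frame base plus the offset */
}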

More details
Publication date: 02-01-2020

MEMORY CONSTRAINED TRANSLATION TABLE MANAGEMENT

Number: US20200004689A1
Author: Jean Sebastien Andre
Assignee:

Devices and techniques for memory constrained translation table management are disclosed herein. A level of a translation table is logically segmented into multiple segments. Here, a bottom level of the translation table includes a logical to physical address pairing for a portion of a storage device and other levels of the translation table include references within the translation table. The multiple segments are written to the storage device. A first segment of the multiple segments is loaded to byte-addressable memory. A request for an address translation is received and determined to be for an address referred to by a second segment of the multiple segments. The first segment is then replaced with the second segment in the byte-addressable memory and the request is fulfilled using the second segment to locate a lower level of the translation table that includes the address translation.

1. A device for memory translation table management, the device comprising: a byte-addressable memory; a storage device; and a controller configured to: load, from the storage device to the byte-addressable memory, a first segment of multiple segments of a logical-to-physical (L2P) translation table, wherein a bottom level of the L2P translation table includes a logical to physical address pairing for a portion of the storage device, and other levels of the L2P translation table include references within the L2P translation table; receive a request for an address translation; determine whether the request is for an address referred to by a second segment of the multiple segments; and, in accordance with a determination that the request is for an address referred to by the second segment: replace the first segment with the second segment in the byte-addressable memory; and fulfill the request using the second segment to locate a lower level of the L2P translation table that includes the address translation.
2. The device of claim 1, wherein the multiple segments are ...
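A sketch of the single-resident-segment policy follows: the upper-level lookup computes which segment holds the wanted entry, swaps it in from the storage device if it is not the resident one, and returns the reference to the lower table level. Segment size and helper names are assumptions, and the flash read is a placeholder.

#include <stdint.h>
#include <string.h>

#define ENTRIES_PER_SEGMENT 4096u   /* assumed segment size for the sketch */

/* Only one segment of the upper translation level is held in byte-addressable
 * memory at a time; the remaining segments live on the storage device. */
static uint32_t resident_segment;
static uint64_t segment_buf[ENTRIES_PER_SEGMENT];

/* Placeholder for a real flash/storage read of one table segment. */
static void load_segment_from_flash(uint32_t segment, uint64_t *buf)
{
    (void)segment;
    memset(buf, 0, sizeof(uint64_t) * ENTRIES_PER_SEGMENT);
}

uint64_t lookup_upper_level(uint64_t logical_page)
{
    uint32_t segment = (uint32_t)(logical_page / ENTRIES_PER_SEGMENT);
    uint32_t index   = (uint32_t)(logical_page % ENTRIES_PER_SEGMENT);

    if (segment != resident_segment) {
        /* The request falls in another segment: replace the resident one. */
        load_segment_from_flash(segment, segment_buf);
        resident_segment = segment;
    }
    /* The entry points at the lower table level holding the final translation. */
    return segment_buf[index];
}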

More details
Publication date: 13-01-2022

REBUILDING LOGICAL-TO-PHYSICAL ADDRESS MAPPING WITH LIMITED MEMORY

Number: US20220012182A1
Author: Wei Meng
Assignee:

Exemplary methods, apparatuses, and systems include reading logical-to-physical (L2P) table entries from non-volatile memory into volatile memory. Upon detection of a trigger to recover L2P data that was unmerged with the L2P table entries, a copy of an L2P journal is read from non-volatile memory. The L2P journal includes the L2P data that was unmerged with the L2P table entries. One or more of the L2P table entries are updated using the L2P data from the L2P journal.

1. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to: read logical-to-physical (L2P) table entries from non-volatile memory into volatile memory; detect a trigger to recover L2P data that was unmerged with the L2P table entries; and, in response to detecting the trigger: read a copy of an L2P journal from non-volatile memory, the L2P journal including the L2P data that was unmerged with the L2P table entries; and update one or more of the L2P table entries using the L2P data from the L2P journal.
2. The non-transitory computer-readable storage medium of claim 1, wherein: the L2P table entries include layer 0 table entries, layer 1 table entries, and layer 2 table entries; the L2P journal includes journal entries, each journal entry including a layer 1 table entry identifier and a physical address storing a layer 2 table entry; and updating the L2P table entries includes updating layer 1 table entries based upon the journal entries.
3. The non-transitory computer-readable storage medium of claim 2, wherein storage space in the volatile memory for the L2P table entries is smaller than space consumed by all the L2P table entries, and wherein reading the L2P table entries into volatile memory includes omitting some layer 2 table entries.
4. The non-transitory computer-readable storage medium of claim 2, wherein the processing device is further to: in response to an eviction of a layer 2 table entry from ...
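Replaying the journal over the freshly loaded table reduces to a loop like the one below (a layer-1 slot identifier plus the physical address of the corresponding layer-2 entry, applied in order); the struct layout is an assumption for the sketch.

#include <stdint.h>
#include <stddef.h>

/* Journal entry as described: which layer-1 slot to patch, and the physical
 * address where the corresponding layer-2 table entry was written. */
typedef struct {
    uint32_t l1_index;
    uint64_t l2_physical_addr;
} l2p_journal_entry_t;

/* Replay the journal over the layer-1 table loaded from non-volatile memory.
 * Entries are applied in order, so a later update to the same slot wins. */
void replay_l2p_journal(uint64_t *l1_table, size_t l1_entries,
                        const l2p_journal_entry_t *journal, size_t journal_len)
{
    for (size_t i = 0; i < journal_len; i++) {
        if (journal[i].l1_index < l1_entries)
            l1_table[journal[i].l1_index] = journal[i].l2_physical_addr;
    }
}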

More details
Publication date: 13-01-2022

METHODS AND SYSTEMS FOR TRANSLATING VIRTUAL ADDRESSES IN A VIRTUAL MEMORY BASED SYSTEM

Number: US20220012183A1
Assignee:

An information handling system and method for translating virtual addresses to real addresses including a processor for processing data; memory devices for storing the data; and a memory controller configured to control accesses to the memory devices, where the processor is configured, in response to a request to translate a first virtual address to a second physical address, to send from the processor to the memory controller a page directory base and a plurality of memory offsets. The memory controller is configured to: read from the memory devices a first level page directory table using the page directory base and a first level memory offset; combine the first level page directory table with a second level memory offset; and read from the memory devices a second level page directory table using the first level page directory table and the second level memory offset. 1. A method of translating in a computing system a virtual address to a second address comprising:obtaining a page directory base;obtaining a first level memory offset from the virtual address;obtaining a first level page directory using the page directory base and the first level memory offset;obtaining a second level memory offset from the virtual address; andobtaining a second level page directory table using the first level page directory and the second level memory offset.2. The method according to claim 1 , wherein all the obtaining steps are performed by a memory controller not local to a processor.3. The method according to claim 1 , further comprising obtaining a memory line that contains the address of a page table entry (PTE) claim 1 , and extracting from the memory line containing the address of the page table entry (PTE) claim 1 , the page table entry (PTE) claim 1 , wherein the PTE contains the translation of the virtual address to the second address.4. The method according to claim 1 , further comprising determining that all of a plurality of memory offsets have been used with a ...
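The controller-side walk described above — combine the table address obtained so far with the next offset taken from the virtual address, repeat per level, then extract the page table entry — can be sketched as follows. The depth, index width, and the read_qword placeholder are assumptions; a real memory controller would issue DRAM reads where the placeholder sits.

#include <stdint.h>

#define LEVELS       4           /* assumed depth of the directory tree */
#define INDEX_BITS   9           /* assumed 9-bit offset per level, 4 KiB pages */
#define PAGE_SHIFT   12

/* Placeholder standing in for the memory controller reading one 8-byte
 * directory entry from the memory devices at a physical address. */
static uint64_t read_qword(uint64_t phys_addr)
{
    (void)phys_addr;
    return 0;
}

/* Walk the directories level by level: each step combines the table address
 * obtained so far with the next memory offset extracted from the virtual
 * address, until the page table entry holding the final translation is
 * reached. */
uint64_t walk(uint64_t page_directory_base, uint64_t virtual_addr)
{
    uint64_t table = page_directory_base;

    for (int level = LEVELS - 1; level >= 0; level--) {
        uint32_t offset = (uint32_t)((virtual_addr >> (PAGE_SHIFT + level * INDEX_BITS))
                          & ((1u << INDEX_BITS) - 1));
        table = read_qword(table + offset * sizeof(uint64_t));
    }
    /* 'table' now holds the PTE; combine its frame with the page offset. */
    return (table & ~((1ull << PAGE_SHIFT) - 1))
         | (virtual_addr & ((1ull << PAGE_SHIFT) - 1));
}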

More details
Publication date: 03-01-2019

INVALIDATION OF SHARED MEMORY IN A VIRTUAL ENVIRONMENT

Number: US20190004707A1
Assignee:

A server logical partition (LPAR) of a virtualized computer includes shared memory regions (SMRs). The SMRs include pages of the server LPAR memory to share with client LPARs. A hypervisor utilizes an export vector to associate logical pages of the server LPAR with SMRs. The hypervisor further utilizes a reference array to associate SMRs with client LPARs that have mapped at least one physical memory page of the SMR from a logical page of the client LPAR memory. In processing an operation to unmap one or more shared physical pages from one or more LPARs, the hypervisor uses the export vector and reference array to determine which LPARs have had a mapping to the physical pages. 1. A method for managing a shared memory mapping in a computer , the method comprising:receiving a first shared page request, wherein the first shared page request is associated with a first logical page included in a first logical memory block (LMB) of a first logical partition (LPAR), wherein the first logical page corresponds to a physical page included in a shared memory region (SMR) associated with the first LPAR;determining, in response to the first shared page request, based at least in part on the first logical page corresponding to the physical page, a shared access state associated with the first LMB, wherein the shared access state indicates that the SMR is associated with first LMB;receiving a second shared page request, wherein the second shared page request is associated with access to the physical page by a second LPAR included in the computer;determining, in response to the second shared page request, a mapping state associated with the SMR;receiving a mapping request associated with the physical page;determining, in response to the mapping request, based at least in part on the shared access state associated with the first LMB and the mapping state associated with the SMR, that the second LPAR has established a mapping to the physical page; andinvalidating the mapping to the ...

More details
Publication date: 03-01-2019

READ AND PROGRAM OPERATIONS IN A MEMORY DEVICE

Number: US20190004938A1
Assignee: Intel Corporation

Technology for a memory device operable to program memory cells in the memory device is described. The memory device can include a plurality of memory cells and a memory controller. The memory controller can receive a page of data. The memory controller can segment the page of data into a group of data segments. The memory controller can program the group of data segments to memory cells in the plurality of memory cells that are associated with an inhibit tile group (ITG). The group of data segments for the page of data can be programmed using all bits included in each of the memory cells associated with the ITG. 1. A system operable to program memory cells , the system comprising:a plurality of memory cells; and receive a page of data;', 'segment the page of data into a group of data segments; and', 'program the group of data segments to memory cells in the plurality of memory cells that are associated with an inhibit tile group (ITG), wherein the group of data segments for the page of data is programmed using all bits included in each of the memory cells associated with the ITG., 'a memory controller comprising logic to2. The system of claim 1 , wherein the memory controller is configured to allocate the page of data into one wordline using all bits included in each of the memory cells associated with the ITG claim 1 , such that bits on a same page of data are stored into same memory cells.3. The system of claim 1 , wherein the memory cells do not include data segments from different pages of data.4. The system of claim 1 , wherein the memory controller is configured to program the page of data into the memory cells associated with the ITG using a compression by input-output (TO) in which bits of a same page of data are stored into a same memory cell.5. The system of claim 1 , wherein the memory controller is configured to program the page of data into the memory cells using the ITG and using all bits included in each of the memory cells associated with the ITG to ...

More details
Publication date: 03-01-2019

STORAGE DEVICE, ITS CONTROLLING METHOD, AND STORAGE SYSTEM HAVING THE STORAGE DEVICE

Number: US20190004942A1
Assignee:

A storage device determines whether or not reading target data subjected to a first conversion process is divided and stored into multiple pages. When the data subjected to the first conversion process is stored in one of a plurality of pages, the data is read from the page, and a second conversion process for returning the data to a state before the data is subjected to the first conversion process is executed to the data. When the reading target data is divided and stored into two or more of the plurality of pages, a portion of the data is read from each of the two or more pages in which the portion of the data is stored, the portion of the data is stored in the buffer memory, the data subjected to the first conversion process is restored, and the second conversion process is executed to the restored data. 1. A storage device comprising:a plurality of storage media each having a plurality of pages that are each the reading and writing unit of data;a buffer memory temporarily holding the data read from and written into the plurality of storage media; anda medium controller executing a first conversion process to the data read from and written into the plurality of storage media, and reading and writing the data to which the first conversion process is executed, from and into the plurality of storage media,wherein the medium controller determines:whether or not the reading target data subjected to the first conversion process is divided and stored into two or more of the plurality of pages;when the data subjected to the first conversion process is stored in one of the plurality of pages, reads the data from the page, and executes, to the data, a second conversion process for returning the data to a state before the data is subjected to the first conversion process; andwhen the reading target data is divided and stored into two or more of the plurality of pages, reads a portion of the data from each of the two or more pages in which the portion of the data is stored, ...

More details
Publication date: 03-01-2019

System and method for host system memory translation

Number: US20190004944A1
Assignee: Western Digital Technologies Inc

Systems and methods for host system memory translation are disclosed. The memory system may send a logical-to-physical address translation table to the host system. Thereafter, the host system may send commands that include a logical address and a physical address (with the host system using the logical-to-physical address translation table previously sent to generate the physical address). After sending the table to the host system, the memory system may monitor changes in the table, and record these changes in an update table. The memory system may use the update table in determining whether to accept or reject the physical address sent from the host system in processing the host system command. In response to determining to reject the physical address, the memory system may internally generate the physical address using the logical address sent from the host system and a logical-to-physical address translation table resident in the memory system.
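The accept-or-reject decision on the host-supplied physical address can be sketched with an update table that marks every mapping changed since the table copy was exported to the host; all sizes and names below are assumptions, not the patent's data structures.

#include <stdint.h>

#define NUM_LBAS (1u << 16)     /* assumed address range for the sketch */

static uint64_t device_l2p[NUM_LBAS];         /* authoritative L2P table */
static uint8_t  stale_since_export[NUM_LBAS]; /* update table: 1 = changed since
                                                 the copy was sent to the host */

/* Decide whether the physical address supplied by the host with its command
 * can be trusted; if the mapping changed since the export, fall back to the
 * device's own table and translate the logical address internally. */
uint64_t resolve_physical(uint64_t lba, uint64_t host_supplied_pa)
{
    if (lba >= NUM_LBAS)
        return UINT64_MAX;

    if (!stale_since_export[lba] && host_supplied_pa == device_l2p[lba])
        return host_supplied_pa;        /* accept the host's translation */

    return device_l2p[lba];             /* reject: translate internally */
}

/* Any internal relocation (garbage collection, wear levelling) marks the
 * affected entry stale until the next table export. */
void note_mapping_change(uint64_t lba, uint64_t new_pa)
{
    device_l2p[lba] = new_pa;
    stale_since_export[lba] = 1;
}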

More details
Publication date: 03-01-2019

MEMORY SYSTEM FOR CONTROLLING NONVOLATILE MEMORY

Number: US20190004964A1
Author: Kanno Shinichi
Assignee:

According to one embodiment, a memory system copies content of a first logical-to-physical address translation table corresponding to a first region of a nonvolatile memory to a second logical-to-physical address translation table corresponding to a second region of the nonvolatile memory. When receiving a read request specifying a logical address in the second region, the memory system reads a part of the first data from the first region based on the second logical-to-physical address translation table. The memory system detects a block which satisfies a refresh condition from a first group of blocks allocated to the first region, corrects an error of data of the detected block and writes the corrected data back to the detected block. 1. A memory system connectable to a host computing device , the memory system comprising:a nonvolatile memory including a plurality of blocks; anda controller electrically connected to the nonvolatile memory and configured to:manage a plurality of regions in the nonvolatile memory, the regions including a first region storing first data referred to by another region and a second region referring to the first data;copy content of a first logical-to-physical address translation table corresponding to the first region to a second logical-to-physical address translation table corresponding to the second region in response to a request from the host computing device;when receiving a read request specifying a logical address in the second region from the host computing device, read a part of the first data from the first region based on the second logical-to-physical address translation table, and return the read data to the host computing device;when receiving a write request specifying a logical address in the second region from the host computing device, write, to the second region, second data to be written, and update the second logical-to-physical address translation table such that a physical address indicating a physical storage ...

03-01-2019 publication date

METHOD FOR SWITCHING ADDRESS SPACES VIA AN INTERMEDIATE ADDRESS SPACE

Number: US20190004965A1

A method of re-mapping a boot loader image from a first to a second address space includes: determining a difference in a virtual address of the boot loader image in the first and second address spaces; building page tables for a third address space that maps a code section within the boot loader image at first and second address ranges separated by the difference and the code section causes execution to jump from a first instruction in the first address range to a second instruction in the second address range; executing an instruction of the code section in the first address space using pages tables for the first address space; executing the first instruction and then the second instruction using the page tables for the third address space; and executing an instruction of the boot loader image in the second address space using page tables for the second address space. 1. A method of re-mapping a boot loader image from a first address space to a target location in a second address space , wherein first pages tables map the first address space to a machine address space and second page tables map the second address space to the machine address space , comprising:(a) determining a difference in a virtual address of the boot loader image in the first address space and a corresponding virtual address of the boot loader image in the second address space;(b) building page tables for a third address space that maps a code section within the boot loader image at a first address range and a second address range, wherein the two address ranges are separated by the determined difference and the code section when executed causes execution to jump from a first instruction that is mapped in the first address range to a second instruction that is mapped in the second address range;(c) executing, in a processor, an instruction in the code section that is mapped in the first address space using pages tables for the first address space;(d) executing, in the processor, the first ...

03-01-2019 publication date

MITIGATING ATTACKS ON KERNEL ADDRESS SPACE LAYOUT RANDOMIZATION

Number: US20190004972A1

Various systems and methods for detecting and preventing side-channel attacks, including attacks aimed at discovering the location of KASLR-randomized privileged code sections in virtual memory address space, are described. In an example, a computing system includes electronic operations for detecting unauthorized attempts to access kernel virtual memory pages via trap entry detection, with operations including: generating a trap page with a physical memory address; assigning a phantom page at an open location in the privileged portion of the virtual memory address space; generating a plurality of phantom page table entries corresponding to an otherwise-unmapped privileged virtual memory region; placing the trap page in physical memory and placing the phantom page table entry in a page table map; and detecting an access to the trap page via the phantom page table entry, to trigger a response to a potential attack. 1. A computing system adapted for detecting unauthorized attempts to access privileged virtual memory pages , the computing system comprising: host a plurality of privileged virtual memory pages in a privileged portion of a virtual memory address space; and', 'maintain a page table map of page table entries, wherein the page table map is used to indicate respective physical addresses in memory of the plurality of privileged virtual memory pages; and, 'memory to generate a trap page at a physical memory address of the memory;', 'assign a phantom page at an open location in the privileged portion of the virtual memory address space;', 'generate phantom page table entries mapping the phantom page to the trap page; and', 'detect access to the trap page, accessed via the phantom page table entries, to trigger a response to a potential attack., 'processing circuitry to2. The computing system of claim 1 , the processing circuitry further to generate the privileged virtual memory pages of an operating system kernel or virtual memory manager claim 1 , assign the ...
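
The trap-page idea can be sketched as a page-table model in which every otherwise-unmapped privileged virtual page points at one reserved trap frame, and any access through such a phantom entry raises an alert; the constants and class below are invented for the example and are not the patented implementation.

class PhantomPageMap:
    """Unmapped privileged virtual pages are all pointed at one physical trap
    frame, so a probe of those addresses is detected instead of faulting."""

    TRAP_FRAME = 0xDEAD000  # physical frame reserved for the trap page

    def __init__(self, real_mappings: dict[int, int], phantom_vpns: list[int]):
        self.table = dict(real_mappings)      # vpn -> physical frame
        for vpn in phantom_vpns:              # fill the holes a probe would hit
            self.table[vpn] = self.TRAP_FRAME

    def access(self, vpn: int) -> int:
        frame = self.table[vpn]
        if frame == self.TRAP_FRAME:
            raise RuntimeError(f"potential KASLR probe at vpn {vpn:#x}")
        return frame

pt = PhantomPageMap({0xFFFF0001: 0x1000}, phantom_vpns=[0xFFFF0002, 0xFFFF0003])
assert pt.access(0xFFFF0001) == 0x1000
try:
    pt.access(0xFFFF0002)          # probe of an unmapped kernel page
except RuntimeError as err:
    print(err)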

01-01-2015 publication date

STORAGE SYSTEM

Number: US20150006801A1

A storage system monitors the first access frequency of occurrence which is the access frequency of occurrence from a host device during a first period, and the second access frequency of occurrence which is the access frequency of occurrence from a host device during a second period shorter than the first period. Along with performing data relocation among the tiers (levels) in the first period cycle based on the first access frequency of occurrence, the storage system performs a decision whether or not to perform a second relocation based on the first access frequency of occurrence and the second access frequency of occurrence, synchronously with access from a host device. Here the threshold value utilized in a decision on whether or not to perform the first relocation is different from the threshold value utilized in a decision on whether or not to perform the second relocation. 1. A storage system comprising:a first storage device, which is a flash device, whose storage areas are managed as a first tier,a second storage device whose storage areas are managed as a second tier; anda controller providing a virtual volume including a plurality of logical areas to a host, wherein the controller is configured to:allocate at least one of a plurality of pages in the first tier and in the second tier to at least one of the logical areas that is indicated by a write request from a host to store data of the write request; andmigrate data stored in a page in the second tier to a page in the first tier based on an access status of the data,wherein a number of pages in the first tier storing data migrated from the second tier to the first tier is controlled at least based on a cumulative number of pages whose data has been migrated from the second tier to the first tier, and a number of years of usage of the first storage device.2. A storage system according to the claim 1 , wherein the controller is further configured to:calculate a target number of pages whose data is to be ...
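
One way to read the two-threshold scheme is as a pair of promotion checks, one run periodically on the long-period counter and one run synchronously with a host access using a different threshold; the threshold values and the helper below are assumptions, not figures from the patent.

LONG_PERIOD_THRESHOLD = 100    # used by the periodic (first) relocation
SHORT_PERIOD_THRESHOLD = 20    # used by the on-access (second) relocation

def should_promote(long_count: int, short_count: int, on_access: bool) -> bool:
    if on_access:
        # second relocation: decided synchronously with a host access, using
        # both counters and a different (short-period) threshold
        return (long_count >= LONG_PERIOD_THRESHOLD
                or short_count >= SHORT_PERIOD_THRESHOLD)
    # first relocation: decided once per long period from the long counter
    return long_count >= LONG_PERIOD_THRESHOLD

assert should_promote(long_count=120, short_count=0, on_access=False)
assert should_promote(long_count=10, short_count=25, on_access=True)
assert not should_promote(long_count=10, short_count=25, on_access=False)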

03-01-2019 publication date

Memory system and method for controlling nonvolatile memory

Number: US20190006379A1
Author: Shinichi Kanno
Assignee: Toshiba Memory Corp

According to one embodiment, a memory system classifies a plurality of nonvolatile memory dies connected to a plurality of channels, into a plurality of die groups such that each of the plurality of nonvolatile memory dies belongs to only one die group. The memory system performs a data write/read operation for one die group of the plurality of die groups in accordance with an I/O command from a host designating one of a plurality of regions including at least one region corresponding to each die group. The memory system manages a group of free blocks in the nonvolatile memory for each of the plurality of die group by using a plurality of free block pools corresponding to the plurality of die groups.
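
The die-group bookkeeping can be illustrated with a small manager that assigns every die to exactly one group and keeps a separate free-block pool per group; the round-robin assignment and the toy block counts are assumptions for the example.

from collections import defaultdict

class DieGroupManager:
    def __init__(self, channels: int, dies_per_channel: int, groups: int):
        dies = [(c, d) for c in range(channels) for d in range(dies_per_channel)]
        # each die belongs to exactly one group (round-robin assignment here)
        self.group_of_die = {die: i % groups for i, die in enumerate(dies)}
        self.free_blocks = defaultdict(list)
        for die, g in self.group_of_die.items():
            for block in range(4):             # 4 blocks per die in this toy
                self.free_blocks[g].append((die, block))

    def allocate_block(self, group: int):
        # free blocks are managed per die group, never shared across groups
        return self.free_blocks[group].pop()

mgr = DieGroupManager(channels=2, dies_per_channel=4, groups=2)
die, block = mgr.allocate_block(group=0)
assert mgr.group_of_die[die] == 0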

20-01-2022 publication date

MEMORY SUB-SYSTEMS INCLUDING MEMORY DEVICES OF VARIOUS LATENCIES AND CAPACITIES

Number: US20220019379A1
Author: Bert Luca

A write request comprising a logical address, a payload, and an indicator reflecting the character of the payload is received from an application. Based on the indicator, a value of a parameter associated with storing the payload on one or more of a plurality of memory devices is identified. The value of the parameter is determined to satisfy a criterion associated with a particular memory device of the plurality of memory devices. The payload is stored on the particular memory device. 1. A system comprising:a plurality of memory devices; and receiving a write command comprising a logical address, a payload, and an indicator reflecting a characteristic of the payload;', 'identifying, based on the indicator, a value of a parameter associated with storing the payload on one or more of the plurality of memory devices;', 'determining that the value of the parameter satisfies a criterion associated with a particular memory device of the plurality of memory devices; and', 'storing the payload on the particular memory device., 'a processing device, operatively coupled with the plurality of memory devices, to perform operations comprising2. The system of claim 1 , wherein determining that the value of the parameter satisfies the criterion further comprises:determining that the value of the parameter exceeds or is equal to a capacity threshold.3. The system of claim 1 , wherein determining that the value of the parameter satisfies the criterion further comprises:determining that the value of the parameter does not exceed a latency threshold.4. The system of claim 1 , wherein the plurality of memory devices comprises at least one low latency memory device having a latency not exceeding a latency threshold and at least one high capacity memory device having a capacity exceeding or equal to a capacity threshold.5. The system of claim 1 , wherein the plurality of memory devices are exposed to a host computing system within a single namespace.6. The system of claim 1 , wherein ...
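
The indicator-to-placement flow might look like the following sketch, where the write's indicator selects a parameter and the parameter is checked against per-device latency or capacity criteria; the indicator names, thresholds, and device list are invented for the example.

LATENCY_THRESHOLD_US = 10
CAPACITY_THRESHOLD_GB = 512

DEVICES = [
    {"name": "low-latency", "latency_us": 5, "capacity_gb": 64},
    {"name": "high-capacity", "latency_us": 50, "capacity_gb": 2048},
]

# hypothetical mapping from payload indicators to a placement parameter
INDICATOR_TO_PARAM = {"hot-metadata": "latency", "cold-archive": "capacity"}

def pick_device(indicator: str) -> dict:
    param = INDICATOR_TO_PARAM[indicator]
    for dev in DEVICES:
        if param == "latency" and dev["latency_us"] <= LATENCY_THRESHOLD_US:
            return dev
        if param == "capacity" and dev["capacity_gb"] >= CAPACITY_THRESHOLD_GB:
            return dev
    raise LookupError("no device satisfies the criterion")

assert pick_device("hot-metadata")["name"] == "low-latency"
assert pick_device("cold-archive")["name"] == "high-capacity"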

12-01-2017 publication date

ADDRESS RANGE PRIORITY MECHANISM

Number: US20170010974A1

Method and apparatus to efficiently manage data in caches. Data in caches may be managed based on priorities assigned to the data. Data may be requested by a process using a virtual address of the data. The requested data may be assigned a priority by a component in a computer system called an address range priority assigner (ARP). The ARP may assign a particular priority to the requested data if the virtual address of the requested data is within a particular range of virtual addresses. The particular priority assigned may be high priority and the particular range of virtual addresses may be smaller than a cache's capacity. 1. A processor comprising: a plurality of execution units;', 'one or more register files;', 'a translation lookaside buffer to translate a virtual address of a request for first data to a physical address, the first data associated with a first thread;', 'a priority assigner to assign a first priority to the first data;', 'a first cache memory to store data, the first cache memory including a plurality of cache lines to store a priority value and a corresponding data; and, 'at least one core to execute one or more threads, the at least one core includingwherein the first cache memory is to evict second data stored in a first cache line of the first cache memory and store the first data in the first cache line, wherein the first data has the same priority as the second data, and wherein the first cache memory is to prevent third data having a second priority from being allocated into the first cache memory; anda second cache memory coupled to the at least one core.2. The processor of claim 1 , wherein the first cache memory is to clear the first priority of the first data at an end of a program phase.3. The processor of claim 1 , wherein the first cache memory comprises a lower level cache memory.4. The processor of claim 3 , wherein the second cache memory comprises a higher level cache memory.5. The processor of claim 1 , wherein the first ...
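
A minimal model of the address range priority assigner is shown below: addresses inside a designated range (smaller than the cache) get high priority, and the displacement check keeps lower-priority data from evicting higher-priority data; the range, labels, and ordering rule are assumptions, a simplified reading rather than the claimed hardware.

HIGH_PRIORITY_RANGE = (0x7F0000000000, 0x7F0000040000)   # 256 KiB window

def assign_priority(virtual_address: int) -> str:
    lo, hi = HIGH_PRIORITY_RANGE
    return "high" if lo <= virtual_address < hi else "normal"

def can_displace(victim_priority: str, incoming_priority: str) -> bool:
    # simplified rule: data may replace a line of the same or lower priority,
    # and lower-priority data never displaces higher-priority data
    order = {"normal": 0, "high": 1}
    return order[incoming_priority] >= order[victim_priority]

assert assign_priority(0x7F0000001000) == "high"
assert assign_priority(0x500000000000) == "normal"
assert can_displace("normal", "high") and not can_displace("high", "normal")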

12-01-2017 publication date

MANAGEMENT OF MEMORY PAGES

Number: US20170010979A1

In a method for managing memory pages, responsive to determining that a server is experiencing memory pressure, one or more processors identifying a first memory page in a listing of memory pages in the server. The method further includes determining whether the first memory page corresponds to a logical partition (LPAR) of the server that is scheduled to undergo an operation to migrate data stored on memory pages of the LPAR to another server. The method further includes, responsive to determining that the first memory page does correspond to a LPAR of the server that is scheduled to undergo an operation to migrate data, determining whether to evict the first memory page based on a memory page state associated with the first memory page. The method further includes, responsive to determining to evict the first memory page, evicting data stored in the first memory page to a paging space. 1one or more computer processors;one or more computer readable storage media; and in response to determining that a server is experiencing memory pressure, program instructions to identify a first memory page in a listing of memory pages in the server, wherein the listing of memory pages is a least recently used (LRU) list of memory pages of the server for use in a LRU algorithm;', 'program instructions to determine a modification status for the first memory page based on an entry in a hypervisor translation table corresponding to the first memory page, wherein the entry in the hypervisor translation table indicates a logical address of the first memory page and a real page number (RPN) corresponding to the first memory page;', 'program instructions to store an indication of the determined modification status in a page frame table (PFT) associated with the first memory page;', 'program instructions to determine a transmission status corresponding to the first memory page based on an entry in a memory page migration list, wherein the entry in the memory page migration list indicates ...
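
The eviction walk can be sketched as follows, assuming (as an illustration only) that pages of a migrating partition are evictable once their contents have already been transmitted to the destination; the field names and page states are hypothetical, not the patent's.

from collections import namedtuple

Page = namedtuple("Page", ["page_id", "lpar_migrating", "state"])

def pages_to_evict(lru_list: list[Page]) -> list[int]:
    victims = []
    for page in lru_list:
        if not page.lpar_migrating:
            victims.append(page.page_id)      # ordinary LRU victim
        elif page.state == "already_transmitted":
            victims.append(page.page_id)      # migration has already copied it
        # pages not yet transmitted are skipped so the migration is not disturbed
    return victims

lru = [
    Page(1, lpar_migrating=False, state="clean"),
    Page(2, lpar_migrating=True, state="not_transmitted"),
    Page(3, lpar_migrating=True, state="already_transmitted"),
]
assert pages_to_evict(lru) == [1, 3]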

08-01-2015 publication date

MEMORY MANAGING APPARATUS AND IMAGE PROCESSING APPARATUS

Number: US20150012720A1

The memory area managing unit (a) sets a protect flag to each virtual area allocated in a virtual memory space, the protect flag indicating whether a use of the virtual area has been finished or not, and (b) when a part or all of a first virtual area would overlap another second virtual area due to expansion or movement of the first virtual area, allows the expansion or the movement of the first virtual area accompanying with overlapping the second virtual area, if the protect flag of the second virtual area indicates that a use of the second virtual area has been finished. If the expansion or the movement is allowed, the memory pool managing unit adds a physical area in a physical memory space corresponding to an overlapping part of the first and second virtual areas into a memory pool to map to another virtual area. 1. A memory managing apparatus , comprising:a memory area managing unit that sets a flag to each virtual area allocated in a virtual memory space, the flag indicating that a use of the virtual area has been finished or not, and when a part or all of a first virtual area would overlap another second virtual area due to expansion or movement of the first virtual area, allows the expansion or the movement of the first virtual area accompanying with overlapping the second virtual area if the flag of the second virtual area indicates that a use of the second virtual area has been finished, and does not allow the expansion or the movement of the first virtual area accompanying with overlapping the second virtual area if the flag of the second virtual area does not indicate that a use of the second virtual area has been finished; anda memory pool managing unit that adds a physical area in a physical memory space corresponding to an overlapping part of the first and the second virtual areas into a memory pool in order to map the physical area to another virtual area, if the expansion or the movement of the first virtual area accompanying with overlapping the ...

08-01-2015 publication date

IDENTIFICATION OF PAGE SHARING OPPORTUNITIES WITHIN LARGE PAGES

Number: US20150012722A1

Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing. 1. A method of reclaiming memory in a computer system where the memory is partitioned and accessed as small pages and large pages , comprising:selecting a large page that is comprised of a group of small pages based on a number of small page sharing opportunities identified therein;updating mappings for the memory so that a mapping to the selected large page is changed to mappings to small pages, at least one of the small pages being a shared small page; andmarking one or more of the small pages in the group as free.2. The method of claim 1 , further comprising:scanning each of the large pages and determining a number of small pages therein that can be shared,wherein the selecting is based on the relative number of shareable small pages in the large pages.3. The method of claim 2 , wherein the selecting is based on an access frequency of the large pages.4. The method of claim 1 , further comprising:determining the small pages in the group that can be actually shared,wherein the small pages that can be actually shared are marked as free.5. The method of claim 1 , wherein the large page is selected if the number of small page therein that can be shared is greater than a threshold.6. The method of claim 5 , further comprising:setting the threshold according to memory usage by the computer system,wherein the threshold is set lower as the memory usage increases.7. The method of claim 1 , further comprising:scanning each of the large pages and determining from the scanning a ...
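
The selection step can be pictured as counting, per large page, how many of its small pages have content that also occurs elsewhere, then splitting the large page with the highest count; the duplicate counting by raw content and the toy sizes below are assumptions made for the example.

from collections import Counter

def shareable_count(large_page: list[bytes], content_index: Counter) -> int:
    # a small page is a sharing opportunity if its content occurs elsewhere
    return sum(1 for sp in large_page if content_index[sp] > 1)

def pick_large_page_to_split(large_pages: list[list[bytes]]) -> int:
    index = Counter(sp for lp in large_pages for sp in lp)
    counts = [shareable_count(lp, index) for lp in large_pages]
    return counts.index(max(counts))    # split where the payoff is largest

zero = b"\x00" * 16
lps = [
    [zero, zero, b"a" * 16, zero],      # three shareable zero pages
    [b"b" * 16, b"c" * 16, zero, b"d" * 16],
]
assert pick_large_page_to_split(lps) == 0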

14-01-2016 publication date

SEMICONDUCTOR STORAGE

Number: US20160011782A1

A first objective is to reduce performance degradation of a semiconductor storage resulting from address translation. A second objective is to reduce an increase in the manufacturing cost of the semiconductor storage resulting from address translation. A third objective is to provide the semiconductor storage with high reliability. To accomplish the above objectives, a storage area of a nonvolatile memory included in the semiconductor storage is segmented into multiple blocks, and each of the blocks is segmented into multiple pages. Then, an erase count is controlled on a page basis (), and address translation is controlled on a block basis (). 1. A semiconductor storage comprising:a nonvolatile memory in which a storage area is segmented into multiple blocks, and each of the blocks are segmented into multiple pages,wherein an erase count is controlled in units of the pages, andaddress translation from a logical address into a physical address is implemented in units of the blocks.2. The semiconductor storage according to claim 1 , wherein a data size of information used for address translation is smaller than a data size of information used for controlling the erase count.3. The semiconductor storage according to claim 1 , wherein the number of writes to an erase count table is larger than the number of writes to the address translation table.4. The semiconductor storage according to claim 1 , wherein erase count leveling is performed in the block.5. The semiconductor storage according to claim 4 , wherein an offset page number is used when the leveling is performed in the block.6. The semiconductor storage according to claim 1 , whereinthe page individually has a main area and a reserve area, andwhen data is programmed to the main area of the page, the erase count of the page is programmed to the reserve area of the page.7. The nonvolatile storage device according to claim 1 , wherein claim 1 , further claim 1 , an erase count is stored in a reserve area of the ...
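
The split between block-granularity translation and page-granularity erase counting can be modeled as two structures of very different size, as in the sketch below; the class layout and the wear-leveling helper are illustrative assumptions.

PAGES_PER_BLOCK = 4

class BlockMappedFlash:
    def __init__(self, blocks: int):
        self.block_map = {}                             # logical block -> physical block
        self.erase_count = [[0] * PAGES_PER_BLOCK for _ in range(blocks)]

    def map_block(self, logical_block: int, physical_block: int):
        self.block_map[logical_block] = physical_block  # block-granularity translation

    def erase_page(self, logical_block: int, page: int):
        pb = self.block_map[logical_block]
        self.erase_count[pb][page] += 1                 # page-granularity wear tracking

    def least_worn_page(self, logical_block: int) -> int:
        pb = self.block_map[logical_block]
        counts = self.erase_count[pb]
        return counts.index(min(counts))                # used to level wear inside the block

flash = BlockMappedFlash(blocks=2)
flash.map_block(0, 1)
flash.erase_page(0, 2)
assert flash.least_worn_page(0) in (0, 1, 3)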

14-01-2016 publication date

STORAGE SYSTEM

Number: US20160011967A9

A storage system monitors the first access frequency of occurrence which is the access frequency of occurrence from a host device during a first period, and the second access frequency of occurrence which is the access frequency of occurrence from a host device during a second period shorter than the first period. Along with performing data relocation among the tiers (levels) in the first period cycle based on the first access frequency of occurrence, the storage system performs a decision whether or not to perform a second relocation based on the first access frequency of occurrence and the second access frequency of occurrence, synchronously with access from a host device. Here the threshold value utilized in a decision on whether or not to perform the first relocation is different from the threshold value utilized in a decision on whether or not to perform the second relocation. 1. A storage system comprising:a first storage device, which is a flash device, whose storage areas are managed as a first tier,a second storage device whose storage areas are managed as a second tier; anda controller providing a virtual volume including a plurality of logical areas to a host, wherein the controller is configured to:allocate at least one of a plurality of pages in the first tier and in the second tier to at least one of the logical areas that is indicated by a write request from a host to store data of the write request; andmigrate data stored in a page in the second tier to a page in the first tier based on an access status of the data,wherein a number of pages in the first tier storing data migrated from the second tier to the first tier is controlled at least based on a cumulative number of pages whose data has been migrated from the second tier to the first tier, and a number of years of usage of the first storage device.2. A storage system according to the claim 1 , wherein the controller is further configured to:calculate a target number of pages whose data is to be ...

14-01-2016 publication date

TRANSLATING BETWEEN MEMORY TRANSACTIONS OF FIRST TYPE AND MEMORY TRANSACTIONS OF A SECOND TYPE

Number: US20160011985A1

A data processing apparatus includes bridge circuitry which serves to translate memory transactions of a first type (AXI) into memory transactions of a second type (PCI Express). The bridge circuitry includes translation circuitry which maps at least some of the bits of attribute data of a memory transaction of the first type to unused bits within the significant bits of an address of the second type, which are unused to represent significant bits of the address of memory transactions of the first type. 1. Apparatus for processing data comprising:bridge circuitry having a first port configured to transmit memory transactions of a first type, a second port configured to receive memory transactions of a second type and translation circuitry configured to translate between memory transactions of said first type and memory transactions of said second type; whereinsaid memory transactions of said first type specify X significant bits of address and A bits of attribute data, where X and A are both positive integer values;said memory transactions of said second type specify Y significant bits of address, where Y is a positive integer value and Y is greater than X; andsaid translation circuitry is configured to map at least some of said A bits of attribute data of a memory transaction of said first type to unused bits within said Y significant bit of address of a second type unused to represent said X significant bits of address of said memory transaction of said first type.2. Apparatus as claimed in claim 1 , comprising a transaction source coupled to said second port and configured:to generate a translation request transaction of said second type to be sent to said translation circuitry, andto receive a translation response of said second type from said translation circuitry.3. Apparatus as claimed in claim 2 , wherein said transaction source comprises a translation cache configured to store said translation response.4. Apparatus as claimed in claim 3 , whereinsaid ...
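
A worked example of the bit packing: with, say, 40 significant address bits on the first interface and 64 on the second, up to 24 upper bits are spare, so attribute bits can ride there and be stripped out on the far side. The widths below are assumed for illustration and are not taken from the patent.

X_ADDR_BITS = 40          # significant address bits of the first (AXI) type
Y_ADDR_BITS = 64          # significant address bits of the second (PCIe) type
ATTR_BITS = 8             # attribute bits folded into the spare upper bits

def pack(address: int, attributes: int) -> int:
    assert address < (1 << X_ADDR_BITS) and attributes < (1 << ATTR_BITS)
    return (attributes << X_ADDR_BITS) | address

def unpack(packed: int) -> tuple[int, int]:
    address = packed & ((1 << X_ADDR_BITS) - 1)
    attributes = (packed >> X_ADDR_BITS) & ((1 << ATTR_BITS) - 1)
    return address, attributes

packed = pack(0x12_3456_7890, 0xA5)
assert unpack(packed) == (0x12_3456_7890, 0xA5)
assert packed < (1 << Y_ADDR_BITS)     # still fits in the 64-bit address field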

11-01-2018 publication date

RESTRICTED ADDRESS TRANSLATION TO PROTECT AGAINST DEVICE-TLB VULNERABILITIES

Number: US20180011651A1

An apparatus includes an extended capability register and an input/output (I/O) memory management circuitry. The I/O memory management circuitry is to receive, from an I/O device, an address translation request referencing a guest virtual address associated with a guest virtual address space of a virtual machine. The I/O memory management circuitry may translate the guest virtual address to a guest physical address associated with a guest physical address space of the virtual machine, and, responsive to determining that a value stored by the extended capability register indicates a restrict-translation-request-response (RTRR) mode, transmit, to the I/O device, a translation response having the guest physical address. 1. An apparatus comprising an extended capability register and an input/output (I/O) memory management circuitry , the I/O memory management circuitry to:receive, from an I/O device, an address translation request referencing a virtual address associated with a guest virtual address space of a virtual machine;translate the virtual address to a guest physical address associated with a guest physical address space of the virtual machine; andresponsive to determining that a value stored by the extended capability register indicates a restrict-translation-request-response (RTRR) mode, transmit, to the I/O device, a translation response comprising the guest physical address.2. The apparatus of claim 1 , wherein the I/O memory management circuitry is further to claim 1 , responsive to receipt claim 1 , from the I/O device claim 1 , of a translated request including the guest physical address:complete translation of the guest physical address to a host physical address using a virtual machine monitor (VMM) mapping between the guest physical address and the host physical address; andtransmit the host physical address to the I/O device upon successful translation of the guest physical address to the host physical address.3. The apparatus of claim 1 , wherein the ...

10-01-2019 publication date

Storage device and method of operating the same

Number: US20190012081A1
Assignee: SK hynix Inc

Provided herein may be a storage device with improved read performance and a method of operating the same. A memory controller for controlling a memory device including a plurality of memory blocks may include a random read workload control unit configured to control a state of a random read workload such that the random read workload is in any one of a set state and a clear state depending on a random read count obtained by counting a number of random read requests that are input from an external host; and a random read processing unit configured to retrieve a physical address corresponding to a logical address of the respective random read requests depending on the state of the random read workload.
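
The set/clear workload state can be modeled as a counter-driven flag, as in the sketch below; the threshold and the reset-on-sequential rule are assumptions for the example rather than details of the claimed controller.

RANDOM_READ_THRESHOLD = 8

class RandomReadWorkload:
    def __init__(self):
        self.random_read_count = 0
        self.state = "CLEAR"

    def on_read(self, is_random: bool):
        if is_random:
            self.random_read_count += 1
        else:
            self.random_read_count = 0     # sequential access resets the count
        self.state = ("SET" if self.random_read_count >= RANDOM_READ_THRESHOLD
                      else "CLEAR")

wl = RandomReadWorkload()
for _ in range(8):
    wl.on_read(is_random=True)
assert wl.state == "SET"
wl.on_read(is_random=False)
assert wl.state == "CLEAR"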

10-01-2019 publication date

PAGE BASED DATA PERSISTENCY

Number: US20190012084A1
Author: Schreter Ivan

A method for page based data persistence can include storing data associated with a state machine at a computing node. The data can be stored by at least allocating a first data page for storing the data. In response to the allocation of the first data page, a first page reference to the first data page can be added to a first page list in an in-memory buffer at the computing node. When the in-memory buffer reaches maximum capacity, a second data page can be allocated for storing the first page list. A second page reference to the second data page can be added to a second page list in the in-memory buffer. Related systems and articles of manufacture, including computer program products, are also provided. 1. A system , comprising:at least one data processor; and storing data associated with a state machine at a computing node, the data being stored by at least allocating a first data page for storing at least a portion of the data associated with the state machine;', 'in response to the allocation of the first data page, adding, to a first page list in an in-memory buffer at the computing node, a first page reference to the first data page; and', allocating a second data page for storing the first page list; and', 'adding, to a second page list in the in-memory buffer, a second page reference to the second data page., 'in response to the in-memory buffer reaching maximum capacity], 'at least one memory storing instructions which, when executed by the at least one data processor, cause operations comprising2. The system of claim 1 , further comprising:incrementing a reference count associated with the first data page based at least on the first data page being referenced by the first page list.3. The system of claim 2 , further comprising:in response to the first page list becoming obsolete, decrementing the reference count associated with the first data page.4. The system of claim 3 , further comprising:in response to the reference count associated with the first ...
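
The page-list spill can be condensed into a few lines: page references accumulate in an in-memory list, and when the buffer is full the list itself is written to a newly allocated page whose reference is tracked one level up; the buffer capacity and names below are illustrative.

BUFFER_CAPACITY = 3

class PageStore:
    def __init__(self):
        self.pages = {}            # page id -> payload
        self.next_id = 0
        self.page_list = []        # in-memory buffer of page references
        self.parent_list = []      # references to pages that hold spilled lists

    def allocate(self, payload) -> int:
        pid = self.next_id
        self.next_id += 1
        self.pages[pid] = payload
        return pid

    def store(self, data: bytes):
        self.page_list.append(self.allocate(data))
        if len(self.page_list) >= BUFFER_CAPACITY:
            # spill the full list into its own data page and track it one level up
            list_page = self.allocate(list(self.page_list))
            self.parent_list.append(list_page)
            self.page_list.clear()

store = PageStore()
for i in range(7):
    store.store(f"record {i}".encode())
assert len(store.parent_list) == 2 and len(store.page_list) == 1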

10-01-2019 publication date

SUPPORTING SOFT REBOOT IN MULTI-PROCESSOR SYSTEMS WITHOUT HARDWARE OR FIRMWARE CONTROL OF PROCESSOR STATE

Number: US20190012179A1

A method of initializing a secondary processor pursuant to a soft reboot of system software comprises storing code to be executed by the secondary processor in memory, building first page tables to map the code into a first address space and second page tables to identically map the code into a second address space, fetching a first instruction of the code based on a first virtual address in the first address space and the first page tables, and executing the code beginning with the first instruction to switch from the first to the second page tables. The method further comprises, fetching a next instruction of the code using a second virtual address, which is identically mapped to a corresponding machine address, turning off a memory management unit of the secondary processor, and executing a waiting loop until a predetermined location in the physical memory changes in value. 1. A method of initializing a secondary processor pursuant to a soft reboot of system software , said method comprising:storing code to be executed by the secondary processor in a region of physical memory;building first page tables to map the code into a first address space and second page tables to identically map the code into a second address space;fetching a first instruction of the code from a first location in the physical memory based on a first virtual address and active page tables, wherein the first virtual address is a virtual address in the first address space;executing the code beginning with the first instruction to switch the active page tables from the first page tables to the second page tables; andafter the active page tables have been switched from the first page tables to the second page tables, (i) fetching a next instruction of the code to be executed from a second location in the physical memory using a second virtual address, which is identically mapped to a corresponding machine address, (ii) turning off a memory management unit of the secondary processor, and (iii) ...

14-01-2021 publication date

SYSTEMS, METHODS, AND DEVICES FOR POOLED SHARED/VIRTUALIZED OR POOLED MEMORY WITH THIN PROVISIONING OF STORAGE CLASS MEMORY MODULES/CARDS AND ACCELERATORS MANAGED BY COMPOSABLE MANAGEMENT SOFTWARE

Number: US20210011755A1
Author: Shah Shreyas

Provided are systems, methods, and devices for management of storage class memory modules. Methods include receiving a request from an application running on a server, the request received at a memory controller, and maintaining a page table comprising page numbers, server numbers, storage class memory (SCM) dual-inline memory module (DIMM) numbers, and pointers mapping blocks of memory to SCM DIMMs in devices connected to the server through a network interface. The methods also include allocating memory using the request from the application, wherein whether the memory is locally allocated or remotely allocated remains transparent to the application. 1. A storage class memory (SCM) dual in-line memory module (DIMM) , comprising:a memory controller associated with the SCM DIMM, the memory controller being configured to control the flow of data between a processing unit and the SCM DIMM using a plurality of transactions including read and write transactions;a plurality of SCM persistent memory integrated circuits included on the SCM DIMM; anda network interface included on the SCM DIMM, the network interface having a unique Media Access Control address, wherein the SCM DIMM is operable to conduct data transfers over the network interface while bypassing the processing unit.2. The SCM DIMM of claim 1 , wherein the processing unit is a central processing unit (CPU).3. The SCM DIMM of claim 1 , wherein the processing unit is a graphics processing unit (GPU).4. The SCM DIMM of claim 1 , wherein the processing unit is a hardware accelerator.5. The SCM DIMM of claim 1 , wherein the processing unit is a neural processing unit (NPU).6. Server claim 1 , comprising:a central processing unit;a hardware accelerator connected to the central processing unit;a network input/output (I/O) chip connected to the central processing unit;a storage class memory (SCM) dual-inline memory module (DIMM) connected to the central processing unit through the central processing unit interface, ...

14-01-2021 publication date

VMID AS A GPU TASK CONTAINER FOR VIRTUALIZATION

Number: US20210011760A1

Systems, apparatuses, and methods for abstracting tasks in virtual memory identifier (VMID) containers are disclosed. A processor coupled to a memory executes a plurality of concurrent tasks including a first task. Responsive to detecting one or more instructions of the first task which correspond to a first operation, the processor retrieves a first identifier (ID) which is used to uniquely identify the first task, wherein the first ID is transparent to the first task. Then, the processor maps the first ID to a second ID and/or a third ID. The processor completes the first operation by using the second ID and/or the third ID to identify the first task to at least a first data structure. In one implementation, the first operation is a memory access operation and the first data structure is a set of page tables. Also, in one implementation, the second ID identifies a first application of the first task and the third ID identifies a first operating system (OS) of the first task. 1. A system comprising:a memory storing program instructions of a plurality of tasks, wherein the plurality of tasks include a first task; execute the first task and one or more other tasks concurrently;', receive a first identifier (ID) which uniquely identifies the first task, wherein the first ID does not identify a source hierarchy of the first task;', 'map the first ID to a second ID which identifies the source hierarchy of the first task; and', 'complete the first operation by performing an access to a first data structure using the second ID to identify the first task., 'responsive to detecting one or more instructions of the first task which correspond to a first operation], 'a processor coupled to the memory, wherein the processor is configured to2. The system as recited in claim 1 , wherein the processor accesses a mapping table to map the first ID to the second ID and to a third ID.3. The system as recited in claim 2 , wherein the second ID identifies a first application and the ...
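
The indirection can be shown with a tiny mapping-table model in which the opaque container ID resolves to the application and OS IDs that key the per-task page tables; all table contents below are invented for the example.

MAPPING_TABLE = {
    # container_id -> (application_id, operating_system_id)
    7: ("app-42", "guest-os-3"),
}

PAGE_TABLES = {
    ("app-42", "guest-os-3"): {0x1000: 0x8000},   # per-(app, OS) page table
}

def translate(container_id: int, virtual_addr: int) -> int:
    app_id, os_id = MAPPING_TABLE[container_id]   # resolve the source hierarchy
    return PAGE_TABLES[(app_id, os_id)][virtual_addr]

assert translate(7, 0x1000) == 0x8000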

10-01-2019 publication date

MEMORY SYSTEM AND OPERATION METHOD THEREOF

Number: US20190012264A1

A memory system may include: a memory device having a plurality of banks, each comprising a memory cell region including a plurality of memory cells, and a page buffer unit; and a controller suitable for receiving a write address and write data from a host, and controlling a write operation of the memory device, wherein the controller comprises: a page buffer table (PBT) comprising fields to retain the same data as the page buffer units of the respective banks; and a processor suitable for comparing the write data to data stored in a field of the PBT, corresponding to the write address, and controlling the memory device to write the write data or the data stored in the page buffer unit to memory cells selected according to the write address, based on a comparison result. 1. A memory system comprising:a memory device having a plurality of banks, each comprising a memory cell region including a plurality of memory cells, and a page buffer unit; anda controller suitable for receiving a write address and write data from a host, and controlling a write operation of the memory device, a page buffer table (PBT) comprising fields to retain the same data as the page buffer units of the respective banks; and', 'a processor suitable for comparing the write data to data stored in a field of the PBT, corresponding to the write address, and controlling the memory device to write the write data or the data stored in the page buffer unit to memory cells selected according to the write address, based on a comparison result., 'wherein the controller comprises2. The memory system of claim 1 , wherein the processor comprises:a comparison module suitable for comparing the write data to the data stored in the field of the PBT, corresponding to the write address, and outputting a comparison signal; anda management module suitable for controlling the memory device to write the data stored in the page buffer unit to the selected memory cells when the comparison signal indicates that the ...
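
The compare-before-transfer idea reduces to the following sketch, where the controller mirrors each bank's page buffer and skips the data transfer when the incoming write matches the mirror; the class and method names are illustrative.

class PbtController:
    def __init__(self, banks: int):
        self.pbt = {b: None for b in range(banks)}   # mirror of each page buffer
        self.transfers = 0                           # data transfers actually issued

    def write(self, bank: int, data: bytes):
        if self.pbt[bank] == data:
            # page buffer already holds this data: program from the buffer,
            # no new data transfer to the memory device is needed
            return "programmed-from-buffer"
        self.transfers += 1
        self.pbt[bank] = data                        # keep the mirror in sync
        return "programmed-with-transfer"

ctl = PbtController(banks=2)
assert ctl.write(0, b"AAAA") == "programmed-with-transfer"
assert ctl.write(0, b"AAAA") == "programmed-from-buffer"
assert ctl.transfers == 1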

10-01-2019 publication date

APPARATUSES AND METHODS FOR A PROCESSOR ARCHITECTURE

Number: US20190012266A1

Embodiments of an invention a processor architecture are disclosed. In an embodiment, a processor includes a decoder, an execution unit, a coherent cache, and an interconnect. The decoder is to decode an instruction to zero a cache line. The execution unit is to issue a write command to initiate a cache line sized write of zeros. The coherent cache is to receive the write command, to determine whether there is a hit in the coherent cache and whether a cache coherency protocol state of the hit cache line is a modified state or an exclusive state, to configure a cache line to indicate all zeros, and to issue the write command toward the interconnect. The interconnect is to, responsive to receipt of the write command, issue a snoop to each of a plurality of other coherent caches for which it must be determined if there is a hit. 1. A processor comprising:a decoder to decode an instruction to zero a cache line;an execution unit, coupled to the decoder and responsive to the decode of the instruction, to issue a write command to initiate a cache line sized write of zeros at a memory address;a coherent cache, coupled to the execution unit, to receive the write command, to determine whether there is a hit in the coherent cache responsive to the write command, to determine whether a cache coherency protocol state of the hit cache line is a modified state or an exclusive state, to configure a cache line to indicate all zeros when the cache coherency protocol state is the modified state or the exclusive state, and to issue the write command toward an interconnect when there is a miss responsive receiving to the write command;the interconnect, responsive to receipt of the write command, to issue a snoop to each of a plurality of other coherent caches for which it must be determined if there is a hit, wherein the interconnect, or the execution unit responsive to a message from the interconnect, to cause a cache line in one of the coherent caches to be configured to indicate all ...

10-01-2019 publication date

MECHANISMS TO ENFORCE SECURITY WITH PARTIAL ACCESS CONTROL HARDWARE OFFLINE

Number: US20190012271A1

One feature pertains to an apparatus that includes a memory circuit, a system memory-management unit (SMMU), and a processing circuit. The memory circuit stores an executable program associated with a client. The SMMU enforces memory access control policies for the memory circuit, and includes a plurality of micro-translation lookaside buffers (micro-TLBs), macro-TLB, and a page walker circuit. The plurality of micro-TLBs include a first micro-TLB that enforces memory access control policies for the client. The processing circuit loads memory address translations associated with the executable program into the first micro-TLB, and initiates isolation mode for the first micro-TLB causing communications between the first micro-TLB and the macro-TLB and between the first micro-TLB and the page walker circuit to be severed. The first micro-TLB continues to enforce memory access control policies for the client while in isolation mode. 1. An apparatus comprising:a memory circuit storing an executable program associated with a client;a system memory-management unit (SMMU) adapted to enforce memory access control policies for the memory circuit, the SMMU including a plurality of micro-translation lookaside buffers (micro-TLBs), a macro-translation lookaside buffer (macro-TLB), and a page walker circuit, the plurality of micro-TLBs including a first micro-TLB that enforces memory access control policies for the client; and load memory address translations associated with the executable program into the first micro-TLB, and', 'initiate isolation mode for the first micro-TLB to cause communications between the first micro-TLB and the macro-TLB and between the first micro-TLB and the page walker circuit to be severed, the first micro-TLB to continue to enforce memory access control policies for the client while in isolation mode., 'a processing circuit communicatively coupled to the memory circuit and the SMMU, the processing circuit adapted to'}2. The apparatus of claim 1 , ...

14-01-2021 publication date

CAPTURING TIME-VARYING STORAGE OF DATA IN MEMORY DEVICE FOR DATA RECOVERY PURPOSES

Number: US20210011845A1
Author: Huang Jian

A memory device (or memory sub-system) includes one or more memory components having multiple blocks, the multiple blocks containing pages of data. A processing device is coupled to the one or more memory components. The processing device to execute firmware to: track write timestamps of the pages of data that have been marked as invalid; retain a storage state stored for each page marked as invalid, wherein invalid data of the marked pages remains accessible via the storage states; in response to a write timestamp of a page being beyond a retention time window, mark the page as expired, indicating that the page is an expired page; and reclaim the expired page for storage of new data during a garbage collection operation. 1. A memory device comprising:one or more memory components comprising a plurality of blocks, the plurality of blocks containing pages of data; and track write timestamps of the pages of data that have been marked as invalid;', 'retain a storage state stored for each page marked as invalid, wherein invalid data of the marked pages remain accessible via the storage states;', 'in response to a write timestamp of a page being beyond a retention time window, mark the page as expired, indicating that the page is an expired page; and', 'reclaim the expired page for storage of new data during a garbage collection operation., 'a processing device coupled to the one or more memory components, the processing device to execute firmware to2. The memory device of claim 1 , wherein the processing device is further to execute the firmware to store metadata with each page claim 1 , identified by a physical page address (PPA) claim 1 , wherein the metadata comprises:a logical page address (LPA) mapped to the PPA;a back-pointer comprising a previous PPA to which the LPA was previously mapped, wherein the back-pointer is useable to construct a reverse mapping chain between different data versions for an identical LPA; andthe write timestamp of the PPA, which is to ...
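
The retention mechanism can be sketched as keeping each invalidated page's data and timestamp until the timestamp leaves the retention window, after which the page is marked expired and becomes eligible for garbage collection; the window length and field names are assumptions for the example.

RETENTION_WINDOW = 100   # same time units as the write timestamps

class RetentivePageStore:
    def __init__(self):
        self.pages = {}   # ppa -> {"data", "invalid_at", "expired"}

    def invalidate(self, ppa: int, data: bytes, now: int):
        self.pages[ppa] = {"data": data, "invalid_at": now, "expired": False}

    def scan(self, now: int):
        for page in self.pages.values():
            if now - page["invalid_at"] > RETENTION_WINDOW:
                page["expired"] = True             # no longer recoverable

    def garbage_collect(self) -> list[int]:
        victims = [ppa for ppa, p in self.pages.items() if p["expired"]]
        for ppa in victims:
            del self.pages[ppa]                     # reclaim only expired pages
        return victims

store = RetentivePageStore()
store.invalidate(1, b"old version", now=0)
store.invalidate(2, b"newer version", now=90)
store.scan(now=150)
assert store.garbage_collect() == [1]               # page 2 is still recoverable
assert 2 in store.pages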

14-01-2021 publication date

ZERO COPY METHOD THAT CAN SPAN MULTIPLE ADDRESS SPACES FOR DATA PATH APPLICATIONS

Number: US20210011855A1

A system and method for transferring data between a user space buffer in the address space of a user space process running on a virtual machine and a storage system are described The user space buffer is represented as a file with a file descriptor in the method, a file system proxy receives a request for I/O read or write from the user space process without copying data to be transferred. The file system proxy then sends the request to a file system server without copying data to be transferred. The file system server then requests that the storage system perform the requested I/O directly between the storage system and the user space buffer, the only transfer of data being between the storage system and the user space buffer. 1. A method for transferring data between a storage system and a user space buffer of a user space process running in a virtual machine having an address space including a guest virtual address space for user space processes and a kernel space for a guest operating system , the method comprising:receiving an I/O read or write request by a file system server backed by the storage system, the file system server managing a file system having a set of files;receiving a set of guest physical page addresses representing the user space buffer, the set of guest physical page addresses being derived from a guest virtual address of the user space buffer, wherein the address space of the virtual machine is identified as a file in the file system by a file descriptor, and the file descriptor is obtained from a helper process of the virtual machine; andrequesting that the storage system send data to or receive data from the user space buffer, wherein the storage system transfers the data to or from the user space buffer in response to the I/O read or write request.2. The method of claim 1 , wherein the I/O request received by the file system server is obtained from a proxy process that interacts with the file system server claim 1 , the proxy process ...

14-01-2021 publication date

Method and Apparatus for Enhancing Isolation of User Space from Kernel Space

Number: US20210011856A1

A method and an apparatus for enhancing isolation of user space from kernel space, to divide an extended page table into a kernel-mode extended page table and a user-mode extended page table, such that user-mode code cannot access some or all content in the kernel space, and/or kernel-mode code cannot access some content in the user space, thereby enhancing isolation of the user space from the kernel space and preventing content leakage of the kernel space. 1. A method for enhancing isolation of user space from kernel space in a virtualized system comprising a virtual machine and a virtual machine monitor , the method comprising:creating, by the virtual machine monitor, extended page tables comprising at least a user-mode extended page table and a kernel-mode extended page table, wherein the user-mode extended page table is to be called when the virtual machine executes user-mode code by a processor running the virtual machine, and wherein the kernel-mode extended page table is to be called when the virtual machine executes kernel-mode code by the processor running the virtual machine; andperforming, by the virtual machine monitor, mapping processing on the user-mode extended page table or the kernel-mode extended page table,wherein performing the mapping processing on the user-mode extended page table comprises mapping first page-table pages in a guest page table that are for translating a kernel-mode guest virtual address to an invalid page-table page using the user-mode extended page table, andwherein performing the mapping processing on the kernel-mode extended page table comprises mapping second page-table pages in the guest page table that are for translating a user-mode guest virtual address to the invalid page-table page using the kernel-mode extended page table.2. The method according to claim 1 , wherein the invalid page-table page is a host physical page whose content is all Os.3. The method according to claim 1 , wherein before creating the extended page ...

10-01-2019 publication date

Shared filesystem for distributed data storage system

Number: US20190012329A1
Author: Ivan Schreter
Assignee: SAP SE

A method for controlling access to a shared filesystem stored in a distributed data storage system is provided. The method can include storing a file comprising a shared filesystem as an inode object and a series of data block objects comprising the shared filesystem. Responding to a request from a client to open the file can include generating, in the shared filesystem, a client object, an open file object, and a client index object. The client object can be linked to the open file object and the client index object. The open file object and the client index object can be further linked to the inode object to indicate the file being accessed by the client. Related systems and articles of manufacture, including computer program products, are also provided.
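
The open-file bookkeeping can be illustrated with a minimal object graph: opening a file creates client, open-file, and client-index objects that link back to the inode, so the store can tell which clients currently hold the file open; the classes and fields below are invented for the example.

import itertools

_ids = itertools.count(1)

class Obj:
    """Generic stored object with a kind and a set of attributes/links."""
    def __init__(self, kind, **attrs):
        self.id = next(_ids)
        self.kind = kind
        self.attrs = attrs

def open_file(store: dict, inode_id: int, client_name: str) -> int:
    # create the client, open-file, and client-index objects and link them
    # back to the inode so the store knows the file is being accessed
    client = Obj("client", name=client_name)
    opened = Obj("open_file", inode=inode_id, client=client.id)
    index = Obj("client_index", inode=inode_id, client=client.id)
    for o in (client, opened, index):
        store[o.id] = o
    return opened.id

store = {}
inode = Obj("inode", blocks=[101, 102])      # file data lives in block objects
store[inode.id] = inode
handle = open_file(store, inode_id=inode.id, client_name="node-A")
assert store[handle].attrs["inode"] == inode.id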

10-01-2019 publication date

Logging changes to data stored in distributed data storage system

Number: US20190012357A1
Author: Ivan Schreter
Assignee: SAP SE

A method for logging changes to data stored in a distributed data storage system can include responding to a request to change the data stored in the distributed data storage system by generating a log entry corresponding to the change. A replica of the data can be stored at each of a first computing node and a second computing node comprising the distributed data storage system. The log entry can be added to a first log stored at the first computing node and propagated to the second computing node to add the first log entry to a second log stored at the second computing node. A crash recovery can be performed at the first computing node and/or the second computing node based on the first log and/or the second log. Related systems and articles of manufacture, including computer program products, are also provided.
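
The log-then-replicate flow and replay-based recovery can be compressed into the following sketch; the node and log classes are illustrative and not the patent's data structures.

class Node:
    def __init__(self, name):
        self.name = name
        self.log = []        # ordered log entries
        self.data = {}       # key/value replica rebuilt from the log

    def apply(self, entry):
        key, value = entry
        self.data[key] = value

    def append_and_apply(self, entry):
        self.log.append(entry)
        self.apply(entry)

    def recover(self):
        # crash recovery: rebuild the replica by replaying the local log
        self.data = {}
        for entry in self.log:
            self.apply(entry)

def replicate(entry, primary: "Node", secondary: "Node"):
    primary.append_and_apply(entry)      # log first on the node taking the write
    secondary.append_and_apply(entry)    # then propagate the entry to the replica

n1, n2 = Node("first"), Node("second")
replicate(("k", "v1"), n1, n2)
replicate(("k", "v2"), n1, n2)
n2.data.clear()                          # simulate losing in-memory state
n2.recover()
assert n2.data == n1.data == {"k": "v2"}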

14-01-2021 publication date

TRUST ZONE-BASED OPERATING SYSTEM AND METHOD

Number: US20210011996A1

A trust zone-based operating system including a secure world subsystem that runs a trusted execution environment TEE, a TEE monitoring area, and a security switching apparatus is provided. When receiving a sensitive operation request sent by a trusted application TA in the TEE, the TEE writes a sensitive instruction identifier and an operation parameter of the sensitive operation request into a general-purpose register, and sends a switching request to the security switching apparatus. The security switching apparatus receives the switching request, and switches a running environment of the secure world subsystem from the TEE to the TEE monitoring area. The TEE monitoring area stores a sensitive instruction in the operating system. After the running environment is switched, the corresponding first sensitive instruction is called based on the first sensitive instruction identifier, and a corresponding first sensitive operation is performed by using the first sensitive instruction and the first operation parameter. 1. A trust zone-based operating system applied to a terminal device , comprising:a secure world subsystem; 'after a first sensitive operation request sent by a trusted application TA in the TEE is received, store, in a general-purpose register, a first sensitive instruction identifier corresponding to the first sensitive operation request and a first operation parameter of the first sensitive operation request, and send a first switching request carrying a first switching identifier to the security switching apparatus, wherein the first switching identifier is used to identify that a running environment of the secure world subsystem needs to be switched from the TEE to the TEE monitoring area;', 'a trusted execution environment (TEE) configured to 'store a sensitive instruction in the operating system; after the running environment of the secure module subsystem is switched from the TEE to the TEE monitoring area, read the first sensitive instruction ...
