Total found: 8674. Displayed: 199.
Publication date: 23-10-2017

Two-phase command buffers for overlapping IOMMU mapping and second-tier memory reads

Number: DE202017103915U1
Author:
Assignee: GOOGLE INC

A computer program product comprising program code that, when executed by one or more processors, performs the following: copying data from a given address of a second-tier memory into an internal buffer of a memory controller in a first phase using one or more processors, wherein the copying takes place at least partially while an operating system maps a specified physical address into an input/output memory management unit (IOMMU); determining, with the one or more processors, whether a second phase is triggered; and, if the second phase is triggered, copying the data from the internal buffer of the memory controller to the specified physical address of the dynamic random-access memory (DRAM) with the one or more processors.

Publication date: 27-01-2022

Network interface for data transport in heterogeneous computing environments

Number: DE112020001833T5
Authors: Marolia, Sankaran, Raj, Jani, Sarangam, Sharp
Assignee: INTEL CORPORATION

A network interface controller can be programmed to direct received write data to a memory buffer via either a host-to-device fabric or an accelerator fabric. For received packets that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether a secondary head is to be used. If a translated address is available and a secondary head is to be used, a direct memory access engine is used to copy a portion of the received packet over the accelerator fabric into a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and into a destination memory can be avoided, and utilization of the host-to-device fabric ...

Publication date: 09-10-2019

Managing lowest point of coherency (LPC) memory using service layer adapter

Number: GB0002572287A8
Assignee:

Managing lowest point of coherency (LPC) memory using a service layer adapter, the adapter coupled to a processor and an accelerator on a host computing system, the processor configured for symmetric multi-processing, including receiving, by the adapter, a memory access instruction from the accelerator; retrieving, by the adapter, a real address for the memory access instruction; determining, using base address registers on the adapter, that the real address targets the LPC memory, wherein the base address registers direct memory access requests between the LPC memory and other memory locations on the host computing system; and sending, by the adapter, the memory access instruction and the real address to a media controller for the LPC memory, wherein the media controller for the LPC memory is attached to the adapter via a memory interface.

Publication date: 16-03-2005

Data processing apparatus and method for controlling access to memory

Number: GB0000502381D0
Author:
Assignee:

Publication date: 06-05-2020

Efficient testing of direct memory address translation

Number: GB0002578412A
Assignee:

A circuit and method provide efficient stress testing of address translations in an integrated circuit such as a link processing unit. A random DMA mode (RDM) circuit provides a random input to index into a translation validation table (TVT) that is used to generate the real memory address. The RDM circuit allows testing all entries of the TVT, and thus all DMA modes, regardless of what bus agents are connected to the link processing unit. The RDM circuit may use a multiplexer to select between a runtime input and a random test input provided by the random bit generator. When the link processing unit is in a test mode, a mode selection bit is asserted to select the random test input.
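The multiplexer-based selection described above can be modeled in a few lines of Python. This is only a sketch: the table size, the function names, and the use of a software random generator in place of the hardware random bit generator are all illustrative assumptions, not details from the abstract.

```python
import random

TVT_SIZE = 16  # hypothetical number of translation validation table entries

def tvt_index(runtime_input: int, test_mode: bool, rng=random) -> int:
    """Model of the RDM circuit's multiplexer: in test mode the TVT index
    comes from the random bit generator; otherwise from the runtime input."""
    if test_mode:
        return rng.randrange(TVT_SIZE)  # random test input path
    return runtime_input % TVT_SIZE     # runtime input path

def stress_test_covers_all_entries(trials: int = 10_000) -> bool:
    """With enough random trials, every TVT entry (hence every DMA mode)
    gets exercised regardless of which bus agents are attached."""
    seen = {tvt_index(0, test_mode=True) for _ in range(trials)}
    return seen == set(range(TVT_SIZE))
```

The mode selection bit of the abstract corresponds to the `test_mode` flag steering the mux.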

Publication date: 15-09-2009

CACHE WITH DMA (DIRECT MEMORY ACCESS) AND DIRTY BITS

Number: AT0000441893T
Assignee:

Publication date: 15-02-2011

TLB OPERATIONS BASED ON A COMMON BIT

Number: AT0000497211T
Assignee:

Publication date: 15-03-2012

SOFTWARE CONTROLLED CACHE CONFIGURATION

Number: AT0000548695T
Assignee:

Publication date: 13-03-2018

TRANSLATION OF INPUT/OUTPUT ADDRESSES TO MEMORY ADDRESSES

Number: CA0002800636C

An address provided in a request issued by an adapter is converted to an address directly usable in accessing system memory. The address includes a plurality of bits, in which the plurality of bits includes a first portion of bits and a second portion of bits. The second portion of bits is used to index into one or more levels of address translation tables to perform the conversion, while the first portion of bits is ignored for the conversion. The first portion of bits is used to validate the address.
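The bit-splitting scheme above can be illustrated with a small Python sketch. The bit widths, the single-level lookup table, and the names `split_address`/`translate` are assumptions for illustration; the patent describes one or more table levels.

```python
def split_address(addr: int, index_bits: int = 20):
    """Split an I/O address into the two portions described above: the low
    `index_bits` index the translation tables; the remaining high bits are
    not used for conversion, only for validation. Widths are illustrative."""
    index_part = addr & ((1 << index_bits) - 1)
    validate_part = addr >> index_bits
    return validate_part, index_part

def translate(addr: int, expected_tag: int, table: dict) -> int:
    """Validate the high portion, then convert using only the low portion."""
    tag, idx = split_address(addr)
    if tag != expected_tag:
        raise ValueError("address validation failed")
    return table[idx]  # stands in for walking the translation table levels
```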

Publication date: 28-09-2018

Read cache memory

Number: KR0101891428B1
Assignee: Micron Technology, Inc.

... The present disclosure includes methods and apparatuses for a read cache memory. One apparatus includes a read cache memory device comprising a first DRAM array, first and second NAND arrays, and a controller configured to manage movement of data between the DRAM array and the first NAND array, and between the first NAND array and the second NAND array.

Publication date: 07-04-2017

Routing direct memory access requests in a virtualized computing environment

Number: KR1020170038873A
Assignee:

... A device may receive a direct memory access request identifying a virtual address. The device may determine whether the virtual address is within a particular range of virtual addresses. The device may selectively perform a first operation or a second operation based on the determination of whether the virtual address is included in the particular range of virtual addresses. The first operation may include causing a first address translation algorithm to be performed to translate the virtual address to a physical address associated with a memory device when the virtual address is not within the particular range of virtual addresses. The second operation may include causing a second address translation algorithm to be performed to translate the virtual address to a physical address when the virtual address is within the particular range of virtual addresses. The second address translation algorithm may be different from the first address translation algorithm.

Publication date: 01-01-2018

Data transfer device and data transfer method

Number: TW0201800950A
Assignee:

When a write request signal including an address that belongs to a DMA area and requesting a write is received from a CPU (50), a slave interface (10) subtracts the header address of the DMA area from said address to calculate an offset value of said address to the header address and registers the offset value as a record along with a record number in a table (43). A master interface (20) adds, to the offset value, the header address of an IO device buffer area which is the storage area of an IO buffer (71) read from and written to by the CPU (50) and converts the offset value to an address that belongs to the IO device buffer area, reads out data at the converted address, and writes the read data to a dual-port memory (30) to be associated with an identifier that is associated with the record number.
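The offset arithmetic in this abstract is simple enough to sketch directly. The base addresses, the dict-based table, and the function names below are illustrative assumptions; the real device keeps the table in hardware and pairs each record with a record number.

```python
DMA_BASE = 0x8000_0000     # hypothetical head address of the DMA area
IO_BUF_BASE = 0x0004_0000  # hypothetical head address of the IO device buffer area

table = {}  # record number -> offset, as registered by the slave interface

def slave_register(record_no: int, cpu_addr: int) -> int:
    """Slave side: subtract the head address of the DMA area from the CPU's
    write address to get the offset, and register it under the record number."""
    offset = cpu_addr - DMA_BASE
    table[record_no] = offset
    return offset

def master_resolve(record_no: int) -> int:
    """Master side: add the head address of the IO device buffer area to the
    registered offset, yielding the address in the IO buffer to read from."""
    return IO_BUF_BASE + table[record_no]
```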

Publication date: 17-01-2019

MEMORY SYSTEM FOR A DATA PROCESSING NETWORK

Number: WO2019012290A1
Assignee:

A data processing network includes a network of devices addressable via a system address space, the network including a computing device configured to execute an application in a virtual address space. A virtual-to-system address translation circuit is configured to translate a virtual address to a system address. A memory node controller has a first interface to a data resource addressable via a physical address space, a second interface to the computing device, and a system-to-physical address translation circuit, configured to translate a system address in the system address space to a corresponding physical address in the physical address space of the data resource. The virtual-to-system mapping may be a range table buffer configured to retrieve a range table entry comprising an offset address of a range together with a virtual address base and an indicator of the extent of the range.
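The two translation stages described above can be modeled as two small functions: a range-table lookup from virtual to system address, then a page-granular map from system to physical address. Field names, the page size, and the dict-based page map are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class RangeEntry:
    """A range table entry: virtual address base, extent of the range, and
    offset of the range in the system address space (names illustrative)."""
    va_base: int
    extent: int
    sys_offset: int

def virtual_to_system(va: int, ranges: list) -> int:
    """First stage: the range table buffer maps a virtual address falling
    inside a range to a system address."""
    for r in ranges:
        if r.va_base <= va < r.va_base + r.extent:
            return r.sys_offset + (va - r.va_base)
    raise KeyError("no range covers this virtual address")

def system_to_physical(sa: int, page_map: dict, page: int = 4096) -> int:
    """Second stage: the memory node controller maps a system address to a
    physical address in the data resource."""
    return page_map[sa // page] * page + sa % page
```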

Publication date: 23-05-2017

Scalable data using RDMA and MMIO

Number: US0009658782B2

To improve upon some of the characteristics of current storage systems in general and block data storage systems in particular, exemplary embodiments combine state-of-the-art networking techniques with state-of-the-art data storage elements in a novel way. To accomplish this combination in a highly effective way, it is proposed to combine networking remote direct memory access (RDMA) technique and storage-oriented memory mapped input output (MMIO) technique in a system to provide direct access from a remote storage client to a remote storage system with little to no central processing unit (CPU) intervention of the remote storage server. In some embodiments, this technique may reduce the required CPU intervention on the client side. These reductions of CPU intervention potentially reduce latency while providing performance improvements, and/or providing more data transfer bandwidth and/or throughput and/or more operations per second compared to other systems with equivalent hardware.

Publication date: 06-10-2020

Safe userspace device access for network function virtualization using an IOMMU to map supervisor memory to a reserved range of application virtual addresses

Number: US0010795591B2
Assignee: Red Hat, Inc.

A device access system includes a memory having a supervisor memory, a processor, an input output memory management unit (IOMMU), and a supervisor. The supervisor includes a supervisor driver, which executes on the processor to allocate the supervisor memory and reserve a range of application virtual addresses. The supervisor driver programs the IOMMU to map the supervisor memory to the reserved range. A device is granted access to the reserved range, which is protected in host page table entries such that an application cannot modify data within the range. The supervisor driver configures the device to use the supervisor memory and receive a request including a virtual address and length from the application to use the device. The supervisor driver validates the request by verifying that the virtual address and length do not overlap the range reserved by the supervisor, and responsive to validating the request, submits the request to the device.
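The validation step at the end of this abstract is an interval-overlap check. A minimal Python sketch, with hypothetical range constants and function names standing in for the supervisor driver's logic:

```python
RESERVED_BASE = 0x7000_0000  # hypothetical start of the reserved VA range
RESERVED_LEN = 0x0010_0000   # hypothetical length of the reserved range

def request_is_valid(va: int, length: int) -> bool:
    """The supervisor driver's check: the request [va, va+length) must not
    overlap the reserved range [RESERVED_BASE, RESERVED_BASE+RESERVED_LEN)."""
    return va + length <= RESERVED_BASE or va >= RESERVED_BASE + RESERVED_LEN

def submit(va: int, length: int) -> str:
    """Validate, then (in this sketch) pretend to hand the request to the device."""
    if not request_is_valid(va, length):
        raise PermissionError("request overlaps the supervisor's reserved range")
    return "submitted"
```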

Publication date: 19-11-2019

Multi-queue device assignment for application groups

Number: US0010481951B2
Assignee: Red Hat Israel, Ltd.

A system and method of device assignment includes receiving, by a supervisor, an assignment request to assign a device to a first application and a second application. The first application is associated with a first memory and has a first address. The second application is associated with a second memory and has a second address. The supervisor selects a first bus address offset and a second bus address offset, which is different from the first bus address offset. The supervisor sends, to the first application, the first bus address offset. The supervisor sends, to the second application, the second bus address offset. The supervisor updates a mapping to the first address to include the first bus address offset and updates a mapping to the second address to include the second bus address offset. The device is assigned to the first application and the second application.

Publication date: 04-02-2020

Multi-dimensional computer storage system

Number: US0010552050B1
Assignee: BiTMICRO LLC

In an embodiment of the invention, an apparatus comprises: a multi-dimensional memory that is expandable in a first direction; wherein the multi-dimensional memory comprises a serial chain; wherein the serial chain comprises a first serial chain that is expandable in a first direction; and wherein the first serial chain comprises a first memory controller, a first memory module coupled to the first memory controller, a second memory controller coupled to the first memory controller, and a second memory module coupled to the second memory controller. In another embodiment of the invention, a method comprises: providing a multi-dimensional memory that is expandable in a first direction; wherein the multi-dimensional memory comprises a serial chain; wherein the serial chain comprises a first serial chain that is expandable in a first direction; and wherein the first serial chain comprises a first memory controller, a first memory module coupled to the first memory controller, a second memory ...

Publication date: 25-07-2019

Two-Stage Command Buffers to Overlap IOMMU Map and Second-Tier Memory Reads

Number: US20190227729A1
Assignee:

IOMMU map-in may be overlapped with second tier memory access, such that the two operations are at least partially performed at the same time. For example, when a second tier memory read into a storage device controller internal buffer is initiated, an IOMMU mapping may be built simultaneously. To achieve this overlap, a two-stage command buffer is used. In a first stage, content is read from a second tier memory address into the storage device controller internal buffer. In a second stage, the internal buffer is written into the DRAM physical address.
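The two-stage flow above can be sketched in Python. Note this sketch serializes steps that overlap in hardware, and the dict/list structures stand in for the IOMMU, the controller's internal buffer, and DRAM; all names are illustrative.

```python
def overlapped_read(second_tier, src_addr, length, iommu, spec_pa, dram):
    """Model of the two-stage command buffer: stage one reads second-tier
    memory into the controller's internal buffer (in hardware this overlaps
    the IOMMU map-in); stage two fires once the mapping exists and writes
    the buffer to the specified DRAM physical address."""
    # Stage 1: copy into the internal buffer (overlaps with IOMMU map-in).
    internal_buffer = second_tier[src_addr:src_addr + length]
    iommu[spec_pa] = True  # the OS finishes mapping the physical address

    # Stage 2: triggered once the mapping is in place.
    if iommu.get(spec_pa):
        dram[spec_pa:spec_pa + length] = internal_buffer
    return dram
```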

Publication date: 12-05-2020

Scratchpad-based operating system for multi-core embedded systems

Number: US0010649914B1

An embodiment may involve determining that a first logical partition of a scratchpad memory coupled to a processor core is empty and a first application is scheduled to execute; instructing a direct memory access (DMA) engine to load the first application into the first logical partition and then instructing the processor core to execute the first application from the first logical partition; while the first application is being executed from the first logical partition, determining that a second logical partition of the scratchpad memory is empty and a second application is scheduled to execute; instructing the DMA engine to load the second application into the second logical partition; determining that execution of the first application has completed; and instructing the DMA engine to unload the first application from the first logical partition and instructing the processor core to execute the second application from the second logical partition.
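The double-buffered scheduling loop in this abstract can be modeled as alternating between two scratchpad partitions, preloading the next application while the current one runs. The event log and function names below are illustrative, not from the patent.

```python
from collections import deque

def run_apps(apps, partitions=2):
    """Sketch of the scheduler: while one application executes from one
    scratchpad partition, the DMA engine loads the next application into the
    other, empty partition. Returns the ordered (event, app) log."""
    queue = deque(apps)
    slots = [None] * partitions  # contents of each logical partition
    log = []
    current = 0
    if queue:  # preload the first application
        slots[current] = queue.popleft()
        log.append(("dma_load", slots[current]))
    while slots[current] is not None:
        nxt = 1 - current
        # Overlap: load the next app while the current one executes.
        if slots[nxt] is None and queue:
            slots[nxt] = queue.popleft()
            log.append(("dma_load", slots[nxt]))
        log.append(("execute", slots[current]))
        log.append(("dma_unload", slots[current]))
        slots[current] = None
        current = nxt
    return log
```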

Publication date: 29-04-2014

Image forming apparatus and method of translating virtual memory address into physical memory address

Number: US0008711418B2

An image forming apparatus includes a function unit to perform functions of the image forming apparatus, and a control unit to control the function unit to perform the functions of the image forming apparatus. The control unit includes a processor core to operate in a virtual memory address, a main memory to operate in a physical memory address and store data used in the functions of the image forming apparatus, and a plurality of input/output (I/O) logics to operate in the virtual memory address and control at least one of the functions performed by the image forming apparatus. Each of the plurality of I/O logics translates the virtual memory address into the physical memory address corresponding to the virtual memory address and accesses the main memory.

Publication date: 17-03-2020

Memory access optimization for an I/O adapter in a processor complex

Number: US0010592451B2

An aspect includes memory access optimization for an I/O adapter in a processor complex. A memory block distance between the I/O adapter and a memory block location in the processor complex is determined, and one or more memory movement type criteria between the I/O adapter and the memory block location are determined based on the memory block distance. A memory movement operation type is selected based on a memory movement process parameter and the one or more memory movement type criteria. A memory movement process is initiated between the I/O adapter and the memory block location using the memory movement operation type.

Publication date: 07-08-2018

Hardware-based translation lookaside buffer (TLB) invalidation

Number: US0010042777B2
Assignee: QUALCOMM Incorporated

Hardware-based translation lookaside buffer (TLB) invalidation techniques are disclosed. A host system is configured to exchange data with a peripheral component interconnect express (PCIe) endpoint (EP). A memory management unit (MMU), which is a hardware element, is included in the host system to provide address translation according to at least one TLB. In one aspect, the MMU is configured to invalidate the at least one TLB in response to receiving at least one TLB invalidation command from the PCIe EP. In another aspect, the PCIe EP is configured to determine that the at least one TLB needs to be invalidated and provide the TLB invalidation command to invalidate the at least one TLB. By implementing hardware-based TLB invalidation in the host system, it is possible to reduce TLB invalidation delay, thus leading to increased data throughput, reduced power consumption, and improved user experience.

Publication date: 11-10-2022

Parallel scheduling of encryption engines and decryption engines to prevent side channel attacks

Number: US0011470061B2

This disclosure describes systems on a chip (SoCs) that prevent side channel attacks on encryption and decryption engines of an electronic device. The SoCs of this disclosure concurrently operate key-diverse encryption and decryption datapaths to obfuscate the power trace signature exhibited by the device that includes the SoC. An example SoC includes an encryption engine configured to encrypt transmission (Tx) channel data using an encryption key and a decryption engine configured to decrypt encrypted received (Rx) channel data using a decryption key that is different from the encryption key. The SoC also includes a scheduler configured to establish concurrent data availability between the encryption and decryption engines and activate the encryption engine and the decryption engine to cause the encryption engine to encrypt the Tx channel data concurrently with the decryption engine decrypting the encrypted Rx channel data using the decryption key that is different from the encryption ...

Publication date: 27-05-2020

ASYNCHRONOUS COPYING OF DATA WITHIN MEMORY

Number: EP3657341A1
Assignee:

An example method includes during execution of a software application by a processor, receiving, by a copy processor separate from the processor, a request for an asynchronous data copy operation to copy data within a memory accessible by the copy processor, wherein the request is received from a copy manager accessible by the software application in a user space of an operating system managing execution of the software application; in response to the request, initiating, by the copy processor, the asynchronous data copy operation; continuing execution of the software application by the processor; determining, by the copy processor, that the asynchronous data copy operation has completed; and in response to determining that the asynchronous copy operation has completed, selectively notifying, by the copy processor, the software application that the asynchronous copy operation has completed.

Publication date: 02-10-2019

Method and electronic device for data processing between multiple processors

Number: DE112018000474T5

An electronic device may comprise: a first memory for storing first data at a specified rate; a first processor connected to the first memory and configured to divide the first data into a plurality of second data, each having a size smaller than the size of the first data; a second memory for storing at least a part of the plurality of second data at a rate faster than the specified rate; a second processor connected to the second memory and configured to process the at least one part of the plurality of second data; and a DMA control module, connected to the second processor, for sending/receiving data between the first memory and the second memory, wherein the DMA control module is configured to: based at least on a processing command for the plurality of second data that is sent from the first processor to the ...

Publication date: 25-09-2019

Managing lowest point of coherency (LPC) memory using service layer adapter

Number: GB0002572287A
Assignee:

Managing lowest point of coherency (LPC) memory using a service layer adapter, the adapter coupled to a processor and an accelerator on a host computing system, the processor configured for symmetric multi-processing, including receiving, by the adapter, a memory access instruction from the accelerator; retrieving, by the adapter, a real address for the memory access instruction; determining, using base address registers on the adapter, that the real address targets the LPC memory, wherein the base address registers direct memory access requests between the LPC memory and other memory locations on the host computing system; and sending, by the adapter, the memory access instruction and the real address to a media controller for the LPC memory, wherein the media controller for the LPC memory is attached to the adapter via a memory interface.

Publication date: 18-11-2009

Data packet processing at a control element

Number: GB2460014A
Assignee:

A method and apparatus for processing data packets 20 comprises creating a first data packet at a control element 30, the data of the packet indicating a source 21-23 of the data packet different from the control element and a tag value 24 indicating that the first packet was created by the control element. The data packet will preferably include a destination. In a preferred embodiment the packet is a PCI-Express data packet. The tag value may be in a specified range defined with reference to data stored at the control element or at the first source. A second packet may be received at the control element in response to the transmission of the first packet. The second packet may be forwarded to the control element if and only if the packet was generated in response to a first packet generated by the control element. Alternatively, if no indication data is present, the packet is forwarded directly to the destination. The control element may be a virtualisation proxy controller.

Publication date: 15-03-2012

ERROR MANAGEMENT AND RECOVERY BASED ON TASK IDS

Number: AT0000546778T
Assignee:

Publication date: 13-01-2017

Read cache memory

Number: KR1020170005472A
Assignee:

... The present disclosure includes methods and apparatuses for a read cache memory. One apparatus includes a read cache memory device comprising a first DRAM array, first and second NAND arrays, and a controller configured to manage movement of data between the DRAM array and the first NAND array, and between the first NAND array and the second NAND array.

Publication date: 14-05-2021

DATA ACCESS METHOD AND APPARATUS, AND STORAGE MEDIUM

Number: WO2021088587A1
Author: LIU, Yang
Assignee:

The present application relates to the technical field of data storage, and disclosed are a data access method and apparatus, and a storage medium. In the present application, according to the address of a business logic space carried in a first data write request sent by a client, a storage device may determine a target hard disk and the address of a hard disk logic space corresponding to the business logic space, and then write target data into the target hard disk according to the address of the hard disk logic space. That is, the client accesses the storage device by means of the address of the business logic space. Inside the storage device, the address of the business logic space is converted to the address of the hard disk logic space. The hard disk writes data according to the address of the hard disk logic space. Hence, only one address conversion is required, which saves overhead and also improves the efficiency of data reading and writing.

Publication date: 14-05-2021

CONFIDENTIAL COMPUTING MECHANISM

Number: WO2021091744A1
Assignee:

According to a first aspect, execution logic is configured to perform a linear capability transfer operation which transfers a physical capability from a partition of a first software module to a partition of a second software module without retaining it in the partition of the first. According to a second, alternative or additional aspect, the execution logic is configured to perform a sharding operation whereby a physical capability is divided into at least two instances, which may later be combined.

Publication date: 11-07-2019

FAST INVALIDATION IN PERIPHERAL COMPONENT INTERCONNECT (PCI) EXPRESS (PCIe) ADDRESS TRANSLATION SERVICES (ATS)

Number: WO2019135909A1
Assignee:

Fast invalidation in peripheral component interconnect (PCI) express (PCIe) address translation services (ATS) initially utilize a fast invalidation request to alert endpoints that an address is being invalidated with a fast invalidation synchronization command that causes the endpoints to flush through any residual read/write commands associated with any invalidated address and delete any associated address entries in an address translation cache (ATC). Each endpoint may send a synchronization complete acknowledgement to the host. Further, a tag having an incrementing identifier for each invalidation request may be used to determine if an endpoint has missed an invalidation request.
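The incrementing-tag mechanism at the end of this abstract is essentially a sequence-number gap check. A minimal Python sketch of the endpoint side, with illustrative class and field names (the real protocol runs over PCIe ATS messages):

```python
class Endpoint:
    """Sketch of an endpoint tracking the incrementing invalidation tag.
    A gap between the last tag seen and a newly received tag means an
    invalidation request was missed."""
    def __init__(self):
        self.last_tag = 0
        self.atc = {}  # address translation cache: untranslated -> translated

    def on_invalidation(self, tag: int, addr: int) -> bool:
        """Flush the cached entry; return True if no request was missed."""
        missed = tag != self.last_tag + 1
        self.last_tag = tag
        self.atc.pop(addr, None)  # delete the associated ATC entry
        return not missed
```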

Publication date: 12-06-2018

Mid-thread pre-emption with software assisted context switch

Number: US0009996386B2
Assignee: Intel Corporation

Methods and apparatus relating to mid-thread pre-emption with software assisted context switch are described. In an embodiment, one or more threads executing on a Graphics Processing Unit (GPU) are stopped at an instruction level granularity in response to a request to pre-empt the one or more threads. The context data of the one or more threads is copied to memory in response to completion of the one or more threads at the instruction level granularity and/or one or more instructions. Other embodiments are also disclosed and claimed.

Publication date: 11-04-2017

Offloading of computation for rack level servers and corresponding methods and systems

Number: US0009619406B2
Assignee: Xockets, Inc.

A method for handling multiple networked applications using a distributed server system is disclosed. The method can include providing at least one main processor and a plurality of offload processors connected to a memory bus; and operating a virtual switch respectively connected to the main processor and the plurality of offload processors using the memory bus, with the virtual switch receiving memory read/write data over the memory bus.

Publication date: 15-04-2021

PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS TO PROTECT SHADOW STACKS

Number: US20210109684A1
Assignee:

A processor of an aspect includes a decode unit to decode an instruction. The processor also includes an execution unit coupled with the decode unit. The execution unit, in response to the instruction, is to determine that an attempted change due to the instruction, to a shadow stack pointer of a shadow stack, would cause the shadow stack pointer to exceed an allowed range. The execution unit is also to take an exception in response to determining that the attempted change to the shadow stack pointer would cause the shadow stack pointer to exceed the allowed range. Other processors, methods, systems, and instructions are disclosed.
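The checked shadow-stack-pointer update can be modeled in a few lines. The range bounds are hypothetical, and a Python exception stands in for the processor taking an exception:

```python
SSP_MIN = 0x1000  # hypothetical lower bound of the allowed shadow stack range
SSP_MAX = 0x2000  # hypothetical upper bound

def update_ssp(ssp: int, delta: int) -> int:
    """Model of the checked update: apply the attempted change only if the
    resulting shadow stack pointer stays within the allowed range; otherwise
    raise, standing in for the execution unit taking an exception."""
    new_ssp = ssp + delta
    if not (SSP_MIN <= new_ssp <= SSP_MAX):
        raise OverflowError("shadow stack pointer out of allowed range")
    return new_ssp
```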

Publication date: 21-06-2018

MEMORY MANAGEMENT DEVICE

Number: US20180173626A1
Assignee:

Memory modules and associated devices and methods are provided using a memory copy function between a cache memory and a main memory that may be implemented in hardware. Address translation may additionally be provided.

Publication date: 08-01-2019

Row identification number generation in database direct memory access engine

Number: US0010176114B2

Techniques provide for hardware accelerated data movement between main memory and an on-chip data movement system that comprises multiple core processors that operate on the tabular data. The tabular data is moved to or from the scratch pad memories of the core processors. While the data is in-flight, the data may be manipulated by data manipulation operations. The data movement system includes multiple data movement engines, each dedicated to moving and transforming tabular data from main memory to a subset of the core processors. Each data movement engine is coupled to an internal memory that stores data (e.g. a bit vector) that dictates how data manipulation operations are performed on tabular data moved from a main memory to the memories of a core processor, or to and from other memories. The internal memory of each data movement engine is private to the data movement engine. Tabular data is efficiently copied between internal memories of the data movement system via a copy ring ...

Publication date: 10-05-2022

RAID storage-device-assisted read-modify-write system

Number: US0011327683B2
Assignee: Dell Products L.P.

A RAID storage-device-assisted RMW system includes a RAID primary data drive that retrieves second primary data via a DMA operation from a host system, and XOR's it with its first primary data to produce first interim parity data that it writes via a DMA operation to a RAID parity data drive. The RAID parity data drive XOR's its first parity data and the first interim parity data to produce second parity data that overwrites the first parity data. The RAID parity data drive also performs GF operations on the first interim parity data and its second interim parity data and XOR's the results to produce interim Q data that it writes via a DMA operation to a RAID Q data drive. The RAID Q data drive XOR's its first Q data and the interim Q data to produce second Q data that overwrites the first Q data.
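The XOR parity path of this read-modify-write flow is easy to verify in Python. The sketch below covers only the P-parity arithmetic (the Galois-field operations for the Q drive are omitted), and the function names are illustrative:

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_parity_update(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """The assisted RMW parity path: the primary data drive XORs its old
    data with the new data to produce interim parity, and the parity drive
    XORs that interim parity into its stored parity."""
    interim_parity = xor(old_data, new_data)  # on the primary data drive
    return xor(old_parity, interim_parity)    # on the parity data drive

def full_stripe_parity(*chunks: bytes) -> bytes:
    """Reference: parity of a stripe is the XOR of all its data chunks."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = xor(out, c)
    return out
```

The test below checks the key invariant: updating parity incrementally via the interim parity gives the same result as recomputing parity over the whole stripe.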

Publication date: 29-02-2024

COMPUTER DEVICE AND MEMORY REGISTRATION METHOD

Number: US20240070087A1
Assignee:

A computer device includes a processor; a memory connected to the processor, where the memory includes a first memory region; and a plurality of network adapters connected to the processor. The processor is configured to: when registering the first memory region with the plurality of network adapters, record a first memory address in a first memory address translation table (MTT) into a memory protection table (MPT) of each network adapter in the plurality of network adapters, so that each of the network adapters is capable of accessing the first memory region by using the first memory address translation table. The first memory address translation table is used to indicate a correspondence between a virtual address of the first memory region and a physical address of the first memory region.

Подробнее
27-07-2022 дата публикации

PROGRAMMABLE ENGINE FOR DATA MOVEMENT

Номер: EP4031979A1
Принадлежит:

Подробнее
04-06-2008 дата публикации

Method of processing data packets

Номер: GB0000807671D0
Автор:
Принадлежит:

Подробнее
19-08-1981 дата публикации

DATA PROCESSING APPARATUS

Номер: GB0001595740A
Автор:
Принадлежит:

Подробнее
15-03-2011 дата публикации

TLB LOCK AND UNLOCK OPERATION

Номер: AT0000500552T
Принадлежит:

Подробнее
03-08-2018 дата публикации

ADDRESS CACHING IN SWITCHES

Номер: KR1020180088525A
Принадлежит:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, are disclosed for storing an address in a memory of a switch. One of the systems includes a switch that receives packets from and delivers packets to devices connected to a bus, without any components on the bus between the switch and each of the devices; a memory integrated into the switch to store a mapping of virtual addresses to physical addresses; and a storage medium integrated into the switch storing instructions executable by the switch that cause the switch to perform operations including: receiving a response to an address translation request for a device connected to the switch by the bus, the response including a mapping of a virtual address to a physical address, and, in response to receiving the response, storing the mapping of the virtual address to the physical address in the memory.
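The switch described above snoops translation responses and caches the virtual-to-physical mapping so later device accesses can be translated locally. A minimal sketch of that cache (class and method names are illustrative assumptions):

```python
# Sketch of an address-translation cache held in a switch: store mappings
# seen in translation responses, serve later lookups from the cache.

class SwitchAddressCache:
    def __init__(self):
        self._map = {}  # virtual address -> physical address

    def store_response(self, virtual_addr: int, physical_addr: int) -> None:
        # Called when an address translation response passes through the switch.
        self._map[virtual_addr] = physical_addr

    def translate(self, virtual_addr: int):
        # Hit: translate locally without re-asking the translation agent.
        # Miss: return None, meaning the request must be forwarded upstream.
        return self._map.get(virtual_addr)

cache = SwitchAddressCache()
cache.store_response(0x1000, 0x7f_0000)
assert cache.translate(0x1000) == 0x7f_0000
assert cache.translate(0x2000) is None  # miss -> forward upstream
```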

Подробнее
02-02-2017 дата публикации

PROVIDING MEMORY BANDWIDTH COMPRESSION USING COMPRESSED MEMORY CONTROLLERS (CMCs) IN A CENTRAL PROCESSING UNIT (CPU)-BASED SYSTEM

Номер: KR1020170012233A
Принадлежит:

Providing memory bandwidth compression using compressed memory controllers (CMCs) in a central processing unit (CPU)-based system is disclosed. In this regard, in some aspects, a CMC is configured to receive a memory read request for a physical address in system memory, and to read a compression indicator (CI) for the physical address from error correcting code (ECC) bits of the physical address and/or from a master directory. Based on the CI, the CMC determines the number of memory blocks to be read for the memory read request and reads the determined number of memory blocks. In some aspects, the CMC is configured to receive a memory write request for a physical address in system memory and to generate a CI for the write data based on a compression pattern of the write data. The CMC updates the ECC bits of the physical address and/or the master directory with the generated CI.
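The CMC described above reads a compression indicator to decide how many memory blocks actually hold the line, saving bandwidth on compressed lines. A minimal sketch of that decision, with an assumed 2-bit CI encoding (the actual encoding is not specified here):

```python
# Sketch: a compression indicator (CI) maps to the number of memory blocks
# the controller must read. The 2-bit encoding below is an assumption.

CI_TO_BLOCKS = {0b00: 1, 0b01: 2, 0b10: 3, 0b11: 4}

def blocks_to_read(ci_bits: int) -> int:
    # The CI would come from the ECC bits or the master directory.
    return CI_TO_BLOCKS[ci_bits & 0b11]

assert blocks_to_read(0b00) == 1   # best case: one block instead of four
assert blocks_to_read(0b11) == 4   # uncompressed line: full read
```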

Подробнее
09-08-2018 дата публикации

REDUCING OR AVOIDING BUFFERING OF EVICTED CACHE DATA FROM AN UNCOMPRESSED CACHE MEMORY IN A COMPRESSION MEMORY SYSTEM WHEN STALLED WRITE OPERATIONS OCCUR

Номер: WO2018144184A1
Принадлежит:

Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling write operations is not necessary.

Подробнее
01-11-2018 дата публикации

DATA STREAM ASSEMBLY CONTROL

Номер: US20180314438A1
Принадлежит: ARM IP LIMITED

Technology for operating a data-source device for assembling a data stream compliant with a data stream constraint. The technology comprises acquiring a plurality of data items by accessing data in a memory and/or transforming data. Prior to completion of the accessing of data in a memory, an accessor is selected based on an estimate of an access constraint. Prior to completion of the transforming of data, a transformer is selected based on an estimate of a transformation constraint, wherein the transformation constraint comprises any data acquisition constraint. The access and transformation constraints are dependent upon system state in the data-source system. The data items are positioned in the data stream, and, responsive to achieving compliance with the data stream constraint, the data stream is communicated.

Подробнее
21-04-2020 дата публикации

Hypervisor direct memory access

Номер: US0010628202B2

This disclosure generally relates to hypervisor memory virtualization. Techniques disclosed herein improve peripheral component interconnect express (PCI-e) device interoperability with a virtual machine. As an example, when a direct-memory access request is received from a PCI-e device but the target memory is currently unmapped, an indication may be provided to a memory paging processor so as to page-in the memory, such that the PCI-e device may continue to function normally. In some examples, the access request may be buffered and replayed once the memory is paged-in, or the access request may be retried, among other examples.

Подробнее
28-04-2020 дата публикации

Systems and methods for virtio based optimization of data packet paths between a virtual machine and a network device for live virtual machine migration

Номер: US0010635474B2

A new approach is proposed that contemplates systems and methods to support virtio-based data packet path optimization for live virtual machine (VM) migration for Linux. Specifically, a data packet receiving (Rx) path and a data packet transmitting (Tx) path between a VM running on a host and a virtual function (VF) driver configured to interact with a physical network device of the host to receive and transmit communications dedicated to the VM are both optimized to implement a zero-copy solution to reduce overheads in packet processing. Under the proposed approach, the data packet Tx path utilizes a zero-copy mechanism provided by Linux kernel to avoid copying from virtio memory rings/Tx vrings in memory of the VM. The data packet Rx path also implements a zero-copy solution, which allows a virtio device of the VM to communicate directly with the VF driver of the network device while bypassing a macvtap driver entirely from the data packet Rx path.

Подробнее
04-11-2021 дата публикации

POINTER-BASED DYNAMIC DATA STRUCTURES IN KEY-VALUE STORES

Номер: US20210342293A1
Принадлежит:

A computer-implemented method includes receiving data structures in memory space and creating micro-heaps on a per-data structure basis. Each data structure is associated with a micro-heap allocator. The method also includes storing the data structures in a key-value store. Values of the key-value store are associated with the data structures. A computer program product includes one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media. The program instructions include program instructions to perform the foregoing method. A system includes a processor and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor. The logic is configured to perform the foregoing method.

Подробнее
04-02-2021 дата публикации

CONCURRENT READING AND WRITING WITH CRASH RECOVERY IN PERSISTENT MEMORY

Номер: US20210034281A1
Принадлежит:

Systems and methods for concurrent reading and writing in shared, persistent byte-addressable non-volatile memory is described herein. One method includes in response to initiating a write sequence to one or more memory elements, checking an identifier memory element to determine whether a write sequence is in progress. In addition, the method includes updating an ingress counter. The method also includes adding process identification associated with a writer node to the identifier memory element. Next, a write operation is performed. After the write operation, an egress counter is incremented and the identifier memory element is reset to an expected value.
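The ingress/egress-counter protocol above resembles a seqlock adapted for crash recovery: mismatched counters, or a writer ID left in the identifier element, reveal an interrupted write. A minimal single-process sketch (field and method names are illustrative; real persistent-memory code would also need fences and flushes):

```python
# Sketch of the ingress/egress-counter write protocol described above.

class PersistentCell:
    NO_WRITER = 0  # expected value of the identifier element when idle

    def __init__(self, value=None):
        self.ingress = 0
        self.egress = 0
        self.writer_id = self.NO_WRITER
        self.value = value

    def write(self, writer_pid: int, value) -> None:
        self.ingress += 1                 # announce the write sequence
        self.writer_id = writer_pid       # record which process is writing
        self.value = value                # the write operation itself
        self.egress += 1                  # mark completion
        self.writer_id = self.NO_WRITER   # reset identifier to expected value

    def write_in_progress(self) -> bool:
        # Unequal counters or a stale PID indicate an unfinished write,
        # e.g. one interrupted by a crash, so recovery is needed.
        return self.ingress != self.egress or self.writer_id != self.NO_WRITER

cell = PersistentCell()
cell.write(writer_pid=1234, value=42)
assert cell.value == 42 and not cell.write_in_progress()
```

A reader can use the same check: sample the counters before and after reading, and retry if they differ or a write is in progress.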

Подробнее
08-05-2018 дата публикации

Use of interrupt memory for communication via PCIe communication fabric

Номер: US9965417B1
Принадлежит: XILINX INC, Xilinx, Inc.

Techniques for communication with a host system via a peripheral component interconnect express (PCIe) communication fabric are disclosed herein. A peripheral device having its own memory address space executes a boot ROM to initialize a PCIe-to-internal-memory address space bridge and to disable MSIx interrupts. The peripheral device monitors a specific location in memory dedicated to MSIx interrupts for a particular value that indicates that PCIe device enumeration is complete. At this point, the peripheral device knows that its PCIe base address registers have been set by the host, and sets address translation registers for translating addresses in the address space of the host to the address space of the peripheral device.

Подробнее
28-06-2022 дата публикации

Digital signal processing data transfer

Номер: US0011372546B2
Принадлежит: Nordic Semiconductor ASA

A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.

Подробнее
31-03-2021 дата публикации

SECURE ADDRESS TRANSLATION SERVICES USING MESSAGE AUTHENTICATION CODES AND INVALIDATION TRACKING

Номер: EP3798856A1
Принадлежит:

Embodiments are directed to providing a secure address translation service. An embodiment of a system includes a memory for storage of data, an Input/Output Memory Management Unit (IOMMU) coupled to the memory via a host-to-device link the IOMMU to perform operations, comprising receiving a memory access request from a remote device via a host-to-device link, wherein the memory access request comprises a host physical address (HPA) that identifies a physical address within the memory pertaining to the memory access request and a first message authentication code (MAC), generating a second message authentication code (MAC) using the host physical address received with the memory access request and a private key associated with the remote device, and performing at least one of allowing the memory access to proceed when the first MAC and the second MAC match and the HPA is not in an invalidation tracking table (ITT) maintained by the IOMMU; or blocking the memory operation when the first MAC ...
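The access check above has two gates: the MAC over the host physical address must verify under the device's private key, and the HPA must not appear in the invalidation tracking table. A minimal sketch using HMAC-SHA256 (the abstract does not name a MAC algorithm, so that choice and all names here are assumptions):

```python
# Sketch of MAC-checked address validation with an invalidation tracking
# table (ITT). HMAC-SHA256 is an assumed stand-in for the MAC.

import hashlib
import hmac

def make_mac(hpa: int, device_key: bytes) -> bytes:
    # MAC over the host physical address, keyed per remote device.
    return hmac.new(device_key, hpa.to_bytes(8, "little"), hashlib.sha256).digest()

def allow_access(hpa: int, received_mac: bytes, device_key: bytes,
                 invalidation_tracking: set) -> bool:
    expected = make_mac(hpa, device_key)
    # Allow only if the MACs match (constant-time compare) and the HPA
    # has not been invalidated since the translation was issued.
    return hmac.compare_digest(expected, received_mac) and hpa not in invalidation_tracking

key = b"per-device-private-key"  # illustrative key material
itt = {0x5000}
assert allow_access(0x4000, make_mac(0x4000, key), key, itt)           # valid
assert not allow_access(0x5000, make_mac(0x5000, key), key, itt)       # invalidated
assert not allow_access(0x4000, b"\x00" * 32, key, itt)                # forged MAC
```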

Подробнее
17-02-2021 дата публикации

SECURE INTERFACE DISABLEMENT

Номер: EP3776221A1
Принадлежит:

Подробнее
08-09-2021 дата публикации

ADDRESS CACHING IN SWITCHES

Номер: EP3329378B1
Автор: SEREBRIN, Benjamin C.
Принадлежит: Google LLC

Подробнее
05-02-2020 дата публикации

ONE STEP ADDRESS TRANSLATION OF GRAPHICS ADDRESSES IN VIRTUALIZATION

Номер: EP3605342A1
Принадлежит:

A system and method including, in some embodiments, receiving a request for a graphics memory address for an input/output (I/O) device assigned to a virtual machine in a system that supports virtualization, and installing, in a graphics memory translation table, a physical guest graphics memory address to host physical memory address translation.

Подробнее
09-10-2019 дата публикации

HYBRID MEMORY MANAGEMENT

Номер: EP3282364B1
Принадлежит: Google LLC

Подробнее
31-08-2022 дата публикации

Data Processors

Номер: GB0002604153A
Принадлежит:

In a data processing system in which varying numbers of channels for accessing a memory can be configured, the communications channel to use for an access to the memory is determined by mapping 451 a memory address associated with the memory access to an intermediate address within an intermediate address space, selecting 452, based on the number of channels configured for use to access the memory, a mapping operation to use to determine from the intermediate address which channel to use for the memory access, and using 452 the selected mapping operation to determine from the intermediate address which channel to use for the memory access.
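The two-step mapping above can be sketched concretely: first map the memory address into the intermediate address space, then apply a mapping operation selected by the configured channel count. The granule-modulo interleave below is an illustrative assumption, not the patent's specific operation:

```python
# Sketch of channel selection via an intermediate address space: the
# channel-count-dependent mapping here is a simple modulo interleave.

def to_intermediate(addr: int, granule: int = 256) -> int:
    # Intermediate address: the address expressed in interleave granules.
    return addr // granule

def select_channel(addr: int, num_channels: int) -> int:
    # Mapping operation chosen by the number of configured channels.
    return to_intermediate(addr) % num_channels

# With 4 channels, consecutive 256-byte granules rotate across channels.
assert [select_channel(g * 256, 4) for g in range(5)] == [0, 1, 2, 3, 0]
# Reconfiguring to 2 channels selects a different mapping result.
assert select_channel(3 * 256, 2) == 1
```

Keeping the intermediate step separate lets the address-to-granule mapping stay fixed while only the final operation changes when channels are added or removed.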

Подробнее
15-05-2011 дата публикации

INTELLIGENT CACHE WITH INTERRUPTIBLE BLOCK PREFETCHING

Номер: AT0000509315T
Принадлежит:

Подробнее
15-10-2011 дата публикации

CACHE WITH BLOCK PREFETCHING AND DMA (DIRECT MEMORY ACCESS)

Номер: AT0000527599T
Принадлежит:

Подробнее
03-01-2020 дата публикации

Data writing method and device, and active-active system thereof

Номер: CN0106844234B
Автор:
Принадлежит:

Подробнее
04-01-2018 дата публикации

DUAL-PORT NON-VOLATILE DUAL IN-LINE MEMORY MODULES

Номер: US20180004422A1
Принадлежит:

According to an example, a dual-port non-volatile dual in-line memory module (NVDIMM) includes a first port to provide a central processing unit (CPU) with access to universal memory of the dual-port NVDIMM and a second port to provide an external NVDIMM manager circuit with access to the universal memory of the dual-port NVDIMM. Accordingly, a media controller of the dual-port NVDIMM may store data received from the CPU through the first port in the universal memory, control dual-port settings received from the CPU, and transmit the stored data to the NVDIMM manager circuit through the second port of the dual-port NVDIMM.

1. A dual-port non-volatile dual in-line memory module (NVDIMM), comprising:
a first port to provide a central processing unit (CPU) with access to universal memory of the dual-port NVDIMM;
a second port to provide an external NVDIMM manager circuit with access to the universal memory of the dual-port NVDIMM, wherein the NVDIMM manager circuit interfaces with remote storage; and
a media controller to:
store data received from the CPU through the first port of the dual-port NVDIMM in the universal memory,
control dual-port settings for the dual-port NVDIMM received from the CPU through the first port of the dual-port NVDIMM, wherein the dual-port settings include at least one of an active-active redundancy flow and an active-passive redundancy flow, and
transmit the stored data to the NVDIMM manager circuit through the second port of the dual-port NVDIMM.

2. The dual-port NVDIMM of claim 1, wherein responsive to controlling the dual-port settings to be the active-active redundancy flow, the media controller is to set both the first port and the second port of the dual-port NVDIMM to an active state so that the CPU and NVDIMM manager circuit can simultaneously access the dual-port NVDIMM.

3. The dual-port NVDIMM of claim 2, wherein the media controller comprises an integrated direct memory access (DMA) engine to migrate the stored ...

Подробнее
18-10-2005 дата публикации

TLB lock and unlock operation

Номер: US0006957315B2

A digital system and method of operation is provided in which several processing resources ( 340 ) and processors ( 350 ) are connected to a shared translation lookaside buffer (TLB) ( 300, 310 (n)) of a memory management unit (MMU) and thereby access memory and devices. These resources can be instruction processors, coprocessors, DMA devices, etc. Each entry location in the TLB is filled during the normal course of action by a set of translated address entries ( 308, 309 ) along with qualifier fields ( 301, 302, 303 ) that are incorporated with each entry. Operations can be performed on the TLB that are qualified by the various qualifier fields. A command ( 360 ) is sent by an MMU manager to the control circuitry of the TLB ( 320 ) during the course of operation. Commands are sent as needed to flush (invalidate), lock or unlock selected entries within the TLB. Each entry in the TLB is accessed ( 362, 368 ) and the qualifier field specified by the operation command is evaluated ( 364 ).

Подробнее
06-10-2020 дата публикации

Memory controller for selective rank or subrank access

Номер: US0010795834B2
Принадлежит: Rambus Inc., RAMBUS INC

A memory module having reduced access granularity. The memory module includes a substrate having signal lines thereon that form a control path and first and second data paths, and further includes first and second memory devices coupled in common to the control path and coupled respectively to the first and second data paths. The first and second memory devices include control circuitry to receive respective first and second memory access commands via the control path and to effect concurrent data transfer on the first and second data paths in response to the first and second memory access commands.

Подробнее
01-12-2020 дата публикации

Memory access protection apparatus and methods for memory mapped access between independently operable processors

Номер: US0010853272B2
Принадлежит: Apple Inc., APPLE INC

Methods and apparatus for registering and handling access violations of host memory. In one embodiment, a peripheral processor receives one or more window registers defining an extent of address space accessible from a host processor; responsive to an attempt to access an extent of address space outside of the extent of accessible address space, generates an error message; stores the error message within a violation register; and resumes operation of the peripheral processor upon clearance of the stored error message.

Подробнее
09-03-2021 дата публикации

Virtual file system for cloud-based shared content

Номер: US0010942899B2
Принадлежит: Box, Inc., BOX INC

A server in a cloud-based environment interfaces with storage devices that store shared content accessible by two or more users. Individual items within the shared content are associated with respective object metadata that is also stored in the cloud-based environment. Download requests initiate downloads of instances of a virtual file system module to two or more user devices associated with two or more users. The downloaded virtual file system modules capture local metadata that pertains to local object operations directed by the users over the shared content. Changed object metadata attributes are delivered to the server and to other user devices that are accessing the shared content. Peer-to-peer connections can be established between the two or more user devices. Objects can be divided into smaller portions such that processing the individual smaller portions of a larger object reduces the likelihood of a conflict between user operations over the shared content.

Подробнее
09-06-2022 дата публикации

MEMORY MANAGEMENT DEVICE

Номер: US20220179792A1
Принадлежит:

Memory modules and associated devices and methods are provided using a memory copy function between a cache memory and a main memory that may be implemented in hardware. Address translation may additionally be provided.

Подробнее
09-12-2020 дата публикации

NETWORK INTERFACE FOR DATA TRANSPORT IN HETEROGENEOUS COMPUTING ENVIRONMENTS

Номер: EP3748510A1
Принадлежит:

A network interface controller can be programmed to direct write received data to a memory buffer via either a host-to-device fabric or an accelerator fabric. For packets received that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether to use a secondary head. If a translated address is available and a secondary head is to be used, a direct memory access (DMA) engine is used to copy a portion of the received packet via the accelerator fabric to a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and to a destination memory can be avoided and utilization of the host-to-device fabric can be reduced for accelerator bound traffic.

Подробнее
30-05-2018 дата публикации

HYBRID MEMORY MANAGEMENT

Номер: EP3291097A3
Принадлежит:

Methods, systems, and apparatus for determining whether an access bit is set for each page table entry of a page table based on a scan of the page table with at least one page table walker, the access bit indicating whether a page associated with the page table entry was accessed in a last scan period; incrementing a count for each page in response to determining that the access bit is set for the page table entry associated with the page; resetting the access bit after determining whether the access bit is set for each page table entry; receiving a request to access, from a main memory, a first page of data; initiating a page fault based on determining that the first page of data is not stored in the main memory; and servicing the page fault with a DMA engine.
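The access-bit scan above repeats each period: count every page whose bit is set, then clear the bit so the next period measures fresh accesses. A minimal sketch of that loop (the page-table representation is an illustrative assumption):

```python
# Sketch of the per-period access-bit scan: count pages whose accessed bit
# is set, then reset the bit for the next scan period.

def scan_page_table(page_table: list, counts: dict) -> None:
    for entry in page_table:
        if entry["accessed"]:
            counts[entry["page"]] = counts.get(entry["page"], 0) + 1
        entry["accessed"] = False  # reset after checking, per the abstract

table = [
    {"page": "A", "accessed": True},
    {"page": "B", "accessed": False},
]
counts = {}
scan_page_table(table, counts)   # period 1: only A was touched
table[1]["accessed"] = True      # page B is touched during period 2
scan_page_table(table, counts)
assert counts == {"A": 1, "B": 1}
assert not any(e["accessed"] for e in table)
```

The resulting per-page counts give the hotness signal a hybrid memory manager can use when deciding which pages to keep in main memory.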

Подробнее
18-11-2020 дата публикации

ADDRESS VALIDATION USING SIGNATURES

Номер: EP3739458A1
Автор: SEREBRIN, Benjamin C.
Принадлежит:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating signed addresses. One of the methods includes receiving, by a component from a device, a plurality of first requests, each first request for a physical address and including a virtual address, determining, by the component, a first physical address using the virtual address, generating a first signature for the first physical address, and providing, to the device, a response that includes the first signature, receiving, from the device, a plurality of second requests, each second request for access to a second physical address and including a second signature, determining, by the component for each of the plurality of second requests, whether the second physical address is valid using the second signature, and for each second request for which the second physical address is determined to be valid, servicing the corresponding second request.

Подробнее
01-04-2020 дата публикации

Address validation using signatures

Номер: GB0002574280B
Принадлежит: GOOGLE LLC, Google LLC.

Подробнее
27-08-2019 дата публикации

Номер: KR0102015077B1
Автор:
Принадлежит:

Подробнее
01-04-2020 дата публикации

Method for providing device information of cluster storage system

Номер: TW0202013195A
Принадлежит:

A method for providing device information of cluster storage system includes: by a host, through execution of a first command, obtaining generic device name of each expander and each hard disk; by the host, based on generic device name of each expander through execution of a second command, obtaining name of storage device where the expander is located; by the host, based on generic device name of each hard disk through execution of a third command, obtaining address of each hard disk; by the host, based on generic device name of each expander through execution of a fourth command, obtaining address of the expander and address of each hard disk which connects to the expander; by the host, based on information obtained by executing the first command, the second command, the third command and the fourth command generating hard disk information including name of storage device where the expander to which each hard disk is connected is located.

Подробнее
16-08-2018 дата публикации

Devices and methods for autonomous hardware management of circular buffers

Номер: TW0201830256A
Принадлежит:

An autonomous circular buffer is described in connection with the various embodiments of the present disclosure. An autonomous circular buffer controller may control movement of data between a user of the autonomous circular buffer and a peripheral. The autonomous circular buffer may enable direct memory access type data movement, including between the user and the peripheral.

Подробнее
02-02-2017 дата публикации

ADDRESS CACHING IN SWITCHES

Номер: WO2017019216A1
Автор: SEREBRIN, Benjamin C.
Принадлежит:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for storing an address in a memory of a switch. One of the systems includes a switch that receives packets from and delivers packets to devices connected to a bus without any components on the bus between the switch and each of the devices, a memory integrated into the switch to store a mapping of virtual addresses to physical addresses, and a storage medium integrated into the switch storing instructions executable by the switch to cause the switch to perform operations including receiving a response to an address translation request for a device connected to the switch by the bus, the response including a mapping of a virtual address to a physical address, and storing, in the memory, the mapping of the virtual address to the physical address in response to receiving the response.

Подробнее
12-10-2021 дата публикации

Concurrent reading and writing with crash recovery in persistent memory

Номер: US0011144237B2

Systems and methods for concurrent reading and writing in shared, persistent byte-addressable non-volatile memory is described herein. One method includes in response to initiating a write sequence to one or more memory elements, checking an identifier memory element to determine whether a write sequence is in progress. In addition, the method includes updating an ingress counter. The method also includes adding process identification associated with a writer node to the identifier memory element. Next, a write operation is performed. After the write operation, an egress counter is incremented and the identifier memory element is reset to an expected value.

Подробнее
22-06-2004 дата публикации

Cache with DMA and dirty bits

Номер: US0006754781B2

A digital system and method of operation is provided in which the digital system has at least one processor with an associated multi-segment cache memory circuit (506(n)). Validity circuitry (VI) is connected to the memory circuit and is operable to indicate if each segment of the plurality of segments holds valid data. Dirty bit circuitry (DI) is connected to the memory circuit for indicating if data within the cache is incoherent with a secondary back-up memory. DMA circuitry can transfer (1652) blocks of data/instructions (1660) between the cache and a secondary memory (1602). A transfer mode circuit (1681) controls how DMA operations are affected by the dirty bits. If the transfer mode circuit is in a first mode, a DMA operation transfers only segments (1661) indicated as dirty (1685). If the transfer mode circuit is in a second mode, a DMA operation transfers an entire block of data (1660) without regard to dirty indicators (1686). DMA transfers from the cache to secondary memory ...

Подробнее
17-02-2022 дата публикации

METHOD AND SYSTEM FOR PERFORMING READ/WRITE OPERATION WITHIN A COMPUTING SYSTEM HOSTING NON-VOLATILE MEMORY

Номер: US20220050770A1
Принадлежит: Samsung Electronics Co., Ltd.

A method for performing a write operation includes selecting, by a host, at least a free write buffer from a plurality of write buffers of a shared memory buffer (SMB) by accessing a cache structure within the SMB for tracking the free write buffer; sending, by the host, at least a logical address accessed from the cache structure with respect to the selected write buffer to issue a write-command to a non-volatile memory; receiving a locking instruction of the selected write buffer from the non-volatile memory; updating a status of the selected write buffer within the cache structure based on the received locking instruction; and allowing the non-volatile memory to extract contents of one or more locked write buffers including the selected write buffer.
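The handshake above has the host select a free buffer from the SMB's cache structure, issue the write with that buffer's logical address, and then record the device's locking instruction as a status update. A minimal sketch (field names and statuses are illustrative):

```python
# Sketch of the shared-memory-buffer (SMB) write handshake: select a free
# buffer from the cache structure, then mark it locked on the device's
# locking instruction.

def select_free_buffer(cache: list):
    # Host scans the cache structure for a buffer still marked free.
    for entry in cache:
        if entry["status"] == "free":
            return entry
    return None

cache = [
    {"buffer": 0, "logical_addr": 0x100, "status": "free"},
    {"buffer": 1, "logical_addr": 0x200, "status": "locked"},
]
entry = select_free_buffer(cache)
assert entry["buffer"] == 0
# Host issues the write command using entry["logical_addr"]; upon the
# device's locking instruction, the host updates the tracked status.
entry["status"] = "locked"
assert select_free_buffer(cache) is None  # no free buffers remain
```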

Подробнее
08-08-2019 дата публикации

DATA PATH FOR GPU MACHINE LEARNING TRAINING WITH KEY VALUE SSD

Номер: US20190244140A1
Принадлежит:

A system and method for machine learning. The system includes a GPU with a GPU memory, and a key value storage device connected to the GPU memory. The method includes writing, by the GPU, a key value request to a key value request queue in an input-output region of the GPU memory, the key value request including a key. The method further includes reading, by the key value storage device, the key value request from the key value request queue, and writing, by the key value storage device, in response to the key value request, a value to the input-output region of the GPU memory, the value corresponding to the key of the key value request.
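The queue handshake above can be sketched in miniature: the GPU side enqueues a request carrying a key, and the key-value device drains the queue, writing each value back into the I/O region. All structures and names below are illustrative stand-ins for regions of GPU memory:

```python
# Sketch of the GPU <-> key-value device request-queue handshake.

from collections import deque

class IORegion:
    """Stand-in for the input-output region of GPU memory."""
    def __init__(self):
        self.request_queue = deque()  # key value request queue
        self.values = {}              # stand-in for value buffers

def gpu_submit(io: IORegion, key: str) -> None:
    # GPU writes a key value request (containing the key) to the queue.
    io.request_queue.append({"key": key})

def kv_device_service(io: IORegion, store: dict) -> None:
    # Device reads requests from the queue and writes values back into
    # the I/O region of GPU memory.
    while io.request_queue:
        req = io.request_queue.popleft()
        io.values[req["key"]] = store[req["key"]]

io, store = IORegion(), {"weights[0]": b"\x01\x02"}
gpu_submit(io, "weights[0]")
kv_device_service(io, store)
assert io.values["weights[0]"] == b"\x01\x02"
```

The point of the design is that training data flows from storage straight into GPU memory, without a round trip through host DRAM.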

Подробнее
22-06-2023 дата публикации

HOST DEVICE PERFORMING NEAR DATA PROCESSING FUNCTION AND ACCELERATOR SYSTEM INCLUDING THE SAME

Номер: US20230195651A1
Принадлежит:

A host device includes a unit processor configured to generate a near data processing (NDP) request, a host expansion control circuit configured to receive the NDP request; and a local memory device configured to store data corresponding to the NDP request according to control by the expansion control circuit. In response to receiving the NDP request, the host expansion control circuit performs a request processing operation to perform a read or a write operation corresponding to the NDP request on the local memory device and performs a computation operation using the requested data corresponding to the NDP request.

Подробнее
21-06-2022 дата публикации

Memory system using SRAM with flag information to identify unmapped addresses

Номер: US0011366736B2
Принадлежит: SK hynix Inc.

A memory system includes a nonvolatile memory device; a random access memory configured to store, in response to an unmap request received from a host device, a flag information indicating that an unmap address as a target of the unmap request is unmapped; and a control unit configured to flush the flag information to the nonvolatile memory device, wherein the control unit flushes the flag information to the nonvolatile memory device when a first condition is satisfied.

Подробнее
22-06-2022 дата публикации

TECHNOLOGIES FOR OFFLOAD DEVICE FETCHING OF ADDRESS TRANSLATIONS

Номер: EP4016314A1
Принадлежит:

Techniques for offload device address translation fetching are disclosed. In the illustrative embodiment, a processor of a compute device sends a translation fetch descriptor to an offload device before sending a corresponding work descriptor to the offload device. The offload device can request translations for virtual memory address and cache the corresponding physical addresses for later use. While the offload device is fetching virtual address translations, the compute device can perform other tasks before sending the corresponding work descriptor, including operations that modify the contents of the memory addresses whose translation are being cached. Even if the offload device does not cache the translations, the fetching can warm up the cache in a translation lookaside buffer. Such an approach can reduce the latency overhead that the offload device may otherwise incur in sending memory address translation requests that would be required to execute the work descriptor.

Подробнее
19-09-2018 дата публикации

Address validation using signatures

Номер: GB0002553228B
Принадлежит: GOOGLE LLC, Google LLC

Подробнее
14-06-1978 дата публикации

DATA PROCESSING SYSTEMS

Номер: GB0001514555A
Автор:
Принадлежит:

... 1514555 Data processing system SIEMENS AG 16 Sept 1975 [17 Sept 1974] 38092/75 Addition to 1447680 Heading G4A] The Parent Specification is modified to enable larger storage systems to be handled by dividing the data in the chain list store into pages in a conventional hierarchical manner using a higher order segment table providing the base addresses of a group of pages each storing chain list entries. Thus the chain list pages can be treated within the virtual memory system in exactly the same way as other pages of stored data, i.e. they can be transferred in and out of the working store as required. Input/output operations between peripherals and the working store DM of a virtual memory system proceed as in the Parent Specification by incrementing or decrementing (depending on signal DTB) the byte address part DR1 of a real address comprising byte DR1 and page DR2 parts for each byte of data transferred until a page boundary is crossed indicated by a carry PCR from register DR1. Since ...

Подробнее
07-12-2018 дата публикации

A DDR management and control system based on FPGA hardware acceleration

Номер: CN0108958800A
Принадлежит:

Подробнее
06-11-2018 дата публикации

System starting method and apparatus, electronic device and storage medium

Номер: CN0108763099A
Принадлежит:

Подробнее
11-08-2023 дата публикации

Method, device and system for improving memory area access efficiency of RDMA engine, chip and storage medium

Номер: CN116578504A
Принадлежит:

The invention discloses a method for improving the memory area access efficiency of an RDMA engine, comprising the following steps: receiving an RDMA request message from a remote host, and parsing it to obtain the memory area and virtual address that the request message requests to use; authenticating the request message using a memory protection translation table (MPT) in host memory; after authentication succeeds, searching, according to a page table pointer corresponding to the virtual address in the request message, whether a matching page table entry exists in a translation lookaside buffer (TLB) preset in the MPT, and if so, obtaining the physical address corresponding to the virtual address; and performing the read or write operation, according to the request message, on the memory area in host memory pointed to by the physical address. The invention further discloses a corresponding device and system, a chip, and a storage medium. According to the invention ...
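
The request flow above (authenticate, then translate through a cached page-table entry, then access memory) can be sketched like this. All names, key formats, and structures are illustrative assumptions.

```python
# Sketch: authenticate an RDMA read against an MPT entry, resolve the
# virtual address through a per-MPT TLB, then read host memory.

PAGE = 4096

class RdmaEngine:
    def __init__(self, mpt, page_tables, memory):
        self.mpt = mpt                 # mr_key -> {'rkey', 'table'}
        self.page_tables = page_tables
        self.memory = memory
        self.tlb = {}                  # (mr_key, vpn) -> physical page

    def handle_read(self, mr_key, rkey, vaddr, length):
        entry = self.mpt.get(mr_key)
        if entry is None or entry['rkey'] != rkey:
            raise PermissionError('MPT authentication failed')
        vpn, off = divmod(vaddr, PAGE)
        key = (mr_key, vpn)
        if key not in self.tlb:        # TLB miss: walk the page table
            self.tlb[key] = self.page_tables[entry['table']][vpn]
        pa = self.tlb[key] * PAGE + off
        return self.memory[pa:pa + length]

mem = bytearray(8 * PAGE)
mem[5 * PAGE + 16: 5 * PAGE + 20] = b'data'
eng = RdmaEngine({1: {'rkey': 0xabc, 'table': 't0'}}, {'t0': {0: 5}}, mem)
```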

Подробнее
21-10-2016 дата публикации

Translation lookaside buffer with prefetching

Номер: KR1020160122278A
Принадлежит:

... The system TLB accepts translation prefetch requests from initiators. Misses generate external translation requests on a walker port. The state of the TLB, as well as attributes of the request such as ID, address, and class, affects the allocation policy for translations within multiple levels of translation tables. The translation tables are implemented in SRAM and organized into groups.

Подробнее
07-02-2019 дата публикации

OPERATION MAPPING IN A VIRTUAL FILE SYSTEM FOR CLOUD-BASED SHARED CONTENT

Номер: US20190042593A1
Принадлежит: Box, Inc.

A server in a cloud-based environment is interfaced with storage devices that store shared content accessible by two or more user devices that interact with the cloud-based service platform over a network. A virtual file system module is delivered to a user device, which user device hosts one or more applications. The virtual file system module detects a plurality of application calls issued by processes or threads operating on the user device. The plurality of application calls are mapped into one coalesced cloud call. The coalesced cloud call is delivered to the cloud-based service platform to facilitate access to the shared content by the application. The mapping of application calls to the coalesced cloud call is based on pattern rules that are applied over a stream of incoming application calls. A delay may be observed after mapping to a first pattern, and before making a mapping to a second pattern.
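
The mapping of many application calls into one coalesced cloud call can be sketched with a toy pattern rule. The rule here (fold a run of same-kind calls on one handle) and all names are invented for illustration, not the patented rule set.

```python
# Sketch: fold runs of identical (op, handle) application calls from an
# incoming stream into single coalesced cloud calls with a repeat count.

def coalesce(calls):
    """calls: list of (op, handle) tuples -> list of coalesced cloud calls."""
    out = []
    i = 0
    while i < len(calls):
        op, handle = calls[i]
        j = i
        while j < len(calls) and calls[j] == (op, handle):
            j += 1                       # extend the matched pattern
        out.append({'op': op, 'handle': handle, 'count': j - i})
        i = j
    return out

stream = [('read', 'f1'), ('read', 'f1'), ('read', 'f1'), ('write', 'f2')]
cloud_calls = coalesce(stream)
```

A real implementation would also observe a short delay before committing a match, as the abstract notes, in case a longer pattern is still arriving.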

Подробнее
01-01-2019 дата публикации

Efficient testing of direct memory address translation

Номер: US0010169186B1

A circuit and method provide efficient stress testing of address translations in an integrated circuit such as a link processing unit. A random DMA mode (RDM) circuit provides a random input to index into a translation validation table (TVT) that is used to generate the real memory address. The RDM circuit allows testing all entries of the TVT, and thus all DMA modes, regardless of what bus agents are connected to the link processing unit. The RDM circuit may use a multiplexer to select between a runtime input and a random test input provided by the random bit generator. When the link processing unit is in a test mode a mode selection bit is asserted to select the random test input.
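
The mode multiplexer described here can be modeled in a few lines. This is a simplified software sketch; the TVT size, seeding, and function names are assumptions for illustration.

```python
# Sketch: in test mode, the TVT index comes from a pseudo-random source so
# every entry (and hence every DMA mode) can be exercised regardless of
# which bus agents are attached; otherwise the runtime input passes through.

import random

def select_tvt_index(runtime_index, test_mode, rng, tvt_size):
    """Multiplexer between the runtime input and the random test input."""
    if test_mode:                  # mode selection bit asserted
        return rng.randrange(tvt_size)
    return runtime_index

rng = random.Random(0)             # deterministic seed for the example
TVT_SIZE = 8
covered = {select_tvt_index(0, True, rng, TVT_SIZE) for _ in range(500)}
```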

Подробнее
12-01-2012 дата публикации

Method and apparatus for wireless broadband systems direct data transfer

Номер: US20120011295A1
Принадлежит: DESIGNART NETWORKS LTD

Apparatus and method for direct data transfer in a wireless broadband system having an operating system, the apparatus including a central processing unit (CPU), at least one dedicated Direct Memory Access unit (DMA) local to the CPU, coupled directly to the CPU, and a commands FIFO (First In First Out) receiving commands from the CPU and automatically transferring the commands in sequence to the DMA for implementation by the DMA, in the absence of intervention by the operating system.
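
The CPU-to-DMA command path can be sketched as a queue that the DMA drains in order with no operating-system involvement. The classes and the command tuple format are invented for illustration.

```python
# Sketch: the CPU pushes DMA copy commands into a FIFO and returns at once;
# the DMA drains them in arrival order, untouched by the OS.

from collections import deque

class CommandsFifo:
    def __init__(self):
        self.q = deque()
    def push(self, cmd):           # CPU side: enqueue and continue
        self.q.append(cmd)

class Dma:
    def __init__(self, memory):
        self.memory = memory
    def run(self, fifo):           # drain commands in FIFO order
        while fifo.q:
            src, dst, n = fifo.q.popleft()
            self.memory[dst:dst + n] = self.memory[src:src + n]

mem = bytearray(b'hello...........')
fifo = CommandsFifo()
fifo.push((0, 8, 5))               # copy 'hello' to offset 8
Dma(mem).run(fifo)
```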

Подробнее
19-01-2012 дата публикации

Sharing memory spaces for access by hardware and software in a virtual machine environment

Номер: US20120017029A1
Принадлежит: Hewlett Packard Development Co LP

Example methods, apparatus, and articles of manufacture to share memory spaces for access by hardware and software in a virtual machine environment are disclosed. A disclosed example method involves enabling a sharing of a memory page of a source domain executing on a first virtual machine with a destination domain executing on a second virtual machine. The example method also involves mapping the memory page to an address space of the destination domain and adding an address translation entry for the memory page in a table. In addition, the example method involves sharing the memory page with a hardware device for direct memory access of the memory page by the hardware device.

Подробнее
09-02-2012 дата публикации

Systems and methods for using a shared buffer construct in performance of concurrent data-driven tasks

Номер: US20120036288A1
Принадлежит: Calos Fund LLC

Disclosed herein are techniques to execute tasks with a computing device. A first task is initiated to perform an operation of the first task. A buffer construct that represents a region of memory accessible to the operation of the first task is created. A second task is initiated to perform of an operation of the second task that is configured to be timed to initiate in response to the buffer construct being communicated to the second task from the first task.

Подробнее
09-02-2012 дата публикации

Data Flow Control Within and Between DMA Channels

Номер: US20120036289A1
Принадлежит: Individual

In one embodiment, a direct memory access (DMA) controller comprises a transmit circuit and a data flow control circuit coupled to the transmit circuit. The transmit circuit is configured to perform DMA transfers, each DMA transfer described by a DMA descriptor stored in a data structure in memory. There is a data structure for each DMA channel that is in use. The data flow control circuit is configured to control the transmit circuit's processing of DMA descriptors for each DMA channel responsive to data flow control data in the DMA descriptors in the corresponding data structure.

Подробнее
22-03-2012 дата публикации

Memory system having high data transfer efficiency and host controller

Номер: US20120072618A1
Автор: Akihisa Fujimoto
Принадлежит: Individual

According to one embodiment, the host controller includes a register set to issue command, and a direct memory access (DMA) unit and accesses a system memory and a device. First, second, third and fourth descriptors are stored in the system memory. The first descriptor includes a set of a plurality of pointers indicating a plurality of second descriptors. Each of the second descriptors comprises the third descriptor and fourth descriptor. The third descriptor includes a command number, etc. The fourth descriptor includes information indicating addresses and sizes of a plurality of data arranged in the system memory. The DMA unit sets, in the register set, the contents of the third descriptor forming the second descriptor, from the head of the first descriptor as a start point, and transfers data between the system memory and the host controller in accordance with the contents of the fourth descriptor.
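
The descriptor hierarchy (first descriptor pointing at second descriptors, each pairing a command-bearing third descriptor with a scatter-list fourth descriptor) can be sketched as a walk over nested structures. The dict layout and names are assumptions for illustration.

```python
# Sketch: walk the first descriptor from its head; for each second
# descriptor, program the register set from the third descriptor and issue
# transfers per the fourth descriptor's (address, size) list.

def run_host_controller(system_memory, first_desc, device):
    issued = []
    for ptr in first_desc:                    # pointers to second descriptors
        second = system_memory[ptr]
        third, fourth = second['cmd'], second['data']
        device['register_set'] = third        # program the command register
        for addr, size in fourth:             # fourth: addresses and sizes
            issued.append((third['cmd_no'], addr, size))
    return issued

sysmem = {
    0x10: {'cmd': {'cmd_no': 17}, 'data': [(0x1000, 512), (0x2000, 512)]},
    0x20: {'cmd': {'cmd_no': 24}, 'data': [(0x3000, 4096)]},
}
dev = {'register_set': None}
log = run_host_controller(sysmem, [0x10, 0x20], dev)
```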

Подробнее
10-05-2012 дата публикации

Fencing Direct Memory Access Data Transfers In A Parallel Active Messaging Interface Of A Parallel Computer

Номер: US20120117281A1
Принадлежит: International Business Machines Corp

Fencing direct memory access (‘DMA’) data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

Подробнее
17-05-2012 дата публикации

Location of Memory Management Translations in an Emulated Processor

Номер: US20120124271A1
Автор: Matthew L. Evans
Принадлежит: International Business Machines Corp

A method and system for location of memory management translations in an emulated processor. The method includes: detecting a page miss of a process on an emulated processor, wherein the emulated processor software refills a translation lookaside buffer (TLB); locating a secondary data structure in memory; fetching a missing translation from a secondary data structure in memory; and inserting the missing translation in a guest translation lookaside buffer; wherein the steps are carried out in a trap handler in the emulated environment. The steps may be carried out in the emulated processor or in a host server of the emulated processor instead of invoking a guest operating system trap handler.
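
The trap-handler refill path can be sketched directly from the listed steps. Data structures and names are illustrative assumptions.

```python
# Sketch: on a page miss, the emulator's trap handler fetches the missing
# translation from a secondary in-memory structure and inserts it into the
# guest TLB, without invoking the guest OS trap handler.

def handle_page_miss(vpn, guest_tlb, secondary_table):
    if vpn in guest_tlb:                   # no miss: nothing to do
        return guest_tlb[vpn]
    if vpn not in secondary_table:
        raise KeyError('true page fault: no translation anywhere')
    translation = secondary_table[vpn]     # fetch from secondary structure
    guest_tlb[vpn] = translation           # insert into the guest TLB
    return translation

tlb = {}
secondary = {4: 99}
frame = handle_page_miss(4, tlb, secondary)
```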

Подробнее
17-05-2012 дата публикации

Serial i/o using jtag tck and tms signals

Номер: US20120124438A1
Автор: Lee D. Whetsel
Принадлежит: Texas Instruments Inc

The present disclosure describes a novel method and apparatus of using the JTAG TAP's TMS and TCK terminals as a general purpose serial Input/Output (I/O) bus. According to the present disclosure, the TAP's TMS terminal is used as a clock signal and the TCK terminal is used as a bidirectional data signal to allow serial communication to occur between; (1) an IC and an external controller, (2) between a first and second IC, or (3) between a first and second core circuit within an IC.

Подробнее
21-06-2012 дата публикации

System and method for peripheral device communications

Номер: US20120159018A1
Принадлежит: Alon Tsafrir, Fullerton Mark N, Ofer Bar-Shalom

A method for operating a host device includes comparing a predetermined response of a peripheral device to a response token received from the peripheral device. The predetermined response and the response token are generated based on a first command transmitted from the host device to the peripheral device. The method further includes controlling a transfer of first data from a first memory to a peripheral control module based on the comparison between the predetermined response and the response token without interrupting a host control module, and selectively passing interrupts to the host control module when the predetermined response does not match the response token.

Подробнее
21-06-2012 дата публикации

Memory Module With Reduced Access Granularity

Номер: US20120159061A1
Принадлежит: RAMBUS INC

A memory module having reduced access granularity. The memory module includes a substrate having signal lines thereon that form a control path and first and second data paths, and further includes first and second memory devices coupled in common to the control path and coupled respectively to the first and second data paths. The first and second memory devices include control circuitry to receive respective first and second memory access commands via the control path and to effect concurrent data transfer on the first and second data paths in response to the first and second memory access commands.

Подробнее
21-06-2012 дата публикации

Ieee 1149.1 and p1500 test interfaces combined circuits and processes

Номер: US20120159275A1
Автор: Lee D. Whetsel
Принадлежит: Texas Instruments Inc

In a first embodiment a TAP of IEEE standard 1149.1 is allowed to commandeer control from a WSP of IEEE standard P1500 such that the P1500 architecture, normally controlled by the WSP, is rendered controllable by the TAP. In a second embodiment (1) the TAP and WSP based architectures are merged together such that the sharing of the previously described architectural elements are possible, and (2) the TAP and WSP test interfaces are merged into a single optimized test interface that is operable to perform all operations of each separate test interface. One approach provides for the TAP to maintain access and control of the TAP instruction register, but provides for a selected data register to be accessed and controlled by either the TAP+ATC or by the discrete CaptureDR, UpdateDR, TransferDR, ShiftDR, and ClockDR WSP data register control signals.

Подробнее
13-09-2012 дата публикации

Data transfer control device, integrated circuit of same, data transfer control method of same, data transfer completion notification device, integrated circuit of same, data transfer completion notification method of same, and data transfer control system

Номер: US20120233372A1
Принадлежит: Panasonic Corp

A data transfer control device 1061 includes a read pointer update unit 5004 updating a value of a global read pointer RPg with a value of a local read pointer (first local read pointer) RP11 held by a local read pointer hold unit 5007 when completion of data transfer is recognized and a position, in an order of reading descriptors, of a descriptor D3010a indicated by the local read pointer RP11 is earlier than positions of descriptors D3010b and D3010c respectively indicated by local read pointers (second local read pointers) RP12 and RP13 held by the other data transfer control devices 1062 and 1063.
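
The update rule (advance the global read pointer only to the earliest descriptor still referenced by any device's local pointer) can be sketched with a simplified linear descriptor order; the function and its arguments are invented for illustration.

```python
# Sketch: the global read pointer may advance to a completing device's
# local read pointer only when that descriptor is earliest, in reading
# order, among all devices' local read pointers.

def update_global_rp(global_rp, local_rps, completing_dev):
    """local_rps: device -> descriptor index in reading order."""
    mine = local_rps[completing_dev]
    others = [rp for dev, rp in local_rps.items() if dev != completing_dev]
    if all(mine < rp for rp in others):    # earliest in the reading order
        return mine                        # safe to publish
    return global_rp                       # another device is behind: hold

rps = {'dev1': 2, 'dev2': 5, 'dev3': 7}
g1 = update_global_rp(0, rps, 'dev1')      # dev1 completed descriptor 2
g2 = update_global_rp(g1, rps, 'dev2')     # dev2 is not earliest: no change
```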

Подробнее
13-09-2012 дата публикации

Meta Garbage Collection for Functional Code

Номер: US20120233592A1
Автор: Alexander G. Gounares
Принадлежит: Concurix Corp

An execution environment for functional code may treat application segments as individual programs for memory management. A larger program or application may be segmented into functional blocks that receive an input and return a value, but operate without changing the state of other memory objects. The program segments may have memory pages allocated to them by the operating system as other full programs do, and may deallocate memory pages when the segments finish operating. Functional programming languages and imperative programming languages may define program segments explicitly or implicitly, and the program segments may be identified at compile time or runtime.

Подробнее
25-10-2012 дата публикации

Data transfer system and data transfer method

Номер: US20120271973A1
Автор: Masaharu Adachi
Принадлежит: Ricoh Co Ltd

A data transfer system includes: a processor; a main memory that is connected to the processor; a peripheral controller that is connected to the processor; and a peripheral device that is connected to the peripheral controller and includes a register set, wherein the peripheral device transfers data stored in the register set to a predetermined memory region of the main memory or the processor by a DMA (Direct Memory Access) transfer, and the processor reads out the data transferred to the memory region by the DMA transfer without accessing the peripheral device.

Подробнее
13-12-2012 дата публикации

Storage architecture for backup application

Номер: US20120317379A1
Принадлежит: Microsoft Corp

Aspects of the subject matter described herein relate to a storage architecture. In aspects, an address provided by a data source is translated into a logical storage address of virtual storage. This logical storage address is translated into an identifier that may be used to store data on or retrieve data from a storage system. The address space of the virtual storage is divided into chunks that may be streamed to the storage system.

Подробнее
10-01-2013 дата публикации

Data transfer control device and data transfer control method

Номер: US20130013821A1
Автор: Masaki Okada
Принадлежит: Fujitsu Semiconductor Ltd

A data transfer control device that selects one of a plurality of DMA channels and transfers data to or from memory includes a request holding section configured to hold a certain number of data transfer requests of the plurality of DMA channels and a request rearranging section configured to select and rearrange the data transfer requests that are held in a basic transfer order so that the data transfer requests of each of the plurality of DMA channels are successively outputted for a number of successive transfers set in advance.
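
The rearranging rule (regroup held requests so each channel issues its preset number of successive transfers) can be sketched over a simple list representation; the representation and burst-configuration dict are assumptions.

```python
# Sketch: held requests in basic (interleaved) order are regrouped so each
# channel outputs its configured number of successive transfers before the
# next channel runs.

def rearrange(held_requests, burst_per_channel):
    """held_requests: list of channel ids in basic transfer order."""
    out, remaining = [], list(held_requests)
    while remaining:
        ch = remaining[0]
        burst = burst_per_channel.get(ch, 1)
        taken, i = 0, 0
        while i < len(remaining) and taken < burst:
            if remaining[i] == ch:         # pull this channel's requests
                out.append(remaining.pop(i))
                taken += 1
            else:
                i += 1
    return out

basic_order = ['A', 'B', 'A', 'B', 'A', 'B']
reordered = rearrange(basic_order, {'A': 2, 'B': 2})
```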

Подробнее
10-01-2013 дата публикации

Hot-swapping active memory for virtual machines with directed i/o

Номер: US20130013877A1
Автор: Kun Tian
Принадлежит: Intel Corp

Embodiments of the invention describe a DMA Remapping unit (DRU) to receive, from a virtual machine monitor (VMM), a hot-page swap (HPS) request, the HPS request to include a virtual address, in use by at least one virtual machine (VM), mapped to a first memory page location, and a second memory page location. The DRU further blocks DMA requests to addresses of memory being remapped until the HPS request is fulfilled, copies the content of the first memory page location to the second memory page location, and remaps the virtual address from the first memory page location to the second memory page location.
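
The hot-page-swap sequence (block DMA, copy, remap, unblock) can be sketched as a small state machine; the class, tiny page size, and method names are illustrative assumptions.

```python
# Sketch: DMA to the page being moved is blocked, the page contents are
# copied to the new location, the mapping is switched, and DMA resumes.

PAGE = 4   # tiny page for illustration

class DmaRemappingUnit:
    def __init__(self, memory, mapping):
        self.memory = memory
        self.mapping = mapping          # virtual page -> physical page
        self.blocked = set()

    def dma_read(self, vpage):
        if vpage in self.blocked:
            raise RuntimeError('DMA blocked during hot-page swap')
        base = self.mapping[vpage] * PAGE
        return bytes(self.memory[base:base + PAGE])

    def hot_page_swap(self, vpage, new_ppage):
        self.blocked.add(vpage)         # 1. block DMA to this page
        old = self.mapping[vpage] * PAGE
        new = new_ppage * PAGE
        self.memory[new:new + PAGE] = self.memory[old:old + PAGE]  # 2. copy
        self.mapping[vpage] = new_ppage  # 3. remap
        self.blocked.discard(vpage)      # 4. resume DMA

mem = bytearray(b'ABCD' + b'\x00' * 12)
dru = DmaRemappingUnit(mem, {0: 0})
dru.hot_page_swap(0, 2)
```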

Подробнее
31-01-2013 дата публикации

Using a dma engine to automatically validate dma data paths

Номер: US20130031281A1
Принадлежит: Oracle International Corp

The disclosed embodiments provide a system that uses a DMA engine to automatically validate DMA data paths for a computing device. During operation, the system configures the DMA engine to perform a programmable DMA operation that generates a sequence of memory accesses which validate the memory subsystem and DMA paths of the computing device. For instance, the operation may include a sequence of reads and/or writes that generate sufficient data traffic to exercise the computing device's I/O controller interface and DMA data paths to memory to a specified level. The system initiates this programmable DMA operation, and then checks outputs for the operation to confirm that the operation executed successfully.

Подробнее
14-03-2013 дата публикации

Methods and structure for improved processing of i/o requests in fast path circuits of a storage controller in a clustered storage system

Номер: US20130067125A1
Принадлежит: LSI Corp

Methods and structure for improved processing of fast path I/O requests in a clustered storage system. In a storage controller of a clustered storage system, the controller comprises a fast path I/O request processing circuit tightly coupled with host system drivers for fast processing of requests directed to storage devices of a logical volume. The controller also comprises a logical volume I/O processing stack (typically implemented as programmed instructions) for processing I/O requests from a host system directed to a logical volume. Based on detecting a change of ownership of a device or volume and/or a change to logical to physical mapping of a logical volume, fast path I/O requests may be converted to logical volume requests based on mapping context information within the fast path I/O request and shipped within the clustered storage system for processing.

Подробнее
14-03-2013 дата публикации

Methods and structure for managing visibility of devices in a clustered storage system

Номер: US20130067569A1
Принадлежит: LSI Corp

Methods and systems for implementing a clustered storage solution are provided. One embodiment is a storage controller that communicatively couples a host system with a storage device. The storage controller comprises an interface and a control unit. The interface is operable to communicate with the storage device. The control unit is operable to identify ownership information for a storage device, and to determine if the storage controller is authorized to access the storage device based on the ownership information. The storage controller is operable to indicate the existence of the storage device to the host system if the storage controller is authorized, and operable to hide the existence of the storage device from the host system if the storage controller is not authorized.

Подробнее
28-03-2013 дата публикации

System and method for reducing cross coupling effects

Номер: US20130076424A1
Принадлежит: Qualcomm Inc

A device includes a plurality of driver circuits coupled to a plurality of bus lines. A first driver circuit of the plurality of driver circuits is coupled to a first bus line of the plurality of bus lines. The first driver circuit includes one of a skewed inverter, a level shifter, a latch, and a sense amplifier configured to produce an output signal that transitions after a first delay in response to a first digital value transition of an input signal from high to low and transitions after a second delay in response to a second digital value transition of the input signal from low to high. The first delay is different from the second delay by an amount sufficient to reduce power related to transmission of signals over the first bus line and over a second bus line in close physical proximity to the first bus line.

Подробнее
11-04-2013 дата публикации

MODULAR INTEGRATED CIRCUIT WITH COMMON INTERFACE

Номер: US20130091316A1
Принадлежит: BROADCOM CORPORATION

A modular integrated circuit includes a hub module that is coupled to a plurality of spoke modules via a plurality of hub interfaces. The plurality of hub interfaces provide a plurality of signal interfaces between the hub module and each of the plurality of spoke modules, wherein each of the plurality of signal interfaces is isolated from each of the other signal interfaces of the plurality of signal interfaces, and wherein each of the plurality of signal interfaces operates in accordance with a common signaling format.

1. A modular integrated circuit comprising: a plurality of spoke modules; and a hub module that is coupled to the plurality of spoke modules to facilitate inter-spoke communications via a corresponding plurality of hub interfaces, the hub module including: a power management unit, coupled to the plurality of hub interfaces, that selectively supplies a plurality of power supply signals to the plurality of spoke modules via the plurality of hub interfaces; and a clock control circuit, coupled to the plurality of hub interfaces, that selectively supplies a plurality of clock signals to the plurality of spoke modules via the plurality of hub interfaces; wherein the plurality of hub interfaces provides a plurality of signal interfaces between the hub module and each of the plurality of spoke modules, wherein each of the plurality of signal interfaces is isolated from each of the other signal interfaces of the plurality of signal interfaces, and wherein each of the plurality of signal interfaces operates in accordance with a common signaling format.

2. The modular integrated circuit of wherein the common signaling format for each of the plurality of hub interfaces includes: a clock request signal received from a corresponding one of the plurality of spoke modules; and at least one of the plurality of clock signals.

3. The modular integrated circuit of wherein the common signaling format for each of the plurality of hub interfaces includes: a power ...

Подробнее
18-04-2013 дата публикации

MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS

Номер: US20130097360A1
Принадлежит:

In one embodiment of the present invention, a method includes switching between a first address space and a second address space, determining if the second address space exists in a list of address spaces, and maintaining entries of the first address space in a translation buffer after the switching. In such manner, overhead associated with such a context switch may be reduced.

1. A processor, comprising: an execution logic to support a virtual machine monitor (VMM) to provide an abstraction of one or more virtual machines (VMs) to a plurality of guests running on the one or more VMs, each of the plurality of guests to include an operating system (OS) and software, the VMM to provide access to each of the one or more VMs to a set of physical resources including processor resources, memory, and input/output (I/O) devices; a translation lookaside buffer (TLB) having a plurality of page table entries (PTEs) to translate virtual addresses to physical addresses of memory pages, each PTE including: an address space identifier (ASID) to identify an address space associated with the PTE, a thread identifier (ID) to identify a thread associated with the PTE, and a valid bit; and a current ASID register to store a current ASID that is to be updated to switch to a different address space, wherein the execution logic, in response to a context switch, is to either not invalidate any PTEs of the TLB or to selectively invalidate one or more PTEs of the TLB based on ASIDs of the PTEs and the current ASID stored in the current ASID register.

This application is a continuation of U.S. patent application Ser. No. 13/020,161, filed Feb. 3, 2011, entitled "MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS", which is a continuation of U.S. patent application Ser. No. 12/483,519, filed Jun. 12, 2009, entitled "MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS", now U.S. Pat. No. 7,889,972, issued Mar. 1, 2011, which is a continuation of U.S. patent application Ser ...
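
The ASID-tagged TLB behavior claimed in this entry (keep entries of other address spaces across a context switch so a switch back avoids a refill) can be sketched as follows; the class and the notion of a "live ASID list" are simplifying assumptions.

```python
# Sketch: TLB entries are tagged with an ASID; on a context switch only the
# current ASID register changes, and entries of still-live address spaces
# survive instead of being flushed.

class Tlb:
    def __init__(self):
        self.entries = {}              # (asid, vpn) -> ppn
        self.current_asid = 0

    def insert(self, vpn, ppn):
        self.entries[(self.current_asid, vpn)] = ppn

    def lookup(self, vpn):
        return self.entries.get((self.current_asid, vpn))

    def context_switch(self, new_asid, live_asids):
        # Selectively invalidate: drop only entries of retired ASIDs.
        self.entries = {k: v for k, v in self.entries.items()
                        if k[0] in live_asids}
        self.current_asid = new_asid

tlb = Tlb()
tlb.insert(0x10, 0x99)                 # filled under ASID 0
tlb.context_switch(1, live_asids={0, 1})
miss = tlb.lookup(0x10)                # ASID 1 has no entry: miss
tlb.context_switch(0, live_asids={0, 1})
hit = tlb.lookup(0x10)                 # ASID 0 entries survived the switches
```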

Подробнее
25-04-2013 дата публикации

METHOD AND SYSTEM FOR PROVIDING HARDWARE SUPPORT FOR MEMORY PROTECTION AND VIRTUAL MEMORY ADDRESS TRANSLATION FOR A VIRTUAL MACHINE

Номер: US20130103882A1
Автор: Anvin H. Peter
Принадлежит: Intellectual Venture Funding LLC

A method for providing hardware support for memory protection and virtual memory address translation for a virtual machine. The method includes executing a host machine application within a host machine context and executing a virtual machine application within a virtual machine context. A plurality of TLB (translation look aside buffer) entries for the virtual machine context and the host machine context are stored within a TLB. Memory protection bits for the plurality of TLB entries are logically combined to enforce memory protection on the virtual machine application.

1.-14. (canceled)

15. A method comprising: storing a plurality of TLB (translation look aside buffer) entries for a virtual machine context and a host machine context; and using a logical operation to combine a plurality of memory protection bits for the plurality of TLB entries to enforce memory protection on the virtual machine context, wherein the memory protection bits include at least one bit stored in at least one of the TLB entries and at least one bit to be stored in at least one of the TLB entries.

16. The method of claim 15, wherein the at least one bit stored in at least one of the TLB entries is a read/write bit or a dirty bit.

17. The method of claim 15, wherein the at least one bit to be stored in at least one of the TLB entries is a dirty bit.

18. The method of claim 15, wherein the logical operation is a logical AND operation.

19. The method of claim 15, further comprising: executing a host machine application within the host machine context; and executing a virtual machine application within the virtual machine context.

20. The method of claim 19, wherein the using a logical operation comprises: enforcing memory protection on the virtual machine application.

21. The method of claim 19, wherein the host machine application comprises a host machine operating system that executes within the host machine context, and wherein the virtual machine application comprises a virtual machine ...
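
The logical-AND combination of protection bits named in the claims can be shown in a few lines; the bit layout and helper names are assumptions for illustration.

```python
# Sketch: the effective permission is the logical AND of the host-context
# and guest-context protection bits, so the stricter context always wins.

READ, WRITE = 0b01, 0b10

def effective_protection(host_bits, guest_bits):
    return host_bits & guest_bits      # logical AND of protection bits

def check_write(host_bits, guest_bits):
    return bool(effective_protection(host_bits, guest_bits) & WRITE)

host = READ | WRITE                    # host allows read + write
guest = READ                           # guest page is read-only
```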

Подробнее
02-05-2013 дата публикации

DATA TRANSFER CONTROL APPARATUS, DATA TRANSFER CONTROL METHOD, AND COMPUTER PRODUCT

Номер: US20130111078A1
Принадлежит: FUJITSU LIMITED

A data transfer control apparatus includes a transferring unit that transfers data from a transfer source memory to a transfer destination memory, according to an instruction from a first processor; and a first processor configured to detect a process executed by the first processor, determine whether transfer of the data is urgent, based on the type of the detected process, and control the transferring unit or the first processor to transfer the data, based on a determination result.

1. A data transfer control apparatus comprising: a transferring unit that transfers data from a transfer source memory to a transfer destination memory, according to an instruction from a first processor; and a first processor configured to: detect a process executed by the first processor, determine whether transfer of the data is urgent, based on the type of the detected process, and control the transferring unit or the first processor to transfer the data, based on a determination result.

2. The data transfer control apparatus according to claim 1, wherein the first processor further detects a state change of the first processor, and the first processor determines whether the transfer of the data is urgent, based on the type of the process and a state change of the first process.

3. The data transfer control apparatus according to claim 1, wherein the first processor further detects that a state of the process changes from an active state to an inactive state or from the inactive state to the active state, and the first processor determines whether the transfer of the data is urgent, based on the type of the process and a detection result concerning the state of the process.

4. The data transfer control apparatus according to claim 1, wherein the first processor, upon determining that the transfer is urgent, controls the transferring unit such that the data is transferred.

5. The data transfer control apparatus according to claim 1, wherein the first processor determines whether storage ...

Подробнее
02-05-2013 дата публикации

DATA PROCESSING DEVICE, CHAIN AND METHOD, AND CORRESPONDING COMPUTER PROGRAM

Номер: US20130111079A1
Принадлежит:

A data processing device includes a memory, a direct memory access controller including a receiving module configured to receive data coming from outside the device and for writing the data in a main buffer memory of the memory, and a processing unit programmed to read and process data written by the receiving module in a work area of the main buffer memory. The main buffer memory is divided between a used space, where the receiving module is configured not to write, and free space, where the receiving module is configured to write. The processing unit is further programmed to define the work area, and the direct memory access controller includes a buffer memory manager configured to free data written in the main buffer memory, by defining a location of this data as a free space, only when this data is outside the work area.

1-11. (canceled)

12. A data processing device comprising: a memory; a direct memory access controller comprising a receiving module configured to receive data coming from outside the device and for writing the data in a predetermined portion of the memory, as a main buffer memory; a processing unit programmed to read and process data written by the receiving module in the main buffer memory area as a work area, wherein: the main buffer memory is divided between used space and free space; the processing unit is further programmed to define the work area; and the direct memory access controller comprises a buffer memory manager configured to free data written in the main buffer memory, by defining a location of this data as a free space, only when this data is outside the work area.

13. A device according to claim 12, wherein the processing unit is programmed to wait until the receiving module writes data received in the entire work area before reading and processing the data of the work area.

14. A device according to claim 12, wherein: the receiving module is configured to write each data item received in a location of the main buffer memory indicated by ...
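The free-only-outside-the-work-area rule above can be sketched as follows (the class and method names are ours, not from the patent):

```python
# Sketch of the buffer-management rule: data may be freed (returned to
# free space) only when it lies outside the work area currently claimed
# by the processing unit.

class MainBuffer:
    def __init__(self, size):
        self.size = size
        self.used = set()          # offsets holding received, unprocessed data
        self.work_area = range(0)  # offsets the processing unit is reading

    def write(self, offset):
        # the receiving module may only write into free space
        if offset in self.used or offset in self.work_area:
            raise RuntimeError("write into used space")
        self.used.add(offset)

    def define_work_area(self, start, length):
        self.work_area = range(start, start + length)

    def free(self, offset):
        # the buffer memory manager frees data only outside the work area
        if offset in self.work_area:
            return False           # still being processed; keep it
        self.used.discard(offset)
        return True

buf = MainBuffer(16)
buf.write(3)
buf.write(7)
buf.define_work_area(0, 4)   # offsets 0..3 are being processed
assert buf.free(3) is False  # inside work area: not freed
assert buf.free(7) is True   # outside work area: freed
```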

02-05-2013 publication date

Software translation lookaside buffer for persistent pointer management

Number: US20130111151A1
Author: Aman Naimat, Eric Sedlar
Assignee: Oracle International Corp

Techniques are provided for performing OID-to-VMA translations during runtime. Vector registers are used to implement a “software TLB” to perform OID-to-VMA translations. Runtime dereferencing is performed using one or more vector registers to compare each OID that needs to be dereferenced against a set of cached OIDs. When a cached OID matches the OID being dereferenced, the VMA of the cached OID is retrieved from cache. Buffer cache items may be pinned during the period in which the software TLB stores entries for the items. The cache of OID translation information may be single or multi-leveled, and may be partially or completely stored in registers within a processor. When stored in registers, the translation information may be spilled out of the register, and reloaded into the register, as the register is needed for other purposes.
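As a rough model of the software-TLB dereference path described above (slot count, eviction policy, and all names are our assumptions):

```python
# Illustrative "software TLB": a small fixed set of cached OID->VMA
# entries is checked first; only on a miss is the expensive full
# translation performed and the result cached.

CACHE_SLOTS = 4

class SoftwareTLB:
    def __init__(self, slow_translate):
        self.entries = []               # (oid, vma) pairs, oldest first
        self.slow_translate = slow_translate
        self.misses = 0

    def dereference(self, oid):
        # compare the OID against every cached OID (hardware would do
        # this with a single vector-register compare)
        for cached_oid, vma in self.entries:
            if cached_oid == oid:
                return vma
        # miss: run the full translation and cache the result
        self.misses += 1
        vma = self.slow_translate(oid)
        self.entries.append((oid, vma))
        if len(self.entries) > CACHE_SLOTS:
            self.entries.pop(0)         # evict the oldest entry
        return vma

tlb = SoftwareTLB(slow_translate=lambda oid: 0x1000 + oid * 0x10)
assert tlb.dereference(7) == 0x1070   # miss, then cached
assert tlb.dereference(7) == 0x1070   # hit
assert tlb.misses == 1
```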

02-05-2013 publication date

Digital Signal Processing Data Transfer

Number: US20130111159A1
Assignee: Imagination Technologies Ltd

A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.
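A minimal sketch of the thread-per-channel control scheme, with Python threads standing in for processor threads and a queue standing in for the event mechanism (all names are ours):

```python
# One control thread per memory-access channel blocks on an event queue
# and, on each event, "programs" the memory access controller for its
# accelerator; here we just record which channel handled which event.
import queue
import threading

def channel_thread(channel_id, events, log):
    while True:
        ev = events.get()          # block until an event occurs
        if ev is None:             # shutdown sentinel
            return
        log.append((channel_id, ev))

events = [queue.Queue() for _ in range(2)]
log = []
threads = [threading.Thread(target=channel_thread, args=(i, q, log))
           for i, q in enumerate(events)]
for t in threads:
    t.start()

events[0].put("read buffer A")
events[1].put("write buffer B")
for q in events:
    q.put(None)                    # stop both channel threads
for t in threads:
    t.join()

assert sorted(log) == [(0, "read buffer A"), (1, "write buffer B")]
```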

09-05-2013 publication date

METHOD, SYSTEM, AND APPARATUS FOR PAGE SIZING EXTENSION

Number: US20130117531A1
Assignee:

A method, system, and apparatus may initialize a fixed plurality of page table entries for a fixed plurality of pages in memory, each page having a first size, wherein a linear address for each page table entry corresponds to a physical address and the fixed plurality of pages are aligned. A bit in each of the page table entries for the aligned pages may be set to indicate whether or not the fixed plurality of pages is to be treated as one combined page having a second page size larger than the first page size. Other embodiments are described and claimed.

1. A processor, comprising: a plurality of execution cores; and a translation lookaside buffer (TLB) coupled to the plurality of execution cores, the TLB to store a plurality of page table entries (PTEs) to translate virtual addresses to physical addresses of memory pages, each PTE having at least 64 bits and further including: a first bit to indicate whether a corresponding memory page is a 4-kilobyte (KB) memory page or a larger size memory page, a second bit to indicate whether the corresponding memory page has been written, a third bit to indicate whether the corresponding memory page has been accessed, and a fourth bit to indicate whether the corresponding PTE is able to be used to perform address translation.

2. The processor of claim 1, wherein the larger size memory page is one of a 64 KB page and a 1024 KB page.

3. The processor of claim 1, wherein the larger size memory page is one of a 64 KB, 1024 KB, 2 megabyte (MB), and 4 MB memory page.

4. The processor of claim 1, wherein the first bit is to indicate whether the corresponding memory page is either the 4 KB memory page or a 64 KB memory page.

5. The processor of claim 1, wherein the fourth bit is also to indicate whether the corresponding memory page has been loaded into a physical memory.

This is a continuation of application Ser. No. 11/967,868, filed Dec. 31, 2007, currently pending. Many processors may make use of virtual or ...
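The four flag bits enumerated in claim 1 can be illustrated with a toy decoder; note that the claim fixes no bit positions, so positions 0-3 below are purely our assumption:

```python
# Hypothetical encoding of the four PTE flag bits named in the claim
# (bit positions 0-3 are our assumption, not the patent's).
PS_BIT, DIRTY_BIT, ACCESSED_BIT, VALID_BIT = 0, 1, 2, 3

def decode_pte(pte):
    return {
        "large_page": bool(pte >> PS_BIT & 1),   # 4 KB vs. larger page
        "written":    bool(pte >> DIRTY_BIT & 1),
        "accessed":   bool(pte >> ACCESSED_BIT & 1),
        "valid":      bool(pte >> VALID_BIT & 1),
    }

pte = (1 << VALID_BIT) | (1 << PS_BIT)           # valid entry, large page
flags = decode_pte(pte)
assert flags["large_page"] and flags["valid"]
assert not flags["written"] and not flags["accessed"]
```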

23-05-2013 publication date

ROUTING SWITCH APPARATUS, NETWORK SWITCH SYSTEM, AND ROUTING SWITCHING METHOD

Number: US20130132634A1
Author: DIAO Junfeng, Liu Yunhai
Assignee: Huawei Technologies Co., Ltd.

The present disclosure relates to a routing switch apparatus, a network switch system, and a routing switch method. The routing switch apparatus includes one or more direct memory access modules and at least two protocol conversion interfaces. The direct memory access module is configured to generate a continuous access request of a cross network node, and control data transmission in the at least two protocol conversion interfaces; each protocol conversion interface is configured to convert a communication protocol of data transmitted inside and outside the routing switch apparatus and connect the routing switch module and an external network node. The routing switch apparatus may be introduced to replace a network switch, so that cross-node memory access and IO space access can be performed directly rather than through a proxy, thereby reducing delay of the cross-node memory access and IO space access and improving overall performance of a system. 1. A routing switch apparatus , comprising:one or more direct memory access modules; andat least two protocol conversion interfaces; wherein:the direct memory access module is configured to generate a continuous access request of a cross network node and control data transmission in the at least two protocol conversion interfaces, andeach protocol conversion interface is configured to convert a communication protocol of data transmitted inside and outside the routing switch apparatus and connect the routing switch module and an external network node.2. The routing switch apparatus according to claim 1 , wherein the direct memory access module comprises:a direct memory access controller; anda direct memory access channel; whereinthe direct memory access controller controls a connection between the direct memory access channel and the protocol conversion interface according to configuration information.3. 
The routing switch apparatus according to claim 2 , wherein the direct memory access module further comprises a storage ...

30-05-2013 publication date

Efficient Memory and Resource Management

Number: US20130138840A1

The present system enables passing a pointer, associated with accessing data in a memory, to an input/output (I/O) device via an input/output memory management unit (IOMMU). The I/O device accesses the data in the memory via the IOMMU without copying the data into a local I/O device memory. The I/O device can perform an operation on the data in the memory based on the pointer, such that I/O device accesses the memory without expensive copies.

06-06-2013 publication date

Direct Device Assignment

Number: US20130145051A1
Author: Andrew Kegel, Mark Hummel
Assignee: Advanced Micro Devices Inc

A system is enabled for configuring an IOMMU to provide direct access to system memory data by at least one I/O device/peripheral. Further, the IOMMU is configured to pass a pointer to at least one I/O device without having to translate the pointer. Further, commands are sent from a process within a guest operating system (OS) directly to a peripheral without intervention from a hypervisor. Further, the IOMMU is configured to grant peripherals access permissions to memory blocks to maintain isolation among peripherals.

27-06-2013 publication date

IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, AND CONTROL PROGRAM

Number: US20130166792A1
Author: SHIMMOTO Takafumi
Assignee:

An image processing method includes: dividing received data into a header and a body; and writing the data in at least one buffer through a direct memory access (DMA) transfer. 1. An image processing method comprising:dividing received data into a header and a body; andwriting the data in at least one buffer through a direct memory access (DMA) transfer.2. The image processing method according to claim 1 , further comprising:analyzing the header; andselecting a buffer as a write destination of the DMA transfer according to contents of data contained in the body.3. The image processing method according to claim 1 , wherein the writing includes:if data contained in the body is content data, writing the data in a projection buffer through the DMA transfer; andif data contained in the body is a command, writing the data in a reception buffer through the DMA transfer.4. The image processing method according to claim 1 , further comprising:adding dummy data to each of the header and the body to adjust alignment thereof when subjecting the divided received data to the DMA transfer.5. The image processing method according to claim 1 , wherein the writing includes:writing the received data in a reception buffer; andif data contained in a body is content as a result of analysis, writing the data in a projection buffer through the DMA transfer.6. The image processing method according to claim 1 , wherein the writing includes:out of the received data, writing data having a size required for an analysis of the header, in a reception buffer through the DMA transfer; andwriting content data in a projection buffer, the content data being included in part of the body written in the reception buffer and a rest of the body.7. An image processing apparatus comprising:a network board that analyzes received data and outputs image data; anda projection unit that projects the image data output from the network board as an optical image, wherein a division unit that divides the received ...
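The header/body split and content-based buffer selection might be modeled like this (the one-byte type field and all names are assumptions, not the patented format):

```python
# Sketch of the routing rule: received data is divided into a header and
# a body, and the body is DMA-written to a buffer chosen by content type.
CONTENT, COMMAND = 0x01, 0x02

def dma_route(packet, projection_buffer, reception_buffer):
    header, body = packet[:1], packet[1:]       # divide into header and body
    if header[0] == CONTENT:
        projection_buffer.extend(body)          # content data goes to the projector
    else:
        reception_buffer.extend(body)           # commands are queued

proj, recv = bytearray(), bytearray()
dma_route(bytes([CONTENT]) + b"pixels", proj, recv)
dma_route(bytes([COMMAND]) + b"power-off", proj, recv)
assert proj == b"pixels"
assert recv == b"power-off"
```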

04-07-2013 publication date

Application processor and a computing system having the same

Number: US20130173883A1
Author: Il-Ho Lee, Kyong-Ho Cho
Assignee: SAMSUNG ELECTRONICS CO LTD

An application processor includes a system memory unit, peripheral devices, a control unit and a central processing unit (CPU). The system memory unit includes one page table. The peripheral devices share the page table and perform a DMA (Direct Memory Access) operation on the system memory unit using the page table, where each of the peripheral devices includes a memory management unit having a translation lookaside buffer. The control unit divides a total virtual address space corresponding to the page table into sub virtual address spaces, assigns the sub virtual address spaces to the peripheral devices, respectively, allocates and releases a DMA buffer in the system memory unit, and updates the page table, where at least two of the sub virtual address spaces have different sizes from each other. The CPU controls the peripheral devices and the control unit. The application processor reduces memory consumption.
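Assigning differently sized sub virtual address spaces out of one shared space can be sketched as follows (sizes and names are illustrative):

```python
# Carving one page-table-backed virtual address space into differently
# sized sub-spaces, one per peripheral, packed from address 0 upward.
TOTAL_VA = 1 << 32                     # one shared 4 GiB virtual space

def assign_subspaces(sizes):
    """Return {device: (base, size)} for each peripheral."""
    spaces, base = {}, 0
    for device, size in sizes.items():
        if base + size > TOTAL_VA:
            raise ValueError("virtual address space exhausted")
        spaces[device] = (base, size)
        base += size
    return spaces

spaces = assign_subspaces({"gpu": 1 << 30, "dma": 1 << 28, "usb": 1 << 24})
assert spaces["gpu"] == (0, 1 << 30)
assert spaces["dma"] == (1 << 30, 1 << 28)          # placed right after the GPU's
assert spaces["usb"][0] == (1 << 30) + (1 << 28)
```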

18-07-2013 publication date

Fencing Direct Memory Access Data Transfers In A Parallel Active Messaging Interface Of A Parallel Computer

Number: US20130185465A1

Fencing direct memory access (‘DMA’) data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints. 1. 
A method of fencing direct memory access (‘DMA’) data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer , the parallel computer comprising a plurality of compute nodes that execute a parallel application , the PAMI comprising data communications endpoints , the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources including DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically , in the same order in which the communications are transmitted , the method comprising:initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, an origin endpoint and a target endpoint, each DMA instruction effecting a deterministic DMA data transfer through a DMA controller and a segment of shared ...

25-07-2013 publication date

Direct Memory Address for Solid-State Drives

Number: US20130191594A1

A storage device is provided for direct memory access. A controller of the storage device performs a mapping of a window of memory addresses to a logical block addressing (LBA) range of the storage device. Responsive to receiving from a host a write request specifying a write address within the window of memory addresses, the controller initializes a first memory buffer in the storage device and associates the first memory buffer with a first address range within the window of memory addresses such that the write address of the request is within the first address range. The controller writes to the first memory buffer based on the write address. Responsive to the buffer being full, the controller persists contents of the first memory buffer to the storage device using logical block addressing based on the mapping.

1-25. (canceled)

26. A method for direct memory access in a storage device, the method comprising: performing a mapping of a window of memory addresses to a logical block addressing (LBA) range of the storage device; responsive to receiving from a host a write request specifying a write address within the window of memory addresses, initializing a first memory buffer in the storage device; associating the first memory buffer with a first address range within the window of memory addresses such that the write address of the request is within the first address range; writing to the first memory buffer based on the write address; and responsive to the buffer being full, persisting contents of the first memory buffer to the storage device using logical block addressing based on the mapping.

27. The method of claim 26, further comprising: associating the first memory buffer with a first timer; responsive to writing to the first memory buffer, restarting the first timer; and responsive to detecting expiration of the first timer, persisting contents of the first memory buffer to the storage device.

28. The method of claim 26, further comprising: responsive to receiving ...
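The buffer-then-persist write path can be modeled roughly as follows (the buffer size, layout, and names are our assumptions):

```python
# Writes into the memory window land in a buffer; a full buffer is
# persisted to the drive at the LBA the window maps to.
BUF_SIZE = 4

class MemoryWindow:
    def __init__(self, window_base, lba_base, storage):
        self.window_base, self.lba_base = window_base, lba_base
        self.storage = storage          # dict: lba -> bytes, the "drive"
        self.buf_start, self.buf = None, bytearray()

    def write(self, addr, byte):
        if self.buf_start is None:
            self.buf_start = addr       # first write fixes the buffer's range
        self.buf.append(byte)
        if len(self.buf) == BUF_SIZE:   # buffer full: persist via LBA mapping
            lba = self.lba_base + (self.buf_start - self.window_base) // BUF_SIZE
            self.storage[lba] = bytes(self.buf)
            self.buf_start, self.buf = None, bytearray()

drive = {}
win = MemoryWindow(window_base=0x8000, lba_base=100, storage=drive)
for i, b in enumerate(b"abcd"):
    win.write(0x8000 + i, b)
assert drive == {100: b"abcd"}          # persisted once the buffer filled
```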

01-08-2013 publication date

METHODS AND SYSTEMS FOR DEVICES WITH SELF-SELECTING BUS DECODER

Number: US20130198433A1
Assignee: MICRON TECHNOLOGY, INC.

Disclosed are methods and devices, among which is a device including a self-selecting bus decoder. In some embodiments, the device may be coupled to a microcontroller, and the self-selecting bus decoder may determine a response of the peripheral device to requests from the microcontroller. In another embodiment, the device may include a bus translator and a self-selecting bus decoder. The bus translator may be configured to translate between signals from a selected one of a plurality of different types of buses. A microcontroller may be coupled to a selected one of the plurality of different types of buses of the bus translator. 1. A system , comprising:a microcontroller; anda device coupled to the microcontroller via a bus, wherein the device comprises a decoder configured to select the device in response to a request from the microcontroller.2. The system of claim 1 , wherein the device comprises a pattern recognition processor.3. The system of claim 1 , wherein the decoder is configured to receive a memory mapping configuration.4. The system of claim 3 , wherein the memory mapping configuration comprises an indication of a plurality of memory addresses provided by the device.5. The system of claim 3 , wherein the memory mapping configuration comprises an indication of a plurality of memory addresses provided by the microcontroller.6. The system of claim 3 , wherein the decoder is configured to determine if a memory address associated with the request is provided by the device based on the memory mapping configuration.7. The system of claim 3 , wherein the decoder is configured to determine a type of the request.8. The system of claim 3 , wherein the decoder is configured to determine a response to the request.9. The system of claim 1 , wherein the device comprises double data rate two (DDR2) RAM.10. The system of claim 1 , wherein the request comprises a memory read claim 1 , a memory write claim 1 , a memory refresh claim 1 , a DMA request claim 1 , or any ...

08-08-2013 publication date

OBJECT-BASED MEMORY STORAGE

Number: US20130205114A1
Assignee: FUSION-IO

The method includes receiving an object operation from an application at a hardware device manager. The object operation includes an object identifier. The method includes performing the object operation directly on a storage device. A physical address for the object corresponding to the object identifier is mapped directly to the object identifier in an index managed by the hardware device manager. 1. A method , comprising:receiving an object operation from an application at a hardware storage device manager, wherein the object operation comprises an object identifier; andperforming the object operation directly on a storage device, wherein a physical address for the object corresponding to the object identifier is mapped directly to the object identifier in an index managed by the hardware storage device manager.2. The method of claim 1 , wherein the physical address is mapped directly to the object identifier in an absence of a block-based translation layer for the storage device.3. The method of claim 1 , wherein the object operation is a write operation comprising:writing object data to the storage device at a new physical address; andmapping an address for the object identifier in the index to the new physical address.4. The method of claim 1 , wherein the object operation is a read operation comprising identifying claim 1 , via the index claim 1 , the physical address corresponding to the object stored on the storage device.5. The method of claim 1 , wherein the object operation is a delete operation comprising:invalidating data corresponding to a deleted object on the storage device; andremoving, from the index, the object identifier corresponding to the deleted object.6. The method of claim 1 , wherein the storage device stores data in a log structure claim 1 , wherein the physical address corresponds to an append point of the log structure.7. The method of claim 1 , wherein the index comprises a table configured to track a location and a size of object ...
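A toy version of the direct object-ID-to-physical-address index described above (the log layout and all names are ours):

```python
# The index maps object IDs straight to physical (log) addresses, with
# no block-based translation layer in between; new writes land at the
# log's append point and the index is remapped.
class ObjectStore:
    def __init__(self):
        self.log = []        # append-only log of object payloads
        self.index = {}      # object id -> physical address (log offset)

    def write(self, oid, data):
        self.index[oid] = len(self.log)   # new physical address: append point
        self.log.append(data)

    def read(self, oid):
        return self.log[self.index[oid]]

    def delete(self, oid):
        self.log[self.index.pop(oid)] = None   # invalidate old data

store = ObjectStore()
store.write("a", b"v1")
store.write("a", b"v2")       # rewrite appends and remaps the index
assert store.read("a") == b"v2"
store.delete("a")
assert "a" not in store.index
```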

15-08-2013 publication date

INVALIDATING TRANSLATION LOOKASIDE BUFFER ENTRIES IN A VIRTUAL MACHINE SYSTEM

Number: US20130212313A1
Assignee:

One embodiment of the present invention is a technique to invalidate entries in a translation lookaside buffer (TLB). A TLB in a processor has a plurality of TLB entries. Each TLB entry is associated with a virtual machine extension (VMX) tag word indicating if the associated TLB entry is invalidated according to a processor mode when an invalidation operation is performed. The processor mode is one of execution in a virtual machine (VM) and execution not in a virtual machine. The invalidation operation belongs to a non-empty set of invalidation operations composed of a union of (1) a possibly empty set of operations that invalidate a variable number of TLB entries, (2) a possibly empty set of operations that invalidate exactly one TLB entry, (3) a possibly empty set of operations that invalidate the plurality of TLB entries, (4) a possibly empty set of operations that enable and disable use of virtual memory, and (5) a possibly empty set of operations that configure physical address size, page size or other virtual memory system behavior in a manner that changes the manner in which a physical machine interprets the TLB entries. 1. 
An apparatus comprising: a translation lookaside buffer (TLB) in a processor having a plurality of TLB entries , each TLB entry being associated with a virtual machine extension (VMX) tag word indicating if the associated TLB entry is invalidated according to the processor mode when an invalidation operation is performed , the processor mode being one of execution in a virtual machine (VM) and execution not in a virtual machine , the invalidation operation belonging to a non-empty set of invalidation operations composed of a union of (1) a possibly empty set of operations that invalidate a variable number of TLB entries , (2) a possibly empty set of operations that invalidate exactly one TLB entry , (3) a possibly empty set of operations that invalidate the plurality of TLB entries , (4) a possibly empty set of operations that enable and ...

15-08-2013 publication date

MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS

Number: US20130212314A1
Assignee:

In one embodiment of the present invention, a method includes switching between a first address space and a second address space, determining if the second address space exists in a list of address spaces; and maintaining entries of the first address space in a translation buffer after the switching. In such manner, overhead associated with such a context switch may be reduced.

1. A processor, comprising: an execution logic to support a virtual machine monitor (VMM) to provide an abstraction of one or more virtual machines (VMs) to a plurality of guests running on the one or more VMs, each of the plurality of guests to include an operating system (OS) and software, the VMM to provide access to each of the one or more VMs to a set of physical resources including processor resources, memory, and input/output (IO) devices; a translation lookaside buffer (TLB) having a plurality of page table entries (PTEs) to translate virtual addresses to physical addresses of memory pages, each PTE including: an address space identifier (ASID) to identify an address space associated with the PTE, a thread identifier (ID) to identify a thread associated with the PTE, and a valid bit; and a current ASID register to store a current ASID that is to be updated to switch to a different address space, wherein the execution logic, in response to a context switch, is to either not invalidate any PTEs of the TLB or to selectively invalidate one or more PTEs of the TLB based on ASIDs of the PTEs and the current ASID stored in the current ASID register.

This application is a continuation of U.S. patent application Ser. No. 13/708,547, filed Dec. 7, 2012, and entitled “MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS”, which is a continuation of U.S. patent application Ser. No. 13/020,161, filed Feb. 3, 2011, and entitled “MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS”, which is a continuation of U.S. patent application Ser. No. 12/483,519, filed Jun.
12, 2009, entitled “MAINTAINING ...
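The ASID-tagged TLB behavior, where a context switch need not flush entries, can be sketched as follows (structure and names are ours):

```python
# ASID-tagged TLB entries survive a context switch: only entries whose
# ASID matches the current ASID register are used, so nothing need be
# invalidated when the current ASID changes.
class AsidTLB:
    def __init__(self):
        self.entries = {}        # (asid, virtual page) -> physical page
        self.current_asid = 0

    def insert(self, vpage, ppage):
        self.entries[(self.current_asid, vpage)] = ppage

    def lookup(self, vpage):
        return self.entries.get((self.current_asid, vpage))

    def context_switch(self, asid):
        self.current_asid = asid   # no invalidation: old entries stay cached

tlb = AsidTLB()
tlb.insert(0x10, 0x99)
tlb.context_switch(asid=1)
assert tlb.lookup(0x10) is None      # other address space: no stale hit
tlb.context_switch(asid=0)
assert tlb.lookup(0x10) == 0x99      # entry survived both switches
```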

05-09-2013 publication date

MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS

Number: US20130232316A1
Assignee:

In one embodiment of the present invention, a method includes switching between a first address space and a second address space, determining if the second address space exists in a list of address spaces; and maintaining entries of the first address space in a translation buffer after the switching. In such manner, overhead associated with such a context switch may be reduced.

1. A method comprising: switching between a first address space and a second address space; determining if the second address space exists in a list of address spaces; and ...

This application is a Continuation of U.S. patent application Ser. No. 13/708,547, filed Dec. 7, 2012, entitled “MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS”, which is a Continuation of U.S. patent application Ser. No. 13/020,161, filed Feb. 3, 2011, and entitled “MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS”, which is a continuation of U.S. patent application Ser. No. 12/483,519, filed Jun. 12, 2009, entitled “MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS”, now U.S. Pat. No. 7,889,972, issued on Mar. 1, 2011, which is a continuation of U.S. patent application Ser. No. 10/903,704, filed Jul. 30, 2004, entitled “MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS,” now U.S. Pat. No. 7,562,179, issued on Jul. 14, 2009, the content of which is hereby incorporated by reference in its entirety into this application.

The present invention relates generally to data processing systems, and more particularly to processing in different contexts using a processor. Many current computer systems use virtual memory systems to manage and allocate memory to various processes running within the system, which allow each process running on the system to operate as if it has control of the full range of addresses provided by the system. The operating system (OS) maps the virtual address space for each process to the actual physical address space for the system.
Mapping from a physical address to a ...

12-09-2013 publication date

Multiple page size memory management unit

Number: US20130238875A1
Assignee: FREESCALE SEMICONDUCTOR INC

A memory management unit can receive an address associated with a page size that is unknown to the MMU. The MMU can concurrently determine whether a translation lookaside buffer data array stores a physical address associated with the address based on different portions of the address, where each of the different portions is associated with a different possible page size. This provides for efficient translation lookaside buffer data array access when different programs, employing different page sizes, are concurrently executed at a data processing device.
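Probing one TLB under several candidate page sizes, as the abstract describes, might look like this sequential model (a real MMU compares the candidates concurrently; the page sizes are examples):

```python
# Each possible page size masks off a different low portion of the
# address; whichever candidate tag hits supplies the translation.
PAGE_SIZES = [4 << 10, 64 << 10, 1 << 20]     # 4 KB, 64 KB, 1 MB

def lookup(tlb, vaddr):
    # a real MMU compares all candidate tags in parallel; we loop
    for size in PAGE_SIZES:
        entry = tlb.get((vaddr & ~(size - 1), size))
        if entry is not None:
            phys_base = entry
            return phys_base + (vaddr & (size - 1))
    return None                                # TLB miss

tlb = {(0x100000, 1 << 20): 0x7000000}         # one 1 MB translation cached
assert lookup(tlb, 0x100ABC) == 0x7000ABC      # hit despite unknown page size
assert lookup(tlb, 0x300ABC) is None
```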

26-09-2013 publication date

Direct memory access system and method using the same

Number: US20130254433A1
Author: Kuo-Cheng Lu
Assignee: Ralink Technology Corp Taiwan

The invention discloses a DMA system capable of being adapted to various interfaces. The DMA system offers the following advantages: 1) the software porting effort can be reduced when different interfaces are integrated into a SoC; 2) a flexible DMA that provides protocol transparency and can be ported to different interfaces easily; 3) a scalable DMA that can support unlimited TX/RX scattering/gathering data segments; 4) a reusable DMA that provides user-defined TX information (or RX information) and TX message (or RX message) fields; and 5) a high-performance DMA that supports unaligned segment data pointers and unlimited scattering/gathering data segments, so as to reduce extra memory copies by the CPU.

03-10-2013 publication date

Data compression for direct memory access transfers

Number: US20130262538A1
Author: Albert W. Wegener
Assignee: Samplify Systems Inc

Memory system operations are extended for a data processor by DMA, cache, or memory controller to include a DMA descriptor, including a set of operations and parameters for the operations, which provides for data compression and decompression during or in conjunction with processes for moving data between memory elements of the memory system. The set of operations can be configured to use the parameters and perform the operations of the DMA, cache, or memory controller. The DMA, cache, or memory controller can support moves between memory having a first access latency, such as memory integrated on the same chip as a processor core, and memory having a second access latency that is longer than the first access latency, such as memory on a different integrated circuit than the processor core.
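A DMA descriptor carrying a compression operation could be modeled as follows (the descriptor fields are our invention; zlib stands in for the hardware compressor):

```python
# The "move" compresses data on the way out to slow memory and
# decompresses it on the way back in, driven by a descriptor.
import zlib

def execute_descriptor(desc, src, dst):
    data = bytes(src[desc["src_off"]:desc["src_off"] + desc["length"]])
    if desc["op"] == "move_compress":
        data = zlib.compress(data)
    elif desc["op"] == "move_decompress":
        data = zlib.decompress(data)
    dst[desc["dst_off"]:desc["dst_off"] + len(data)] = data
    return len(data)

on_chip = bytearray(b"ABAB" * 64)              # fast, small memory
off_chip = bytearray(512)                      # slower, larger memory
n = execute_descriptor({"op": "move_compress", "src_off": 0,
                        "length": 256, "dst_off": 0}, on_chip, off_chip)
assert n < 256                                 # repetitive data shrank
back = bytearray(256)
execute_descriptor({"op": "move_decompress", "src_off": 0,
                    "length": n, "dst_off": 0}, off_chip, back)
assert back == on_chip[:256]
```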

03-10-2013 publication date

SEMICONDUCTOR INTEGRATED CIRCUIT AND DMA CONTROL METHOD OF THE SAME

Number: US20130262732A1
Author: TANABATA Masatoshi
Assignee: FUJITSU SEMICONDUCTOR LIMITED

A semiconductor integrated circuit includes a bus, a memory connected to the bus, an arithmetic processing unit connected to the bus, a first DMA controller connected to the bus, and at least one functional block connected to the bus. The functional block includes a functional macro which is configured to perform a process that realizes a given function, a second DMA controller which is configured to control data transfer between the memory and the functional macro, and an access condition setting unit which is configured to set an access condition regarding the DMA transfer between the memory and the functional macro. 1. A semiconductor integrated circuit comprising:a bus;a memory connected to the bus;an arithmetic processing unit connected to the bus;a first DMA controller connected to the bus; andat least one functional block connected to the bus, the functional block including a functional macro which is configured to perform a process that realizes a given function, a second DMA controller which is configured to control data transfer between the memory and the functional macro, and an access condition setting unit which is configured to set an access condition regarding the DMA transfer between the memory and the functional macro.2. The semiconductor integrated circuit as claimed in claim 1 , whereinthe access condition setting unit includes a register and a control code storing unit, which are configured to set the access condition including an address of the memory and transfer size in a unit of instruction.3. The semiconductor integrated circuit as claimed in claim 2 , whereinthe control code storing unit is an instruction memory provided in the second DMA controller, anda control code which defines the access condition set in the unit of instruction is written in the instruction memory at the time of initial setting.4. 
The semiconductor integrated circuit as claimed in claim 3 , whereinthe control code is written in the instruction memory by the arithmetic ...

03-10-2013 publication date

HYBRID ADDRESS TRANSLATION

Number: US20130262815A1

Embodiments of the invention relate to hybrid address translation. An aspect of the invention includes receiving a first address, the first address referencing a location in a first address space. The computer searches a segment lookaside buffer (SLB) for a SLB entry corresponding to the first address; the SLB entry comprising a type field and an address field and determines whether a value of the type field in the SLB entry indicates a hashed page table (HPT) search or a radix tree search. Based on determining that the value of the type field indicates the HPT search, a HPT is searched to determine a second address, the second address comprising a translation of the first address into a second address space; and based on determining that the value of the type field indicates the radix tree search, a radix tree is searched to determine the second address. 1. A computer implemented method for hybrid address translation in a computer , the method comprising:receiving a first address, the first address referencing a location in a first address space;searching, by the computer, a segment lookaside buffer (SLB) for a SLB entry corresponding to the first address, the SLB entry comprising a type field and an address field;determining whether a value of the type field in the SLB entry indicates a hashed page table (HPT) search or a radix tree search;based on determining that the value of the type field indicates the HPT search, searching a HPT to determine a second address, the second address comprising a translation of the first address into a second address space; andbased on determining that the value of the type field indicates the radix tree search, searching a radix tree to determine the second address.2. 
The method of claim 1 , wherein searching the HPT to determine the second address comprises:extracting a virtual address associated with the first address from the address field of the SLB entry corresponding to the first address; andsearching the HPT for the virtual ...
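The dispatch between the two translation mechanisms can be sketched in Python. The segment indexing, the shapes of the HPT and radix structures, and the names `SLBEntry` and `translate` are illustrative assumptions; the abstract specifies only that the SLB entry's type field selects which search is performed.

```python
from dataclasses import dataclass

HPT_SEARCH, RADIX_SEARCH = 0, 1  # assumed values of the SLB type field

@dataclass
class SLBEntry:
    type_field: int  # selects HPT or radix-tree translation
    address: int     # per-segment value used by the selected mechanism

def translate(first_address, slb, hpt, radix_root, page_shift=12):
    """Translate an address in the first address space into the second,
    choosing HPT or radix-tree search from the matching SLB entry."""
    entry = slb[first_address >> 28]          # segment lookup (illustrative)
    offset = first_address & ((1 << page_shift) - 1)
    if entry.type_field == HPT_SEARCH:
        # Hashed page table: key on (segment value, virtual page number).
        page_base = hpt[(entry.address, first_address >> page_shift)]
    else:
        # Radix tree: walk one node per 9-bit index group (illustrative split).
        node = radix_root
        for shift in (30, 21, page_shift):
            node = node[(first_address >> shift) & 0x1FF]
        page_base = node
    return page_base | offset
```

A TLB in front of `translate` would cache the result either way; the point of the scheme is that both page-table formats coexist behind a single SLB lookup.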

10-10-2013 publication date

MEMORY CONTROLLERS, MEMORY SYSTEMS, SOLID STATE DRIVES AND METHODS FOR PROCESSING A NUMBER OF COMMANDS

Number: US20130268701A1
Assignee: MICRON TECHNOLOGY, INC.

The present disclosure includes methods and devices for a memory controller. In one or more embodiments, a memory controller includes a plurality of back end channels, and a command queue communicatively coupled to the plurality of back end channels. The command queue is configured to hold host commands received from a host. Circuitry is configured to generate a number of back end commands at least in response to a number of the host commands in the command queue, and distribute the number of back end commands to a number of the plurality of back end channels. 1. A memory system , comprising:a number of memory devices; anda controller having a front end direct memory access module (DMA) and a number of back end channels communicatively coupled between a respective one of the number of memory devices and the front end DMA; the front end DMA being configured to process a payload associated with a single host command communicated by the host, wherein respective portions of the payload are associated with corresponding multiple back end commands that are being substantially simultaneously executed across the number of back end channels.2. The memory system of claim 1 , wherein the single host command is a write command claim 1 , and the front end DMA is configured to distribute the payload associated with the single host command amongst more than one of the number of back end channels corresponding to the multiple back end commands.3. The memory system of claim 1 , wherein the single host command is a read command claim 1 , and the front end DMA is configured to assemble a payload associated with the single host command from amongst more than one of the number of back end channels corresponding to the multiple back end commands.4. 
The memory system of claim 1 , wherein the front end DMA is configured to determine a logical block address and sector count for each respective portion of the payload associated with each of the multiple back end commands claim 1 , wherein ...
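The front end DMA's handling of a single host command — splitting a write payload into per-channel portions with their own logical block addresses and sector counts, and reassembling a read payload — might look like the following sketch. The sector-granular round-robin striping policy and all names are assumptions for illustration; the patent does not fix a distribution policy.

```python
def split_payload(lba, payload, num_channels, sector_size=512):
    """Divide one host command's payload into per-back-end-channel portions.

    Returns (channel, start_lba, sector_count, data) tuples; round-robin
    striping by sector is an illustrative policy only."""
    sectors = [payload[i:i + sector_size]
               for i in range(0, len(payload), sector_size)]
    return [(i % num_channels, lba + i, 1, sector)
            for i, sector in enumerate(sectors)]

def assemble_payload(portions):
    """Reassemble a read payload from per-channel portions in LBA order,
    regardless of the order the back end channels completed in."""
    return b"".join(data for _, _lba, _, data in
                    sorted(portions, key=lambda p: p[1]))
```

Because `assemble_payload` sorts by LBA, the back end commands can execute substantially simultaneously and complete out of order, which is the property the abstract emphasizes.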

17-10-2013 publication date

ADDRESS SPACE MANAGEMENT WHILE SWITCHING OPTICALLY-CONNECTED MEMORY

Number: US20130275704A1

A remote processor is signaled for receiving a remote machine memory address (RMMA) space that contains data to be transferred. The RMMA space is mapped to a free portion of a system memory address (SMA) space of the remote processor. The entries of a page table corresponding to the address space are created. 1. In an optically-connected memory (OCM) system , a method for address space management , comprising:signaling a remote processor for receiving a remote machine memory address (RMMA) space that contains data to be transferred, andmapping the RMMA space to a free portion of a system memory address (SMA) space of the remote processor, wherein entries of a page table corresponding to the SMA space are created.2. The method of claim 1 , further including performing the signaling and the mapping on a side of a source node.3. The method of claim 1 , further including performing the signaling and the mapping while dynamically switching the memory through the optical-switching fabric using a selected one of a plurality of available communication patterns to transfer the RMMA space in the memory blades from one of the plurality of processors to an alternative one of the plurality of processors in the processor blades without physically copying data in the memory to the plurality of processors.4. The method of claim 3 , further including:supplying a set of remote memory superpages by an optical plane, andassimilating the remote memory superpages by grafting the remote memory superpages into the SMA space of the remote processor for creating a mapping within the page table.5. The method of claim 4 , further including transparently accessing the remote memory superpages on the remote processor by a plurality of applications.6. The method of claim 3 , further including observing claim 3 , for similar data claim 3 , by a memory controller on a remote memory blade for the remote processor claim 3 , the RMMA space that is identical to the RMMA space in a source processor.7. 
...
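Grafting remote memory superpages into the free portion of the remote processor's SMA space, with page table entries created as a side effect, can be sketched as follows. The flat dict page table, the function name, and the returned RMMA-to-SMA map are illustrative assumptions; the claims only require that entries of a page table corresponding to the SMA space are created.

```python
def graft_superpages(page_table, sma_free_base, rmma_pages, superpage_size):
    """Map each remote superpage into the free SMA region, creating page
    table entries so applications can transparently access remote memory
    without physically copying it."""
    mapping = {}
    for i, rmma in enumerate(rmma_pages):
        sma = sma_free_base + i * superpage_size
        page_table[sma] = rmma       # new entry: SMA superpage -> remote page
        mapping[rmma] = sma
    return mapping
```

Switching the optical fabric then only changes which processor holds these mappings; the data itself stays on the memory blade.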

24-10-2013 publication date

Methods and Systems for Protecting Data in USB Systems

Number: US20130282934A1

The various embodiments described below are directed to providing authenticated and confidential messaging from software executing on a host (e.g. a secure software application or security kernel) to and from I/O devices operating on a USB bus. The embodiments can protect against attacks that are levied by software executing on a host computer. In some embodiments, a secure functional component or module is provided and can use encryption techniques to provide protection against observation and manipulation of USB data. In other embodiments, USB data can be protected through techniques that do not utilize (or are not required to utilize) encryption techniques. In accordance with these embodiments, USB devices can be designated as “secure” and, hence, data sent over the USB to and from such designated devices can be provided into protected memory. Memory indirection techniques can be utilized to ensure that data to and from secure devices is protected. 1. A method comprising:receiving a request from an application for a USB transaction;querying the application for a memory location that is to be the subject of the transaction;receiving a memory location indication from the application, the memory location indication comprising, in an event that the application is a secure application, an indication associated with protected memory;processing the memory location indication into a transaction description (TD); andprocessing the TD with a host controller effective to either copy in or copy out data relative to the protected memory location associated with the memory location indication.2. The method of claim 1 , wherein protected memory is only accessible by a USB host controller.3. The method of claim 1 , wherein the act of processing the TD comprises copying in or copying out the data only if the protected memory location is associated with a secure USB device that is the subject of the USB transaction.4. The method of claim 1 , wherein the host controller copies ...

24-10-2013 publication date

System and method for system wide self-managing storage operations

Number: US20130282948A1

The present invention presents a system and method to provide a storage-system-wide approach to better manage IO requests and the prefetch transfers of data to and from the drives. 1. A method for providing access from one or more host computer systems to a multi-node data storage system, where access to a storage space could be made via any connected host with access permission connected to a Storage Node, said Storage Nodes connected by an Interconnected Bus, said storage space of the Storage Nodes being made up of one or more physical storage elements, or portions of one or more storage elements, comprising the steps of: a) connecting at least one host or client to an IO or network connection on one Storage Node; b) connecting at least one host or client to an IO or network connection on one or more additional Storage Nodes; c) performing multicast IO transfers over an interconnected bus connected to the memory of each Storage Node in the storage system to write to a volume on that node; d) responding to host IO requests and managing a Logical Storage Capacity for each Storage Node to aggregate and track the storage capacities of data drives available within the Storage Node, including which Nodes may access the drives. This application claims priority of U.S. provisional application Ser. No. 61/631,272, filed Dec. 31, 2010, entitled “System and Method for System Wide Self-Managing Storage Operations”. The present invention relates to a computer system, storage system, and more particularly, to disk drive operations. shows a prior art storage system for disk drive operation with a plurality of storage elements.
As depicted in , an individual storage element is comprised of a controller interface, processing capability, memory, and a driver program. The storage system of the prior art in , shows a plurality of storage elements, connected to a controller or adapter board, and connected via the bus from the controller board to one or more ...

14-11-2013 publication date

Managing A Direct Memory Access ('DMA') Injection First-In-First-Out ('FIFO') Messaging Queue In A Parallel Computer

Number: US20130304948A1
Assignee: International Business Machines Corp

Managing a direct memory access (‘DMA’) injection first-in-first-out (‘FIFO’) messaging queue in a parallel computer, including: inserting, by a messaging unit management module, a DMA message descriptor into the injection FIFO messaging queue; determining, by the messaging unit management module, the number of extra slots in an immediate messaging queue required to store DMA message data associated with the DMA message descriptor; and responsive to determining that the number of extra slots in the immediate message queue required to store the DMA message data is greater than one, inserting, by the messaging unit management module, a number of DMA dummy message descriptors into the injection FIFO messaging queue, wherein the number of DMA dummy message descriptors is at least as many as the number of extra slots in the immediate messaging queue that are required to store the DMA message data.
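The padding logic described above can be sketched like so. The slot size, the dummy-descriptor marker, and the queue representations are illustrative assumptions; the "greater than one" trigger and the "at least as many dummies as extra slots" rule follow the abstract.

```python
import math

def inject(injection_fifo, immediate_queue, descriptor, data, slot_size=64):
    """Insert a DMA message descriptor and, when its message data needs more
    than one extra immediate-queue slot, pad the injection FIFO with dummy
    descriptors (one per extra slot) so the two queues stay in step."""
    injection_fifo.append(descriptor)
    slots = math.ceil(len(data) / slot_size)
    extra = max(slots - 1, 0)           # slots beyond the descriptor's own
    if extra > 1:
        injection_fifo.extend(["dummy"] * extra)
    for i in range(slots):
        immediate_queue.append(data[i * slot_size:(i + 1) * slot_size])
    return extra
```

Keeping the two queues aligned this way lets the messaging unit advance both with a single index, which is presumably why dummies are cheaper than variable-length FIFO entries.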

14-11-2013 publication date

COMPUTER AND INPUT/OUTPUT CONTROL METHOD OF COMPUTER

Number: US20130304949A1
Assignee: Hitachi, Ltd.

An HBA driver manages a queue number for enqueuing and dequeuing data to an I/O queue by the main storage, and HBA-F/W manages a storage region at inside of HBA. The HBA driver reduces the number of access times by way of the PCIe bus by noticing an enqueued queue number or a dequeued queue number of an I/O queue to HBA-F/W by utilizing an MMIO area of the main storage in which a storage region on HBA is mapped. 1. An input/output control method of a computer comprising CPU , a main storage connected to the CPU via a bridge , and a host bus adapter (HBA) connected to the CPU and the main storage via a PCIe bus connected to the bridge for transmitting and receiving a data to and from an I/O device ,wherein the HBA comprises an HBA firmware and a storage region,wherein the CPU executes an OS and an HBA driver operated on the OS for controlling the HBA,wherein the main storage comprises an I/O queue from which the data is enqueued or dequeued, a management queue of managing a queue number of the data which is enqueued or dequeued from the I/O queue, an Memory Mapped I/O (MMIO) area in which a storage region of the HBA is mapped,wherein the HBA driver writes a piece of management information of an updated management queue to the MMIO area when the management queue is updated, andwherein the OS writes the piece of management information written to the MMIO area to a storage region of the HBA in correspondence with the MMIO area of the main storage.2. The input/output control method according to claim 1 , wherein claim 1 , when the management queue is updated claim 1 , the HBA driver writes a queue number one queue number before the updated management queue to the MMIO area as the piece of management information.3. The input/output control method according to claim 1 , wherein the I/O queue is at least either one of an I/O activation queue and an I/O response queue.4. 
The input/output control method according to claim 3 ,wherein in a case where the computer receives the ...
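The access-reduction idea — publish the updated queue number once through the MMIO-mapped HBA storage region instead of issuing a PCIe access per queue element — can be modeled as below. The class and field names are illustrative assumptions; only the batched-notification pattern comes from the abstract.

```python
class MmioMirror:
    """Host-side view of the HBA storage region mapped into an MMIO area.

    The driver publishes the enqueued/dequeued queue number with a single
    posted write; HBA firmware reads it from its own storage region."""
    def __init__(self):
        self.queue_number = 0
        self.pcie_writes = 0

    def notify(self, queue_number):
        self.queue_number = queue_number   # one write crosses the PCIe bus
        self.pcie_writes += 1

def enqueue_batch(io_queue, mmio, entries):
    """Enqueue several I/O requests, then notify the HBA once."""
    for e in entries:
        io_queue.append(e)
    mmio.notify(len(io_queue))
```

With per-element notification the `pcie_writes` counter would equal the number of requests; batching collapses it to one per burst, which is the stated saving.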

28-11-2013 publication date

DIRECT MEMORY ACCESS (DMA) CONTROLLED MEDICAL DEVICES

Number: US20130318259A1
Author: Sherman Neil S.
Assignee: SPINAL MODULATION, INC.

A sub-system for controlling a medical device comprises memory including a first table and a second table. The first table stores blocks of event data corresponding to events that are to be performed during a period of time (e.g., a 0.5 sec. or 1 sec. period of time). The second table stores blocks of time data corresponding to the period of time. The implantable stimulation system also includes a direct memory access (DMA) controller including a first DMA channel and a second DMA channel. The first DMA channel selectively transfers one of the blocks event data from the first table to one or more registers that are used to control events. The second DMA channel selectively transfers one of the blocks of time data from the second table to a timer that is used to control timing associated with the events. 1. A sub-system for use in controlling a medical device , the sub-system comprising:a central processing unit (CPU); a plurality of blocks of event data, wherein each said block of event data corresponds to an event that is to occur during a period of time, and', 'a plurality of blocks of time data, wherein each said block of time data is used to specify when a next event is to occur during the period of time;, 'memory that stores'}one or more registers that are used to control events that are to occur during the period of time;a timer that is used to control timing associated with events that are to occur during the period of time; and a first DMA channel that, without CPU intervention, transfers one of the blocks of event data at a time from the memory to the one or more registers that are used to control events that are to occur during the period of time, and', 'a second DMA channel that, without CPU intervention, transfers one of the blocks of time data at a time from the memory to the timer., 'a direct memory access (DMA) controller including'}2. The sub-system of claim 1 , wherein: a count register that stores a count value and increments the count value in ...
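The two-table, two-channel arrangement can be modeled in a few lines: for each step of the period, the first DMA channel moves an event block into the control registers while the second loads the timer with the delay until the next event, all without CPU intervention. The dict-based register file and the function name are illustrative assumptions, not the device's register map.

```python
def run_period(event_table, time_table, registers, timeline):
    """Replay one stimulation period (e.g. 0.5 s or 1 s) from the two tables.

    Each iteration models one timer expiry: channel 1 transfers the next
    event block to the registers, channel 2 transfers the next time block
    to the timer.  Returns the clock value at the end of the period."""
    clock = 0
    for event_block, delay in zip(event_table, time_table):
        registers.update(event_block)      # channel 1: event data -> registers
        clock += delay                     # channel 2: time data -> timer
        timeline.append((clock, dict(registers)))
    return clock
```

The tables are filled once per period, so the CPU can sleep while the DMA channels sequence the stimulation pattern, which is the power argument implied by "without CPU intervention".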

28-11-2013 publication date

OFFLOADING OF COMPUTATION FOR RACK LEVEL SERVERS AND CORRESPONDING METHODS AND SYSTEMS

Number: US20130318275A1
Author: Dalal Parin Bhadrik

A method is disclosed that includes writing data to predetermined physical addresses of a system memory, the data including metadata that identifies a processing type; configuring a processor module to include the predetermined physical addresses, the processor module being physically connected to the memory bus by a memory module connection; and processing the write data according to the processing type with an offload processor mounted on the processor module. 1. A method , comprising:receiving write data over a system memory bus via an in-line module connector, the write data including a metadata portion identifying a processing to be performed on at least a portion of the write data;performing the processing on at least a portion of the write data with at least one offload processor mounted on a module having the in-line module connector to generate processed data; andtransmitting the processed data over the memory bus; whereinthe system memory bus is further connected to at least one processor connector configured to receive at least one host processor different from the at least one offload processor.2. The method of claim 1 , wherein:receiving the session data includes processing a direct memory access (DMA) write request with an interface to the module.3. The method of claim 2 , wherein:the DMA write request is not issued by a host processor.4. The method of claim 1 , wherein:receiving the session data includes storing the write data in a buffer memory of the module.5. The method of claim 1 , further including:in response to predetermined conditions, storing a processing context of the at least one offload processor within the module, and redirecting the offload processor to process other data.6. The method of claim 5 , further including:after processing or terminating processing of the other data, restoring the stored processing context to the offload processor.7. The method of claim 1 , wherein:transmitting the processed data over the system memory bus ...

28-11-2013 publication date

PROCESSING STRUCTURED AND UNSTRUCTURED DATA USING OFFLOAD PROCESSORS

Number: US20130318277A1

A structured data processing system is disclosed that can include a plurality of XIMM modules connected to a memory bus in a first server, with the XIMM modules each respectively having a DMA slave module connected to the memory bus and an arbiter for scheduling tasks, with the XIMM modules providing an in-memory database; and a central processing unit (CPU) in the first server connected to the XIMM modules by the memory bus, with the CPU arranged to process and direct structured queries to the plurality of XIMM modules. 1. A structured data processing system , comprising:a plurality of XIMM modules connected to a memory bus in a first server, with the XIMM modules each respectively having a DMA slave module connected to the memory bus and an arbiter for scheduling tasks, with the XIMM modules providing an in-memory database; anda central processing unit (CPU) in the first server connected to the XIMM modules by the memory bus, with the CPU arranged to process and direct structured queries to the plurality of XIMM modules.2. The structured data processing system of claim 1 , wherein the XIMM modules communicate with each other without requiring access to a processor of the first server.3. The structured data processing system of claim 1 , wherein the XIMM modules are mounted on different servers in the same rack claim 1 , and further comprising a top of the rack switch to mediate communication therebetween.4. The structured data processing system of claim 1 , wherein a XIMM driver executes a mmap routine to transfer a query from the CPU to the XIMM in the form of memory reads/writes.5. The structured data processing system of claim 1 , wherein the XIMM module is configured for insertion into a DIMM socket claim 1 , and the XIMM module further comprises offload processors connected to memory and a computational FPGA.6. 
A data processing system for unstructured data claim 1 , comprising:a plurality of XIMM modules connected to a memory bus, with the XIMM modules each ...

19-12-2013 publication date

MANAGING ACCESSING PAGE TABLE ENTRIES

Number: US20130339659A1

A method for accessing memory locations includes translating, by a processor, a virtual address to locate a first page table entry (PTE) in a page table. The first PTE includes a marker and an address of a page of main storage. It is determined, by the processor, whether a marker is set in the first PTE. A large page size of a large page associated with the first PTE is identified based on determining that the marker is set in the first PTE. The large page is made up of contiguous pages of main storage. An origin address of the large page is determined based on determining that the marker is set in the first PTE. The virtual address is used to index into the large page at the origin address to access main storage. 1. A computer implemented method for accessing memory locations , the method comprising:translating, by a processor, a virtual address to locate a first page table entry (PTE) in the page table, the first PTE comprising a marker and an address of a page of main storage;determining, with the processor, whether a marker is set in the first PTE;identifying a large page size of a large page associated with the first PTE based on determining that the marker is set in the first PTE, wherein the large page consists of contiguous pages of main storage;determining an origin address of the large page based on determining that the marker is set in the first PTE; andusing the virtual address to index into the large page at the origin address to access main storage.2. The method of claim 1 , wherein a range of virtual addresses identify corresponding PTEs comprising said first PTE claim 1 , wherein each PTE of said corresponding PTEs is configured to address a page of main storage claim 1 , each of said pages of main storage being contiguous.3. The method of claim 1 , wherein the method further comprises:storing virtual address information and an address for locating said large page, in a translation look-aside buffer (TLB); andusing the TLB to translate virtual ...
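A sketch of the marker-guided translation follows. The excerpt does not say how the origin address is derived from the PTE's page address, so the round-down-to-a-large-page-boundary rule below is an assumption, as are the names, the 4 KiB base page, and the 16-page large page.

```python
PAGE_SIZE = 4096

def access(virtual_address, page_table, large_page_size=16 * PAGE_SIZE):
    """Translate via the PTE located from the virtual address; when the
    marker is set, index into the large page (contiguous 4 KiB pages of
    main storage) starting at its origin address."""
    vpn = virtual_address // PAGE_SIZE
    marker, address = page_table[vpn]      # PTE: (marker, page address)
    if not marker:
        return address + virtual_address % PAGE_SIZE
    # Assumed rule: every PTE covering the large page points inside it, so
    # rounding down to the large-page boundary recovers the origin.
    origin = address - address % large_page_size
    return origin + virtual_address % large_page_size
```

The benefit mirrored here is that a single TLB entry (origin plus size) can then cover what would otherwise be sixteen base-page translations.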

09-01-2014 publication date

METHOD AND SYSTEM FOR TRANSFERRING DATA BETWEEN PORTABLE TERMINAL AND EXTERNAL DEVICE

Number: US20140013015A1
Author: CHANG Whie

A system and method for transmitting and receiving data between a portable terminal and an external device are provided. The system includes a portable terminal for creating a data list according to selection of a user and wirelessly transmitting the data list, and wirelessly transmitting relevant data when data corresponding to the data list is requested; and a wireless data relay device connected to a USB interface of the external device, for converting the data list wirelessly received from the portable terminal into a flash memory data list and transferring the flash memory data list to the external device, and requesting data corresponding to the data list from the portable terminal according to a request of the external device, converting wireless data received from the portable terminal into data of a flash memory data output format and transferring the converted data to the external device. 1. A system for transmitting and receiving data between a portable terminal and an external device , the system comprising:a portable terminal for creating a data list according to selection of a user and wirelessly transmitting the data list, and wirelessly transmitting relevant data when data corresponding to the data list is requested; anda wireless data relay device connected to a USB interface of the external device, for converting the data list wirelessly received from the portable terminal into a flash memory data list and transferring the flash memory data list to the external device, and requesting data corresponding to the data list from the portable terminal according to a request of the external device, converting wireless data received from the portable terminal into data of a flash memory data output format and transferring the converted data to the external device.2. The system according to claim 1 , wherein the portable terminal and the wireless data relay device transmit and receive data in a WiFi communication method.3. 
The system according to claim 1 , ...

09-01-2014 publication date

Universal digital block interconnection and channel routing

Number: US20140013022A1
Assignee: Cypress Semiconductor Corp

A programmable routing scheme provides improved connectivity both between Universal Digital Blocks (UDBs) and between the UDBs and other micro-controller elements, peripherals and external Inputs and Outputs (I/Os) in the same Integrated Circuit (IC). The routing scheme increases the number of functions, flexibility, and the overall routing efficiency for programmable architectures. The UDBs can be grouped in pairs and share associated horizontal routing channels. Bidirectional horizontal and vertical segmentation elements extend routing both horizontally and vertically between different UDB pairs and to the other peripherals and I/O.

16-01-2014 publication date

CONTROLLER

Number: US20140019663A1
Assignee: Panasonic Corporation

A controller as an embodiment of the present disclosure controls a timing of transmitting an access request that has been received from an initiator (or its transmission interval). The controller includes: transmitting and receiving circuitry configured to receive an access request related to burst accesses from a first initiator that is connected via a first bus to, and adjacent to, the transmitting and receiving circuitry and configured to transmit the access request to a second bus implemented as a network; and a transmission interval controller configured to control the timing of transmitting the access request that has been received from the first initiator according to density of the burst accesses during a period in which the burst accesses continue and an access load on the second bus. 1. A controller which controls a timing of transmitting an access request that has been received from an initiator , the controller comprising:transmitting and receiving circuitry configured to receive an access request related to burst accesses from a first initiator that is connected via a first bus to, and adjacent to, the transmitting and receiving circuitry and configured to transmit the access request to a second bus implemented as a network; anda transmission interval controller configured to control the timing of transmitting the access request that has been received from the first initiator according to density of the burst accesses during a period in which the burst accesses continue and an access load on the second bus.2. The controller of claim 1 , wherein the transmission interval controller calculates the density of the burst accesses based on how many times the access requests have been received from the first initiator during the period in which the burst accesses continue.3. 
The controller of claim 2 , wherein either a second initiator or a target is connected to the second bus claim 2 , andwherein the transmission interval controller obtains, as the access ...
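One plausible throttling rule consistent with the description — space requests further apart as the burst density and the load on the second (network) bus grow — is sketched below. The patent gives no formula; the proportional policy, the scaling constant, and the parameter names are all assumptions.

```python
def transmit_interval(accesses_in_burst, burst_cycles, bus_load, base_interval=1):
    """Delay (in cycles) before forwarding the next access request.

    accesses_in_burst / burst_cycles is the burst density (how many access
    requests arrived per cycle while the burst continues); bus_load is the
    observed load on the second bus, in [0, 1]."""
    density = accesses_in_burst / burst_cycles
    return max(base_interval,
               round(base_interval * 10 * density * (1 + bus_load)))
```

The useful property, whatever the exact formula, is monotonicity: a dense burst hitting a loaded network bus is spread out, while sparse traffic on an idle bus is forwarded immediately.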

16-01-2014 publication date

METHOD AND SYSTEM FOR PERFORMING DMA IN A MULTI-CORE SYSTEM-ON-CHIP USING DEADLINE-BASED SCHEDULING

Number: US20140019664A1
Assignee: Cradle IP, LLC

A direct memory access (DMA) engine schedules data transfer requests of a data processing system according to both an assigned transfer priority and the deadline for completing a transfer. 1. A direct memory access (DMA) engine processing transfer requests of a data processing system , comprising:a command processor adapted to receive and interpret transfer requests of the data professing system, transfer requests being received by the command processor through a set of command FIFO registers respectively servicing requests of different transfer priorities;a transfer memory connected to the command processor and having, for each of the transfer requests, data fields including transfer priority and transfer deadline;a transaction dispatcher connected to the transfer memory and having read, read response and write engines adapted to handle command and data octet transfers through a set of read and write FIFO registers to and from a DRAM controller and a global bus interface in accord with transfer requests interpreted by the command processor; anda channel scanner connected to the transfer memory and having a deadline engine and a transaction controller, the deadline engine adapted to determine a transfer urgency, and the transaction controller adapted to schedule among multiple transfer requests interpreted by the command processor based on the determined transfer urgency of the respective transfer requests so as to control the engines of the transaction dispatcher,wherein the transfer urgency is based on both a transfer deadline and a transfer priority, such that higher priority transfers have higher urgency, and equal priority transfers with earlier deadlines have higher urgency, the transfer priority being based on a hardness representing a penalty for missing a deadline and is also assigned to zero-deadline transfer requests for which there is a penalty no matter how early the transfer completes and the penalty increases with completion time of the transfer, the 
...
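The urgency ordering described above — higher priority always wins, and among equal priorities the earlier deadline wins — reduces to a sort key. The tuple layout of a request is an illustrative assumption.

```python
def next_transfer(requests):
    """Return the most urgent pending request.

    Each request is (priority, deadline, request_id); larger priority
    models a harder penalty for missing the deadline.  Ties in priority
    are broken by the earliest deadline."""
    return max(requests, key=lambda r: (r[0], -r[1]))
```

A hardware channel scanner would evaluate this comparison incrementally per scan rather than sorting, but the ordering it implements is the same.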

23-01-2014 publication date

INPUT/OUTPUT PROCESSING

Number: US20140025859A1
Author: Krause Michael R.

The present disclosure provides a computer system that includes a processor coupled to a host memory through a memory controller. The computer system also includes an upper device communicatively coupled to the memory controller, the upper device configured to process local input/output received from or sent to a lower device. The computer system also includes a memory comprising a data flow identifier used to associate a data flow resource of the upper device with an external data flow resource corresponding to the lower device. A data packet received by the upper device from the lower device includes the data flow identifier. 1. A computer system , comprising:a processor coupled to a host memory through a memory controller;an upper device communicatively coupled to the memory controller, the upper device configured to process local input/output received from or sent to a lower device;a memory comprising a data flow identifier used to associate a data flow resource of the upper device with an external data flow resource corresponding to the lower device;wherein a data packet received by the upper device from the lower device includes the data flow identifier.2. The computer system of claim 2 , wherein each data flow identifier corresponds to a specific receive queue of the upper device.3. The computer system of claim 1 , wherein the upper device comprises an IOMMU that provides a memory translation based claim 1 , at least in part claim 1 , on the data flow identifier received from the lower device.4. The computer system of claim 3 , wherein the IOMMU is configured to enable the memory translation for a specified amount of time or a specified number of memory read or write operations corresponding to the memory translation.5. 
The system of claim 1 , wherein the data flow identifier received by the upper device from the lower device corresponds to a plurality of receive queues of the upper device claim 1 , and wherein the payload data associated with the data flow ...

23-01-2014 publication date

PROVIDING MULTIPLE QUIESCE STATE MACHINES IN A COMPUTING ENVIRONMENT

Number: US20140025922A1

An aspect includes a method for operating on translation look-aside buffers (TLBs) in a multiprocessor environment including a plurality of logical partitions as zones. The method includes concurrently receiving a first quiesce request from a first processor of a first zone to quiesce processors of a first set of zones including the first zone and receiving a second quiesce request from a second processor of a second zone to quiesce processors of a second set of zones including the second zone. The second set of zones consists of separate zones from the first set of zones. Based on receiving the first quiesce request, only processors of the first set of zones are quiesced. Based on the processors of the first set of zones being quiesced, a first operation is performed on the TLBs. Based on the first operation being performed, the processors of the first set of zones are un-quiesced. 1. A method for operating on translation look-aside buffers (TLBs) in a multiprocessor environment , the multi-processor environment comprising a plurality of logical partitions as zones , each zone comprising one or more logical processors assigned to physical processors each having at least one of the TLBs , the method comprising:concurrently receiving a first quiesce request from a first processor of a first zone to quiesce processors of a first set of zones comprising the first zone and receiving a second quiesce request from a second processor of a second zone to quiesce processors of a second set of zones comprising the second zone, the second set of zones consisting of separate and distinct zones from the first set of zones;based on receiving the first quiesce request, quiescing only processors of the first set of zones;based on the processors of the first set of zones being quiesced, performing a first operation on the TLBs;based on the first operation being performed, un-quiescing the processors of the first set of zones;based on concurrently receiving the second quiesce 
request ...
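The core of the scheme — quiesce requests on separate and distinct zone sets may proceed concurrently, while overlapping requests must wait — can be pictured with a minimal Python sketch. The class and method names here are illustrative, not from the patent:

```python
# Minimal sketch of zone-scoped quiescing (names are hypothetical):
# a quiesce request is granted only if its zone set does not overlap
# any set that is currently quiesced, so requests on separate and
# distinct zone sets can run concurrently.
class QuiesceManager:
    def __init__(self):
        self.quiesced_zones = set()   # zones whose processors are quiesced

    def request_quiesce(self, zones):
        if self.quiesced_zones & zones:
            return False              # overlapping request must wait
        self.quiesced_zones |= zones  # quiesce only these zones
        return True

    def unquiesce(self, zones):
        self.quiesced_zones -= zones  # un-quiesce after the TLB operation
```

A first request for zones {1, 2} and a concurrent second request for zones {3, 4} both succeed; a later request touching zone 2 is held until the first set is un-quiesced.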

Подробнее
30-01-2014 дата публикации

USB VIRTUALIZATION

Номер: US20140032794A1
Принадлежит: INEDA SYSTEMS PVT. LTD.

Described herein are methods and systems for virtualization of a USB device to enable sharing of the USB device among a plurality of host processors in a multi-processor computing system. A USB virtualization unit for sharing of the USB device includes per-host register units, each corresponding to a host processor and including one or more of a host register interface, host data interface, configuration registers, and host control registers, configured to receive simultaneous requests from one or more host processors from amongst the plurality of host processors for the USB device. The USB virtualization unit also includes a pre-fetch direct memory access (DMA) configured to pre-fetch DMA descriptors associated with the requests to store in a buffer. The USB virtualization unit further includes an endpoint specific switching decision logic (ESL) configured to schedule data access based on the DMA descriptors from the host processor's local memory corresponding to each request. 1. A method of virtualization of a Universal Serial Bus (USB) device in a multi host computing system comprising:receiving simultaneous requests from a plurality of host processors for the USB device coupled to the multi host computing system, wherein the requests are based on types of endpoint supported by the USB device;pre-fetching, for each of the plurality of host processors, direct memory access (DMA) descriptors, wherein each of the DMA descriptors is indicative of pointers describing location of a local memory of a host processor, the location being associated with a request; andscheduling data access, from the local memory of each of the plurality of host processors based on a class specific driver schedule of the USB device, wherein the class specific driver schedule is based on an endpoint supported by the USB device.2.
The method as claimed in claim 1 , wherein the method further comprises:parsing a data access from the local memory of at least one of the plurality of host processors to ...
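The scheduling step can be sketched as follows: per-host DMA descriptors are pre-fetched into buffers, and data accesses are then issued in the order given by the class-specific driver schedule for the shared device. Host identifiers, descriptor names, and the helper itself are invented for illustration:

```python
# Illustrative sketch of endpoint-specific scheduling: `prefetched`
# maps a host id to its buffered, pre-fetched DMA descriptors, and
# `schedule` lists host ids in class-specific driver order.
def schedule_data_accesses(prefetched, schedule):
    accesses = []
    for host in schedule:
        if prefetched.get(host):                      # host has a pending descriptor
            accesses.append((host, prefetched[host].pop(0)))
    return accesses
```

For two hosts with descriptors {"h0": ["d0a", "d0b"], "h1": ["d1a"]} and schedule ["h0", "h1", "h0"], the accesses interleave as h0, h1, h0.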

Подробнее
20-02-2014 дата публикации

All-to-All Comparisons on Architectures Having Limited Storage Space

Номер: US20140052886A1

Mechanisms for performing all-to-all comparisons on architectures having limited storage space are provided. The mechanisms determine a number of data elements to be included in each set of data elements to be sent to each processing element of a data processing system, and perform a comparison operation on at least one set of data elements. The comparison operation comprises sending a first request to main memory for transfer of a first set of data elements into a local memory associated with the processing element and sending a second request to main memory for transfer of a second set of data elements into the local memory. A pair wise comparison computation of the all-to-all comparison of data elements operation is performed at approximately a same time as the second set of data elements is being transferred from main memory to the local memory. 1. A method , in a data processing system having a plurality of processing elements , for performing a portion of an All-to-All comparison of data elements operation , comprising:determining a number of data elements to be included in each set of data elements to be sent to each processing element of the plurality of processing elements when executing their respective portions of the All-to-All comparison of data elements operation, wherein each set of data elements has a same number of data elements; sending, from the processing element, a first request to main memory for transfer of a first set of data elements into a local memory associated with the processing element, the first request being sent in a blocking mode of operation and the first set of data elements being used to perform a current pair wise comparison of the portion of the All-to-All comparison of data elements operation; sending, from the processing element, a second request to main memory for transfer of a second set of data elements into the local memory, the second request being sent in a non-blocking mode of operation and the second set of data
...
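The chunk-wise staging can be sketched as below. In the patented scheme the request for the next set is posted in non-blocking mode so its DMA transfer overlaps the current chunk's pairwise comparisons; in this sequential sketch the overlap is only notional, and the function name and chunking are illustrative:

```python
# All-pairs comparison with chunk-wise staging (conceptual sketch).
# Each outer chunk is fetched "blocking"; the next inner chunk would be
# requested "non-blocking" so its transfer overlaps the comparisons.
def all_to_all_chunked(data, chunk_size, compare):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    results = []
    for ci, outer in enumerate(chunks):
        for cj in range(ci, len(chunks)):
            inner = chunks[cj]                       # next set's transfer overlaps compute
            for ai, a in enumerate(outer):
                start = ai + 1 if ci == cj else 0    # avoid self/duplicate pairs
                for b in inner[start:]:
                    results.append(compare(a, b))
    return results
```

Each unordered pair is compared exactly once, so four elements yield C(4, 2) = 6 comparisons regardless of the chunk size.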

Подробнее
20-02-2014 дата публикации

APPARATUSES FOR OPERATING, DURING RESPECTIVE POWER MODES, TRANSISTORS OF MULTIPLE PROCESSORS AT CORRESPONDING DUTY CYCLES

Номер: US20140052887A1
Принадлежит:

A device includes a first processor and a second processor. The first processor is configured to operate in accordance with a first power mode. The first processor includes a first transistor. The first processor is configured to, while operating in accordance with the first power mode, switch the first transistor at a first duty cycle. The second processor is configured to operate in accordance with a second power mode. The second processor includes a second transistor. The second processor is configured to, while operating in accordance with the second power mode, switch the second transistor at a second duty cycle. The second duty cycle is greater than the first duty cycle. The second processor consumes less power while operating in accordance with the second power mode than the first processor consumes while operating in accordance with the first power mode. 1. A device comprising:a first processor configured to operate in accordance with a first power mode, wherein the first processor comprises a first transistor, and wherein the first processor is configured to, while operating in accordance with the first power mode, switch the first transistor at a first duty cycle; anda second processor configured to operate in accordance with a second power mode, wherein the second processor comprises a second transistor, wherein the second processor is configured to, while operating in accordance with the second power mode, switch the second transistor at a second duty cycle, and wherein the second duty cycle is greater than the first duty cycle,wherein the second processor consumes less power while operating in accordance with the second power mode than the first processor consumes while operating in accordance with the first power mode.2. The device of claim 1 , wherein:the first processor is configured to operate in accordance with the first power mode during a first period of time;the second processor is configured to operate in accordance with the second ...

Подробнее
20-02-2014 дата публикации

SYSTEM TRANSLATION LOOK-ASIDE BUFFER WITH REQUEST-BASED ALLOCATION AND PREFETCHING

Номер: US20140052954A1
Принадлежит: Arteris SAS

A system TLB accepts translation prefetch requests from initiators. Misses generate external translation requests to a walker port. Attributes of the request such as ID, address, and class, as well as the state of the TLB affect the allocation policy of translations within multiple levels of translation tables. Translation tables are implemented with SRAM, and organized in groups. 1. A system translation look-aside buffer comprising:a translation table that stores address translations;an input port enabled to receive an input request from an initiator; andan output port enabled to send an output request corresponding to the input request,wherein if the input request is a prefetch then the output port does not send an output request.2. The system translation look-aside buffer of further comprising a prefetch port dedicated to receiving prefetch requests.3. The system translation look-aside buffer of further comprising a sideband signal that indicates whether the input request is a prefetch.4. The system translation look-aside buffer of claim 1 , wherein the input request comprises an ID that indicates a value, the value indicating whether the input request is a prefetch.5. The system translation look-aside buffer of claim 1 , wherein the input request comprises an address, the address indicating whether the input request is a prefetch.6. The system translation look-aside buffer of wherein the input request is made in compliance with a standard transaction protocol whether or not it is a prefetch.7. The system translation look-aside buffer of claim 1 , wherein the input request comprises a size that indicates a quantity of data requested by the input request, and wherein the size can indicate a quantity of data that is less than the size of a full cache line.8. The system translation look-aside buffer of wherein the size is zero.9. The system translation look-aside buffer of wherein the size is one byte.10.
The system translation look-aside buffer of wherein the input ...
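The defining behavior — a prefetch may trigger a walker fill and allocate a translation, but never produces an output request — can be sketched in a few lines. The class name and walker callback are invented for illustration:

```python
# Sketch of a system TLB that treats prefetch requests specially: a
# prefetch may fill the translation table via the walker, but returns
# no output (translated) request to the initiator.
class SystemTLB:
    def __init__(self, walker):
        self.table = {}          # virtual page -> physical page
        self.walker = walker     # external translation fetch on a miss

    def request(self, vpage, is_prefetch=False):
        if vpage not in self.table:
            self.table[vpage] = self.walker(vpage)   # allocate translation
        if is_prefetch:
            return None          # prefetches produce no output request
        return self.table[vpage]
```

A prefetch for page 5 warms the table; the later demand request for the same page hits without consulting the walker again.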

Подробнее
27-02-2014 дата публикации

SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE

Номер: US20140059320A1
Принадлежит:

A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. 1. A processor comprising:decode logic to decode an instruction providing a guest address of a virtualization based system, and an address space context corresponding to a page table of the virtualization based system; andexecution logic, responsive to the decoded instruction, to invalidate a corresponding Translation Lookaside Buffer (TLB) entry with a mapping from the page table of the virtualization based system for the guest address.2. The processor of claim 1 , wherein the instruction also specifies a first instruction mode, wherein the TLB entry for the guest address of the virtualization based system is to be invalidated.3. The processor of claim 1 , wherein invalidating the corresponding TLB entry with the mapping from the page table of the virtualization based system is in order to synchronize TLB entry translations of guest addresses to corresponding host physical addresses of a virtualization based system.4. The processor of claim 1 , wherein the instruction also specifies a global instruction mode, wherein TLB entries are to be invalidated based on mappings derived from any address space context corresponding to an extended page table of the virtualization based system.5. A virtualization based system comprising:a memory to store an instruction specifying a context to identify a portion of an address space, and a guest address of the virtualization based system, the guest address to be mapped to a host address by a page table of the virtualization based system; anda processor to execute the instruction to cause an ...
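The two invalidation modes described above can be modeled with a toy TLB keyed by (address-space context, guest address); the class and method names are invented, not the instruction's actual semantics:

```python
# Toy model of the synchronization primitive: invalidating a TLB entry
# forces the next access to re-derive the guest-to-host mapping from
# the extended page table. Names are illustrative only.
class GuestTLB:
    def __init__(self):
        self.entries = {}   # (context, guest_addr) -> host_addr

    def fill(self, context, guest_addr, host_addr):
        self.entries[(context, guest_addr)] = host_addr

    def invalidate(self, context, guest_addr):
        # single-address mode: drop one mapping for one context
        self.entries.pop((context, guest_addr), None)

    def invalidate_global(self):
        # global mode: drop entries derived from any address space context
        self.entries.clear()
```

Invalidating (context 1, guest 0x40) leaves the same guest address cached under context 2 untouched; the global mode clears everything.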

Подробнее
06-03-2014 дата публикации

COMMUNICATION TERMINAL

Номер: US20140068114A1
Автор: Morita Junichi
Принадлежит: Panasonic Corporation

A communication terminal includes a storage section that stores a file to be transmitted to an opponent terminal, a communication section that transmits the file to the opponent terminal, a cluster information calculation section that determines cluster information about clusters, and a DMA transfer section that DMA-transfers the file from the storage section to the communication section on the basis of the cluster information about the clusters to be transferred determined by the cluster information calculation section. The cluster information calculation section determines cluster information about clusters to be transferred next during the course of the DMA transfer. 1. A communication terminal comprising:a storage section configured to store a file to be transmitted to an opponent terminal;a communication section configured to transmit the file to the opponent terminal;a cluster information calculation section configured to determine cluster information about a cluster for each set of data to be transferred; anda DMA transfer section configured to DMA-transfer the file from the storage section to the communication section on the basis of the cluster information about the cluster to be transferred that the cluster information calculation section determined, whereinthe cluster information calculation section is adapted to request cluster information about a next cluster to be transferred during the course of DMA transfer.2. 
The communication terminal according to claim 1 , wherein the cluster information calculation section includes a cluster search volume calculation section configured to calculate a volume of cluster search;a cluster search section configured to search the storage section for clusters in accordance with a calculation result of the cluster search volume calculation section;a file management section configured to manage the storage section by use of a predetermined file system; and a DMA control section configured to set in the DMA transfer section ...
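The look-ahead can be pictured with a FAT-style cluster chain: while one batch of clusters is handed to the DMA transfer section, the next batch is resolved from the chain. The overlap is only notional in this sequential sketch, and the chain layout and function name are invented:

```python
# Yield batches of cluster numbers from a FAT-like chain
# (cluster -> next cluster, None at end of file). In the device the
# next batch would be computed during the current batch's DMA transfer.
def cluster_batches(fat, first_cluster, per_transfer):
    batch, cur = [], first_cluster
    while cur is not None:
        batch.append(cur)
        if len(batch) == per_transfer:
            yield batch               # hand this batch to the DMA section
            batch = []
        cur = fat.get(cur)            # resolve the next cluster in the chain
    if batch:
        yield batch                   # final, possibly short, batch
```

A chain 2 → 3 → 7 → 8 with two clusters per transfer yields the batches [2, 3] and [7, 8].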

Подробнее
06-03-2014 дата публикации

MEMORY ADDRESS GENERATION FOR DIGITAL SIGNAL PROCESSING

Номер: US20140068170A1
Автор: Anderson Adrian J.
Принадлежит: IMAGINATION TECHNOLOGIES LIMITED

Memory address generation for digital signal processing is described. In one example, a digital signal processing system-on-chip utilises an on-chip memory space that is shared between functional blocks of the system. An on-chip DMA controller comprises an address generator that can generate sequences of read and write memory addresses for data items being transferred between the on-chip memory and a paged memory device, or internally within the system. The address generator is configurable and can generate non-linear sequences for the read and/or write addresses. This enables aspects of interleaving/deinterleaving operations to be performed as part of a data transfer between internal or paged memory. As a result, a dedicated memory for interleaving operations is not required. In further examples, the address generator can be configured to generate read and/or write addresses that take into account limitations of particular memory devices when performing interleaving, such as DRAM. 1. A digital signal processing system-on-chip , comprising:a first memory storing a plurality of data items arranged in a first sequence, each data item having an associated memory address on the first memory;at least one digital signal processor coupled to the first memory and arranged to read and write data directly to the first memory; anda direct memory access controller coupled to the first memory and comprising a port to a paged memory device, wherein the direct memory access controller is configured to transfer the plurality of data items directly from the first memory to the paged memory device, andwherein the direct memory access controller further comprises a configurable address generator arranged to manipulate the memory address associated with each data item during the transfer by using a selected one of a plurality of read modes and a selected one of a plurality of write modes, such that the data items written to the paged memory device are arranged in a second sequence 
that ...
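A row/column block interleaver is the simplest non-linear sequence such an address generator might produce. This toy version (not the patent's configurable generator) writes row-major data to column-major addresses so that a later linear read returns the data column by column:

```python
# Non-linear write-address sequence for a rows x cols block interleaver:
# the i-th input item is written to addresses[i], so a subsequent linear
# read of the output buffer yields the data in column-major order.
def interleave_write_addresses(rows, cols):
    return [c * rows + r for r in range(rows) for c in range(cols)]

def interleave(data, rows, cols):
    addresses = interleave_write_addresses(rows, cols)
    out = [None] * (rows * cols)
    for item, addr in zip(data, addresses):
        out[addr] = item              # DMA write with a generated address
    return out
```

For a 2 x 3 block of items 0..5 (rows [0, 1, 2] and [3, 4, 5]), the linear read-out is 0, 3, 1, 4, 2, 5 — the columns in order.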

Подробнее
06-03-2014 дата публикации

CONFIGURABLE TRANSLATION LOOKASIDE BUFFER

Номер: US20140068225A1
Принадлежит: QUALCOMM INCORPORATED

A particular method includes receiving at least one translation lookaside buffer (TLB) configuration indicator. The at least one TLB configuration indicator indicates a specific number of entries to be enabled at a TLB. The method further includes modifying a number of searchable entries of the TLB in response to the at least one TLB configuration indicator. 1. A method comprising:receiving at least one translation lookaside buffer (TLB) configuration indicator, wherein the at least one TLB configuration indicator indicates a specific number of entries to be enabled at a TLB; andmodifying a number of searchable entries of the TLB in response to the at least one TLB configuration indicator.2. The method of claim 1 , wherein the at least one TLB configuration indicator is received from an operating system.3. The method of claim 1 , wherein the at least one TLB configuration indicator includes data in a configuration register.4. The method of claim 1 , wherein modifying the number of searchable entries includes enabling a portion of the TLB to increase the number of searchable entries.5. The method of claim 4 , further comprising setting an invalid indicator for each of the searchable entries in the portion of the TLB that is enabled, wherein the invalid indicator indicates that each of the searchable entries in the portion of the TLB that is enabled may store invalid data.6. The method of claim 1 , wherein the at least one TLB configuration indicator is used to determine whether the TLB has a first number of available entries or a second number of available entries after the number of searchable entries of the TLB is modified.7. The method of claim 1 , wherein modifying the number of searchable entries of the TLB includes:disabling a portion of the TLB to decrease the number of searchable entries.8. The method of claim 7 , further comprising:copying data from at least one entry of the portion of the TLB that is disabled to at least one other portion of the TLB.9.
...
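The resize behavior in the claims — newly enabled entries come up marked invalid, and valid entries in a portion being disabled are copied into the remaining portion — can be sketched as follows (class name and layout are illustrative):

```python
# Sketch of a TLB whose number of searchable entries is resized via a
# configuration indicator. None stands in for the invalid indicator.
class ConfigurableTLB:
    def __init__(self, total, enabled):
        self.entries = [None] * total       # None == invalid entry
        self.enabled = enabled

    def configure(self, new_enabled):
        if new_enabled > self.enabled:
            for i in range(self.enabled, new_enabled):
                self.entries[i] = None      # newly enabled entries are invalid
        else:
            # migrate valid entries out of the portion being disabled
            for i in range(new_enabled, self.enabled):
                if self.entries[i] is not None:
                    for j in range(new_enabled):
                        if self.entries[j] is None:
                            self.entries[j] = self.entries[i]
                            break
                    self.entries[i] = None
        self.enabled = new_enabled

    def lookup(self, vpn):
        for e in self.entries[:self.enabled]:   # only searchable entries
            if e is not None and e[0] == vpn:
                return e[1]
        return None
```

Shrinking from 8 to 4 searchable entries moves a translation held in entry 6 into the still-enabled portion, so the lookup keeps hitting.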

Подробнее
13-03-2014 дата публикации

Accumulation of Waveform Data using Alternating Memory Banks

Номер: US20140075080A1
Принадлежит: National Instruments Corporation

System and method for hardware implemented accumulation of waveform data. A digitizer is provided that includes: a circuit, and first and second memory banks, coupled to the circuit. The circuit may be configured to: store a first subset of the waveforms in the first memory bank, accumulate each waveform in a chunk-wise manner, where each chunk has a specified size, thereby generating a first bank sum including a first partial accumulation of the set of waveforms, store a second subset of waveforms in the second memory bank concurrently with the accumulation, and accumulate each waveform of the second subset of waveforms in a chunk-wise manner, thereby generating a second bank sum including a second partial accumulation of the set of waveforms, where the first and second partial accumulations of the set of waveforms are useable to generate an accumulated record of the set of waveforms. 1. A system for accumulating waveform data , the system comprising: a digitizer, comprising a circuit; a first memory bank, coupled to the circuit; and a second memory bank, coupled to the circuit; wherein the circuit is configured to a) store a first subset of the waveforms in the first memory bank; b) accumulate each waveform of the first subset of waveforms in a chunk-wise manner, wherein each chunk has a specified size, thereby generating a first bank sum comprising a first partial accumulation of the set of waveforms; c) store a second subset of waveforms in the second memory bank concurrently with b); d) accumulate each waveform of the second subset of waveforms in a chunk-wise manner, thereby generating a second bank sum comprising a second partial accumulation of the set of waveforms; wherein the first and second partial accumulations of the set of waveforms are useable to generate an accumulated record of the set of waveforms.2. The system of claim 1 , wherein the circuit is further configured to:e) accumulate the first and second bank sums into a ...
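The alternating-bank accumulation can be sketched numerically: even-indexed waveforms land in bank 0 and odd-indexed in bank 1 (standing in for capture into alternating memory banks), each is added chunk by chunk, and the two bank sums combine into the final record. The function name and bank assignment are illustrative:

```python
# Chunk-wise accumulation of equal-length waveforms into two bank sums,
# then combination of the partial accumulations into the final record.
def accumulate_waveforms(waveforms, chunk_size):
    n = len(waveforms[0])
    banks = [[0] * n, [0] * n]
    for w_idx, wave in enumerate(waveforms):
        bank = banks[w_idx % 2]                   # alternate banks per waveform
        for start in range(0, n, chunk_size):     # accumulate chunk by chunk
            for i in range(start, min(start + chunk_size, n)):
                bank[i] += wave[i]
    return [a + b for a, b in zip(*banks)]        # combine the two bank sums
```

Three waveforms [1, 2, 3], [4, 5, 6], [7, 8, 9] accumulate to [12, 15, 18] regardless of the chunk size chosen.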

Подробнее
13-03-2014 дата публикации

Methods and Apparatus for Providing Bit-Reversal and Multicast Functions Utilizing DMA Controller

Номер: US20140075081A1
Принадлежит: Altera Corporation

Techniques for providing improved data distribution to and collection from multiple memories are described. Such memories are often associated with and local to processing elements (PEs) within an array processor. Improved data transfer control within a data processing system provides support for radix 2, 4 and 8 fast Fourier transform (FFT) algorithms through data reordering or bit-reversed addressing across multiple PEs, carried out concurrently with FFT computation on a digital signal processor (DSP) array by a DMA unit. Parallel data distribution and collection through forms of multicast and packet-gather operations are also supported. 1. A method for unpacking data for storage , the method comprising:receiving from a direct memory access (DMA) controller a data type value to be supported for unpacking data from a DMA bus in memory interface units (MIUs);selecting, in the MIUs, a different group of data wires of the DMA bus based on the received data type value and an identification signal unique to each MIU; andreceiving, in the MIUs, data being transmitted on the selected different group of data wires of the DMA bus for storage in memories separately coupled to the MIUs.2. The method of further comprising:receiving a PE operation code in the MIUs from the DMA controller to perform a PE unpack-distribute operation in each MIU according to the data type value and the unique identification signal.3. The method of further comprising:receiving a DMA signal indicating that the DMA data bus contains the PE operation code.4. The method of further comprising:enabling one or more PEs from a plurality of PEs for participating in the PE unpack-distribute operation according to the data type value and the unique identification signal.5. The method of claim 1 , wherein a byte size distribute of a 32-bit word over four PEs specifies that each MIU receives a byte of the 32-bit word to be written in a memory associated with the MIU that receives the byte.6. 
The method of claim ...
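The bit-reversed addressing mentioned above is standard radix-2 FFT reordering; a small sketch (not the patent's DMA implementation) shows the address transform and the resulting data order:

```python
# Reverse the low `bits` bits of `index` — the address transform a
# bit-reversal DMA applies for radix-2 FFT data reordering.
def bit_reverse(index, bits):
    out = 0
    for _ in range(bits):
        out = (out << 1) | (index & 1)
        index >>= 1
    return out

# Reorder a power-of-two-length block as a bit-reversed read would.
def bit_reversed_order(data):
    bits = len(data).bit_length() - 1
    return [data[bit_reverse(i, bits)] for i in range(len(data))]
```

For eight elements 0..7 the reordered sequence is 0, 4, 2, 6, 1, 5, 3, 7 — the input order a decimation-in-time radix-2 FFT expects.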

Подробнее
13-03-2014 дата публикации

Concurrent Control For A Page Miss Handler

Номер: US20140075123A1
Принадлежит: Intel Corp

In an embodiment, a page miss handler includes paging caches and a first walker to receive a first linear address portion and to obtain a corresponding portion of a physical address from a paging structure, a second walker to operate concurrently with the first walker, and a logic to prevent the first walker from storing the obtained physical address portion in a paging cache responsive to the first linear address portion matching a corresponding linear address portion of a concurrent paging structure access by the second walker. Other embodiments are described and claimed.
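The guard logic can be modeled sequentially (real walkers run concurrently in hardware; this toy keeps only the bookkeeping, and all names are invented): a walker records the linear-address portion it is resolving, and a second walker that finds the same portion already in flight still obtains the translation but is prevented from installing it in the paging cache.

```python
# Toy model of the concurrency guard between two page walkers.
class PageMissHandler:
    def __init__(self):
        self.paging_cache = {}
        self.in_flight = set()   # linear-address portions being walked

    def start_walk(self, lin_part):
        conflict = lin_part in self.in_flight   # concurrent walk of same portion?
        self.in_flight.add(lin_part)
        return conflict

    def finish_walk(self, lin_part, phys_part, conflict):
        if not conflict:
            self.paging_cache[lin_part] = phys_part  # safe to fill the cache
        self.in_flight.discard(lin_part)
        return phys_part
```

Two walks of the same linear-address portion both complete, but only the non-conflicting one writes the paging cache, so the cache never holds a potentially stale duplicate fill.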

Подробнее
13-03-2014 дата публикации

DETECTION OF CONFLICTS BETWEEN TRANSACTIONS AND PAGE SHOOTDOWNS

Номер: US20140075151A1

There is provided a method for detecting a conflict between a transaction and a TLB (Translation Lookaside Buffer) shootdown in a transactional memory in which a TLB shootdown operation message is received by a processor to invalidate at least one entry in a TLB of the processor corresponding to at least one page. The processor tracks pages touched by the transaction. The processor determines whether the received TLB shootdown operation message is associated with one of the touched pages. The processor aborts the transaction in response to determining that the received TLB shootdown operation message is associated with one of the touched pages. 1. A method for detecting a conflict between a transaction and a TLB (Translation Lookaside Buffer) shootdown in a transactional memory in which a TLB shootdown operation message is received by a processor to invalidate at least one entry in a TLB of said processor corresponding to at least one page , the method comprising:tracking, by said processor, pages touched by said transaction;determining, by said processor, whether said received TLB shootdown operation message is associated with one of said touched pages; andaborting, by said processor, said transaction in response to determining that said received TLB shootdown operation message is associated with said one of said touched pages.2. The method according to claim 1 , further comprising:providing a data structure having entries for said touched pages.3. The method according to claim 2 , further comprising:initializing all said entries in said data structure at a start of said transaction or at an end of said transaction.4. 
The method according to claim 2 , further comprising:issuing a load or store instruction;determining a page size referenced by said load or store instruction;determining whether the determined page size is tracked by said data structure;determining an effective address referenced by said load or store instruction in response to determining that said ...
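The tracking-and-abort flow can be sketched with a minimal transaction object (names invented): pages touched by the transaction are recorded, and a TLB shootdown message for any tracked page aborts the transaction.

```python
# Sketch of conflict detection between a transaction and a TLB shootdown.
class Transaction:
    def __init__(self):
        self.touched_pages = set()   # the data structure of touched pages
        self.aborted = False

    def access(self, page):
        self.touched_pages.add(page)          # track pages the transaction touches

    def on_tlb_shootdown(self, page):
        if page in self.touched_pages:        # shootdown hit a touched page
            self.aborted = True               # conflict -> abort the transaction
```

A shootdown for an untouched page leaves the transaction running; one for a touched page aborts it.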

Подробнее
20-03-2014 дата публикации

PROCESSING DATA PACKETS FROM A RECEIVE QUEUE IN A REMOTE DIRECT MEMORY ACCESS DEVICE

Номер: US20140082119A1

Processing data packets from a receive queue is provided. It is determined whether packets are saved in a pre-fetched queue. In response to determining that packets are not saved in the pre-fetched queue, a number of packets within the receive queue is determined. In response to determining the number of packets within the receive queue, it is determined whether the number of packets within the receive queue is greater than a number of packets called for by an application. In response to determining that the number of packets within the receive queue is greater than the number of packets called for by the application, an excess number of packets that is above the number of packets called for by the application is saved in the pre-fetched queue. An indication is sent to the application of the excess number of packets. The predetermined number of packets is transferred to the application. 1. A data processing system for processing packets from a receive queue , the data processing system comprising:a bus system;a storage device connected to bus system, wherein the storage device stores computer readable program code; anda processor connected to the bus system, wherein the processor executes the computer readable program code to determine whether packets are saved in a pre-fetched queue from a previous packet retrieval cycle; determine a current number of packets within the receive queue ready to be retrieved in response to determining that packets are not saved in the pre-fetched queue from the previous packet retrieval cycle; determine whether the current number of packets within the receive queue ready to be retrieved is greater than a predetermined number of packets called for by an application of the data processing system in response to determining the current number of packets within the receive queue ready to be retrieved; save an excess number of packets that is above the predetermined number of packets called for by the application in the pre-fetched queue 
...
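One retrieval cycle of the scheme can be sketched as below: saved packets in the pre-fetched queue are served first; otherwise the receive queue is drained, and any excess above the number called for by the application is parked in the pre-fetched queue for the next cycle. The helper name and tuple return shape are invented, and the partial-prefetch case is simplified:

```python
# Simplified sketch of one packet-retrieval cycle. Returns
# (delivered, new_prefetched, new_receive_queue).
def retrieve_packets(receive_queue, prefetched, wanted):
    if prefetched:                                   # previous cycle left packets
        return prefetched[:wanted], prefetched[wanted:], receive_queue
    if len(receive_queue) > wanted:                  # excess goes to pre-fetched
        return receive_queue[:wanted], receive_queue[wanted:], []
    return list(receive_queue), [], []               # nothing left over
```

With five packets ready and three wanted, the first cycle delivers three and parks two; the next cycle serves the two parked packets before touching the receive queue.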

Подробнее
20-03-2014 дата публикации

EMBEDDED MULTIMEDIA CARD (eMMC), HOST FOR CONTROLLING eMMC, AND METHOD OPERATION FOR eMMC SYSTEM

Номер: US20140082250A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

An eMMC includes flash memory including an extended card specific data (CSD) register (“EXT_CSD register”), and an eMMC controller that controls operation of the flash memory. The eMMC controller receives a clock from a host via a clock line, receives a SEND_EXT_CSD command from the host via a command line, and provides the host with eMMC information stored in the EXT_CSD register via a data bus in response to the SEND_EXT_CSD command, the eMMC information including maximum operating frequency information for the eMMC. 1. An embedded multimedia card (eMMC) comprising:flash memory including an extended card specific data (CSD) register (“EXT_CSD register”); andan eMMC controller that controls operation of the flash memory,wherein the eMMC controller is configured to receive a clock from a host via a clock line, receive a SEND_EXT_CSD command from the host via a command line, and provide the host with eMMC information stored in the EXT_CSD register via a data bus in response to the SEND_EXT_CSD command, the eMMC information including maximum operating frequency information for the eMMC.2. The eMMC of claim 1 , wherein the clock has a first frequency before the information stored in the EXT_CSD register is provided to the host and a second frequency different from the first frequency after the information stored in the EXT_CSD register is provided to the host.3. The eMMC of claim 2 , wherein the EXT_CSD register defines the information stored in the EXT_CSD register according to at least one data field, and the at least one data field includes a VENDOR_SPECIFIC_FIELD field that stores the maximum operating frequency information.4. An embedded multimedia card (eMMC) system comprising:an eMMC comprising flash memory and an extended card specific data (CSD) register (“EXT_CSD register”) that stores information including maximum operating frequency information for the eMMC; and a clock generator that generates a clock provided to the eMMC; and a host ...

Подробнее
20-03-2014 дата публикации

PROVIDING USAGE STATISTICS FOR VIRTUAL STORAGE

Номер: US20140082305A1

A method for obtaining a measurement of storage usage includes sending a request, by a processor, for the measurement of storage usage during execution of an application by the processor; counting blocks of storage to generate the measurement of storage usage by the application; and providing the measurement of storage usage to the application. 1. A system comprising:a processor executing an application, the application sending a request for a measurement of storage usage by the application during execution of the application; anda storage manager receiving the request and counting blocks of storage to generate the measurement of storage usage by the application;the storage manager providing the measurement of storage usage to the application.2. The system of wherein:counting blocks of storage includes counting frames of central storage, counting slots of auxiliary storage and counting pages of virtual storage.3. The system of wherein:the request includes an unlock command, wherein the counting blocks of storage is performed without serialization of the storage manager in response to the unlock command.4. The system of wherein:counting blocks of storage includes the storage manager accessing dynamic address translation (DAT) tables to include or exclude blocks of storage from the count. This application is a continuation of U.S. patent application Ser. No. 13/486,020, filed Jun. 1, 2012, the disclosure of which is incorporated by reference herein in its entirety.Embodiments relate generally to storage management in computing systems, and in particular to providing usage statistics for virtual storage above a predefined limit.Existing computing systems providing 64 bit addressing (e.g., servers executing the z/OS® operating system) provide above the bar virtual storage availability of 16 exabytes (EB), or 2^64 bytes, of data to applications running on the system. (The “bar” here refers to the limitation of a predecessor architecture, which had only 24 address bits and a ...

Подробнее
27-03-2014 дата публикации

Application-assisted handling of page faults in I/O operations

Номер: US20140089451A1
Принадлежит: MELLANOX TECHNOLOGIES LTD

A method for data transfer includes receiving in an operating system of a host computer an instruction initiated by a user application running on the host processor identifying a page of virtual memory of the host computer that is to be used in receiving data in a message that is to be transmitted over a network to the host computer but has not yet been received by the host computer. In response to the instruction, the page is loaded into the memory, and upon receiving the message, the data are written to the loaded page.

Подробнее
27-03-2014 дата публикации

ADC Sequencing

Номер: US20140089536A1
Принадлежит: Atmel Corp

A device comprises a central processing unit (CPU) and a memory configured for storing memory descriptors. The device also includes an analog-to-digital converter controller (ADC controller) configured for managing an analog-to-digital converter (ADC) using the memory descriptors. In addition, the device includes a direct memory access system (DMA system) configured for autonomously sequencing conversion operations performed by the ADC without CPU intervention by transferring the memory descriptors directly between the memory and the ADC controller for controlling the conversion operations performed by the ADC.

Подробнее
27-03-2014 дата публикации

POWER SAVINGS VIA DYNAMIC PAGE TYPE SELECTION

Номер: US20140089631A1
Автор: KING Justin K.

An operating system monitors a performance metric of a direct memory access (DMA) engine on an I/O adapter to update a translation table used during DMA operations. The translation table is used during a DMA operation to map a virtual address provided by the I/O adapter to a physical address of a data page in the memory modules. If the DMA engine is being underutilized, the operating system updates the translation table such that a virtual address maps to physical address corresponding to a memory location in a more energy efficient memory module. However, if the DMA engine is over-utilized, the operating system may update the translation table such that the data used in the DMA engine is stored in memory modules that provide quicker access times—e.g., the operating system may map virtual addresses to physical addresses in DRAM rather than phase change memory. 1. A method of optimizing a computing system , comprising:receiving a performance metric associated with a data access engine, the data access engine is configured to assist in performing at least one memory access operation in one of a first memory module and a second memory module in the computing system, wherein the first and second memory modules are different types of memory devices having different performance attributes; andbased on the performance metric, reconfiguring an address translation table such that a first entry in the table re-maps a first virtual address from a first physical address corresponding to the first memory module to a second physical address corresponding to the second memory module in order to effect a change in the utilization of the data access engine.2. The method of claim 1 , wherein the first memory module consumes less energy to perform the memory access operation than the second memory module and the second memory module requires less time to perform the memory access operation than the first memory module.3. 
The method of claim 1 , wherein claim 1 , before reconfiguring ...
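The remapping policy can be sketched like this (the thresholds, pool names, and function are hypothetical illustrations of the idea, not values from the patent): under-utilization of the DMA engine moves the backing frame to an energy-efficient module, over-utilization moves it to faster DRAM, and in between the mapping is left alone.

```python
def retarget_page(table, vaddr, utilization, low_power_frames, fast_frames,
                  low=0.3, high=0.8):
    """Update one translation-table entry based on DMA-engine utilization.
    `table` maps virtual addresses to physical frame addresses."""
    if utilization < low:
        table[vaddr] = low_power_frames.pop()  # idle engine: favor energy
    elif utilization > high:
        table[vaddr] = fast_frames.pop()       # saturated engine: favor speed
    return table[vaddr]

table = {0x1000: 0xA000}
# Low utilization: remap onto a low-power frame.
retarget_page(table, 0x1000, 0.1,
              low_power_frames=[0xC000], fast_frames=[0xD000])
```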

Подробнее
03-04-2014 дата публикации

Restore PCIe Transaction ID On The Fly

Номер: US20140095741A1

Restoring retired transaction identifiers (TID) associated with Direct Memory Access (DMA) commands without waiting for all DMA traffic to terminate is disclosed. A scoreboard is used to track retired TIDs and selectively restore retired TIDs on the fly. DMA engines fetch a TID, and use it to tag every DMA request. If the request is completed, the TID can be recycled to be used to tag a subsequent request. However, if a request is not completed, the TID is retired. Retired TIDs can be restored without having to wait for DMA traffic to end. Any retired TID value may be mapped to a bit location inside a scoreboard. All processors in the system may have access to read and clear the scoreboard. Clearing the TID scoreboard may trigger a DMA engine to restore the TID mapped to that location, and the TID may be used again. 1. A system for managing retired transaction identifiers (TIDs) , the system comprising:a memory operable to maintain an individual status of each TID of a plurality of TIDs, the plurality of TIDs comprising one or more retired TIDs, wherein a TID is retired if an associated command does not complete; anda processor operable to selectively restore a first retired TID of the one or more retired TIDs, wherein the first retired TID is selectively restored without requiring the processor to be reset.225-. (canceled)26. The system of claim 1 , wherein the first retired TID is selectively restored without disturbing active DMA operations.27. The system of claim 1 , wherein the first retired TID is selectively restored without impacting new DMA commands.28. The system of claim 1 , wherein the first retired TID is selectively restored without impacting future commands that are in a queue for operation.29. The system of claim 1 , wherein the first retired TID is selectively restored without depending on a status of an active DMA operation.30.
The system of claim 1 , wherein the processor is operable to periodically poll the memory to determine when the first ...
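The scoreboard mechanism can be sketched as a toy model (class and method names are hypothetical): completed requests recycle their TID normally, an uncompleted request retires its TID into a bitmap, and clearing the bitmap bit restores that TID to the free pool immediately, with no traffic drain.

```python
class TidScoreboard:
    """Sketch of on-the-fly TID restore via a scoreboard bitmap."""
    def __init__(self, n):
        self.free = list(range(n))  # TIDs available to tag DMA requests
        self.retired = 0            # bitmap: bit i set => TID i retired

    def fetch(self):
        return self.free.pop(0)     # tag the next DMA request

    def complete(self, tid):
        self.free.append(tid)       # normal recycle on completion

    def retire(self, tid):
        self.retired |= 1 << tid    # the request never completed

    def clear(self, tid):
        # Clearing the scoreboard bit restores the TID on the fly,
        # without waiting for other DMA traffic to terminate.
        if self.retired & (1 << tid):
            self.retired &= ~(1 << tid)
            self.free.append(tid)

sb = TidScoreboard(4)
t = sb.fetch()   # TID 0 tags a request
sb.retire(t)     # the request never completed
sb.clear(t)      # restore: TID 0 is usable again
```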

Подробнее
03-04-2014 дата публикации

Network interface controller with direct connection to host memory

Номер: US20140095753A1
Принадлежит: MELLANOX TECHNOLOGIES LTD.

A network interface device for a host computer includes a network interface, configured to transmit and receive data packets to and from a network. Packet processing logic transfers data to and from the data packets transmitted and received via the network interface by direct memory access (DMA) from and to a system memory of the host computer. A memory controller includes a first memory interface configured to be connected to the system memory and a second memory interface, configured to be connected to a host complex of the host computer. Switching logic alternately couples the first memory interface to the packet processing logic in a DMA configuration and to the second memory interface in a pass-through configuration. 1. A network interface device for a host computer , the device comprising:a network interface, configured to transmit and receive data packets to and from a network;packet processing logic, configured to transfer data to and from the data packets transmitted and received via the network interface by direct memory access (DMA) from and to a system memory of the host computer; anda memory controller, comprising:a first memory interface configured to be connected to the system memory;a second memory interface, configured to be connected to a host complex of the host computer; andswitching logic, which alternately couples the first memory interface to the packet processing logic in a DMA configuration and to the second memory interface in a pass-through configuration.2. The device according to claim 1 , wherein the system memory includes dynamic random access memory (DRAM) claim 1 , and wherein the first and second memory interface are Double Data Rate (DDR) interfaces.3.
The device according to claim 1 , wherein the switching logic is configured claim 1 , in the pass-through configuration claim 1 , as a transparent channel claim 1 , whereby the host complex accesses addresses in the system memory as though the system memory was connected ...

Подробнее
10-04-2014 дата публикации

ADJUNCT COMPONENT TO PROVIDE FULL VIRTUALIZATION USING PARAVIRTUALIZED HYPERVISORS

Номер: US20140101406A1
Автор: Gschwind Michael K.

A system configuration is provided with a paravirtualizing hypervisor that supports different types of guests, including those that use a single level of translation and those that use a nested level of translation. When an address translation fault occurs during a nested level of translation, an indication of the fault is received by an adjunct component. The adjunct component addresses the address translation fault, at least in part, on behalf of the guest. 1. A method of facilitating translation of a guest memory address , said method comprising:obtaining, by an adjunct component, an indication of an address translation fault related to the guest memory address, the adjunct component being separate and distinct from a guest operating system and executing on a processor of a system configuration, the system configuration comprising the guest operating system supported by a hypervisor, the hypervisor being a paravirtualized hypervisor configured such that address translation faults related to host translations of guest memory addresses are managed in part by the guest operating system; andbased on obtaining the indication of the address translation fault, providing, by the adjunct component to the hypervisor, address translation information to enable successful performance of a host translation of the guest memory address.2. The method of claim 1 , wherein the hypervisor supports a first type of guest that uses a single level of translation and a second type of guest that uses a nested level of translation claim 1 , the guest operating system being a second type of guest.3. The method of claim 1 , wherein the address translation fault is based on a translation from a guest physical address to a host physical address.4. The method of claim 3 , wherein the guest physical address is provided as a result of translating a guest virtual address to the guest physical address in a guest level translation.5. 
The method of claim 1 , wherein the providing comprises updating a ...

Подробнее
10-04-2014 дата публикации

ASYMMETRIC CO-EXISTENT ADDRESS TRANSLATION STRUCTURE FORMATS

Номер: US20140101408A1

An address translation capability is provided in which translation structures of different types are used to translate memory addresses from one format to another format. Multiple translation structure formats (e.g., multiple page table formats, such as hash page tables and hierarchical page tables) are concurrently supported in a system configuration. This facilitates provision of guest access in virtualized operating systems, and/or the mixing of translation formats to better match the data access patterns being translated. 1. A method of facilitating translation of memory addresses , said method comprising:determining, by a processor, whether a first address translation structure of a first type is to be used to translate a memory address;based on the determining that a first address translation structure of the first type is to be used, accessing a second address translation structure of a second type, the second type being different from the first type, to determine a particular first address translation structure to be used and to obtain an origin address of that particular first address translation structure; andusing the particular first address translation structure in translating the memory address.2. The method of claim 1 , wherein the determining comprises checking an indicator to determine whether the first address translation structure is to be used claim 1 , the indicator located in an entry of a data structure located using a portion of the memory address to be translated.3. The method of claim 2 , wherein the indicator is located in a segment lookaside buffer entry (SLBE) claim 2 , the SLBE located using an effective segment identifier field of the memory address.4. The method of claim 3 , wherein the SLBE includes a virtual segment identifier (VSID) field claim 3 , and wherein the accessing the second address translation structure comprises using the VSID to locate an entry in the second address translation structure that includes the origin ...

Подробнее
05-01-2017 дата публикации

SOFTWARE-BASED ULTRASOUND IMAGING SYSTEM

Номер: US20170000464A1
Принадлежит:

A software-based ultrasound imaging system is disclosed. According to some embodiments of the present disclosure, a method and an architecture for efficiently transmitting, processing, and storing channel data in the software-based ultrasound imaging system are provided. 1. An ultrasound diagnostic apparatus , comprising:a front-end unit configured to be electrically connected to a transducer; anda host PC configured to receive channel data from the front-end unit via a data bus and to process the channel data, wherein the host PC includes a system memory, at least one parallel core processor, and a central processing unit (CPU) configured to page-lock a predetermined area (hereinafter, “first area”) in the system memory;the front-end unit is configured to transmit the channel data to the first area in a direct memory access (DMA) scheme; andthe parallel core processor is configured to access the first area in the DMA scheme and to perform at least a part of processes for generating an ultrasound image, in a multi-thread processing scheme.2. The ultrasound diagnostic apparatus according to claim 1 , wherein the front-end unit and the parallel core processor are configured to simultaneously access the first area by using an address information of the first area.3. The ultrasound diagnostic apparatus according to claim 2 , wherein the address information of the first area includes either one of a physical address of the first area and a logical address mapped to the physical address.4. The ultrasound diagnostic apparatus according to claim 1 , wherein the front-end unit is configured to generate a data packet for each channel based on the channel data and to assign a destination address of each data packet such that channel-specific data are stored in continuous address spaces in the first area.5.
The ultrasound diagnostic apparatus according to claim 4 , wherein the front-end unit is configured to generate the data packet as large as a maximum payload size ...

Подробнее
06-01-2022 дата публикации

Software drive dynamic memory allocation and address mapping for disaggregated memory pool

Номер: US20220004488A1
Принадлежит: Intel Corp

The apparatus of a disaggregated memory architecture (DMA) including a shared memory and multiple nodes is programmable by a primary node of the DMA. The primary node executes a programming agent to, prior to memory access requests to access the shared memory, cause a programming of register entries of one or more registers of a memory pooling circuitry (MPC) with information to be used by a decoder of the MPC to translate host physical addresses (HPA) of memory access requests of the nodes to local memory addresses (LMAs). The LMAs are to be processed by one or more memory controllers (MCs) coupled to the one or more registers based on MC memory regions in each of the one or more MCs, the MC memory regions having a predetermined memory size granularity. At least some of the LMAs map to non-contiguous memory regions of the shared memory and of the one or more MCs.

Подробнее
06-01-2022 дата публикации

Logging Pages Accessed From I/O Devices

Номер: US20220004503A1
Принадлежит: Google LLC

Systems and methods of tracking page state changes are provided. An input/output is communicatively coupled to a host having a memory. The I/O device receives a command from the host to monitor page state changes in a region of the memory allocated to a process. The I/O device, bypassing a CPU of the host, modifies data stored in the region based on a request, for example, received from a client device via a computer network. The I/O device records the modification to a bitmap by setting a bit in the bitmap that corresponds to a location of the data in the memory. The I/O device transfers contents of the bitmap to the CPU, wherein the CPU completes the live migration by copying sections of the first region indicated by the bitmap to a second region of memory. In some implementations, the process can be a virtual machine, a user space application, or a container. 1. A method of tracking page state changes , the method comprising:receiving, at an input/output (I/O) device communicatively coupled to a host having a physical memory, a command from the host to monitor page state changes in a first page of the physical memory allocated to a process executing on the host;modifying, by the I/O device, data stored in a first portion of the first page based on a request;recording, by the I/O device, the modification to a bitmap by setting a bit in the bitmap that corresponds to a location of the data in the physical memory;storing, by the I/O device, the bit in a first buffer in a general purpose memory of the host; andcopying, by the I/O device or the host, the first portion of the first page indicated by the bitmap in the first buffer to a second portion of a second page of physical memory, wherein the second page of physical memory can be in the physical memory of the host, or in a second physical memory of a second host.2. 
The method of claim 1 , wherein the request includes an I/O virtual address indicating a location of the data in a virtual memory of the process claim ...
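The tracking-and-copy cycle can be sketched as a toy model (page size, function names, and the flat buffers are illustrative assumptions): the I/O device sets a bitmap bit for every page it modifies, and the host then copies only the pages whose bits are set, clearing them as it goes.

```python
PAGE = 4096  # assumed page size for the sketch

def record_write(bitmap, byte_offset):
    """I/O-device side: mark the page containing the modified bytes."""
    bitmap[byte_offset // PAGE] = 1

def copy_dirty(src, dst, bitmap):
    """Host side: copy only pages whose bit is set, then clear the bits."""
    for i, dirty in enumerate(bitmap):
        if dirty:
            dst[i * PAGE:(i + 1) * PAGE] = src[i * PAGE:(i + 1) * PAGE]
            bitmap[i] = 0

src = bytearray(4 * PAGE)
dst = bytearray(4 * PAGE)
bitmap = [0, 0, 0, 0]
src[2 * PAGE] = 0xAB             # device writes into page 2 ...
record_write(bitmap, 2 * PAGE)   # ... and records it in the bitmap
copy_dirty(src, dst, bitmap)     # host copies just the dirty page
```

Because only dirty pages move, repeated passes converge quickly during a live migration.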

Подробнее
06-01-2022 дата публикации

PCIe Device Peer-To-Peer Communications

Номер: US20220004512A1
Принадлежит: Liqid Inc.

Computing architectures, platforms, and systems are provided herein. In one example, system is provided. The system includes a first processor configured to initiate a communication arrangement between a first peripheral component interconnect express (PCIe) device and a second PCIe device. The communication arrangement is configured to detect transfers from the first PCIe device to one or more addresses corresponding to an address range of the second PCIe device, and redirect the transfers to the second PCIe device without passing the transfers through a second processor that initiates the transfers. 1. A system comprising:a user interface configured to receive instructions to initiate a communication arrangement between a first peripheral component interconnect express (PCIe) device and a second PCIe device;wherein the communication arrangement is configured to redirect a transfer from the first PCIe device based on an address corresponding to an address range of the second PCIe device without passing the transfer through a host processor that executes an application initiating the transfer.2. The system of claim 1 , wherein the communication arrangement is established in a PCIe fabric comprising one or more PCIe switch circuits.3. The system of claim 1 , wherein the first PCIe device comprises a Graphics Processing Unit (GPU) and the second PCIe device comprises a storage device.4. The system of claim 1 , wherein the communication arrangement is further established to detect an additional transfer from the second PCIe device to one or more addresses corresponding to an address range for the first PCIe device claim 1 , and redirect the additional transfer to the first PCIe device without passing the additional transfer through the host processor that initiates the additional transfer.5. 
The system of claim 1 , wherein the address range of the second PCIe device is in addition to a memory mapped address range assigned to the second PCIe device within a memory space ...

Подробнее
05-01-2017 дата публикации

TRANSLATION BUFFER UNIT MANAGEMENT

Номер: US20170004091A1
Принадлежит:

A data processing system incorporates a translation buffer unit and a translation control unit. The translation buffer unit responds to receipt of a memory access transaction for which translation data is unavailable in that translation buffer unit by issuing a request to the translation control unit to provide translation data for the memory access transaction. The translation control unit is responsive to disabling or enabling of address translation for a given type of memory access transaction to issue an invalidate command to all translation buffer units which may be holding translation data for that given type of memory access transaction. When the translation control unit receives a request for translation from the translation buffer unit for a memory access of the given type for which memory address translation is disabled, then the translation control unit responds by returning global translation data to be used by the translation buffer unit for all memory access translations of that given type. 1.
Apparatus for processing data comprising:a translation buffer unit to store translation data to translate an input address of a memory access transaction to an output address; anda translation control unit to provide said translation data to said translation buffer unit, whereinsaid translation buffer unit is responsive to receipt of a memory access transaction for which translation data is unavailable in said translation buffer unit to issue a request to said translation control unit to provide translation data for said memory access transaction;said translation control unit is responsive to a change in enablement of address translation for a given type of memory access transaction to issue an invalidate command to said translation buffer unit to invalidate any translation data for said given type of memory access transaction stored in said translation buffer unit; andsaid translation control unit is responsive to receipt of a request for translation data from said ...

Подробнее
05-01-2017 дата публикации

DIRECT MEMORY ACCESS WITH FILTERING

Номер: US20170004092A1
Принадлежит: Microsoft Technology Licensing, LLC

Methods, apparatus, and computer-readable storage media are disclosed for applying filtering operations to data transferred as part of a direct memory access (DMA) operation. In one example of the disclosed technology, a system includes a processor, memory, and a direct memory access (DMA) engine coupled to the memory for reading a set of data from a selected range of read memory addresses for the memory without using the processor. A line buffer coupled to the DMA engine is configured to receive DMA read data and temporarily store a portion, but not all of the data set being read by the DMA engine in a line buffer. A digital filter is configured to apply a filtering operation to a windowed subset of the buffered portion of the data set, producing filtered data that is stored to a selected range of write memory addresses for the memory, without using the processor. 1. A system , comprising:memory;a direct memory access (DMA) engine coupled to the memory, the DMA engine including a DMA read circuit and a DMA write circuit, the DMA read circuit being configured for reading a set of data from a selected range of read memory addresses for the memory;a buffer coupled to the DMA engine and configured to receive the read data and to temporarily store a portion but not all of the data set being read by the DMA engine as a buffered portion of the data set; anda filter configured to apply a filtering operation to a subset of the buffered portion of the data set, producing filtered data, wherein the DMA write circuit stores the filtered data to a selected range of write memory addresses of the memory.2. The system of claim 1 , further comprising a processor having a plurality of processor registers claim 1 , and wherein the DMA read circuit is configured to read the data set directly from memory without using the processor registers claim 1 , and wherein the DMA write circuit is configured to write directly to the memory without using the processor registers.3. 
The system of ...
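The buffering discipline can be sketched as follows (a moving-average filter stands in for the digital filter; names and the dict-as-memory model are assumptions): reads stream through a small line buffer holding only the current window, never the whole data set, and each filtered sample is written straight back to memory.

```python
from collections import deque

def filtered_dma_copy(mem, src, dst, length, window=3):
    """Stream reads through a small line buffer (never the whole data
    set) and write filtered output back to memory, CPU-free in spirit."""
    buf = deque(maxlen=window)               # the line buffer
    for i in range(length):
        buf.append(mem[src + i])             # DMA read side
        mem[dst + i] = sum(buf) // len(buf)  # windowed filter + DMA write

mem = {0: 0, 1: 3, 2: 6, 3: 9}
filtered_dma_copy(mem, src=0, dst=100, length=4)
```

The `deque(maxlen=window)` mirrors the hardware constraint: a fixed-size buffer that evicts the oldest sample as each new one arrives.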

Подробнее
05-01-2017 дата публикации

SECURE DIRECT MEMORY ACCESS

Номер: US20170004100A1
Принадлежит: Intel Corporation

Examples are disclosed for establishing a secure destination address range responsive to initiation of a direct memory access (DMA) operation. The examples also include allowing decrypted content obtained as encrypted content from a source memory to be placed at a destination memory based on whether destination memory addresses for the destination memory fall within the secure destination address range. 129-. (canceled)30. An apparatus comprising:a secure processor coupled to a central processing unit (CPU), the secure processor to:receive a destination address from an application executed by the CPU through a portion of a memory shared by the application and the secure processor;determine whether the destination address corresponds to secure destination addresses; andplace decrypted content in a memory location corresponding to the destination address based on a determination that the destination address corresponds to the secure destination addresses.31. The apparatus of claim 30 , the decrypted content corresponding to a cryptographic operation claim 30 , the secure processor to terminate the cryptographic operation based on a determination that the destination address does not correspond to the secure destination address.32. The apparatus of claim 31 , the secure processor to send an error indication to the application based on a determination that the destination address does not correspond to the secure destination address claim 31 , the error indication to indicate the destination address does not correspond to the secure destination address.33. The apparatus of claim 30 , the secure processor to:receive a DMA request, the DMA request including an indication to obtain encrypted content from a source memory, decrypt the encrypted content, and place the decrypted content at the destination address; andestablish the secure destination address in response to receiving the DMA request.34. The apparatus of claim 33 , wherein the secure destination ...
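The placement check can be sketched as a toy model (function name, range representation, and the boolean error signal are assumptions): decrypted bytes land in memory only when the whole write falls inside an established secure destination range; otherwise an error goes back so the cryptographic operation can be terminated.

```python
def place_decrypted(memory, dest, plaintext, secure_ranges):
    """Place decrypted bytes only if the entire write fits inside one of
    the established secure destination ranges; else report an error."""
    for lo, hi in secure_ranges:
        if lo <= dest and dest + len(plaintext) <= hi:
            memory[dest:dest + len(plaintext)] = plaintext
            return True
    return False  # error indication back to the application

mem = bytearray(64)
ok = place_decrypted(mem, 16, b"secret", secure_ranges=[(16, 32)])   # inside
bad = place_decrypted(mem, 40, b"secret", secure_ranges=[(16, 32)])  # outside
```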

Подробнее
05-01-2017 дата публикации

DATA COPYING METHOD, DIRECT MEMORY ACCESS CONTROLLER, AND COMPUTER SYSTEM

Номер: US20170004101A1
Автор: SHAO Fei
Принадлежит:

The present invention provides a data copying method, a direct memory access controller, and a computer system. The data copying method of embodiments of the present invention includes reading, by a DMA controller, target data from storage space corresponding to a source physical address of the target data by using an ACP, where the storage space corresponding to the source physical address includes a first buffer; and storing, by the DMA controller, the target data into storage space corresponding to a destination physical address of the target data by using the ACP, where the storage space corresponding to the destination physical address includes a second buffer. The embodiments of the present invention can lower CPU usage. 1. A data copying method , comprising:reading, by a direct memory access (DMA) controller, target data from storage space corresponding to a source physical address of the target data by using an accelerator coherency port (ACP), wherein the storage space corresponding to the source physical address comprises a first buffer; andstoring, by the DMA controller, the target data into storage space corresponding to a destination physical address of the target data by using the ACP, wherein the storage space corresponding to the destination physical address comprises a second buffer.2. 
The method according to claim 1 , wherein the reading claim 1 , by a DMA controller claim 1 , target data from storage space corresponding to a source physical address of the target data by using an ACP comprises:sending, by the DMA controller, a source virtual address of the target data to a memory management unit (MMU), so that the MMU converts the source virtual address into the source physical address and sends the source physical address to a cache controller by using the ACP; andreceiving, by the DMA controller, the target data that is returned by the cache controller by using the ACP and the MMU in sequence, wherein the target data is data that is stored in the ...
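The translate-then-copy flow can be sketched as a toy model (the flat `mmu` mapping and single-region copy are simplifying assumptions; the real path also involves the ACP and cache controller): the DMA controller hands virtual addresses to the MMU, gets physical addresses back, and moves the target data between them without CPU copies.

```python
def dma_copy(memory, mmu, src_va, dst_va, length):
    """Translate source and destination virtual addresses, then copy the
    target data between the physical locations (single-region case)."""
    src_pa = mmu[src_va]   # MMU converts virtual -> physical
    dst_pa = mmu[dst_va]
    memory[dst_pa:dst_pa + length] = memory[src_pa:src_pa + length]

memory = bytearray(b"hello world-----")
mmu = {0x1000: 0, 0x2000: 12}   # hypothetical address mappings
dma_copy(memory, mmu, src_va=0x1000, dst_va=0x2000, length=4)
```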

Подробнее
07-01-2016 дата публикации

NON-VOLATILE RAM AND FLASH MEMORY IN A NON-VOLATILE SOLID-STATE STORAGE

Номер: US20160004452A1
Принадлежит:

A non-volatile solid-state storage is provided. The non-volatile solid state storage includes a non-volatile random access memory (NVRAM) addressable by a processor external to the non-volatile solid state storage. The NVRAM is configured to store user data and metadata relating to the user data. The non-volatile solid state storage includes a flash memory addressable by the processor. The flash memory is configured to store the user data responsive to the processor directing transfer of the user data from the NVRAM to the flash memory. 1. A non-volatile solid-state storage , comprising:a non-volatile random access memory (NVRAM) addressable by a first processor internal to the non-volatile solid-state storage, and by a second processor external to the non-volatile solid-state storage, the NVRAM configured to store user data and metadata relating to the user data, wherein the NVRAM is configured to include a portion dedicated to metadata corresponding to user data stored external to the non-volatile solid-state storage; anda flash memory, addressable by the first processor and by the second processor, the flash memory configured to store the user data responsive to the second processor directing transfer of the user data from the NVRAM to the flash memory.2. The non-volatile solid-state storage of claim 1 , further comprising:a direct memory access (DMA) engine, configured to move the user data from the NVRAM to the flash memory and to perform a cyclic redundancy check (CRC) to verify the user data.3. The non-volatile solid-state storage of claim 1 , wherein:the NVRAM is configured as a workspace for the second processor to apply the metadata.4. 
The non-volatile solid-state storage of claim 1 , further comprising:a direct memory access (DMA) engine, configured to transfer the user data from the NVRAM to the flash memory responsive to a plurality of writes of the user data to the NVRAM providing sufficient user data in the NVRAM for a page write to the flash memory.5 ...

Подробнее
07-01-2016 дата публикации

DATA TRANSFER APPARATUS AND DATA TRANSFER METHOD

Номер: US20160004477A1
Автор: Okada Masaki
Принадлежит:

In a data transfer apparatus, a data coupling unit includes a command transfer unit that outputs a write address based on transfer requests input from a bus master, a judgment result indicating whether data corresponding to the input transfer requests are continuous, and a judgment result indicating whether a size after coupling exceeds an arbitrary burst length. The data coupling unit includes a buffer unit that retains the data input from the bus master according to the write address, a buffer management unit that retains information used by the data coupling unit to couple the transfer requests, and a coupled command transfer unit that generates a transfer request after coupling, having a data size less than or equal to the arbitrary burst length and a start address based on the arbitrary burst length. 1. A data transfer apparatus which transfers data input from a processing unit to a memory , comprising:a command transfer unit including a command queue that retains a plurality of transfer requests input from the processing unit, a first judgment result indicating whether a plurality of data corresponding to the plurality of transfer requests input from the processing unit are continuous data, and a second judgment result indicating whether a size after coupling the plurality of data exceeds a first burst length less than or equal to a column size of the memory, and configured to output a write address based on the first judgment result and the second judgment result;a buffer unit including a plurality of line buffers, respectively having a size greater than or equal to the first burst length and configured to retain the data input from the processing unit in the plurality of line buffers according to the write address;a buffer management unit including a retaining unit that retains coupling information for coupling the transfer requests corresponding to the data retained in each of the plurality of line buffers; anda coupled command transfer unit configured to
...
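The two judgments driving the coupling (are the requests contiguous, and would the coupled size exceed the burst length?) can be sketched as a toy model (function name and the `(addr, size)` tuple form are assumptions):

```python
def coalesce(requests, burst_len):
    """Merge address-contiguous (addr, size) transfer requests, starting
    a new request whenever coupling would exceed the burst length."""
    merged = []
    for addr, size in requests:
        if merged:
            prev_addr, prev_size = merged[-1]
            if prev_addr + prev_size == addr and prev_size + size <= burst_len:
                merged[-1] = (prev_addr, prev_size + size)  # couple
                continue
        merged.append((addr, size))  # discontinuous or over the limit
    return merged

# Three contiguous 4-byte writes and one gap, with an 8-byte burst limit.
out = coalesce([(0, 4), (4, 4), (8, 4), (16, 4)], burst_len=8)
```

The first two requests couple into one 8-byte burst; the third is contiguous but would exceed the limit, so it opens a new request, and the fourth starts fresh after the address gap.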

Подробнее
07-01-2016 дата публикации

CACHING SYSTEMS AND METHODS WITH SIMULATED NVDRAM

Номер: US20160004653A1
Принадлежит:

Systems and methods presented herein provide for simulated NVDRAM operations. In a host system, a host memory is sectioned into pages. An HBA in the host system comprises a DRAM and an SSD for cache operations. The DRAM and the SSD are sectioned into pages and mapped to pages of the host memory. The SSD is further sectioned into regions comprising one or more pages of the SSD. The HBA is operable to load a page of data from the SSD into a page of the DRAM when directed by a host processor, to determine that the page of the DRAM is occupied with other data, to determine a priority of the region of the page of other data occupying the page of the DRAM, and to flush the other data from the DRAM to the SSD based on the determined priority.

1. A system, comprising:
a host processor;
a host memory communicatively coupled to the host processor and sectioned into pages;
a host bus adapter (HBA) communicatively coupled to the host processor and comprising a Dynamic Random Access Memory (DRAM) and a Solid State Memory (SSD) for cache operations, and an HBA driver operable on the host processor,
wherein the DRAM is sectioned into pages mapped to pages of the host memory and the SSD is sectioned into pages mapped to pages of the DRAM,
wherein the SSD is further sectioned into regions comprising one or more pages of the SSD, and
wherein the HBA driver is operable to load a page of data from the SSD into a page of the DRAM when directed by the host processor, to determine that the page of the DRAM is occupied with other data, to determine a priority of a region of the page of the other data occupying the page of the DRAM, and to flush the other data from the DRAM to the SSD based on the determined priority.
2. The system of claim 1, further comprising:
a storage device comprising an operating system executable by the host processor, wherein the operating system comprises an application that is operable to change priorities of the regions of the SSD.
3. The system of claim 2, wherein: the ...
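The load/flush behavior the abstract describes can be modeled in a few lines. This is an illustrative sketch only, not the patent's implementation; all class and method names (`SimulatedHba`, `load_page`, `set_region_priority`) and the region size are invented for the example:

```python
PAGES_PER_REGION = 4  # assumed region size; the claims only say "one or more pages"

class SimulatedHba:
    """Toy model: DRAM pages cache SSD pages; SSD regions carry priorities."""

    def __init__(self, num_ssd_pages):
        self.ssd = {p: f"data-{p}" for p in range(num_ssd_pages)}
        self.dram = {}               # dram_page -> ssd_page cached there
        self.region_priority = {}    # region -> priority
        self.flush_log = []          # (ssd_page, priority) of flushed occupants

    @staticmethod
    def region_of(ssd_page):
        return ssd_page // PAGES_PER_REGION

    def set_region_priority(self, region, priority):
        # Corresponds to the claim-2 application that changes region priorities.
        self.region_priority[region] = priority

    def load_page(self, ssd_page, dram_page):
        occupant = self.dram.get(dram_page)
        if occupant is not None:
            # DRAM page is occupied: look up the priority of the occupant's
            # SSD region, then flush the occupant back based on that priority.
            prio = self.region_priority.get(self.region_of(occupant), 0)
            self.flush_log.append((occupant, prio))
            self.ssd[occupant] = self.ssd[self.dram[dram_page]]  # write back
        self.dram[dram_page] = ssd_page
        return self.ssd[ssd_page]
```

A real HBA driver would use the priority to choose between eager write-back and dropping clean data; the sketch simply records the decision input.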

More
07-01-2016 publication date

SYSTEM FOR MIGRATING STASH TRANSACTIONS

Number: US20160004654A1
Assignee: Freescale Semiconductor, Inc.

A system for migrating stash transactions includes first and second cores, an input/output memory management unit (IOMMU), an IOMMU mapping table, an input/output (I/O) device, a stash transaction migration management unit (STMMU), a queue manager and an operating system (OS) scheduler. The I/O device generates a first stash transaction request for a first data frame. The queue manager stores the first stash transaction request. When the first core executes a first thread, the queue manager stashes the first data frame to the first core by way of the IOMMU. The OS scheduler migrates the first thread from the first core to the second core and generates pre-empt notifiers. The STMMU uses the pre-empt notifiers to update the IOMMU mapping table and generate a stash replay command. The queue manager receives the stash replay command and stashes the first data frame to the second core.

1. A system for migrating at least one stash transaction between a plurality of processor cores, wherein each of the plurality of processor cores includes a cache register, the system comprising:
a main memory, for storing a plurality of data frames and an input/output memory management unit (IOMMU) mapping table that includes a mapping between a logical input/output (I/O) device number (LIODN) corresponding to an I/O device and a corresponding stash transaction destination identification (ID), wherein the stash transaction destination ID includes a cache register ID associated with a cache register of one of the processor cores of the plurality of processor cores;
a first I/O device, connected to the main memory, for generating the plurality of data frames and initiating direct memory access (DMA) transactions for storing the plurality of data frames in the main memory, and generating a stash transaction request corresponding to each data frame of the plurality of data frames;
a queue manager, connected to the first I/O device, for receiving and storing a first stash transaction request ...
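The migration flow in the abstract (update the IOMMU mapping entry, then replay the stash toward the new core) can be sketched as follows. All names here are hypothetical stand-ins for the LIODN table, queue manager, and STMMU, not the actual hardware interfaces:

```python
class StashSystem:
    """Toy model of LIODN-keyed stashing with migration replay."""

    def __init__(self):
        self.iommu_table = {}   # LIODN -> stash destination (cache register ID)
        self.caches = {}        # cache register ID -> list of stashed frames
        self.pending = {}       # LIODN -> most recent stash request (frame)

    def stash(self, liodn, frame):
        # Queue-manager path: stash the frame by way of the IOMMU mapping.
        cache_id = self.iommu_table[liodn]
        self.caches.setdefault(cache_id, []).append(frame)
        self.pending[liodn] = frame

    def migrate(self, liodn, new_cache_id):
        # Pre-empt notifier path: the STMMU updates the mapping table, then
        # issues a stash replay so the frame lands in the new core's cache.
        self.iommu_table[liodn] = new_cache_id
        if liodn in self.pending:
            self.stash(liodn, self.pending[liodn])  # stash replay command
```

The key point the sketch captures is ordering: the table update must precede the replay, or the replayed frame would be stashed to the old core.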

More
07-01-2016 publication date

Computing system and operating method of the same

Number: US20160004655A1
Assignee: SAMSUNG ELECTRONICS CO LTD

A computing system includes a first unified module including a first storage device and a second storage device that are different from each other, and a unified module interface configured to provide a direct memory access (DMA) request signal to control a first DMA with respect to the first storage device and to perform a second DMA on the second storage device. An application processor is configured to receive the DMA request signal from the unified module interface, and provide a DMA request response signal to the unified module interface and control the second DMA with respect to the second storage device.
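The request/response handshake between the unified module interface and the application processor can be sketched minimally. This is a hypothetical model, with invented class names and a plain list copy standing in for the AP-controlled second DMA transfer:

```python
class UnifiedModuleInterface:
    """Raises a DMA request signal for the second storage device."""

    def __init__(self):
        self.dma_request = False
        self.dma_response = False

    def request_second_dma(self):
        self.dma_request = True   # signal toward the application processor

class ApplicationProcessor:
    def service(self, umi, src, dst):
        if umi.dma_request:
            umi.dma_response = True   # DMA request response signal back to UMI
            dst[:] = src              # AP-controlled second DMA transfer
            umi.dma_request = False
```

The first DMA (toward the first storage device) is controlled by the module interface itself and is therefore omitted here.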

More
04-01-2018 publication date

VIRTUAL MACHINE MIGRATION IN RACK SCALE SYSTEMS

Number: US20180004558A1
Author: Das Sharma Debendra
Assignee:

Virtual machine (VM) migration in rack scale systems is disclosed. A source shared memory controller (SMC) of implementations includes a direct memory access (DMA) move engine to establish a first virtual channel (VC) over a link with a destination SMC, the destination SMC coupled to a destination node hosting a VM that is migrated to the destination node from a source node coupled to the source SMC, and transmit, via the first VC to the destination SMC, units of data corresponding to the VM and directory state metadata associated with each unit of data. The source SMC includes a demand request component to establish a second VC over the link, receive, via the second VC from the destination SMC, a demand request for one of the units of data corresponding to the VM, and transmit, via the second VC, the requested unit of data and corresponding directory state metadata.

1. A source shared memory controller (SMC) comprising:
a direct memory access (DMA) move engine to:
establish a first virtual channel (VC) over a link with a destination SMC, the destination SMC coupled to a destination node hosting a virtual machine (VM), wherein the VM is migrated to the destination node from a source node coupled to the source SMC, and wherein the link is to support memory semantics and an input/output (I/O) protocol; and
transmit, via the first VC to the destination SMC, units of data corresponding to the VM and directory state metadata associated with each unit of data; and
a demand request component to:
establish a second VC over the link with the destination SMC, the second VC separate from the first VC;
receive, via the second VC from the destination SMC, a demand request for one of the units of data corresponding to the VM; and
transmit, via the second VC to the destination SMC, the requested unit of data and corresponding directory state metadata for the requested unit of data.
2. The source SMC of claim 1, wherein the link uses at least one of a common set of pins ...
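The two-channel split in the abstract (bulk streaming on VC1, out-of-order demand fetches on VC2) can be sketched as below. All names are hypothetical; the lists stand in for the virtual channels, and directory-state metadata is reduced to a string tag:

```python
class SourceSmc:
    """Toy model: bulk VM migration on VC1, demand requests served on VC2."""

    def __init__(self, vm_units, directory_state):
        self.vm_units = vm_units          # unit id -> data
        self.directory = directory_state  # unit id -> directory-state metadata
        self.vc1 = []                     # first VC: bulk migration stream
        self.vc2 = []                     # second VC: demand-request replies

    def bulk_migrate(self):
        # DMA move engine: push every unit with its directory state on VC1.
        for uid, data in self.vm_units.items():
            self.vc1.append((uid, data, self.directory[uid]))

    def demand(self, uid):
        # Demand request component: answer immediately on the separate VC2,
        # so the running VM is not blocked behind the bulk copy on VC1.
        self.vc2.append((uid, self.vm_units[uid], self.directory[uid]))
```

Keeping the channels separate is the point of the design: a demand reply never queues behind the (potentially large) bulk stream.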

More
04-01-2018 publication date

Multi-Purpose Events for Notification and Sequence Control in Multi-core Processor Systems

Number: US20180004581A1
Assignee:

Techniques are provided for improving the performance of a constellation of coprocessors by hardware support for asynchronous events. In an embodiment, a coprocessor receives an event descriptor that identifies an event and a logic. The coprocessor processes the event descriptor to configure the coprocessor to detect whether the event has been received. Eventually a device, such as a CPU or another coprocessor, sends the event. The coprocessor detects that it has received the event. In response to detecting the event, the coprocessor performs the logic.

1. A method comprising:
receiving an event descriptor that identifies an event and a logic;
processing said event descriptor to configure a coprocessor of a central processing unit (CPU) to detect whether said event has been received;
sending said event by a device of said CPU;
detecting, by said coprocessor, that said event is received by said coprocessor;
responsive to said detecting, said coprocessor performing said logic.
2. The method of wherein said device of said CPU comprises a second coprocessor of said CPU.
3. The method of wherein said device of said CPU comprises said coprocessor.
4. The method of wherein receiving said event comprises receiving said event descriptor onto a descriptor queue.
5. The method of wherein:
said logic specifies using a particular resource;
the method further comprises said device of said CPU preparing said particular resource after said receiving said event descriptor.
6. The method of wherein said device of said CPU comprises at least one of: a direct memory access (DMA) controller or a DMA channel.
7. The method of wherein:
sending said event comprises sending said event from said device of said CPU to a global event distributor;
the method further comprises relaying said event from said global event distributor to a subset of coprocessors of a plurality of coprocessors of said CPU.
8. The method of wherein detecting that said event is received comprises said coprocessor executing an ...
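The descriptor-then-event sequence in the abstract (arm the coprocessor with a descriptor, later fire the event, then run the associated logic) can be modeled briefly. This is an illustrative sketch with invented names; real hardware would do the detection without software polling:

```python
class Coprocessor:
    """Toy model: event descriptors arm the coprocessor to run logic on events."""

    def __init__(self):
        self.descriptor_queue = []
        self.armed = {}           # event name -> logic callable
        self.results = []

    def receive_descriptor(self, event, logic):
        # Claim-4 style: the descriptor arrives onto a descriptor queue.
        self.descriptor_queue.append((event, logic))

    def process_descriptors(self):
        # Configure detection for each queued descriptor.
        while self.descriptor_queue:
            event, logic = self.descriptor_queue.pop(0)
            self.armed[event] = logic

    def send_event(self, event):
        # Called by a device of the CPU (another coprocessor, a DMA channel...).
        logic = self.armed.pop(event, None)
        if logic is not None:     # event detected -> perform the bound logic
            self.results.append(logic())
```

Note that the descriptor can arrive (and be processed) well before the event fires, which is what lets a device prepare a resource in between, as in claim 5.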

More