
Total found: 8. Displayed: 7.
Publication date: 04-04-2017

Network processor with distributed trace buffers

Number: US0009612934B2

A network processor includes a cache and several groups of processors for accessing the cache. A memory interconnect provides for connecting the processors to the cache via a plurality of memory buses. A number of trace buffers are also connected to the bus and operate to store information regarding commands and data transmitted across the bus. The trace buffers share a common address space, thereby enabling access to the trace buffers as a single entity.
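
The shared-address-space idea in this abstract can be mirrored in a tiny behavioral model. The sketch below is a hypothetical Python illustration (class and method names are assumptions, not taken from the patent): each processor group owns a bus-attached trace buffer, and all buffers are mapped into one flat address space so debug software can read them as a single entity.

```python
# Minimal behavioral sketch of per-group trace buffers exposed through one
# shared address space. All names here (TraceBuffer, TraceFabric, read_entry)
# are invented for illustration and do not come from the patent.

class TraceBuffer:
    def __init__(self, depth):
        self.depth = depth
        self.entries = []                      # captured (command, data) records

    def capture(self, command, data):
        if len(self.entries) < self.depth:     # simple fill-until-full policy
            self.entries.append((command, data))

class TraceFabric:
    """All trace buffers mapped back-to-back into one flat address space."""
    def __init__(self, groups, depth):
        self.depth = depth
        self.buffers = [TraceBuffer(depth) for _ in range(groups)]

    def read_entry(self, address):
        # One flat address selects both the buffer and the entry inside it,
        # so software sees the distributed buffers as a single entity.
        group, offset = divmod(address, self.depth)
        return self.buffers[group].entries[offset]

fabric = TraceFabric(groups=4, depth=64)
fabric.buffers[2].capture("STORE", 0xDEADBEEF)   # bus of group 2 captures a store
print(fabric.read_entry(2 * 64 + 0))             # read it through the shared space
```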

Publication date: 02-05-2013

NETWORK PROCESSOR WITH DISTRIBUTED TRACE BUFFERS

Number: US20130111073A1
Assignee: Cavium, Inc.

A network processor includes a cache and several groups of processors for accessing the cache. A memory interconnect provides for connecting the processors to the cache via a plurality of memory buses. A number of trace buffers are also connected to the bus and operate to store information regarding commands and data transmitted across the bus. The trace buffers share a common address space, thereby enabling access to the trace buffers as a single entity.

1. A system comprising: a cache; and a plurality of processor subsets configured to access the cache, each processor subset comprising: a group of processors; a bus, the group connected to the cache via the respective bus, the bus carrying commands and data between the cache and the processors; and a trace buffer connected to the bus between the group of processors and the cache, the trace buffer configured to store information regarding commands sent by the group of processors along the bus; the trace buffers at each of the processor subsets sharing a common address space to enable access to the trace buffers as a single entity.
2. The system of claim 1, further comprising a control circuit connected to the bus of each of the plurality of processor subsets, the control circuit configured to direct the command and data signals between the cache and the processors.
3. The system of claim 2, wherein the trace buffer of at least one of the plurality of processor subsets is connected to the bus between the respective processor and the control circuit.
4. The system of claim 2, wherein the trace buffer of at least one of the plurality of processor subsets is connected to the bus between the control circuit and the cache.
5. The system of claim 1, wherein the trace buffer is configured to issue a notification through at least one of a central interrupt unit (CIU) and a wire pulse in response to an event.
6. The system of claim 5, wherein the event is one or more of a captured command signal and an ...
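
Claims 5 and 6 add that a trace buffer can raise a notification, through a central interrupt unit (CIU) or a wire pulse, when an event such as a captured command occurs. A minimal sketch of that trigger behavior follows; the callback and all names are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch: a trace buffer fires a notification callback when a
# captured command matches a configured trigger. The callback stands in for
# a CIU interrupt or wire pulse; every name here is assumed for illustration.

class TriggeredTraceBuffer:
    def __init__(self, trigger_command, notify):
        self.entries = []
        self.trigger_command = trigger_command
        self.notify = notify                       # e.g. raise a CIU interrupt

    def capture(self, command, data):
        self.entries.append((command, data))
        if command == self.trigger_command:        # event: matching command captured
            self.notify(command, data)

buf = TriggeredTraceBuffer(
    trigger_command="STORE",
    notify=lambda cmd, data: print(f"CIU interrupt: captured {cmd} ({hex(data)})"),
)
buf.capture("LOAD", 0x100)    # no notification
buf.capture("STORE", 0x200)   # triggers the notification
```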

Publication date: 02-05-2013

MULTI-CORE INTERCONNECT IN A NETWORK PROCESSOR

Number: US20130111141A1
Assignee: Cavium, Inc.

A network processor includes multiple processor cores for processing packet data. In order to provide the processor cores with access to a memory subsystem, an interconnect circuit directs communications between the processor cores and the L2 Cache and other memory devices. The processor cores are divided into several groups, each group sharing an individual bus, and the L2 Cache is divided into a number of banks, each bank having access to a separate bus. The interconnect circuit processes requests to store and retrieve data from the processor cores across multiple buses, and processes responses to return data from the cache banks. As a result, the network processor provides high-bandwidth memory access for multiple processor cores.

1. A computer system on a computer chip comprising: an interconnect circuit; a plurality of memory buses, each bus connecting a respective group of plural processor cores to the interconnect circuit; and a cache divided into a plurality of banks, each bank being connected to the interconnect circuit via an individual bus; the interconnect circuit configured to distribute a plurality of requests received from the plural processor cores among the plurality of banks.
2. The system of claim 1, wherein the interconnect circuit transforms the requests by modifying an address component of the requests.
3. The system of claim 2, wherein the interconnect circuit performs a hash function on each of the requests, the hash function providing a pseudo-random distribution of the requests among the plurality of banks.
4. The system of claim 1, wherein the interconnect circuit is configured to maintain tags indicating a state of an L1 cache coupled to one of the plural processor cores, and wherein the interconnect circuit is further configured to direct tags in the plurality of requests to a plurality of channels, thereby processing the respective tags concurrently.
5. The system of claim 1, wherein the interconnect circuit further ...
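
Claims 2 and 3 describe picking a cache bank by transforming an address component of each request, with a hash giving a pseudo-random spread across banks. Below is a minimal sketch of that distribution step; the particular hash, line size, and function names are assumptions, since the claims do not fix them.

```python
# Illustrative sketch: an interconnect selects an L2 bank by hashing part of
# the request address. The mixing constant and 128-byte line size are
# arbitrary assumptions; the claims only require a pseudo-random spread.

NUM_BANKS = 8
LINE_BYTES = 128

def bank_for_address(addr: int) -> int:
    index = addr // LINE_BYTES                      # drop the cache-line offset bits
    mixed = (index * 0x9E3779B1) & 0xFFFFFFFF       # cheap multiplicative mixing
    return mixed % NUM_BANKS

for addr in (0x1000, 0x1080, 0x2040, 0x3F80):
    print(f"{addr:#06x} -> bank {bank_for_address(addr)}")
```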

Publication date: 26-03-2020

Managing low-level instructions and core interactions in multi-core processors

Number: US20200097292A1
Assignee: Marvell Asia Pte Ltd

Managing the messages associated with memory pages stored in a main memory includes receiving a message from outside the pipeline and providing at least one low-level instruction to the pipeline for performing an operation indicated by the received message. Executing instructions in the pipeline includes executing a series of low-level instructions, where that series includes a first set of low-level instructions converted from a first high-level instruction and a second set converted from a second high-level instruction, the second high-level instruction occurring after the first within a series of high-level instructions. Insertion of the low-level instruction provided for performing the operation into the series of low-level instructions is delayed so that the insertion position falls between the final low-level instruction converted from the first high-level instruction and the initial low-level instruction converted from the second high-level instruction.
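
The abstract describes holding back a message-triggered low-level instruction so that it is inserted between the micro-ops of two consecutive high-level instructions rather than in the middle of one. A toy sketch of that insertion rule follows; the tuple representation and function name are assumptions made for illustration.

```python
# Toy model: each low-level instruction is tagged with the high-level
# instruction it was converted from. An externally provided low-level
# instruction is delayed until the current high-level instruction's last
# micro-op has been emitted, so it never splits a high-level instruction.

def insert_at_boundary(stream, injected):
    """stream: list of (hl_index, uop). Returns a new list with `injected`
    placed at the first boundary between two high-level instructions."""
    out, pending = [], injected
    for i, (hl, uop) in enumerate(stream):
        out.append((hl, uop))
        next_hl = stream[i + 1][0] if i + 1 < len(stream) else None
        if pending is not None and next_hl != hl:   # boundary reached
            out.append(("injected", pending))
            pending = None
    return out

stream = [(1, "uop1a"), (1, "uop1b"), (2, "uop2a"), (2, "uop2b")]
print(insert_at_boundary(stream, "invalidate-page-entry"))
# -> injected op lands between the last uop of instruction 1 and the first of 2
```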

Publication date: 10-09-2015

MULTI-CORE NETWORK PROCESSOR INTERCONNECT WITH MULTI-NODE CONNECTION

Number: US20150254182A1
Assignee: Cavium, Inc.

According to at least one example embodiment, a method of data coherence is employed within a multi-chip system to enforce cache coherence between chip devices of the multi-node system. According to at least one example embodiment, a message is received by a first chip device of the multiple chip devices from a second chip device of the multiple chip devices. The message triggers invalidation of one or more copies, if any, of a data block; the data block is stored in a memory attached to, or residing in, the first chip device. Upon determining that one or more remote copies of the data block are stored in one or more other chip devices, other than the first chip device, the first chip device sends one or more invalidation requests to the one or more other chip devices for invalidating the one or more remote copies of the data block.

1. A method of providing data coherence among multiple chip devices of a multi-chip system, the method comprising: receiving, by a first chip device of the multiple chip devices, a message from a second chip device of the multiple chip devices, the message triggering invalidation of one or more copies, if any, of a data block, the data block stored in a memory attached to, or residing in, the first chip device; and upon determining that one or more remote copies of the data block are stored in one or more other chip devices, other than the first chip device, sending one or more invalidation requests to the one or more other chip devices for invalidating the one or more remote copies of the data block.
2. The method as recited in claim 1, wherein the message received includes a store command to update the data block with a modified copy of the data block.
3. The method as recited in claim 1, wherein the message received includes a request for an exclusive copy of the data block.
4. The method as recited in, further comprising determining that one or more remote copies of the data block are stored in the one or more other chip devices by checking a ...
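
The claimed flow, receive an invalidating message, determine which other chip devices hold remote copies, and forward invalidation requests to them, can be sketched as a small directory model. Everything below (the HomeNode class, its fields, and the message tuples) is an assumed illustration, not the patent's mechanism.

```python
# Illustrative directory sketch for a multi-chip system: the chip that owns a
# data block tracks which remote chips hold copies and forwards invalidation
# requests to them when an invalidating message (e.g. a store or a request
# for an exclusive copy) arrives.

class HomeNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.remote_holders = {}            # block address -> set of remote chip ids

    def handle_invalidating_message(self, block_addr, sender):
        requests = []
        for chip in self.remote_holders.get(block_addr, set()):
            if chip != sender:              # the sender obtains / keeps its own copy
                requests.append(("INVALIDATE", block_addr, chip))
        self.remote_holders[block_addr] = set()   # no remote copies remain afterwards
        return requests

home = HomeNode(node_id=0)
home.remote_holders[0x4000] = {1, 3}
print(home.handle_invalidating_message(0x4000, sender=2))
# -> invalidation requests for chips 1 and 3
```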

Publication date: 03-05-2016

Multi-core interconnect in a network processor

Number: US9330002B2
Assignee: Cavium LLC

A network processor includes multiple processor cores for processing packet data. In order to provide the processor cores with access to a memory subsystem, an interconnect circuit directs communications between the processor cores and the L2 Cache and other memory devices. The processor cores are divided into several groups, each group sharing an individual bus, and the L2 Cache is divided into a number of banks, each bank having access to a separate bus. The interconnect circuit processes requests to store and retrieve data from the processor cores across multiple buses, and processes responses to return data from the cache banks. As a result, the network processor provides high-bandwidth memory access for multiple processor cores.

Publication date: 08-08-2024

Mehrkernverknüpfung in einem Netzprozessor (Multi-core interconnect in a network processor)

Number: DE112012004551B4
Assignee: Marvell Asia Pte Ltd

Computer system on a computer chip, comprising: an interconnect circuit (244); a plurality of memory buses (225A-D), each bus connecting a respective group of multiple processor cores (220A-D) to the interconnect circuit (244); and a cache (130) divided into a plurality of banks (230A-D), each bank being connected to the interconnect circuit (244) via an individual bus (235A-D); the interconnect circuit (244) being configured to distribute a plurality of requests received from the multiple processor cores (220A-D) among the plurality of banks (230A-D); wherein the interconnect circuit (244) is configured to transform the requests by modifying an address component of the requests; and wherein the interconnect circuit (244) is configured to maintain tags indicating a state of an L1 cache coupled to one of the multiple processor cores, and wherein the interconnect circuit (244) is further configured to direct tags in the plurality of requests to a plurality of channels, whereby the respective tags are processed concurrently.
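
The last clause of this claim, directing tags in the requests to a plurality of channels so they are processed concurrently, can be illustrated with a small sketch; the address-based channel selection and the thread pool are assumptions standing in for parallel tag pipelines.

```python
# Illustrative sketch: tag lookups for incoming requests are spread across
# independent channels so tags from different requests can be checked in
# parallel. Channel selection and the 128-byte line size are assumptions.

from concurrent.futures import ThreadPoolExecutor

NUM_CHANNELS = 4

def channel_for(addr: int) -> int:
    return (addr // 128) % NUM_CHANNELS

def check_tag(addr: int) -> str:
    # Stand-in for looking up the duplicate L1 tag state for this address.
    return f"addr {addr:#06x} handled on tag channel {channel_for(addr)}"

requests = [0x1000, 0x1080, 0x2100, 0x3180]
with ThreadPoolExecutor(max_workers=NUM_CHANNELS) as pool:
    for line in pool.map(check_tag, requests):
        print(line)
```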
