Total found: 1503. Showing 100.

26-01-2012 publication date

System and method for exchanging information among exchange applications

Number: US20120023193A1
Assignee: FireStar Software Inc

A system and method for communicating transaction information includes a plurality of client application devices distributed among one or more local client application devices and one or more remote client application devices, and a plurality of gateways distributed among one or more local gateways and one or more remote gateways. The one or more local gateways are configured to communicate the transaction information with the one or more local client application devices, with which the one or more local gateways are associated, using one or more local data formats. The one or more remote gateways are configured to communicate the transaction information with the one or more remote client application devices, with which the one or more remote gateways are associated, using one or more remote data formats. The one or more local gateways are configured to transform the transaction information in the one or more local data formats into one or more common data formats that are shared with the one or more remote gateways. The one or more remote gateways are configured to transform the transaction information in the one or more common data formats into the one or more remote data formats. The transaction information from the one or more local client application devices is communicated to the one or more remote client application devices for completing a transaction.

23-05-2013 publication date

Packet processor and method for processing packets by means of internal control packets

Number: US20130128900A1
Author: Shugo Shiba
Assignee: Oki Electric Industry Co Ltd

A packet processor for processing an input packet includes an information generator for generating process control information for processing the input packet, an internal packet generator for receiving the input packet as a packet to be processed and adding the process control information to the packet to be processed to produce an internal packet, an internal packet processor for processing the internal packet supplied from the internal packet generator on the basis of the process control information added to the internal packet, and a packet transmitter for extracting an output packet from the internal packet processed by the internal packet processor to transmit the output packet. The packet processor can reduce the amount of communication between modules even when the packet processor includes plural modules.
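
The core idea is a shim header carried only inside the device. A minimal Go sketch under assumed field names (EgressPort, Priority, and Drop are illustrative, not from the patent):

    package main

    import "fmt"

    // ProcessControl models the process control information an information
    // generator attaches to an input packet (fields are illustrative).
    type ProcessControl struct {
        EgressPort uint8
        Priority   uint8
        Drop       bool
    }

    // InternalPacket carries the packet to be processed together with its
    // control header, so downstream modules need no extra lookups or messages.
    type InternalPacket struct {
        Ctrl    ProcessControl
        Payload []byte
    }

    func main() {
        ip := InternalPacket{
            Ctrl:    ProcessControl{EgressPort: 2, Priority: 5},
            Payload: []byte{0xde, 0xad, 0xbe, 0xef},
        }
        // An internal packet processor acts on the control header alone.
        if !ip.Ctrl.Drop {
            fmt.Printf("forward %d bytes to port %d at priority %d\n",
                len(ip.Payload), ip.Ctrl.EgressPort, ip.Ctrl.Priority)
        }
    }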

08-08-2013 publication date

HEADER REPLICATION IN ACCELERATED TCP (TRANSPORT CONTROL PROTOCOL) STACK PROCESSING

Number: US20130201998A1
Assignee:

In one embodiment, a method is provided. The method of this embodiment provides storing a packet header at a set of at least one page of memory allocated to storing packet headers, and storing the packet header and a packet payload at a location not in the set of at least one page of memory allocated to storing packet headers. 1.-20. (canceled) 21. A method comprising: performing packet processing on a packet; using a data movement circuit to place the payload corresponding to said packet into a read buffer substantially simultaneously while performing packet processing and substantially simultaneously with a direct memory access (DMA) operation of the data movement circuit. 22. The method of claim 21, wherein said performing packet processing is performed by a transport protocol driver. 23. The method of claim 21, wherein said using a data movement circuit to place said payload comprises programming the data movement circuit to write said payload to the read buffer. 24. The method of claim 21, wherein the data movement circuit comprises a DMA engine. 25. The method of claim 24, wherein the DMA engine resides on a chipset. 26. An apparatus comprising: circuitry to: perform packet processing on a packet; and, substantially simultaneously with the circuitry to perform packet processing, the circuitry to use a data movement circuit to place payload corresponding to said packet into a read buffer; wherein movement of said payload and said packet processing associated with said packet substantially simultaneously overlap and substantially simultaneously with a direct memory access (DMA) operation of the data movement circuit. 27. The apparatus of claim 26, wherein said circuitry to perform packet processing includes circuitry in a transport protocol driver. 28. The apparatus of claim 26, wherein said circuitry to use a data movement circuit to place said payload comprises circuitry to program the data movement circuit to write said payload to the read buffer. 29. The ...

19-09-2013 publication date

System and method for efficient shared buffer management

Number: US20130247071A1
Assignee: Juniper Networks Inc

A method for managing a shared buffer between a data processing system and a network. The method provides a communication interface unit for managing bandwidth of data between the data processing system and an external communicating interface connecting to the network. The method performs, by the communication interface unit, a combined de-queue and head drop operation on at least one data packet queue within a predefined number of clock cycles. The method also performs, by the communication interface unit, an en-queue operation on the at least one data packet queue in parallel with the combined de-queue operation and head drop operation within the predefined number of clock cycles.
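
The head-drop discipline can be modeled in a few lines. A toy Go sketch of the combined en-queue/head-drop step (the patent performs this in hardware within a fixed number of clock cycles; only the queue discipline is shown, and maxBytes is an assumed byte budget):

    package main

    import "fmt"

    // headDropEnqueue appends a packet to the queue, then drops from the head
    // until the byte budget is respected, mimicking a combined
    // en-queue/head-drop operation on one data packet queue.
    func headDropEnqueue(q [][]byte, pkt []byte, maxBytes int) [][]byte {
        q = append(q, pkt)
        total := 0
        for _, p := range q {
            total += len(p)
        }
        for total > maxBytes && len(q) > 1 {
            total -= len(q[0]) // head drop: discard the oldest packet first
            q = q[1:]
        }
        return q
    }

    func main() {
        var q [][]byte
        for i := 0; i < 5; i++ {
            q = headDropEnqueue(q, make([]byte, 400), 1000)
        }
        fmt.Println("packets kept:", len(q)) // only the newest packets survive
    }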

05-12-2013 publication date

Router and many-core system

Number: US20130322459A1
Author: HUI XU
Assignee: Toshiba Corp

According to one embodiment, a router includes a plurality of input ports and a plurality of output ports. The input ports receive a packet including control information indicating a type of access. Each of the input ports includes a first buffer and a second buffer which store the packet. The output ports output the packet. Each of the input ports selects at least one of the first buffer and the second buffer as a buffer in which the packet is stored on the basis of the control information and a state of the output port serving as a destination port of the packet.

06-02-2014 publication date

Priority Driven Channel Allocation for Packet Transferring

Number: US20140036930A1
Assignee: FutureWei Technologies Inc

A method comprising advertising to a second node a total allocation of storage space of a buffer, wherein the total allocation is less than the capacity of the buffer, wherein the total allocation is partitioned into a plurality of allocations, wherein each of the plurality of allocations is advertised as being dedicated to a different packet type, and wherein a credit status for each packet type is used to manage the plurality of allocations, receiving a packet of a first packet type from the second node, and storing the packet to the buffer, wherein the space in the buffer occupied by the first packet type exceeds the advertised space for the first packet type due to the packet.
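
A hedged Go sketch of the credit idea: the advertised per-type allocations sum to less than the real capacity, and a type may transiently exceed its advertised share, as the claim allows (type names and sizes are illustrative):

    package main

    import "fmt"

    // creditPool advertises per-type allocations whose sum (8) is less than
    // the real capacity (10), and tracks per-type usage.
    type creditPool struct {
        capacity   int
        advertised map[string]int
        used       map[string]int
    }

    // store accepts a packet as long as the real buffer has room, even if the
    // packet's type transiently exceeds its advertised allocation.
    func (c *creditPool) store(ptype string) bool {
        total := 0
        for _, u := range c.used {
            total += u
        }
        if total >= c.capacity {
            return false
        }
        c.used[ptype]++
        return true
    }

    func main() {
        p := &creditPool{
            capacity:   10,
            advertised: map[string]int{"posted": 4, "completion": 4},
            used:       map[string]int{},
        }
        for i := 0; i < 5; i++ {
            p.store("posted") // the fifth store exceeds the advertised 4 credits
        }
        fmt.Println("posted used:", p.used["posted"], "advertised:", p.advertised["posted"])
    }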

03-04-2014 publication date

METHOD AND APPARATUS FOR REDUCING POOL STARVATION IN A SHARED MEMORY SWITCH

Number: US20140092743A1
Author: Brown David A.
Assignee: MOSAID TECHNOLOGIES INCORPORATED

A switch includes a reserved pool of buffers in a shared memory. The reserved pool of buffers is reserved for exclusive use by an egress port. The switch includes pool select logic which selects a free buffer from the reserved pool for storing data received from an ingress port to be forwarded to the egress port. The shared memory also includes a shared pool of buffers. The shared pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the shared pool upon detecting no free buffer in the reserved pool. The shared memory may also include a multicast pool of buffers. The multicast pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the multicast pool upon detecting an IP Multicast data packet received from an ingress port. 1. A switch comprising: a plurality of ingress ports; a plurality of egress ports; a plurality of reserved pools of buffers in a shared memory, first and second reserved pools of the plurality of reserved pools of buffers reserved for respective first and second egress ports of the plurality of egress ports; a shared pool of buffers in the shared memory, the shared pool of buffers shared by the plurality of egress ports; and pool select logic configured to: i) select a buffer to allocate from the first reserved pool when there is at least one free buffer in the first reserved pool that includes all buffers of the switch reserved for the first egress port, the buffer in the first reserved pool configured to store data received from any ingress port of the ingress ports that is to be forwarded to the first egress port; and ii) otherwise select a free buffer in the shared pool when there is no free buffer in the first reserved pool to store the data. 2. The switch as claimed in claim 1, wherein if the data is stored in the buffer in the first reserved pool, the pool select logic is further configured to deallocate the buffer in the first reserved pool ...
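
A minimal Go sketch of the pool select order described above: reserved pool first, shared pool as fallback (the multicast pool is omitted; names are illustrative):

    package main

    import "fmt"

    type pool struct {
        name string
        free int
    }

    // selectBuffer mimics the pool select logic: prefer the egress port's
    // reserved pool; fall back to the shared pool only when the reserved
    // pool has no free buffer.
    func selectBuffer(reserved, shared *pool) (*pool, bool) {
        if reserved.free > 0 {
            reserved.free--
            return reserved, true
        }
        if shared.free > 0 {
            shared.free--
            return shared, true
        }
        return nil, false // no buffer available: drop or apply backpressure
    }

    func main() {
        reserved := &pool{name: "reserved:port1", free: 1}
        shared := &pool{name: "shared", free: 2}
        for i := 0; i < 3; i++ {
            if p, ok := selectBuffer(reserved, shared); ok {
                fmt.Println("allocated from", p.name)
            }
        }
    }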

02-01-2020 publication date

Network packet templating for gpu-initiated communication

Number: US20200004610A1
Assignee: Advanced Micro Devices Inc

Systems, apparatuses, and methods for performing network packet templating for graphics processing unit (GPU)-initiated communication are disclosed. A central processing unit (CPU) creates a network packet according to a template and populates a first subset of fields of the network packet with static data. Next, the CPU stores the network packet in a memory. A GPU initiates execution of a kernel and detects a network communication request within the kernel and prior to the kernel completing execution. Responsive to this determination, the GPU populates a second subset of fields of the network packet with runtime data. Then, the GPU generates a notification that the network packet is ready to be processed. A network interface controller (NIC) processes the network packet using data retrieved from the first subset of fields and from the second subset of fields responsive to detecting the notification.

05-01-2017 publication date

APPARATUS AND METHOD FOR STORING DATA TRAFFIC ON FLOW BASIS

Number: US20170005952A1
Assignee:

An apparatus and method for storing data traffic on a flow basis. The apparatus for storing data traffic on a flow basis includes a packet storage unit, a flow generation unit, and a metadata generation unit. The packet storage unit receives packets corresponding to data traffic, and temporarily stores the packets using queues. The flow generation unit generates flows by grouping the packets by means of a hash function using information about each of the packets as input, and stores the flows. The metadata generation unit generates metadata and index data corresponding to each of the flows, and stores the metadata and the index data. 1. An apparatus for storing data traffic on a flow basis, comprising: a packet storage unit configured to receive packets corresponding to data traffic, and to temporarily store the packets using queues; a flow generation unit configured to generate flows by grouping the packets by means of a hash function using information about each of the packets as input, and to store the flows; and a metadata generation unit configured to generate metadata and index data corresponding to each of the flows, and to store the metadata and the index data. 2. The apparatus of claim 1, wherein the flow generation unit comprises: a hash value generation unit configured to generate a hash value based on an IP address of each sender, an IP address of each recipient, a port address of the sender, and a port address of the recipient, which correspond to the packets; a generation unit configured to sort the packets according to their flows based on the hash values, to generate flows by grouping the packets, and to store the flows in flow buffers; and a flow storage unit configured to store the flows, stored in the flow buffers, on hard disks. 3. The apparatus of claim 2, wherein the flow storage unit stores each of the flows on the hard disks when a size of the flow stored in the flow buffers exceeds a specific value or the flow is terminated. 4. The apparatus of ...
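
A small Go sketch of the flow-grouping step: hash the sender/recipient IPs and ports that claim 2 names, and bucket packets by the hash value (FNV-1a is an arbitrary stand-in for the unspecified hash function):

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    type packet struct {
        srcIP, dstIP     string
        srcPort, dstPort uint16
        payload          []byte
    }

    // flowKey hashes the sender IP, recipient IP, and both port addresses,
    // the inputs the claim names for the hash value generation unit.
    func flowKey(p packet) uint64 {
        h := fnv.New64a()
        fmt.Fprintf(h, "%s|%s|%d|%d", p.srcIP, p.dstIP, p.srcPort, p.dstPort)
        return h.Sum64()
    }

    func main() {
        flows := map[uint64][]packet{} // flow buffers keyed by hash value
        pkts := []packet{
            {"10.0.0.1", "10.0.0.2", 1234, 80, []byte("a")},
            {"10.0.0.1", "10.0.0.2", 1234, 80, []byte("b")},
            {"10.0.0.3", "10.0.0.2", 5555, 80, []byte("c")},
        }
        for _, p := range pkts {
            k := flowKey(p)
            flows[k] = append(flows[k], p) // group packets of the same flow
        }
        fmt.Println("distinct flows:", len(flows))
    }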

04-01-2018 publication date

TECHNOLOGIES FOR SCALABLE PACKET RECEPTION AND TRANSMISSION

Number: US20180006970A1
Assignee:

Technologies for scalable packet reception and transmission include a network device. The network device is to establish a ring that is defined as a circular buffer and includes a plurality of slots to store entries representative of packets. The network device is also to generate and assign receive descriptors to the slots in the ring. Each receive descriptor includes a pointer to a corresponding memory buffer to store packet data. The network device is further to determine whether the NIC has received one or more packets and copy, with direct memory access (DMA) and in response to a determination that the NIC has received one or more packets, packet data of the received one or more packets from the NIC to the memory buffers associated with the receive descriptors assigned to the slots in the ring. 1. A network device to process packets, the network device comprising: one or more processors that include a plurality of cores; a network interface controller (NIC) coupled to the one or more processors; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the network device to: establish a ring in a memory of the one or more memory devices, wherein the ring is defined as a circular buffer and includes a plurality of slots to store entries representative of packets; generate and assign receive descriptors to the slots in the ring, wherein each receive descriptor includes a pointer to a corresponding memory buffer to store packet data; determine whether the NIC has received one or more packets; and copy, with direct memory access (DMA) and in response to a determination that the NIC has received one or more packets, packet data of the received one or more packets from the NIC to the memory buffers associated with the receive descriptors assigned to the slots in the ring. 2. The network device of claim 1, wherein to generate and assign the receive descriptors to the slots in the ...
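
A compact Go model of the descriptor ring, with the DMA copy from the NIC reduced to a plain copy into the buffer referenced by the current slot's descriptor (buffer sizes and type names are assumptions):

    package main

    import "fmt"

    // rxDescriptor points at the memory buffer that will hold packet data,
    // echoing the receive descriptors assigned to ring slots.
    type rxDescriptor struct {
        buf  []byte
        used int
    }

    type ring struct {
        slots []rxDescriptor
        head  int // next slot the "NIC" fills
    }

    // receive models the DMA copy from the NIC into the buffer referenced
    // by the descriptor in the current slot, then advances circularly.
    func (r *ring) receive(pkt []byte) {
        d := &r.slots[r.head]
        d.used = copy(d.buf, pkt)
        r.head = (r.head + 1) % len(r.slots)
    }

    func main() {
        r := &ring{slots: make([]rxDescriptor, 4)}
        for i := range r.slots {
            r.slots[i].buf = make([]byte, 2048) // one memory buffer per descriptor
        }
        r.receive([]byte("hello"))
        fmt.Println("slot 0 holds", r.slots[0].used, "bytes")
    }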

03-01-2019 publication date

MULTIPLEXING METHOD FOR SCHEDULED FRAMES IN AN ETHERNET SWITCH

Number: US20190007344A1
Author: Mangin Christophe
Assignee: Mitsubishi Electric Corporation

The method comprises the steps of: a) providing a plurality of memory buffers, associated to respective indexes of priority, each buffer comprising one queue of frames having a same index of priority, b) sorting the received frames in a chosen buffer according to their index of priority, c) in each buffer, sorting the frames according to their respective timestamps, for ordering the queue of frames in each buffer from the earliest received frame on top of the queue to the latest received frame at the bottom of the queue, and d) feeding the transmitting ports with each frame or block of frame to transmit, in an order determined according to the index of priority of the frame, as well as an order of the frame or of the block of frame in the queue associated to the index of priority of the frame. 1. A method for multiplexing data frames, in a packet-switched type network, at least a part of said network comprising one or several switches having: a first plurality of receiving ports, receiving said data frames, and a second plurality of transmitting ports, for transmitting at least blocks of said data frames, each frame including a data field comprising information related to an index of priority for transmitting the frame, wherein a clock is provided to said switches so as to apply a timestamp of reception of each frame in each receiving port, and a memory medium is further provided so as to store transitorily each received frame along with its timestamp, and wherein the method comprises the steps of: a) providing a plurality of memory buffers, associated to respective indexes of priority, each buffer comprising one queue of frames having a same index of priority, b) sorting the received frames in a chosen buffer according to their index of priority, c) in each buffer, sorting the frames according to their respective timestamps, for ordering the queue of frames in each buffer from the earliest received frame on top of the queue to the latest received frame at the ...
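
Steps a) through d) map directly onto per-priority queues kept in timestamp order. A minimal Go sketch with invented frame fields:

    package main

    import (
        "fmt"
        "sort"
    )

    type frame struct {
        priority int
        stamp    int64 // reception timestamp
        data     string
    }

    func main() {
        // a) one buffer (queue) per index of priority
        buffers := map[int][]frame{}
        rx := []frame{{1, 30, "late"}, {0, 20, "hi-b"}, {1, 10, "early"}, {0, 5, "hi-a"}}
        // b) sort received frames into the buffer matching their priority
        for _, f := range rx {
            buffers[f.priority] = append(buffers[f.priority], f)
        }
        // c) inside each buffer, order by timestamp, earliest on top
        for p := range buffers {
            q := buffers[p]
            sort.Slice(q, func(i, j int) bool { return q[i].stamp < q[j].stamp })
        }
        // d) feed the transmitting port: priority 0 first, then 1,
        // each queue drained in timestamp order
        for p := 0; p <= 1; p++ {
            for _, f := range buffers[p] {
                fmt.Printf("send prio=%d t=%d %s\n", f.priority, f.stamp, f.data)
            }
        }
    }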

08-01-2015 publication date

METHOD AND DEVICE FOR VIDEO PROCESSING

Number: US20150009284A1
Author: YIN Chengguo
Assignee:

A method and a device are provided for processing a video transmitted over a communication network. The video processing may include timing playback of an image frame of the video so as to maintain a continuous playback of the video regardless of network delays and other adverse factors. An ith image frame of the video may be fetched for processing from a buffer queue. A sampling interval of the ith image frame may be calculated. Further, a waiting time and a regulated waiting time of the ith image frame may be calculated. A playing interval of the ith image frame may be determined based on the regulated waiting time. If the time elapsed since start of playback of an (i−1)th image frame of the video is not shorter than the playing interval of the ith image frame, the ith image frame may be played at the current time point. 1. A method for processing a video in a network device that comprises a processor, the method comprising: fetching, by the processor, from a buffer queue, a current image frame of the video; calculating, by the processor, a sampling interval of the current image frame, wherein the sampling interval is a temporal difference between a first time point at which the current image frame is sampled and a second time point at which a previous image frame of the video is sampled, wherein the previous image frame is an image frame of the video that was fetched from the buffer queue immediately before the current image frame; calculating, by the processor, a waiting time of the current image frame, wherein the waiting time is a time period between the current image frame being added into the buffer queue and the current image frame being fetched from the buffer queue; calculating, by the processor, a regulated waiting time of the current image frame based on the waiting time of the current image frame and a regulated waiting time of the previous image frame; determining, by the processor, a playing interval of the current image frame based on the regulated ...

20-01-2022 publication date

Coalescing packets based on hints generated by network adapter

Number: US20220021629A1
Assignee:

A network node includes a network adapter and a host. The network adapter is coupled to a communication network. The host includes a processor running a client process and a communication stack, and is configured to receive packets from the communication network, and classify the received packets into respective flows that are associated with respective chunks in a receive buffer, to distribute payloads of the received packets among the chunks so that payloads of packets classified to a given flow are stored in a given chunk assigned to the given flow, and to notify the communication stack of the payloads in the given chunk, for transferring the payloads in the given chunk to the client process. 1. A network node, comprising: a network adapter coupled to a communication network; and a host comprising a processor running a client process and a communication stack; wherein the network adapter is configured to: receive packets from the communication network, and classify the received packets into respective flows that are associated with respective chunks in a receive buffer; distribute payloads of the received packets among the chunks so that payloads of packets classified to a given flow are stored in a given chunk assigned to the given flow; and notify the communication stack of the payloads in the given chunk, for transferring the payloads in the given chunk to the client process. 2. The network node according to claim 1, wherein the processor is further configured to run a driver that mediates between the network adapter and the communication stack, wherein the network adapter is configured to notify the communication stack of the payloads in the given chunk, via the driver. 3. The network node according to claim 2, wherein the driver is configured to construct a coalesced payload comprising two or more consecutive payloads in the given chunk, and to notify the communication stack of the coalesced payload. 4. The network node ...

14-01-2016 publication date

Data Matching Using Flow Based Packet Data Storage

Number: US20160014051A1
Assignee:

A system for matching data using flow based packet data storage includes a communications interface and a processor. A communications interface receives a packet between a source and a destination. The processor identifies a flow between the source and the destination based on the packet. The processor determines whether some of packet data of the packet indicates a potential match to data in storage using hashes. The processor then stores the data from the most likely data match and second most likely data match without a packet header in a block of memory in the storage based on the flow. 1. A system for matching data using flow based packet data storage, the system comprising: a communications interface that receives at least one data packet at a network device between a source and a destination, the at least one data packet including data and flow information; and a processor that: identifies a flow between the source and the destination based on the flow information in the at least one data packet; determines whether at least a portion of the data from the received at least one data packet indicates one or more potential matches to data in storage; retrieves a list of possible data matches; determines match sizes of at least two likely data matches by directly comparing packet bytes and matched data bytes; and stores the data from the at least one data packet without a packet header in a block of memory allocated for the flow, or generates a retrieve instruction for the data match depending on the match sizes. 2. The system of claim 1, wherein the processor moves the storage data between a fast memory and a slow memory. 3. The system of claim 1, wherein the flow comprises a session between the source and the destination. 4. The system of claim 1, wherein the processor allocates the block of the memory for the identified flow. 5. The system of claim 1, wherein the processor transmits the packet data. 6. The system of claim 1, wherein the block of ...

19-01-2017 publication date

Packet reception apparatus

Number: US20170019352A1
Assignee: NTT Electronics Corp

A reception buffer of a packet reception apparatus includes a plurality of storage addresses. A packet determination unit receives a packet from a plurality of lines including a main system and an auxiliary system. The packet determination unit obtains a storage address corresponding to a unique number assigned to the packet, and overwrites and stores data of the packet onto the storage address. A packet extraction/transmission unit extracts and transmits the data stored in the reception buffer.

03-02-2022 publication date

Programmatically configured switches and distributed buffering across fabric interconnect

Number: US20220038391A1
Assignee:

Programmable switches and routers are described herein for enabling their internal network fabric to be configured with a topology. In one implementation, a programmable switch is arranged in a network having a plurality of switches and an internal fabric. The programmable switch includes a plurality of programmable interfaces and a buffer memory component. Also, the programmable switch includes a processing component configured to establish each of the plurality of programmable interfaces to operate as one of a user-facing interface and a fabric-facing interface. Based on one or more programmable interfaces being established as one or more fabric-facing interfaces, the buffer memory component is configured to store packets received from a user-facing interface of an interconnected switch of the plurality of switches via one or more hops into the internal fabric. 1. A programmable switch arranged in a network having a plurality of switches and an internal fabric, the programmable switch comprising: a plurality of programmable interfaces, a buffer memory component, and a processing component configured to establish each of the plurality of programmable interfaces to operate as one of a user-facing interface and a fabric-facing interface, wherein, based on one or more programmable interfaces being established as one or more fabric-facing interfaces, the buffer memory component is configured to store packets received from a user-facing interface of an interconnected switch of the plurality of switches via one or more hops into the internal fabric. 2. The programmable switch of claim 1, wherein the network is arranged with a flat internal fabric and full-mesh configuration. 3. The programmable switch of claim 2, wherein the flat internal fabric includes one or more of Direct Attach Cables (DACs), Active Electrical Cables (AECs), Active Optical Cables (AOCs), passive optical cables, silicon photonics, and Printed Circuit Board ( ...

21-01-2021 publication date

OPEN AND SAFE MONITORING SYSTEM FOR AUTONOMOUS DRIVING PLATFORM

Number: US20210021442A1
Assignee:

In one embodiment, a system for operating an autonomous driving vehicle (ADV) includes a number of modules. These modules include at least a perception module to perceive a driving environment surrounding the ADV and a planning module to plan a path to drive the ADV to navigate the driving environment. The system further includes a bus coupled to the modules and a sensor processing module communicatively coupled to the modules over the bus. The sensor processing module includes a bus interface coupled to the bus, a sensor interface to be coupled to a first set of one or more sensors mounted on the ADV, a message queue to store messages published by the sensors, and a message handler to manage the messages stored in the message queue. The messages may be subscribed by at least one of the modules to allow the modules to monitor operations of the sensors. 1. A system for operating an autonomous driving vehicle (ADV), the system comprising: a plurality of modules, including a perception module to perceive a driving environment surrounding the ADV and a planning module to plan a path to control the ADV to navigate the driving environment; a bus coupled to the plurality of modules; and a sensor processing module coupled to the bus, wherein the sensor processing module comprises: a bus interface coupled to the bus, a sensor interface to be coupled to a first set of one or more sensors mounted on the ADV, a message queue to store a plurality of messages published by the sensors, and a message handler to manage the messages stored in the message queue, which are subscribed by at least one of the modules to allow the modules to monitor operations of the sensors. 2. The system of claim 1, wherein the message queue comprises a plurality of message buffers, each of the message buffers corresponding to one of the plurality of sensors. 3. The system of claim 1, wherein the message handler is configured to: in response to a first message received from a first ...

25-01-2018 publication date

Packet buffering

Number: US20180026902A1
Author: Xiaohu Tang, Zhuxun Wang

A first device acting as a buffer server in an Ethernet transmits a first buffer client querying packet from a port of the first device on which a distributed buffer function is enabled, receives a first buffer client registering packet from a second device through the port, and adds the second device into a distributed buffer group of the port. When the first device detects that a sum of sizes of packets entering the port and not transmitted reaches a preset first flow-splitting threshold in a first preset time period, the first device forwards a packet entering the port and not transmitted to a buffer client selected from the distributed buffer group of the port.

10-02-2022 publication date

NON-DISRUPTIVE IMPLEMENTATION OF POLICY CONFIGURATION CHANGES

Number: US20220045907A1
Assignee:

Techniques for non-disruptive configuration changes are provided. A packet is received at a network device, and the packet is buffered in a common pool shared by a first processing pipeline and a second processing pipeline, where the first processing pipeline corresponds to a first policy and the second processing pipeline corresponds to a second policy. A first copy of a packet descriptor for the packet is queued in a first scheduler based on processing the first copy of the packet descriptor with the first processing pipeline. A second copy of the packet descriptor is queued in a second scheduler based on processing the second copy of the packet descriptor with the second processing pipeline. Upon determining that the first policy is currently active on the network device, the first copy of the packet descriptor is dequeued from the first scheduler. 1. A method, comprising: receiving, at a network device, a packet; buffering the packet in a common pool shared by a first processing pipeline and a second processing pipeline, wherein the first processing pipeline corresponds to a first policy and the second processing pipeline corresponds to a second policy; queueing a first copy of a packet descriptor for the packet in a first scheduler based on processing the first copy of the packet descriptor with the first processing pipeline; queueing a second copy of the packet descriptor in a second scheduler based on processing the second copy of the packet descriptor with the second processing pipeline; and upon determining that the first policy is currently active on the network device, dequeueing the first copy of the packet descriptor from the first scheduler. 2. The method of claim 1, further comprising: retrieving the packet from the common pool, based on the first copy of the packet descriptor; and processing the packet based on the first policy. 3. The method of claim 1, further comprising: receiving an instruction to activate the second policy on the ...

04-02-2021 publication date

Packet Processing Device and Packet Processing Method

Number: US20210034559A1
Assignee:

A packet processing device includes: a line adapter configured to receive packets from a communication line; a packet combining unit configured to generate a combined packet by combining a plurality of packets received from the communication line; a packet memory configured to store packets received from the communication line; and a combined packet transferring unit configured to DMA transfer the combined packet generated by the packet combining unit to the packet memory. The combined packet transferring unit writes information of an address of first data of each packet inside the combined packet on the packet memory into a descriptor that is a data area on a memory set in advance. 1.-6. (canceled) 7. A packet processing device comprising: a line adapter configured to receive a plurality of packets from a communication line; a packet combiner configured to generate a combined packet by combining the plurality of packets received from the communication line; a packet memory configured to store packets received from the communication line; and a combined packet transferor configured to: direct memory access (DMA) transfer the combined packet to the packet memory or write the combined packet to the packet memory using a processor; and write information of an address of each of the plurality of packets in the combined packet into a descriptor that is a data area of a memory, wherein the data area of the memory is set in advance. 8. The packet processing device according to claim 7, wherein the combined packet transferor is further configured to write a received data size indicating a packet length of each of the plurality of packets in the combined packet into the descriptor. 9. The packet processing device according to claim 7, further comprising one or more processors configured to: read packets stored in the packet memory based on information written in the descriptor; and perform processing of the packets read from the packet memory. 10. The packet processing device ...

04-02-2021 publication date

SYSTEMS AND METHODS FOR EFFICIENTLY STORING A DISTRIBUTED LEDGER OF RECORDS

Number: US20210036970A1
Assignee:

Systems and methods for efficiently storing a distributed ledger of records. In an exemplary aspect, a method may include generating a record comprising a payload and a header, wherein the payload stores a state of a data object associated with a distributed ledger and the header stores a reference to state information in the payload. The method may further comprise including the record in a trunk filament comprising a first plurality of records indicative of historic states of the data object, wherein the trunk filament is part of a first lifeline. The method may include identifying a jet of the distributed ledger, wherein the jet is a logical structure storing a second lifeline with a second plurality of records. In response to determining that the first plurality of records is related to the second plurality of records, the method may include storing the first lifeline in the jet. 1. A method for storing a distributed ledger of records, the method comprising: generating a record comprising a payload and a header, wherein the payload stores a state of a data object associated with a distributed ledger and the header stores a reference to state information in the payload; including the record in a trunk filament comprising a first plurality of records indicative of historic states of the data object, wherein the trunk filament is part of a first lifeline that further comprises one or more branch filaments comprising auxiliary information associated with the data object; identifying a jet of the distributed ledger, wherein the jet is a logical structure storing several lifelines with a second plurality of records indicative of historic states of a plurality of data objects; and in response to determining that the first plurality of records is related to the second plurality of records, storing the first lifeline in the jet. 2. The method of claim 1, wherein the plurality of data objects can be reordered between a plurality of jets in accordance with a statistic of ...

09-02-2017 publication date

COMMUNICATION APPARATUS

Number: US20170041253A1
Author: NISHIKAWA Kouichi
Assignee:

A packet communication apparatus is configured to relay packets transmitted and received between information processing apparatuses. The packet communication apparatus includes: a network interface connectable to a network; a CPU to be a destination of at least one of a plurality of packets to be received through the network interface; a first buffer configured to hold the packets destined to the CPU in order to output the packets to the CPU; a second buffer having a plurality of planes and configured to hold copies of the packets destined to the CPU held in the first buffer in one of the plurality of planes; and a reception history controller configured to store a copy of a packet to a specified plane of the second buffer or to save copies of packets held in the second buffer to another storage area based on usage of the first buffer. 1. A packet communication apparatus configured to relay packets transmitted and received between information processing apparatuses, the packet communication apparatus comprising: a network interface connectable to a network; a CPU to be a destination of at least one of a plurality of packets to be received through the network interface; a first buffer configured to hold the packets destined to the CPU in order to output the packets to the CPU; a second buffer having a plurality of planes and configured to hold copies of the packets destined to the CPU held in the first buffer in one of the plurality of planes; and a reception history controller configured to store a copy of a packet to a specified plane of the second buffer or to save copies of packets held in the second buffer to another storage area based on usage of the first buffer. 2. The packet communication apparatus according to claim 1, further comprising a packet reception management unit configured to: monitor the usage of the first buffer; instruct the reception history controller to change a plane of the second buffer to store copies of packets when the usage reaches a first ...

11-02-2016 publication date

System and Method for Photonic Networks

Number: US20160044393A1
Author: Alan Frank Graves

In one embodiment, a photonic switching fabric includes a first stage including a plurality of first switches and a second stage including a plurality of second switches, where the second stage is optically coupled to the first stage. The photonic switching fabric also includes a third stage including a plurality of third switches, where the third stage is optically coupled to the second stage, where the photonic switching fabric is configured to receive a packet having a destination address, where the destination address includes a group destination address, and where the second stage is configured to be connected in accordance with the group destination address.

07-02-2019 publication date

System and method for implementing virtualized network functions with a shared memory pool

Number: US20190042294A1
Assignee: Intel Corp

A method and system for implementing virtualized network functions (VNFs) in a network. Physical resources of the network are abstracted into virtual resource pools and shared by virtual network entities. A virtual channel is set up for communicating data between a first VNF and a second VNF. A memory pool is allocated for the virtual channel from a set of memory pools. New interfaces are provided for communication between VNFs. The new interfaces may allow pushing and pulling payloads or data units from one VNF to another. The data may be stored in a queue in the pooled memory allocated for the VNFs/services. Certain processing may be performed before the data is stored in the memory pool.

07-02-2019 publication date

Methods and arrangements to accelerate array searches

Number: US20190044890A1
Assignee: Intel Corp

Logic may store at least a portion of an incoming packet at a memory location in a host device in response to a communication from the host device. Logic may compare the incoming packet to a digest in an entry of a primary array. When the incoming packet matches the digest, logic may retrieve a full entry from the secondary array and compare the full entry with the first incoming packet. When the full entry matches the first incoming packet, logic may store at least a portion of the first incoming packet at the memory location. And, in the absence of a match between the first incoming packet and the digest or full entry, logic may compare the first incoming packet to subsequent entries in the primary array to identify a full entry in the secondary array that matches the first incoming packet.
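
A hedged Go sketch of the two-level lookup: compare a cheap digest in the primary array first, and pay for the full comparison against the secondary array only on a digest hit (the one-byte XOR digest is our stand-in; the patent does not specify how the digest is computed):

    package main

    import "fmt"

    // digest is a cheap summary (here: one byte XOR) compared before paying
    // for the full-entry comparison against the secondary array.
    func digest(key []byte) byte {
        var d byte
        for _, b := range key {
            d ^= b
        }
        return d
    }

    func lookup(key []byte, primary []byte, secondary [][]byte) int {
        want := digest(key)
        for i, dg := range primary {
            if dg != want {
                continue // digest miss: skip the expensive comparison
            }
            // digest hit: retrieve the full entry and verify it
            if string(secondary[i]) == string(key) {
                return i
            }
        }
        return -1 // no match in the primary array
    }

    func main() {
        secondary := [][]byte{[]byte("flow-a"), []byte("flow-b")}
        primary := []byte{digest(secondary[0]), digest(secondary[1])}
        fmt.Println("match at index", lookup([]byte("flow-b"), primary, secondary))
    }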

26-02-2015 publication date

TRAFFIC AND LOAD AWARE DYNAMIC QUEUE MANAGEMENT

Number: US20150055456A1
Assignee: VMWARE, INC.

Some embodiments provide a queue management system that efficiently and dynamically manages multiple queues that process traffic to and from multiple virtual machines (VMs) executing on a host. This system manages the queues by (1) breaking up the queues into different priority pools with the higher priority pools reserved for particular types of traffic or VM (e.g., traffic for VMs that need low latency), (2) dynamically adjusting the number of queues in each pool (i.e., dynamically adjusting the size of the pools), (3) dynamically reassigning a VM to a new queue based on one or more optimization criteria (e.g., criteria relating to the underutilization or overutilization of the queue). 1. For an electronic device that comprises a network interface card (NIC) with a plurality of queues for temporarily storing data traffic through the NIC, a method of managing the queues, the method comprising: assigning a subset of data traffic to a set of queues; monitoring the subset of data traffic through the set of queues; and based on the monitoring, modifying the set of queues. 2. The method of claim 1, wherein modifying the set of queues comprises assigning a new queue to the set of queues when data traffic through at least a subset of the set of queues exceeds a maximum threshold amount. 3. The method of claim 2, wherein the subset of queues includes all the queues in the set of queues. 4. The method of claim 2, wherein the subset of queues does not include all the queues in the set of queues. 5. The method of claim 1, wherein modifying the set of queues comprises removing a particular queue from the set of queues when the data traffic through the particular queue is below the minimum threshold amount. 6. The method of claim 1, wherein modifying the set of queues comprises removing a particular queue from the set of queues when the data traffic through the particular queue is below the minimum threshold amount for a duration of time. 7. The method of claim 1, wherein ...

26-02-2015 publication date

Traffic and load aware dynamic queue management

Number: US20150055467A1
Assignee: VMware LLC

Some embodiments provide a queue management system that efficiently and dynamically manages multiple queues that process traffic to and from multiple virtual machines (VMs) executing on a host. This system manages the queues by (1) breaking up the queues into different priority pools with the higher priority pools reserved for particular types of traffic or VM (e.g., traffic for VMs that need low latency), (2) dynamically adjusting the number of queues in each pool (i.e., dynamically adjusting the size of the pools), (3) dynamically reassigning a VM to a new queue based on one or more optimization criteria (e.g., criteria relating to the underutilization or overutilization of the queue).

03-03-2022 publication date

DATA LINK LAYER DEVICE AND PACKET ENCAPSULATION METHOD THEREOF

Number: US20220070120A1
Author: JIN Jie, Li Junping, Li Ranyue

A data link layer device and a packet encapsulation method are provided. The data link layer device includes a first and a second first-in-first-out (FIFO) module. The first FIFO module receives and stores multiple first data from an upper-layer module, and removes data gaps from the first data to store the first data in a continuous form. When the first FIFO module is not empty, the first FIFO module generates data of different lengths based on the current amount of data stored temporarily in the first FIFO module and a preset data length. When the data queue of the second FIFO module has enough space to receive the first data, the first FIFO module transfers the first data to the second FIFO module, and the first FIFO module transfers a header including the data length to a header queue of the second FIFO module.

22-05-2014 publication date

Pipeline for handling network packets

Number: US20140140342A1
Author: Charles E. Narad
Assignee: Individual

Methods and apparatus relating to a tightly coupled scalar and Boolean processor are described. In an embodiment, a Boolean unit may include a result vector subunit. The result vector subunit may be controlled by an instruction flow that is managed by a scalar unit. Other embodiments are also disclosed.

03-03-2016 publication date

COMMUNICATION SYSTEM AND ELECTRONIC COMPONENT MOUNTING DEVICE

Number: US20160065504A1
Assignee: Fuji Machine Mfg. Co., Ltd.

A communication system in which a transmission line performs data transmission using multiplexing. Data extraction sections of an optical wireless device extract data output from multiple electric devices based on a start bit of the respective data, and output the data to multiple first buffers which are disposed corresponding to the electric devices. A control section selects any one of the first buffers, and outputs the data from the first buffers to a second buffer. A control section adds an identification information ID to the data indicating from which electric device the data are obtained, and stores the data in the second buffer. The data and the identification information ID of the second buffer are input to a multiplexing device from an input port. The multiplexing device multiplexes the data together with other data as a frame. 1. A communication system comprising: multiple electric devices that output actual data in which a start bit indicating data starting is set; a data extraction section that is connected to the multiple electric devices, and that extracts the actual data based on the start bit; multiple first buffers that are disposed corresponding to each of the multiple electric devices, and that accumulate the actual data extracted by the data extraction section corresponding to the multiple electric devices; a second buffer that sequentially selects one of the multiple first buffers, and that accumulates the actual data accumulated in the selected first buffer together with identification information of the electric device which outputted the actual data; and a transmitter-side multiplexing device that inputs the actual data and the identification information from the second buffer and transmits the actual data and the identification information as multiplexed data. 2. The communication system according to claim 1, further comprising: a receiver-side multiplexing device that has multiple output ports, and that outputs the actual data and the ...

01-03-2018 publication date

SELF TUNING BUFFER ALLOCATION IN A SHARED-MEMORY SWITCH

Number: US20180063038A1
Assignee: DELL PRODUCTS L.P.

An N-port, shared-memory switch allocates a shared headroom buffer pool (Ps) for a priority group (PG). Ps is smaller than a worst case headroom buffer pool (Pw), where Pw equals the sum of worst case headrooms corresponding to each port-priority tuple (PPT) associated with the PG. Each worst case headroom comprises headroom required to buffer worst case, post-pause, traffic received on that PPT. Subject to a PPT maximum, each PPT may consume Ps as needed. Because rarely will all PPTs simultaneously experience worst case traffic, Ps may be significantly smaller than Pw, e.g., Ps < (Pw/M) where M >= 2. Ps may be size-adjusted based on utilization of Ps, without halting traffic to or from the switch. If Ps utilization exceeds an upper utilization threshold, Ps may be increased, subject to a maximum threshold (Pmax). Conversely, if utilization falls below a lower utilization threshold, Ps may be decreased. 1. A switching method for a shared-memory switch comprising a plurality of ports supporting a plurality of priority levels, the method comprising: for each port-priority tuple (PPT) associated with a priority group (PG), estimating a worst case headroom (Hw) for a connection to a peer, wherein the shared-memory switch supports a pause command to suspend traffic via the connection and wherein each worst case headroom is indicative of a headroom buffer required to buffer PPT traffic received from the peer via the connection; allocating a shared headroom buffer pool (Ps) shared by all PPTs in the priority group, wherein Ps is less than a worst case headroom buffer pool (Pw), and Pw is equal to the sum of each Hw corresponding to a PPT within the priority group; and subject to a PPT maximum, permitting any particular PPT in the priority group to consume Ps as needed for traffic received after sending a pause command. 2. The method of claim 1, wherein the PPT maximum for a particular PPT comprises the worst case headroom, Hw, for the particular PPT. 3. The ...
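
A toy Go sketch of the self-tuning rule for Ps: grow on high utilization, shrink on low, clamped to assumed bounds (the 0.9/0.3 thresholds and the doubling/halving steps are illustrative, not from the patent):

    package main

    import "fmt"

    // adjust resizes the shared headroom pool Ps from its observed utilization:
    // grow above the upper threshold, shrink below the lower one, within bounds.
    func adjust(ps, usedBytes, psMin, psMax int) int {
        util := float64(usedBytes) / float64(ps)
        if util > 0.9 && ps < psMax {
            ps *= 2
            if ps > psMax {
                ps = psMax
            }
        } else if util < 0.3 && ps > psMin {
            ps /= 2
            if ps < psMin {
                ps = psMin
            }
        }
        return ps
    }

    func main() {
        ps := 64 << 10 // start Ps at 64 KiB; Pw would be several times larger
        ps = adjust(ps, 62<<10, 16<<10, 256<<10)
        fmt.Println("after heavy use, Ps =", ps)
        ps = adjust(ps, 8<<10, 16<<10, 256<<10)
        fmt.Println("after light use, Ps =", ps)
    }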

02-03-2017 publication date

Systems and methods for performing packet reorder processing

Number: US20170063733A1
Assignee: Cisco Technology Inc

A method for performing packet reorder processing is disclosed. The method comprises receiving, at a packet receive buffer, a data packet, the packet receive buffer comprising a plurality of N-sized pages. The method also comprises storing the received data packet across a plurality of pages of the packet receive buffer. The method further comprises writing, at storage of each of the plurality of pages, a pointer to a next page in which a subsequent portion of the data packet is stored. The method also comprises transmitting the pointer to a ring buffer. The method further comprises calculating an offset into the ring based on a sequence number of the corresponding packet, and storing the pointer to a first page at the calculated offset of the ring buffer.
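
The offset calculation is simple modular arithmetic on the sequence number, which is what lets out-of-order arrivals land in in-order slots. A minimal Go sketch with an assumed power-of-two ring size:

    package main

    import "fmt"

    const ringSize = 8 // assumed power of two, so the mask below works

    // slotFor computes the ring offset from the packet sequence number, so
    // pointers are stored in sequence order no matter the arrival order.
    func slotFor(seq uint32) uint32 {
        return seq & (ringSize - 1)
    }

    func main() {
        ring := make([]string, ringSize) // holds pointers to first pages
        for _, p := range []struct {
            seq  uint32
            page string
        }{{5, "page@0x5000"}, {3, "page@0x3000"}, {4, "page@0x4000"}} {
            ring[slotFor(p.seq)] = p.page // out-of-order arrival, in-order slot
        }
        fmt.Println(ring[slotFor(3)], ring[slotFor(4)], ring[slotFor(5)])
    }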

17-03-2022 publication date

Communication control apparatus, communication system, communication control method, and storage medium

Number: US20220086101A1
Assignee: Honda Motor Co Ltd

There is provided a communication control apparatus comprising a processor. The processor receives, from a first mobile communication apparatus, a request for permission of data transmission to a second mobile communication apparatus, the second mobile communication apparatus including a buffer memory to store received data. The processor determines whether to permit the data transmission based on a free capacity of the buffer memory and a reserved capacity indicated by a reservation setting of the buffer memory. The processor updates, when the data transmission is determined to be permitted, the reservation setting such that a capacity for received data corresponding to the data transmission is added to the reserved capacity. The processor transmits, when the data transmission is determined to be permitted, a response indicating that the data transmission is permitted to the first mobile communication apparatus.
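
A minimal Go sketch of the admission rule implied above: grant a transmission only when free capacity minus already-reserved capacity covers it, then extend the reservation (field names are ours, not the patent's):

    package main

    import "fmt"

    // receiverBuffer tracks the second apparatus's buffer state as seen by
    // the communication control apparatus.
    type receiverBuffer struct {
        free     int // current free capacity of the buffer memory
        reserved int // capacity already promised to granted transmissions
    }

    // requestPermission grants a transmission only when the unreserved free
    // space can hold it, then adds the size to the reserved capacity.
    func (b *receiverBuffer) requestPermission(size int) bool {
        if b.free-b.reserved < size {
            return false
        }
        b.reserved += size
        return true
    }

    func main() {
        b := &receiverBuffer{free: 1000, reserved: 700}
        fmt.Println("grant 200 bytes:", b.requestPermission(200)) // true
        fmt.Println("grant 200 bytes:", b.requestPermission(200)) // false: only 100 unreserved
    }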

08-03-2018 publication date

SYSTEMS AND METHODS FOR STORING MESSAGE DATA

Number: US20180069810A1
Author: Hafri Younes
Assignee:

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are described for storing message data in a PubSub system. In certain examples, messages are received from a plurality of publishers for a plurality of channels. The messages are stored in a writable portion of a respective buffer for the channel according to an order, wherein messages in the writable portion of the buffer are inaccessible to subscribers. The method may also include advancing a pointer demarcating a boundary between the writable portion and a readable portion of the buffer such that the message is in the readable portion after the pointer has advanced. 1. A method, comprising: receiving messages on each of a plurality of channels; storing, by one or more computer processors, each message of each of the plurality of channels in a writable portion of a respective buffer for the channel according to an order, wherein messages in the writable portion of the buffer are inaccessible to subscribers; and advancing, by the one or more computer processors, a pointer demarcating a boundary between the writable portion and a readable portion of the buffer such that the message is in the readable portion after the pointer has advanced. 2. The method of claim 1, further comprising: allowing one or more subscribers to read from the readable portion of one or more of the buffers during the storing. 3. The method of claim 1, wherein the pointer is advanced in an atomic operation. 4. The method of claim 3, wherein the atomic operation cannot be interrupted by another process or thread of execution. 5. The method of claim 1, wherein storing each message comprises: storing a length of the message at a first location in the writable portion; and storing the message in the writable portion following the first location. 6. The method of claim 1, wherein advancing the pointer demarcating the boundary between the writable portion and the readable portion of the buffer comprises: ...
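
A single-goroutine Go model of the writable/readable boundary: messages become visible to readers only when the boundary pointer advances, here with one atomic store standing in for the atomic operation of claim 3 (a real implementation would append into preallocated storage to stay safe under concurrent readers):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // channelBuffer holds one channel's messages; indexes below `readable`
    // form the readable portion, everything at or above it is still writable.
    type channelBuffer struct {
        msgs     []string
        readable atomic.Int64
    }

    // publish stores a message in the writable portion, then advances the
    // boundary with one atomic store so subscribers can see it.
    func (c *channelBuffer) publish(m string) {
        c.msgs = append(c.msgs, m)
        c.readable.Store(int64(len(c.msgs)))
    }

    // read returns only the readable portion; a half-stored message is never
    // exposed because the boundary advances after the store completes.
    func (c *channelBuffer) read() []string {
        return c.msgs[:c.readable.Load()]
    }

    func main() {
        var c channelBuffer
        c.publish("hello")
        c.publish("world")
        fmt.Println(c.read())
    }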

26-03-2015 publication date

HEADER REPLICATION IN ACCELERATED TCP (TRANSPORT CONTROL PROTOCOL) STACK PROCESSING

Number: US20150085873A1
Assignee: Intel Corporation

In one embodiment, a method is provided. The method of this embodiment provides storing a packet header at a set of at least one page of memory allocated to storing packet headers, and storing the packet header and a packet payload at a location not in the set of at least one page of memory allocated to storing packet headers. 1. A method comprising: performing packet processing on a plurality of packets including extracting payloads from said plurality of packets; using a direct memory access (DMA) operation of a data movement circuit to place the payloads corresponding to said plurality of packets into a read buffer substantially simultaneously while performing packet processing. 2. The method of claim 1, wherein said performing packet processing is performed by a transport protocol driver. 3. The method of claim 1, wherein using said DMA operation of said data movement circuit to place said payloads comprises programming the data movement circuit to write said payloads to the read buffer with a processor. 4. The method of claim 1, wherein the data movement circuit comprises a DMA engine. 5. The method of claim 4, wherein the DMA engine resides on a chipset. 6. An apparatus comprising: circuitry to: perform packet processing on a plurality of packets including extracting payloads from said plurality of packets; and, substantially simultaneously with the circuitry to perform packet processing, the circuitry to use a direct memory access (DMA) operation of a data movement circuit to place said payload corresponding to said plurality of packets into a read buffer. 7. The apparatus of claim 6, wherein said circuitry to perform packet processing includes circuitry in a transport protocol driver. 8. The apparatus of claim 6, wherein said circuitry to use said DMA operation of said data movement circuit to place said payloads comprises circuitry to program the data movement circuit to write said payloads to the read buffer. 9. The apparatus of claim 6, wherein the data ...

23-03-2017 publication date

RECEPTION DEVICE, LINE NUMBER RECOGNITION CIRCUIT, LINE NUMBER RECOGNITION METHOD, AND PROGRAM

Number: US20170085503A1
Author: KATAOKA Hiroaki
Assignee: NEC Corporation

A reception device that communicates with a transmission device which divides a frame into a plurality of divided frames and transmits the divided frames to be distributed through a plurality of lines. The device includes a reception unit that receives the divided frames; a restoration unit that restores the frame by combining the received divided frames; and a recognition unit that recognizes, according to control information included in the divided frames, the number of lines used by the transmission device which transmitted the divided frames. 1. A reception device that communicates with a transmission device which divides a frame into a plurality of divided frames and transmits the divided frames to be distributed through a plurality of lines, the device comprising: a reception unit that receives the divided frames; a restoration unit that restores the frame by combining the received divided frames; and a recognition unit that recognizes, according to control information included in the divided frames, the number of lines used by the transmission device which transmitted the divided frames. 2. The reception device in accordance with claim 1, further comprising: a first buffer unit that includes one or more buffers having a first attribute and storing the received divided frames; a second buffer unit that includes one or more buffers having a second attribute and storing the received divided frames; and a control unit that: determines, according to the control information included in the divided frames stored in the first buffer unit, validity of the order in which the divided frames were received, and acquires each divided frame, which was received in a correct order, from the first buffer unit; wherein when the control unit detects a divided frame which was received in an incorrect order, the recognition unit controls the control unit to continuously store each received divided frame stored in the buffer unit that stores the relevant frame until the total number ...

Publication date: 02-04-2015

FLUCTUATION ABSORBING DEVICE, COMMUNICATION DEVICE, AND CONTROL PROGRAM

Number: US20150092787A1
Author: TANGE Masahiko
Assignee: ResoNetz LLC

A fluctuation absorbing device includes a buffer for temporarily storing packets, a pulse generating section for generating a pulse at the same interval as the transmitting interval of the packets, a fluctuation time derivation section for deriving a fluctuation time of delay of the packets based on the pulse, a maximum fluctuation time estimate section for estimating a maximum fluctuation time based on a plurality of fluctuation times derived in the fluctuation time derivation section, and a setting section for setting the data storage capacity of the buffer based on the maximum fluctuation time.

1. A fluctuation absorbing device comprising:
a buffer for temporarily storing packets;
a pulse generating section for generating a pulse at a same interval as a transmitting interval of the packets;
a fluctuation time derivation section for deriving a fluctuation time of delay of the packets based on the pulse;
a maximum fluctuation time estimate section for estimating a maximum fluctuation time based on a plurality of fluctuation times derived in the fluctuation time derivation section; and
a setting section for setting data storage capacity of the buffer based on the maximum fluctuation time.
2. A fluctuation absorbing device according to claim 1, wherein the fluctuation time derivation section measures a time from a generating timing of the pulse to a receiving timing of the packets, and derives the fluctuation time based on a measured time.
3. A fluctuation absorbing device according to claim 1, comprising a delay time derivation section for deriving a delay time of the packets on a transmission line, wherein the setting section sets the data storage capacity of the buffer based on the maximum fluctuation time and the delay time.
4. A fluctuation absorbing device according to claim 1, wherein the maximum fluctuation time estimate section calculates a standard deviation of a plurality of fluctuation times, and estimates the maximum fluctuation ...
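Claim 4's estimate can be read as a mean-plus-k-sigma bound over measured jitter samples. A small C sketch under that reading; the 3-sigma margin and the microsecond units are assumptions, not from the text:

```c
#include <math.h>
#include <stddef.h>

/* Estimate the maximum fluctuation (jitter) from n measured fluctuation
 * times, here as mean + 3*sigma. Times are in microseconds. */
static double estimate_max_fluctuation(const double *t, size_t n)
{
    if (n == 0)
        return 0.0;
    double mean = 0.0, var = 0.0;
    for (size_t i = 0; i < n; i++) mean += t[i];
    mean /= (double)n;
    for (size_t i = 0; i < n; i++) var += (t[i] - mean) * (t[i] - mean);
    var /= (double)n;
    return mean + 3.0 * sqrt(var);   /* 3-sigma margin: an assumption */
}

/* Buffer capacity in packets: enough to ride out the worst expected
 * delay fluctuation at the nominal packet interval. */
static size_t buffer_capacity(double max_fluct_us, double pkt_interval_us)
{
    return (size_t)ceil(max_fluct_us / pkt_interval_us) + 1;
}
```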

Publication date: 05-05-2022

Communications for workloads

Number: US20220138021A1
Author: Mark Debbage, Todd Rimmer
Assignee: Intel Corp

Examples described herein relate to a sender process having a capability to select from use of a plurality of connections to at least one target process, wherein the plurality of connections to at least one target process comprise a connection for the sender process and/or one or more connections allocated per job. In some examples, the connection for the sender process comprises a datagram transport for message transfers. In some examples, the one or more connections allocated per job utilize a kernel bypass datagram transport for message transfers. In some examples, the one or more connections allocated per job comprise a connection oriented transport and wherein multiple remote direct memory access (RDMA) write operations for a plurality of processes are to be multiplexed using the connection oriented transport.

Publication date: 21-03-2019

COMMUNICATION APPARATUS AND CONTROL METHOD FOR COMMUNICATION APPARATUS

Number: US20190089654A1
Author: Suzuki Tomoya
Assignee:

A communication apparatus capable of efficiently creating transmission packets even when a free space of a storage unit storing transmission packet headers is insufficient includes a first storage unit to store a header of a transmission packet when the transmission packet is created in a first processing procedure, a first creation unit to create the transmission packet in the first processing procedure using the first storage unit, a second storage unit to store the transmission packet header when the transmission packet is created in a second processing procedure, a second creation unit to create the transmission packet in the second processing procedure using the second storage unit, and a control unit to control which one of the first and second creation units is used based on a data size necessary for the first creation unit to create the header and a free space of the first storage unit. 1. A communication apparatus comprising:a first storage unit configured to store a header of a transmission packet when the transmission packet is created in a first processing procedure;a first creation unit configured to create the transmission packet in the first processing procedure using the first storage unit;a second storage unit configured to store the header of the transmission packet when the transmission packet is created in a second processing procedure different from the first processing procedure;a second creation unit configured to create the transmission packet in the second processing procedure using the second storage unit; anda control unit configured to perform control to determine which one of the first creation unit and the second creation unit is used based on a data size for the first creation unit to create the header and a free space of the first storage unit.2. The communication apparatus according to claim 1 , further comprising a processor configured to create the header of the transmission packet claim 1 ,wherein the first storage unit is an ...
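The control unit's choice reduces to comparing the header's data size against the first storage unit's free space; a minimal C sketch with hypothetical names:

```c
#include <stddef.h>

enum creation_path { USE_FIRST_UNIT, USE_SECOND_UNIT };

/* Pick the packet-creation procedure: use the first storage unit only
 * if it still has room for the header about to be built; otherwise
 * fall back to the second processing procedure. */
static enum creation_path select_creation_unit(size_t header_size,
                                               size_t first_unit_free)
{
    return (header_size <= first_unit_free) ? USE_FIRST_UNIT
                                            : USE_SECOND_UNIT;
}
```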

Publication date: 26-06-2014

PARALLEL PROCESSING USING MULTI-CORE PROCESSOR

Number: US20140177643A1
Author: Finney Damon, Mathur Ashok
Assignee: Unbound Networks, Inc.

Disclosed are methods, systems, paradigms and structures for processing data packets in a communication network by a multi-core network processor. The network processor includes a plurality of multi-threaded core processors and special purpose processors for processing the data packets atomically, and in parallel. An ingress module of the network processor stores the incoming data packets in the memory and adds them to an input queue. The network processor processes a data packet by performing a set of network operations on the data packet in a single thread of a core processor. The special purpose processors perform a subset of the set of network operations on the data packet atomically. An egress module retrieves the processed data packets from a plurality of output queues based on a quality of service (QoS) associated with the output queues, and forwards the data packets towards their destination addresses. 1. A method comprising:receiving, at an ingress module of a network processor, a data packet from a computer network, the data packet to be processed by one of a plurality of core processors of the network processor;storing, in a memory having a plurality of buffers, the data packet in the memory, the storing including storing distinct portions of the data packet in one or more buffers if the packet size exceeds a size of a buffer; and creating a plurality of packet buffer structures for the data packet, wherein one of the packet buffer structures is a header packet buffer structure corresponds to a first buffer of the one or more buffers containing a first portion of the data packet, and wherein another one of the packet buffer structures is a tail packet buffer structure that corresponds to a last buffer of the one or more buffers containing a last portion of the data packet, and', 'linking each of the packet buffer structures from the header packet buffer structure to the tail packet buffer structure., 'generating a packet buffer chain for the data packet, ...

Publication date: 19-03-2020

ULTRA-SCALABLE, DISAGGREGATED INTERNET PROTOCOL (IP) AND ETHERNET SWITCHING SYSTEM FOR A WIDE AREA NETWORK

Number: US20200092228A1
Author: Cai Biaodong
Assignee:

Systems and Methods for IP and Ethernet switching in an ultra-scalable disaggregated wide area common carrier (WACC) disaggregated networking switching system. The WACC network switching system may include an Ethernet fabric having a set of M Ethernet switches each including a set of N switch ports, and a set of N input/output (IO) devices each including a set of W IO ports, a set of M Ethernet ports, an IO side packet processor (IOSP), and a fabric side packet processor (FSP). Each Ethernet switch may establish switch queues. Each IO device may establish a set of M hierarchical virtual output queues each including a set of N ingress-IOSP queues and ingress-virtual output queues, a set of W egress-IOSP queues, a set of M ingress-FSP queues, and a set of N hierarchical virtual input queues each including a set of N egress-FSP queues and egress-virtual input queues. 1. A wide area common carrier (WACC) disaggregated networking switching system comprising:an Ethernet fabric including a set of M Ethernet switches each comprising a set of N switch ports, each Ethernet switch to establish switch queues, wherein a variable i having a value ranging from 1 to M to denote the ith Ethernet switch of the set of M Ethernet switches, wherein a variable j having a value ranging from 1 to N to denote the jth switch port of the set of N switch ports; and a set of W IO ports, wherein a variable x having a value ranging from 1 to W to denote the xth IO port of the W IO ports;', 'a set of M Ethernet ports, wherein the ith Ethernet port of the jth IO device is connected to the jth switch port of the ith Ethernet switch;', establish a set of M hierarchical virtual output queues (H-VOQs) each comprising a set of N ingress-IOSP queues (I-IOSPQs) and I-VOQs, wherein the ith H-VOQ corresponds to the ith Ethernet port of the jth IO device, and wherein the jth I-IOSPQ of the ith H-VOQ corresponds to the jth IO device; and', 'establish a set of W egress-IOSP queues (E-IOSPQs), wherein the xth E ...

Publication date: 28-03-2019

UNIVERSAL MULTIPROTOCOL INDUSTRIAL DATA LOGGER

Number: US20190097950A1
Assignee:

A data capture module includes a first port configured to receive first data transmitted from a first component to a second component of a substrate processing system, a second port configured to receive second data transmitted from the second component to the first component, a first data stream forwarding module configured to duplicate the first data, forward the duplicated first data to the second port, and output the first data, and a second data stream forwarding module configured to duplicate the second data, forward the duplicated second data to the first port, and output the second data. The first port is configured to transmit the duplicated second data to the first component and the second port is configured to transmit the duplicated first data to the second component. A data compression module is configured to compress the first and second data. Data storage is configured to store the compressed data.

1. A data capture module for capturing data transmitted between first and second components of a substrate processing system, the data capture module comprising:
a first port configured to receive first data transmitted from the first component to the second component;
a second port configured to receive second data transmitted from the second component to the first component;
a first data stream forwarding module configured to (i) duplicate the first data, (ii) forward the duplicated first data to the second port, and (iii) output the first data;
a second data stream forwarding module configured to (i) duplicate the second data, (ii) forward the duplicated second data to the first port, and (iii) output the second data,
wherein the first port is configured to transmit the duplicated second data to the first component and the second port is configured to transmit the duplicated first data to the second component;
a data compression module configured to compress the first data output from the first data stream forwarding module and the second data output from ...

Publication date: 13-04-2017

SYSTEMS AND METHODS FOR STORING MESSAGE DATA

Number: US20170104696A1
Author: Hafri Younes
Assignee:

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are described for storing message data in a PubSub system. In certain examples, messages are received from a plurality of publishers for a plurality of distinct channels. The messages are ordered and stored in a plurality of buffers, with each channel having its own respective buffer. After a message has been written to a writable portion of the buffer for a channel, a pointer demarking a boundary between a readable portion of the buffer and the writeable portion of the buffer is advanced in an atomic operation. Following the atomic operation, the message resides in the readable portion and may be accessed by PubSub system components and/or processes. In general, one or more subscribers, components, or processes may read messages from the readable portion, in parallel.

1. A computer-implemented method, comprising:
receiving a plurality of messages from a plurality of publishers, wherein each of the messages is associated with one of a plurality of channels; and
storing each message of each of the channels in a respective buffer for the channel according to an order of the messages assigned to the channel, wherein storing comprises:
storing the message in a writable portion of the buffer; and
advancing a pointer demarking a boundary between a readable portion of the buffer and the writeable portion of the buffer in an atomic operation such that the message is in the readable portion of the buffer after the atomic operation has completed.
2. The method of claim 1, comprising: allowing one or more subscribers to read from the readable portion of one or more of the buffers during the storing.
3. The method of claim 1, wherein the atomic operation cannot be interrupted by another process or thread of execution.
4. The method of claim 1, wherein storing the message in the writable portion of the buffer comprises: storing a length of the message at a first location in the ...
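A minimal single-writer sketch in C11 of the atomic boundary advance described here, with the length-prefix layout hinted at by the truncated claim 4; the buffer size, field names, and the release-store choice are assumptions:

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BUF_CAP 65536

struct channel_buf {
    unsigned char data[BUF_CAP];
    _Atomic size_t readable_end;  /* boundary between readable and writable */
    size_t write_pos;             /* private to the single writer */
};

/* Writer: copy the message into the writable portion, then advance the
 * boundary with one atomic release-store, so readers always observe a
 * complete, length-prefixed message. */
static int publish(struct channel_buf *b, const void *msg, uint32_t len)
{
    if (b->write_pos + sizeof len + len > BUF_CAP)
        return -1;                                    /* buffer full */
    memcpy(b->data + b->write_pos, &len, sizeof len); /* length prefix */
    memcpy(b->data + b->write_pos + sizeof len, msg, len);
    b->write_pos += sizeof len + len;
    atomic_store_explicit(&b->readable_end, b->write_pos,
                          memory_order_release);      /* the pointer move */
    return 0;
}

/* Readers may scan data[0 .. readable_end) concurrently with the writer. */
```

Because the store is the only point of publication, subscribers never see a half-written message, which is what makes lock-free parallel reads safe.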

Publication date: 21-04-2016

DECOUPLING RADIO FREQUENCY (RF) AND BASEBAND PROCESSING

Number: US20160112885A1
Assignee:

Methods, systems, and devices are described for wireless communication. A first device, such as a user equipment (UE) may be configured with a peak data rate that corresponds to the radio frequency (RF) capacity of a modem and a sustained data rate that corresponds to the baseband capacity. The first device may receive a set of data blocks during a transmission burst from a second device. The quantity of data blocks in the burst may be based on the peak data rate. The first device may store time domain samples or frequency tones for the data and then power down the RF components for an interval based on how long it will take to process the data. The first device may then process the data at the sustained data rate. After the rest interval, the first device may power up the RF components and receive another burst of data. 1. A method of wireless communication at a first device , comprising:receiving a set of data blocks during a first interval from a second device, wherein a quantity of the set of data blocks is based on a peak data rate of a modem and a size of each data block in the set of data blocks is based on a sustained data rate of the modem;powering down one or more radio frequency (RF) components of the modem during a second interval following the first interval, wherein a length of the second interval is based on one or more parameters from a group consisting of the peak data rate, the sustained data rate, and a memory buffer capacity; andprocessing the set of data blocks during the second interval based on the sustained data rate.2. The method of claim 1 , further comprising:storing the set of data blocks in a memory buffer, wherein storing the set of data blocks comprises storing a set of time domain samples or frequency tones corresponding to the set of data blocks in the memory buffer, wherein the memory buffer is a component of an RF front end of the modem, and wherein the memory buffer capacity is based on the memory buffer.3. The method of claim 1 , ...
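The rest interval can be thought of as the processing backlog left over after a burst arrives faster than it can be consumed; a sketch of that arithmetic in C (the backlog model and unit choices are assumptions):

```c
#include <stdint.h>

/* Length of the RF power-down interval: the time needed to finish
 * processing a burst that was received at the peak rate but can only
 * be processed at the sustained rate. Rates in bits/s, burst in bits. */
static uint64_t rest_interval_us(uint64_t burst_bits,
                                 uint64_t peak_bps, uint64_t sustained_bps)
{
    uint64_t rx_us   = burst_bits * 1000000ull / peak_bps;      /* receive time    */
    uint64_t proc_us = burst_bits * 1000000ull / sustained_bps; /* processing time */
    return proc_us > rx_us ? proc_us - rx_us : 0;  /* sleep while catching up */
}
```

During this interval the RF front end can be gated off, since the stored time-domain samples or frequency tones already hold everything the baseband needs.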

Publication date: 09-06-2022

Packet processing system, method and device having reduced static power consumption

Number: US20220182339A1
Author: Enrique Musoll
Assignee: Marvell Asia Pte Ltd, Xpliant Inc

A buffer logic unit of a packet processing device including a power gate controller. The buffer logic unit for organizing and/or allocating available pages to packets for storing the packet data based on which of a plurality of separately accessible physical memories that pages are associated with. As a result, the power gate controller is able to more efficiently cut off power from one or more of the physical memories.

Publication date: 09-06-2022

Packet Processing Device and Packet Processing Method

Number: US20220182340A1
Assignee:

The packet processing apparatus includes a packet memory, a transmission processing unit that writes a plurality of packets to be transmitted to the packet memory to generate a combination packet into which the plurality of packets have been concatenated, a line handling unit that sends packets to a communication line, and a combination packet transfer unit that DMA-transfers the combination packet from the packet memory to the line handling unit. The transmission processing unit writes information on an address in the packet memory of beginning data of an individual packet in the combination packet to a descriptor. The line handling unit separates the DMA-transferred combination packet into a plurality of packets and sends the plurality of packets to the communication line.

1.-7. (canceled)
8. A packet processing apparatus comprising:
a first packet memory configured to store a packet to be transmitted;
a first packet combiner configured to concatenate a plurality of packets to be transmitted to generate a first combination packet, and write the first combination packet to the first packet memory;
a first line handler configured to send a packet to be transmitted to a communication line; and
a first combination packet transferor configured to DMA-transfer the first combination packet from the first packet memory to the first line handler, or read the first combination packet from the first packet memory and write the first combination packet to the first line handler through a processor;
wherein the first packet combiner is further configured to write first information on a respective address in the first packet memory of a respective beginning of each of the plurality of packets of the first combination packet to a first descriptor, the first descriptor being a predetermined data area in a memory; and
wherein the first line handler is configured to separate the first combination packet into the plurality of packets and send the plurality of packets to the ...
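A C sketch of the descriptor idea: the sender records each packet's start offset while concatenating, and the line-handler side walks those offsets to separate the combination packet. Structures and names are hypothetical, and the DMA transfer itself is elided:

```c
#include <stddef.h>
#include <string.h>

#define MAX_PKTS 16

struct combo_desc {           /* stands in for the descriptor area */
    size_t count;
    size_t start[MAX_PKTS];   /* offset of each packet's first byte */
    size_t len[MAX_PKTS];
};

/* Sender side: append one packet to the combination buffer and record
 * where it begins, like the descriptor write in the text. */
static int combo_append(unsigned char *combo, size_t cap, size_t *used,
                        struct combo_desc *d, const void *pkt, size_t len)
{
    if (d->count >= MAX_PKTS || *used + len > cap)
        return -1;                       /* descriptor or buffer full */
    d->start[d->count] = *used;
    d->len[d->count]   = len;
    memcpy(combo + *used, pkt, len);
    *used += len;
    d->count++;
    return 0;
}

/* Line-handler side: walk the descriptor to recover individual packets. */
static void combo_split(const unsigned char *combo, const struct combo_desc *d,
                        void (*send_one)(const unsigned char *, size_t))
{
    for (size_t i = 0; i < d->count; i++)
        send_one(combo + d->start[i], d->len[i]);
}
```

Batching many small packets into one DMA transfer amortizes per-transfer overhead, which is the point of the combination packet.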

Publication date: 13-05-2021

CIRCUIT FOR A BUFFERED TRANSMISSION OF DATA

Number: US20210144105A1
Assignee: WAGO Verwaltungsgesellschaft mbH

A circuit with a first buffer, a second buffer, a third buffer, a fourth buffer, a first data input for first data, a second data input for second data, a data output, and control logic is disclosed. The control logic connects the first data input to one of the buffers, connects the second data input to one of the buffers, and connects the data output to one of the buffers, swap the buffer currently connected to the first data input for a non-connected buffer when first data have been validly written through the first data input into the buffer currently connected to the first data input, swap the buffer currently connected to the second data input for the non-connected buffer when second data have been validly written through the second data input into the buffer currently connected to the second data input. 1. A circuit comprising:a first buffer;a second buffer;a third buffer;a fourth buffer;a first data input for first data;a second data input for second data;a data output; andcontrol logic adapted to connect the first data input to one of the buffers, adapted to connect the second data input to one of the buffers, adapted to connect the data output to one of the buffers, adapted to swap the buffer currently connected to the first data input for a non-connected buffer when first data have been validly written through the first data input into the buffer currently connected to the first data input, adapted to swap the buffer currently connected to the second data input for the non-connected buffer when second data have been validly written through the second data input into the buffer currently connected to the second data input, and, adapted, for readout of data, to swap the buffer currently connected to the data output for the non-connected buffer when the non-connected buffer has newer validly written data.2. The circuit according to claim 1 , wherein claim 1 , for the purpose of readout of data claim 1 , instead of swapping the buffer currently connected to ...
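One way to read the four-buffer rotation is as index swapping between three connected slots and one spare; a speculative C sketch (the patent's actual switching fabric may differ):

```c
/* Four buffers shared by two data inputs and one output; exactly one
 * buffer is always unconnected and acts as the swap spare. The ints
 * are buffer indices standing in for physical connections. */
struct quad_buf {
    int in1;    /* buffer currently wired to data input 1 */
    int in2;    /* buffer currently wired to data input 2 */
    int out;    /* buffer currently wired to the data output */
    int spare;  /* the non-connected buffer */
};

/* After input 1 has validly written its data, hand that buffer to the
 * spare slot so the output can later pick it up; input 2 is symmetric. */
static void swap_after_write_in1(struct quad_buf *q)
{
    int t = q->in1;
    q->in1 = q->spare;
    q->spare = t;
}

/* On readout, take the spare if it holds newer validly written data. */
static void swap_for_readout(struct quad_buf *q, int spare_has_newer_data)
{
    if (spare_has_newer_data) {
        int t = q->out;
        q->out = q->spare;
        q->spare = t;
    }
}
```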

Publication date: 03-05-2018

HIGH PERFORMANCE NETWORK I/O IN A VIRTUALIZED ENVIRONMENT

Number: US20180123984A1
Author: Sharma Pratik
Assignee: ORACLE INTERNATIONAL CORPORATION

From received data packets intended for a target virtual machine of a virtualization system, a destination network address of the target virtual machine is determined, and a current write buffer pointer is identified that points to a buffer associated with the identified target virtual machine corresponding to the destination network address. If the identified write buffer pointer indicates that the buffer has sufficient available space to accept the data packets, and if the associated buffer has sufficient available space, the data packets are placed in the associated buffer in buffer data locations according to a calculated new write buffer pointer value, and a wakeup byte data message is sent to a designated socket of the target virtual machine. Generally, the target virtual machine detects the wakeup byte data message at the designated socket and, in response, retrieves the data packets from the associated buffer in accordance with the new write buffer pointer value. 1. A method of processing data in a virtualization system that is managed by a virtual machine manager , the method comprising:receiving one or more incoming data packets at a network interface card of the virtualization system wherein the data packets are intended for a target virtual machine of the virtualization system, and determining a destination network address of the target virtual machine from one or more of the received data packets;identifying a corresponding current write buffer pointer from one or more of the received data packets that points to a buffer associated with the identified target virtual machine that corresponds to the destination network address;determining if the identified write buffer pointer indicates that the associated buffer has sufficient available space to accept the data packets;responsive to determining that the associated buffer has sufficient available space, placing the data packets in the associated buffer by the virtual machine manager in buffer data ...
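A C sketch of the delivery path under one plausible ring-buffer layout; the layout, the wrap handling, and the wakeup byte value are assumptions, while send() is the POSIX socket call:

```c
#include <stddef.h>
#include <sys/socket.h>

#define RING_SZ 4096

struct vm_ring {
    unsigned char buf[RING_SZ];
    size_t wr;   /* current write buffer pointer */
    size_t rd;   /* read pointer, advanced by the guest VM */
};

/* Place a packet for the target VM if its ring has room, then poke the
 * VM's designated socket with a single wakeup byte. */
static int deliver(struct vm_ring *r, int wake_fd,
                   const unsigned char *pkt, size_t len)
{
    size_t free_space = RING_SZ - (r->wr - r->rd); /* indices grow monotonically */
    if (len > free_space)
        return -1;                 /* insufficient space: caller decides */
    for (size_t i = 0; i < len; i++)
        r->buf[(r->wr + i) % RING_SZ] = pkt[i];
    r->wr += len;                  /* the new write buffer pointer value */
    unsigned char wake = 1;        /* wakeup byte value is arbitrary here */
    return send(wake_fd, &wake, 1, 0) == 1 ? 0 : -1;
}
```

The single wakeup byte lets the guest block in an ordinary socket read instead of polling the ring, which is where the efficiency claim comes from.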

Publication date: 16-04-2020

Receive buffer management

Number: US20200117605A1
Assignee: Intel Corp

Examples described herein can be used to allocate replacement receive buffers for use by a network interface, switch, or accelerator. Multiple refill queues can be used to receive identifications of available receive buffers. A refill processor can select one or more identifications from a refill queue and allocate the identifications to a buffer queue. None of the refill queues is locked from receiving identifications of available receive buffers; merely one of the refill queues is accessed at a time to provide identifications of available receive buffers. Identifications of available receive buffers from the buffer queue are provided to the network interface, switch, or accelerator to store content of received packets.

Publication date: 27-05-2021

PERFORMING DISTRIBUTED DYNAMIC FREQUENCY SELECTION USING A SHARED CACHE

Number: US20210160198A1
Assignee:

Embodiments herein describe a group of APs that uses a shared radar cache to select a new channel after vacating a current channel when performing dynamic frequency selection (DFS). The group of APs can set aside memory to store status information about the DFS channels in the frequency band. For example, when one AP detects a radar event (and has to vacate a DFS channel), the AP updates an entry for that channel in the shared radar cache. The APs can also query the cache to determine a new channel after vacating its current channel. That is, the shared radar cache may store the most recent radar events occurring in a channel. In this manner, the APs can select a new channel that has little or no recent radar events, which reduces the likelihood the AP will have to vacate the new channel. 1. A method , comprising:detecting, at a first access point (AP), a first event when operating on a dynamic frequency selection (DFS) channel causing the first AP to vacate the DFS channel;updating an entry corresponding to the DFS channel in a shared cache in response to the first event, wherein the shared cache is shared by a plurality of APs and is hosted in memory of at least one of the plurality of APs; andselecting a new DFS channel at the first AP based on information stored in the shared cache associated with the new DFS channel.2. The method of claim 1 , wherein the shared cache is distributed across memories in multiple ones of the plurality of APs claim 1 , wherein the memories store entries of the shared cache that correspond to different DFS channels.3. The method of claim 2 , wherein updating the entry corresponding to the DFS channel comprises:determining which memory of the memories contains the entry corresponding to the DFS channel; andtransmitting a message to a second AP of the plurality of APs that includes the memory with the entry corresponding to the DFS channel.4. The method of claim 3 , wherein determining which memory of the memories contains the entry ...
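A C sketch of the shared radar cache as a per-channel timestamp table: detecting radar stamps the vacated channel, and selection prefers the channel with the oldest (or no) recorded event. The table size and timestamp representation are assumptions:

```c
#include <time.h>

#define NUM_DFS_CHANNELS 16

/* One entry per DFS channel in the shared cache: when the most recent
 * radar event was seen there. Zero means no event recorded. */
static time_t radar_cache[NUM_DFS_CHANNELS];

/* An AP that detects radar updates the entry for its current channel... */
static void record_radar_event(int chan)
{
    radar_cache[chan] = time(NULL);
}

/* ...and picks a new channel whose last radar event is oldest or absent,
 * reducing the chance of having to vacate again soon. */
static int select_new_channel(int vacated)
{
    int best = -1;
    time_t best_t = 0;
    for (int c = 0; c < NUM_DFS_CHANNELS; c++) {
        if (c == vacated)
            continue;
        if (best < 0 || radar_cache[c] < best_t) {
            best = c;
            best_t = radar_cache[c];
        }
    }
    return best;
}
```

In the distributed variant the array would be sharded across the APs' memories, with an update becoming a message to whichever AP hosts the entry.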

Publication date: 12-05-2016

Network Based Service Function Chaining

Number: US20160134531A1
Assignee: Broadcom Corp

Service aware network devices coordinate function chains of virtual functions. The network devices are aware of which virtual functions exist and how to interconnect them in the most efficient manner and define and process service graphs that can be maintained, monitored and redirected. The network devices themselves implement and manage the service graphs, as opposed to the virtual servers that host the virtual functions.

Publication date: 07-08-2014

RECEIVED DATA PROCESSING APPARATUS AND METHOD OF PROCESSING THE SAME

Number: US20140219166A1
Author: Morikawa Yasuyuki
Assignee: KABUSHIKI KAISHA TOSHIBA

According to one embodiment, a received data processing apparatus includes a FIFO memory, a forwarding designation unit, and an output processing unit. The FIFO memory stores received data in reception order. The forwarding designation unit analyzes an ID added to the received data having reached a head of the FIFO memory, and outputs a forwarding designation signal that indicates a forwarding destination in the case where there is designation of the forwarding destination, and indicates non-designation in the case where there is no designation of a forwarding destination. The output processing unit performs output processing of the received data having reached the head of the FIFO memory based on the forwarding designation signal. 1. A received data processing apparatus comprising:a FIFO memory to store received data in reception order;a forwarding designation unit to analyze an ID added to the received data having reached a head of the FIFO memory, and to output a forwarding designation signal indicating a forwarding destination in the case where there is designation of the forwarding destination and indicating non-designation in the case where there is no designation of a forwarding destination; andan output processing unit to perform output processing of the received data having reached the head of the FIFO memory based on the forwarding designation signal.2. The apparatus according to claim 1 , wherein the output processing unitoutputs the received data having reached the head of the FIFO memory to the forwarding destination when the forwarding designation signal indicates the forwarding destination, andreads and throws the received data having reached the head of the FIFO memory when the forwarding designation signal indicates the non-designation.3. The apparatus according to claim 1 , wherein the forwarding designation unit reads out the ID stored in an ID store unit and added to the received data.4. The apparatus according to claim 1 , further comprising:a ...

Publication date: 09-05-2019

TRAFFIC AND LOAD AWARE DYNAMIC QUEUE MANAGEMENT

Number: US20190140984A1
Assignee:

Some embodiments provide a queue management system that efficiently and dynamically manages multiple queues that process traffic to and from multiple virtual machines (VMs) executing on a host. This system manages the queues by (1) breaking up the queues into different priority pools with the higher priority pools reserved for particular types of traffic or VM (e.g., traffic for VMs that need low latency), (2) dynamically adjusting the number of queues in each pool (i.e., dynamically adjusting the size of the pools), (3) dynamically reassigning a VM to a new queue based on one or more optimization criteria (e.g., criteria relating to the underutilization or overutilization of the queue). 1. For an electronic device that comprises a network interface card (NIC) with a plurality of queues , a method of managing the queues , the method comprising:monitoring data traffic to or from the NIC;based on the monitoring, specifying a pool and assigning a set of the queues to the pool, said pool having a set of criteria for managing data traffic through the set of queues; anddirecting a subset of the data traffic to the set of queues based on the set of criteria.2. The method of claim 1 , wherein the pool is a first pool claim 1 , the set of queues is a first set of queues claim 1 , the set of criteria is a first set of criteria claim 1 , and the subset of data traffic is a first subset of data traffic claim 1 , the method comprising:based on the monitoring, specifying a second pool and assigning a second set of the queues to the second pool, said second pool having a second set of criteria for managing data traffic through the second set of queues; anddirecting a second subset of the data traffic to the second set of queues based on the second set of criteria;wherein the first set of criteria differs from the second set of criteria.3. The method of claim 2 , wherein each particular pool's set of criteria specifies a maximum threshold amount of data traffic for passing through ...

Publication date: 10-06-2021

COMBINED INPUT AND OUTPUT QUEUE FOR PACKET FORWARDING IN NETWORK DEVICES

Number: US20210176171A1
Assignee:

An apparatus for switching network traffic includes an ingress packet forwarding engine and an egress packet forwarding engine. The ingress packet forwarding engine is configured to determine, in response to receiving a network packet, an egress packet forwarding engine for outputting the network packet and enqueue the network packet in a virtual output queue. The egress packet forwarding engine is configured to output, in response to a first scheduling event and to the ingress packet forwarding engine, information indicating the network packet in the virtual output queue and that the network packet is to be enqueued at an output queue for an output port of the egress packet forwarding engine. The ingress packet forwarding engine is further configured to dequeue, in response to receiving the information, the network packet from the virtual output queue and enqueue the network packet to the output queue. 1. An apparatus for switching network traffic , the apparatus comprising: determine, in response to receiving a network packet, an egress packet forwarding engine for outputting the network packet; and', 'enqueue the network packet in a virtual output queue for output to the egress packet forwarding engine;, 'an ingress packet forwarding engine implemented in circuitry and configured tothe egress packet forwarding engine implemented in processing circuitry and configured to, in response to a first scheduling event, output, to the ingress packet forwarding engine, information indicating the network packet in the virtual output queue and that the network packet is to be enqueued at an output queue for an output port of the egress packet forwarding engine; dequeue the network packet from the virtual output queue; and', 'enqueue the network packet to the output queue; and, 'wherein the ingress packet forwarding engine is further configured to, in response to receiving the information dequeue the network packet from the output queue; and', 'output the network packet at ...

Publication date: 10-06-2021

Forwarding element data plane with computing parameter distributor

Number: US20210176194A1
Assignee: Barefoot Networks Inc

Some embodiments provide a network forwarding element with a data-plane forwarding circuit that has a parameter collecting circuit to store and distribute parameter values computed by several machines in a network. In some embodiments, the machines perform distributed computing operations, and the parameter values that they compute are parameter values associated with the distributed computing operations. The parameter collecting circuit of the data-plane forwarding circuit (data plane) in some embodiments (1) stores a set of parameter values computed and sent by a first set of machines, and (2) distributes the collected parameter values to a second set of machines once it has collected the set of parameter values from all the machines in the first set. The first and second sets of machines are the same set of machines in some embodiments, while they are different sets of machines (e.g., one set has at least one machine that is not in the other set) in other embodiments. In some embodiments, the parameter collecting circuit performs computations on the parameter values that it collects and distributes the result of the computations once it has processed all the parameter values distributed by the first set of machines. The computations are aggregating operations (e.g., adding, averaging, etc.) that combine corresponding subsets of parameter values distributed by the first set of machines.

Publication date: 24-05-2018

DETECTING ATTACKS USING PASSIVE NETWORK MONITORING

Number: US20180145995A1
Assignee:

Embodiments are directed to detecting one or more attacks in a network. One or more network flows may be monitored using one or more network monitoring computers (NMCs). If one or more file write operations are detected based on information included in one or more packets of the one or more network flows, one or more detection rules may be executed to analyze one or more portions of the one or more packets to identify file information that is associated with the one or more file write operations. One or more metrics may be provided based on the one or more detection rules and one or more of the file information, the one or more file write operations, or the like. If one or more metrics exceed one or more threshold values, one or more reports of one or more attacks may be provided. 1passively monitoring one or more network flows using the one or more NMCs; and executing one or more detection rules to analyze one or more portions of the one or more packets to identify file information that is associated with the one or more file write operations;', 'providing one or more metrics based on the one or more detection rules and a comparison of the one or more of the file information or the one or more file write operations; and', 'responsive to one or more of the one or more metrics exceeding one or more threshold values, providing one or more reports of one or more attacks based on the one or more exceeded threshold values., 'responsive to detecting one or more file write operations based on information included in one or more packets of the one or more network flows, performing further actions, including. A method for detecting one or more attacks in a network, wherein one or more processors in one or more network monitoring computers (NMCs) execute instructions to perform actions, comprising: This Utility Patent Application is a Continuation of U.S. patent application Ser. No. 15/356,381 filed on Nov. 18, 2016, now U.S. Pat. No. 9,756,061 issued on Sep. 5, 2017, the ...

Publication date: 15-09-2022

DILATED CONVOLUTION USING SYSTOLIC ARRAY

Number: US20220292163A1
Assignee:

In one example, a non-transitory computer readable medium stores instructions that, when executed by one or more hardware processors, cause the one or more hardware processors to: load a first weight data element of an array of weight data elements from a memory into a systolic array; select a subset of input data elements from the memory into the systolic array to perform first computations of a dilated convolution operation, the subset being selected based on a rate of the dilated convolution operation and coordinates of the weight data element within the array of weight data elements; and control the systolic array to perform the first computations based on the first weight data element and the subset to generate first output data elements of an output data array. An example of a compiler that generates the instructions is also provided. 1. A method comprising:loading a first weight data element of a set of weight data elements from a memory into a systolic array;determining a first set of memory fetch parameters based on a first computation instruction, the first set of memory fetch parameters including a first start address of a first subset of a set of input data elements in the memory, a gap between elements of the first subset in the memory, and a number of the first subset;controlling a memory access circuit using the first set of memory fetch parameters to fetch the first subset from the memory to the systolic array; andcontrolling the systolic array to perform first computations based on the first weight data element and the first subset to compute first partial sums.2. The method of claim 1 , further comprising:loading a second weight data element of the set of weight data elements from the memory into the systolic array;determining a second set of memory fetch parameters based on a second computation instruction, the second set of memory fetch parameters including a second start address of a second subset of the set of input data elements in the memory, ...
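For a 1-D view of the fetch parameters named here (start address, gap between elements, and element count), one can derive them from the dilation rate and the weight tap's coordinate. A C sketch under assumed row-major, single-row layout; the mapping of "gap" to the convolution stride is my reading, not stated in the text:

```c
#include <stddef.h>

struct fetch_params {
    size_t start;  /* address of the first input element to fetch */
    size_t gap;    /* distance in bytes between consecutive fetched elements */
    size_t num;    /* how many elements to fetch */
};

/* 1-D illustration: for weight tap k of a dilated convolution with the
 * given rate and stride, the inputs that meet this tap form a strided
 * subset of the input row. */
static struct fetch_params dilated_fetch(size_t base, size_t elem_size,
                                         size_t in_len, size_t out_len,
                                         size_t k, size_t rate, size_t stride)
{
    struct fetch_params p;
    size_t first = k * rate;              /* input index hit by tap k at output 0 */
    p.start = base + first * elem_size;
    p.gap   = stride * elem_size;         /* one step per output element */
    p.num   = 0;
    for (size_t o = 0; o < out_len; o++)  /* count in-bounds contributions */
        if (first + o * stride < in_len)
            p.num++;
    return p;
}
```

Fetching only this strided subset is what lets the systolic array skip the zero-filled positions a dilated kernel would otherwise multiply.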

Publication date: 16-05-2019

TIME SLOT DESIGNING DEVICE, TIME SLOT DESIGNING METHOD, AND RECORDING MEDIUM HAVING TIME SLOT DESIGNING PROGRAM STORED THEREON

Number: US20190149466A1
Author: Yamazaki Satoshi
Assignee: NEC Corporation

A time slot designing device capable of outputting a correction location and a correction reason of a constraint relating to a slot allocation result that satisfies a corrected constraint is provided. A slot designing device includes an output means for outputting a correction constraint being a constraint as a correction target included in a constraint group relating to a constraint satisfaction problem from which a satisfiable solution is not derived, and a correction reason being a reason why the correction constraint is corrected; and a derivation means for deriving a satisfiable solution of a constraint satisfaction problem generated based on the constraint group in which the output correction constraint is corrected. The derivation means outputs information indicating the correction constraint and the correction reason that are output up until the satisfiable solution is derived. 1. A time slot designing device comprising:output unit outputting a correction constraint being a constraint as a correction target included in a constraint group relating to a constraint satisfaction problem from which a satisfiable solution is not derived, and a correction reason being a reason why the correction constraint is corrected; andderivation unit deriving a satisfiable solution of a constraint satisfaction problem generated based on the constraint group in which the output correction constraint is corrected, whereinthe derivation unit outputs information indicating the correction constraint and the correction reason that are output up until the satisfiable solution is derived.2. The time slot designing device according to claim 1 , further comprisinggeneration unit generating a constraint included in the constraint group.3. The time slot designing device according to claim 2 , whereinthe generation unit includes correction candidate constraint generation unit generating a constraint as a correction candidate, and fixed constraint generation unit generating a fixed ...

Publication date: 16-05-2019

SYSTEMS AND METHODS FOR STORING MESSAGE DATA

Number: US20190149487A1
Author: Hafri Younes
Assignee:

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are described for storing message data in a PubSub system. In certain examples, the method includes storing messages of each of a plurality of channels in a writable portion of a respective buffer for the channel. The method may also include moving a pointer delineating a boundary between the writable portion and a readable portion of the buffer such that the messages are in the readable portion after the pointer has moved. The method may also include allowing one or more subscribers to read from the readable portion of one or more of the buffers during the storing. 1. A method , comprising:storing messages of each of a plurality of channels in a writable portion of a respective buffer for the channel;moving a pointer delineating a boundary between the writable portion and a readable portion of the buffer such that the messages are in the readable portion after the pointer has moved; andallowing one or more subscribers to read from the readable portion of one or more of the buffers during the storing.2. The method of claim 1 , comprising:receiving the messages from a plurality of publishers on each of the plurality of channels.3. The method of claim 1 , wherein messages in the writable portion of the buffer are inaccessible to subscribers.4. The method of claim 1 , wherein the pointer is moved in an atomic operation.5. The method of claim 4 , wherein the atomic operation cannot be interrupted by another process or thread of execution.6. The method of claim 1 , wherein each message is stored in the writable portion and moved to the readable portion before another message is stored in the writable portion.7. The method of claim 1 , wherein each buffer for a particular channel expires at a different time based on a time-to-live for the buffer.8. The method of claim 1 , wherein each buffer comprises a respective time-to-live upon expiration of which will cause the buffer ...

Publication date: 07-06-2018

DATA ENQUEUING METHOD, DATA DEQUEUING METHOD, AND QUEUE MANAGEMENT CIRCUIT

Number: US20180159802A1
Author: Bao Yalin
Assignee: Huawei Technologies Co., Ltd.

The disclosure describes a data enqueuing method. The method may include: receiving a to-be-enqueued data packet, dividing the data packet into several slices to obtain slice information of the slices, and marking a tail slice of the data packet with a tail slice identifier; enqueuing corresponding slice information according to an order of the slices in the data packet, and in a process of enqueuing the corresponding slice information, if a slice is marked with the tail slice identifier, determining that the slice is the tail slice of the data packet, and generating a first-type node; and determining whether a target queue is empty, and if the target queue is empty, writing slice information of the tail slice into the target queue, and updating a head pointer of a queue head list according to the first-type node. 1. A data enqueuing method , applied to a queue management circuit in a queue management system of a communications processing chip , wherein several queues , a queue head list , a queue tail list , and a total queue linked list are established in the queue management system , the total queue linked list comprises several queue linked sublists , each queue is corresponding to a queue head list , a queue tail list , and a queue linked sublist , the queue head list comprises at least a head pointer , and the queue tail list comprises at least a tail pointer; several communications ports are further disposed on the communications processing chip , and each communications port is corresponding to at least two queues; and the method comprises:receiving a to-be-enqueued data packet, dividing the data packet into several slices to obtain slice information of the slices, and marking a tail slice of the data packet with a tail slice identifier, wherein the slice information comprises at least a port number, a priority, and a buffer address of the slice in a memory, and the tail slice identifier is used to indicate that the tail slice is the last slice of the data ...
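A C sketch of the slice enqueue with the empty-queue special case called out above; the structures are simplified (the real design keeps separate head/tail lists and a linked sublist per queue):

```c
#include <stddef.h>

struct slice_info {
    unsigned port;            /* communications port number */
    unsigned priority;
    size_t   buf_addr;        /* slice's buffer address in memory */
    int      is_tail;         /* tail-slice identifier */
    struct slice_info *next;  /* link in the queue linked list */
};

struct queue {
    struct slice_info *head;  /* head pointer (queue head list) */
    struct slice_info *tail;  /* tail pointer (queue tail list) */
};

/* Enqueue one slice; when the target queue is empty, the head pointer
 * is updated as well, mirroring the empty-queue case in the text. */
static void enqueue_slice(struct queue *q, struct slice_info *s)
{
    s->next = NULL;
    if (q->head == NULL)
        q->head = s;          /* empty queue: update head pointer */
    else
        q->tail->next = s;    /* otherwise link after the current tail */
    q->tail = s;
}
```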

Publication date: 07-06-2018

HEADER REPLICATION IN ACCELERATED TCP (TRANSPORT CONTROL PROTOCOL) STACK PROCESSING

Number: US20180159803A1
Assignee: Intel Corporation

In one embodiment, a method is provided. The method of this embodiment provides storing a packet header at a set of at least one page of memory allocated to storing packet headers, and storing the packet header and a packet payload at a location not in the set of at least one page of memory allocated to storing packet headers. 1. An apparatus , comprising:a processor;a port to receive a packet comprising a header of a particular protocol and a payload; and receive the packet, and', 'split the packet to cause the header to be pushed into a first portion of computer memory and the payload to be pushed directly into a second, different portion of the computer memory;, 'programmable logic towherein the first portion of the computer memory comprises a cache of the processor.2. The apparatus of claim 1 , further comprising the computer memory.3. The apparatus of claim 2 , wherein the computer memory comprises off-chip memory.4. The apparatus of claim 1 , further comprising a graphic processing unit (GPU).5. The apparatus of claim 1 , further comprising a coprocessor.6. The apparatus of claim 1 , further comprising a network interface controller (NIC).7. The apparatus of claim 1 , wherein the payload is pushed directly to the second portion of the computer memory via a direct memory access.8. The apparatus of claim 1 , wherein the second portion of the computer memory comprises system memory.9. The apparatus of claim 1 , wherein the particular protocol comprises an Ethernet-based protocol.10. The apparatus of claim 1 , wherein the packet comprises a Transmission Control Protocol/Internet Protocol (TCP/IP) packet.11. The apparatus of claim 10 , wherein the header comprises a TCP/IP header.12. A method claim 10 , comprising:receiving a packet comprising a header of a particular protocol and a payload; andsplitting the packet to cause the header to be pushed into a first portion of a memory and the payload to be pushed directly into a second, different portion of the memory; ...

Publication date: 18-06-2015

Packet transfer system and method for high-performance network equipment

Number: US20150169454A1
Author: Yong Sig Jin
Assignee: WINS Co Ltd

The present disclosure relates to a packet transfer system and method, which can greatly improve the efficiency of a packet transfer scheme using a memory pool technique. The packet transfer system for high-performance network equipment includes a memory pool processor configured to include therein one or more memory blocks and store packet information input to an NIC. A memory allocation manager is configured to control allocation and release of the memory blocks, update information of memory blocks in response to a request of a queue or an engine, and transfer memory block addresses. The queue is configured to request a memory block from the memory allocation manager, and transfer a received memory block address to outside of the queue. The engine is configured to receive the memory block address from the queue, and perform a predefined analysis task with reference to packet information.

Publication date: 22-09-2022

OPERATIONS TO COPY PORTIONS OF A PACKET

Number: US20220303230A1
Assignee:

Examples described herein relate to a network interface device to perform header splitting with payload reordering for one or more packets received at the network interface device and copy headers and/or payloads associated with the one or more packets to at least one memory device.

1. An apparatus comprising: a network interface device comprising:
circuitry to perform header splitting with payload reordering for one or more packets received at the network interface device; and
circuitry to copy headers and/or payloads associated with the one or more packets to at least one memory device.
2. The apparatus of claim 1, wherein to perform header splitting with payload reordering for one or more packets received at the network interface device comprises to perform payload reordering into buffers based on a transmitter-specified order.
3. The apparatus of claim 1, wherein to perform header splitting with payload reordering for one or more packets received at the network interface device comprises:
split one or more received packets into headers and payloads;
store a header of the headers into a first buffer;
select a second buffer based on an offset specified in a received packet of the one or more received packets; and
store a payload of the payloads into the second buffer.
4. The apparatus of claim 1, wherein contents of the one or more received packets comprise an offset, and wherein to perform header splitting with payload reordering comprises to determine at least one buffer to which to copy portions of the one or more received packets based on a base address of a destination memory address and the offset.
5. The apparatus of claim 4, wherein the offset is based on one or more of: sequence numbers, length, line number, or a base sequence number.
6. The apparatus of claim 1, comprising processor-executed software to perform header reordering into at least one buffer for the ...
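The buffer selection in the claims is base address plus a packet-carried offset; a one-function C sketch:

```c
#include <stddef.h>
#include <string.h>

/* Place a received payload at its transmitter-specified position:
 * destination = buffer base + offset carried in the packet, so payloads
 * can arrive out of order and still land in order. */
static void place_payload(unsigned char *dst_base, size_t dst_cap,
                          const unsigned char *payload, size_t len,
                          size_t offset)
{
    if (offset + len <= dst_cap)          /* bounds-check the landing zone */
        memcpy(dst_base + offset, payload, len);
}
```

Because each payload lands at its final location directly, no later reassembly copy is needed even when the network reorders packets.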

Publication date: 14-05-2020

STREAMING PLATFORM FLOW AND ARCHITECTURE

Number: US20200153756A1
Assignee: XILINX, INC.

A system includes a host system and an integrated circuit coupled to the host system through a communication interface. The integrated circuit is configured for hardware acceleration. The integrated circuit includes a direct memory access circuit coupled to the communication interface, a kernel circuit, and a stream traffic manager circuit coupled to the direct memory access circuit and the kernel circuit. The stream traffic manager circuit is configured to control data streams exchanged between the host system and the kernel circuit. 1. A system , comprising:a host system; and a direct memory access circuit coupled to the communication interface;', 'a kernel circuit; and', 'a stream traffic manager circuit coupled to the direct memory access circuit and the kernel circuit, wherein the stream traffic manager circuit is configured to control data streams exchanged between the host system and the kernel circuit., 'an integrated circuit coupled to the host system through a communication interface and configured for hardware acceleration, wherein the integrated circuit includes2. The system of claim 1 , wherein the host system and the integrated circuit communicate by exchanging packetized data.3. The system of claim 1 , wherein the integrated circuit includes:an interconnect circuitry connecting the stream traffic manager circuit and the kernel circuit.4. The system of claim 3 , wherein the kernel circuit is one of a plurality of kernel circuits and the stream traffic manager circuit is configured to interleave data streams provided to the plurality of kernel circuits.5. The system of claim 3 , wherein the integrated circuit further comprises:an input buffer coupled to the interconnect circuitry and the kernel circuit, wherein the input buffer is configured to temporarily hold packetized data from the stream traffic manager circuit and convert the packetized data into a data stream provided to the kernel circuit; andan output buffer coupled to the interconnect ...

Publication date: 14-06-2018

SYSTEM AND METHOD TO EFFICIENTLY SERIALIZE PARALLEL STREAMS OF INFORMATION

Number: US20180167332A1
Author: HUANG Tony C.
Assignee:

A system and method for serializing parallel streams of information. The system and method employ a plurality of buffers and a controller. The plurality of buffers are configured to store information received from a demodulator and output the stored information to a decoder. The controller is configured to store a plurality of frames of information output in a parallel manner from the demodulator into the plurality of buffers, and control the output of the plurality of buffers such that each of the plurality of frames is output to the decoder once stored. 1. A system for serializing parallel streams of information comprising:a plurality of buffers configured to store information received from a demodulator and output the stored information to a decoder; anda controller configured to store a plurality of frames of information output in a parallel manner from the demodulator into the plurality of buffers, and control the output of the plurality of buffers such that each of the plurality of frames is output to the decoder once stored.2. The system according to claim 1 , whereinthe controller is configured to control multiple buffers to store one of the plurality of frames of the information in an order in which the information in the one of the plurality of frames is output from the demodulator when a size of the one of the plurality of frames is too large to fit in a single one of the plurality of buffers, and the controller is further configured to control the multiple buffers to output the information in the one of the plurality of frames to the decoder in the order in which the information was stored in the multiple buffers.3. The system according to claim 1 , further comprisinga storage queue configured to store buffer identification information; andwherein the controller is further configured to update the buffer identification information in the storage queue to indicate that a buffer is available to store the information when the buffer is empty.4. The system ...

Publication date: 14-06-2018

TECHNOLOGIES FOR MULTI-CORE WIRELESS NETWORK DATA TRANSMISSION

Number: US20180167340A1
Assignee:

Technologies for multi-core wireless data transmission include a computing device having a processor with multiple cores and a wireless network interface controller (NIC). The computing device establishes multiple transmission queues that are each associated with a processor core. A driver receives a packet for transmission from an application in the execution context of the application, determines a current processor core of the execution context, adds metadata to the packet indicative of the current core, and enqueues the packet in the transmission queue associated with the current core. The wireless NIC merges the packet with packet data from the other transmission queues, adds a sequence number to each packet, and transmits each packet. The wireless NIC may determine the current processor core based on the metadata of the packet and raise an interrupt to the current processor core in response to transmitting the packet. Other embodiments are described and claimed. 1. A computing device for wireless data transmission , the computing device comprising:a processor that includes a plurality of processor cores; and allocate a plurality of transmission queues, wherein each transmission queue is associated with a processor core of the plurality of processor cores;', 'receive a packet for transmission from an application, wherein the wireless driver is invoked in an execution context of the application;', 'determine a current processor core of the execution context of the wireless driver in response to receipt of the packet;', 'add metadata to the packet indicative of the current processor core; and', 'enqueue the packet in a first transmission queue in response to adding of the metadata to the packet, wherein the first transmission queue is associated with the current processor core., 'a wireless driver to2. The computing device of claim 1 , wherein each transmission queue of the plurality of transmission queues is further associated with a network peer and a traffic ...
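A C sketch of the per-core queue selection and metadata stamping; sched_getcpu() is the Linux/glibc way to read the current core, and the queue structures are hypothetical:

```c
#define _GNU_SOURCE
#include <sched.h>   /* sched_getcpu(), Linux/glibc */

#define MAX_CORES 64

struct txq { int id; };                 /* one transmission queue per core */
static struct txq txqs[MAX_CORES];

struct pkt_meta { int tx_core; };       /* metadata the driver adds */

/* Called in the sending application's execution context: read the
 * current core, stamp it into the packet metadata, and return that
 * core's transmission queue. The NIC can later use the stamp to raise
 * its completion interrupt on the same core. */
static struct txq *pick_tx_queue(struct pkt_meta *m)
{
    int core = sched_getcpu();
    if (core < 0 || core >= MAX_CORES)
        core = 0;                       /* fallback if the call fails */
    m->tx_core = core;
    return &txqs[core];
}
```

Keeping enqueue, transmit completion, and interrupt handling on the same core avoids cross-core cache traffic, which is the motivation for the per-core queues.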

15-06-2017 publication date

MULTIPLEXING DEVICE AND MULTIPLEXING METHOD

Number: US20170170975A1
Assignee: KABUSHIKI KAISHA TOSHIBA

According to an embodiment, a multiplexing device includes: a packet generating unit which generates one or more third packets based on at least one of one or more first packets and a second packet; a main signal generating unit which generates from the third packets a main signal; an information generating unit which generates transmission multiplexing control information; a slot generating unit which generates a slot by combining the transmission multiplexing control information and the main signal corresponding to the information described in the transmission multiplexing control information having been generated a predetermined number of frames prior to the currently generated transmission multiplexing control information; and a time writing unit which writes a time in the second packet in the main signal included in the generated slot. 1. A multiplexing device comprising:a packet generating unit which generates one or more third packets based on at least one of one or more first packets and a second packet;a main signal generating unit which generates from the one or more third packets a main signal for digital broadcasting including the second packet at a predetermined cycle;an information generating unit which generates transmission multiplexing control information for the digital broadcasting based on information obtained in a process of generating the main signal;a slot generating unit which generates a slot by combining the generated transmission multiplexing control information and the main signal corresponding to the information described in the transmission multiplexing control information having been generated a predetermined number of frames prior to the currently generated transmission multiplexing control information; anda time writing unit which, when the generated slot is being transmitted to a receiving side, writes a time in the second packet included in the main signal included in the generated slot.2. The multiplexing device according to claim ...

15-06-2017 publication date

METHOD AND ELECTRONIC DEVICE FOR DISPLAYING INFORMATION

Number: US20170171376A1
Assignee:

A method for displaying information and an electronic device, wherein the method includes: receiving a new information; acquiring a level of the new information; acquiring a current state of an information display window; and determining a display mode of the new information for displaying the new information according to the level of the new information and the current state of the information display window, wherein the display mode comprises a direct display mode, a waiting in queue mode and a discarding mode. In the embodiments of the disclosure, levels are assigned to information; after a terminal equipment receives a piece of new information, the terminal equipment will no longer directly display the new information, instead, it will determine a display mode based on the information level and the state of the information display window. Thus, some unimportant information may be filtered off while display of important information is guaranteed, and an overall amount of information to be displayed is reduced; thereby it may be avoided that the display of information occupies many system resources, and the processing load on the terminal equipment may be reduced. 1. A method for displaying information applied to an electronic device , comprising:receiving a new information;acquiring a level of the new information;acquiring a current state of an information display window; anddetermining a display mode of the new information for displaying the new information according to the level of the new information and the current state of the information display window, wherein the display mode comprises a direct display mode, a waiting in queue mode and a discarding mode.2. The method according to claim 1 , wherein the level comprises a first level claim 1 , a second level and a third level; and the current state of the information display window comprises a buffer pool state of the information display window;the step of the determining the display mode of the new ...
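
The level-plus-window-state decision could look like the following Python sketch; the concrete thresholds, the meaning of "busy", and the queue limit are assumptions, not taken from the application:

def choose_display_mode(level, window_busy, queue_len, max_queue=8):
    # Hypothetical policy: first-level information is always shown,
    # second-level information waits in the queue when the window is busy,
    # and third-level information is discarded under load.
    if level == 1:
        return "direct"                          # always displayed
    if not window_busy:
        return "direct"                          # window free: show now
    if level == 2 and queue_len < max_queue:
        return "queue"                           # wait in queue mode
    return "discard"                             # filtered off

print(choose_display_mode(3, window_busy=True, queue_len=8))  # -> discard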

30-05-2019 publication date

Method of Operating a Protocol Translator

Number: US20190166232A1
Author: BULLOCK GREGORY
Assignee:

Disclosed is method for operating a protocol translator between an upstream device and a downstream device including receiving, at the protocol translator from the upstream device, a first plurality of packets according to a first protocol, extracting a payload from each of the first plurality of packets according to the first protocol, constructing a message from the extracted payloads, slicing the message into a second plurality of packets according to a second protocol, storing the second plurality of packets in a retransmit queue, sending the second plurality of packets to the downstream device, receiving an acknowledgement from the downstream device, and removing from the retransmit queue, one or more packets identified by the acknowledgement. 1. A method for operating a protocol translator between an upstream device and a downstream device , the method comprising:receiving, at the protocol translator from the upstream device, a first plurality of packets according to a first protocol;extracting a payload from each of the first plurality of packets according to the first protocol;constructing a message from the extracted payloads;slicing the message into a second plurality of packets according to a second protocol;storing the second plurality of packets in a retransmit queue;sending the second plurality of packets to the downstream device;receiving an acknowledgement from the downstream device; andremoving from the retransmit queue, one or more packets identified by the acknowledgement.2. The method of further comprising:leaving an unacknowledged packet in the retransmit queue; andretransmitting the unacknowledged packet to the downstream device.3. The method of further comprising:verifying the first plurality of packets conforms to the first protocol.4. The method of further comprising:filtering from the first plurality of packets a packet not addressed to the downstream device.5. The method of further comprising:maintaining, for the downstream device, a ...
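
A condensed Python sketch of the translate/slice/retransmit cycle described above, assuming dict-based packets and an arbitrary 4-byte downstream MSS; the real protocols, framing and acknowledgement format are not specified by the abstract:

MSS = 4  # bytes per downstream packet (assumed)

def translate(upstream_packets):
    # Reassemble the extracted upstream payloads into one message ...
    message = b"".join(p["payload"] for p in upstream_packets)
    # ... then slice it into downstream packets and keep them for retransmit.
    retransmit_queue = {}
    for seq, off in enumerate(range(0, len(message), MSS)):
        retransmit_queue[seq] = {"seq": seq, "payload": message[off:off + MSS]}
    return retransmit_queue

def handle_ack(retransmit_queue, acked_seqs):
    # Drop acknowledged packets; whatever is left is retransmitted later.
    for seq in acked_seqs:
        retransmit_queue.pop(seq, None)
    return list(retransmit_queue.values())   # unacknowledged -> resend

q = translate([{"payload": b"hello "}, {"payload": b"world"}])
print(handle_ack(q, acked_seqs=[0, 1]))      # only the seq=2 slice remains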

25-06-2015 publication date

METHOD AND AN APPARATUS FOR VIRTUALIZATION OF A QUALITY-OF-SERVICE

Number: US20150180793A1
Assignee: Cavium, Inc.

A method and a system embodying the method for virtualization of a quality of service, comprising associating a packet received at an interface with an aura via an aura identifier; determining configuration parameters for the aura; determining a pool for the aura; determining the state of the pool resources, the resources comprising a level of buffers available in the pool and a level of buffers allocated to the aura; and determining a quality of service for the packet in accordance with the determined state of the pool and the configuration parameters for the aura, is disclosed. 1. A method for virtualization of a quality of service , comprising:associating a packet received at an interface with an aura via an aura identifier;determining configuration parameters for the aura;determining a pool for the aura;determining the state of the pool resources, the resources comprising a level of buffers available in the pool and a level of buffers allocated to the aura; anddetermining a quality of service for the packet in accordance with the determined state of the pool and the configuration parameters for the aura.2. The method as claimed in claim 1 , wherein the determining a quality of service for the packet comprises:comparing the determined level of buffers available in the pool with a first threshold;comparing the determined level of buffers allocated to the aura with a second threshold; andproviding an interrupt when either the determined level of buffers available in the pool crosses the first threshold and/or when the determined level of buffers allocated to the aura crosses the second threshold.3. The method as claimed in claim 2 , further comprising:adding resources to or removing resources from the pool in accordance with the provided interrupt and the direction of crossing the first and/or the second threshold.4. The method as claimed in claim 1 , wherein the determining a quality of service for the packet comprises:comparing the determined level of the buffers ...
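
One way to picture the aura/pool check is the sketch below; the threshold names, levels, and the three-way outcome are illustrative assumptions rather than actual hardware behaviour:

# Names mirror the abstract (aura, pool, thresholds), not any Cavium API.
POOLS = {"pool0": {"free_buffers": 512}}
AURAS = {"aura7": {"pool": "pool0", "allocated": 96,
                   "drop_level": 128, "pass_level": 64}}

def qos_for_packet(aura_id):
    aura = AURAS[aura_id]
    pool = POOLS[aura["pool"]]
    if pool["free_buffers"] == 0 or aura["allocated"] >= aura["drop_level"]:
        return "drop"                      # pool exhausted or aura over limit
    if aura["allocated"] >= aura["pass_level"]:
        return "congestion-marked"         # between thresholds: degraded QoS
    return "accept"

print(qos_for_packet("aura7"))             # -> congestion-marked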

01-07-2021 publication date

PACKET STORAGE BASED ON PACKET PROPERTIES

Number: US20210203622A1
Assignee:

In some examples, a system on chip (SOC) comprises a network switch configured to receive a packet and to identify a flow identifier (ID) corresponding to a header of the packet. The SOC comprises a direct memory access (DMA) controller coupled to the network switch, where the DMA controller is configured to divide the packet into first and second fragments based on the flow ID and to assign a first hardware queue to the first fragment and a second hardware queue to the second fragment, and wherein the DMA controller is further configured to assign memory regions to the first and second fragments based on the first and second hardware queues. The SOC comprises a snoopy cache configured to store the first fragment to the snoopy cache or to memory based on a first cache allocation command, where the first cache allocation command is based on the memory region assigned to the first fragment, where the snoopy cache is further configured to store the second fragment to the snoopy cache or to memory based on a second cache allocation command, and where the second cache allocation command is based on the memory region assigned to the second fragment. 1. A system on chip (SOC) , comprising:a network switch configured to receive a packet and to identify a flow identifier (ID) corresponding to a header of the packet;a direct memory access (DMA) controller coupled to the network switch, the DMA controller configured to divide the packet into first and second fragments based on the flow ID and to assign a first hardware queue to the first fragment and a second hardware queue to the second fragment, the DMA controller further configured to assign memory regions to the first and second fragments based on the first and second hardware queues; anda snoopy cache configured to store the first fragment to the snoopy cache or to memory based on a first cache allocation command, the first cache allocation command based on the memory region assigned to the first fragment, the snoopy ...
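
A toy Python rendering of the flow-based split, with a hypothetical flow-to-queue mapping and a 64-byte fragment boundary standing in for the DMA controller's real rules:

CACHED_REGIONS = {"region_hdr"}          # regions the snoopy cache absorbs

def queues_for_flow(flow_id):
    # Hypothetical: each flow maps to a pair of hardware queues.
    return f"hwq{flow_id}_a", f"hwq{flow_id}_b"

def store_packet(flow_id, packet, split=64):
    q_a, q_b = queues_for_flow(flow_id)
    fragments = [
        {"data": packet[:split], "queue": q_a, "region": "region_hdr"},
        {"data": packet[split:], "queue": q_b, "region": "region_data"},
    ]
    for frag in fragments:
        # The cache allocation command follows from the assigned region.
        target = "cache" if frag["region"] in CACHED_REGIONS else "memory"
        print(f"{frag['queue']}: {len(frag['data'])}B -> {target}")

store_packet(flow_id=3, packet=bytes(256))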

06-06-2019 publication date

Packet descriptor storage in packet memory with cache

Number: US20190173809A1
Author: Dror Bromberg, Rami Zemach
Assignee: Marvell Israel MISL Ltd

A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in an order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate storage of the moved LL elements changing from the second memory device to the first memory device.
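
In Python terms, the head/middle/tail arrangement might be sketched as below; the capacity and refill policy are assumptions made for illustration:

from collections import deque

class SplitFifo:
    def __init__(self, fast_capacity=4):
        self.head = deque()     # fast memory: elements about to be read
        self.middle = deque()   # second memory: bulk of the linked list
        self.fast_capacity = fast_capacity

    def enqueue(self, elem):
        # New elements join the tail; the tail spills to the middle once
        # the fast-memory budget is used up.
        if len(self.head) < self.fast_capacity and not self.middle:
            self.head.append(elem)
        else:
            self.middle.append(elem)

    def dequeue(self):
        elem = self.head.popleft()
        # Move middle elements up before the head runs dry.
        while self.middle and len(self.head) < self.fast_capacity:
            self.head.append(self.middle.popleft())
        return elem

q = SplitFifo()
for i in range(10):
    q.enqueue(i)
print([q.dequeue() for _ in range(10)])   # FIFO order is preserved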

02-07-2015 publication date

Ultra Low Latency Network Buffer Storage

Number: US20150188850A1
Assignee:

Buffer designs and write/read configurations for a buffer in a network device are provided. According to one aspect, a first portion of the packet is written into a first cell of a plurality of cells of a buffer in the network device. Each of the cells has a size that is less than a minimum size of packets received by the network device. The first portion of the packet can be read from the first cell while concurrently writing a second portion of the packet to a second cell. 1. A method comprising:receiving a packet at a port of a network device;writing a first portion of the packet into a first cell of a plurality of cells of a buffer in the network device, wherein each of the plurality of cells are configured to be written to independently; andreading the first portion of the packet from the first cell while concurrently writing a second portion of the packet to a second cell.2. The method of claim 1 , wherein the plurality of cells each have a size that is less than a minimum size of packets received by the network device.3. The method of claim 1 , wherein the plurality of cells each have a size such that latency associated with writing of a packet to the buffer and reading a packet from the buffer is independent of the size of the packet.4. The method of claim 1 , wherein the plurality of cells each have a size such that latency associated with writing of a packet to the buffer and reading a packet from the buffer is independent of port speed.5. The method of claim 1 , wherein receiving comprises receiving packets at a plurality of ports of the network device claim 1 , and writing comprises simultaneously writing portions of packets received at the plurality of ports to different cells of the buffer.6. The method of claim 5 , wherein writing comprises writing with an arbitration scheme in which a number of cells of the buffer are write conflict-free for data of packets arriving at the plurality of ports.7. The method of claim 1 , further comprising:providing a ...
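
A small Python model of the cell-level cut-through behaviour; the 16-byte cell size and dict-as-buffer are assumptions, and real hardware would overlap the writes and reads in time rather than in program order:

CELL = 16

def cells(packet):
    return [packet[i:i + CELL] for i in range(0, len(packet), CELL)]

buffer = {}

def forward(packet):
    written = cells(packet)
    out = bytearray()
    for i, cell in enumerate(written):
        buffer[i] = cell                  # write cell i ...
        if i > 0:
            out += buffer[i - 1]          # ... while reading cell i-1
    out += buffer[len(written) - 1]       # drain the final cell
    return bytes(out)

pkt = bytes(range(64))
assert forward(pkt) == pkt                # packet survives the cell relay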

02-07-2015 publication date

Parallel information system utilizing flow control and virtual channels

Number: US20150188987A1
Assignee: Interactic Holdings LLC

Embodiments of a data handling apparatus can include a network interface controller configured to interface a processing node to a network. The network interface controller can include a network interface, a register interface, a processing node interface, and logic. The network interface can include lines coupled to the network for communicating data on the network. The register interface can include lines coupled to multiple registers. The processing node interface can include at least one line coupled to the processing node for communicating data with a local processor local to the processing node wherein the local processor can read data to and write data from the registers. The logic can receive packets including a header and a payload from the network and can insert the packets into the registers as indicated by the header.

30-06-2016 publication date

DISASTER RECOVERY OF MOBILE DATA CENTER VIA LOCATION-AWARE CLOUD CACHING

Number: US20160188689A1
Author: SINGH Rajesh
Assignee:

A method for copying first data stored at a primary data center to a secondary data center is provided. The method includes initiating a first replication task to copy the first data from the primary data center to the secondary data center. The method also includes receiving a first portion of the first data from the primary data center via a first access point, wherein a first bandwidth between the primary data center and the first access point is greater than a second bandwidth between the primary data center and the secondary data center. The method further includes storing the first portion of data in a first cache associated with the first access point. The method also includes transmitting the first portion of data from the first cache to the secondary data center. A system and non-transitory computer-readable medium are also provided. 1. A method for copying first data stored at a primary data center to a secondary data center , the method comprising:initiating a first replication task to copy the first data from the primary data center to the secondary data center;receiving a first portion of the first data from the primary data center via a first access point, wherein a first bandwidth between the primary data center and the first access point is greater than a second bandwidth between the primary data center and the secondary data center;storing the first portion of data in a first cache associated with the first access point; andtransmitting the first portion of data from the first cache to the secondary data center.2. The method of claim 1 , further comprising:determining that the first replication task is not complete; andwaiting to receive a second portion of the first data from the primary data center before transmitting the first portion of data from the first cache to the secondary data center.3. The method of claim 2 , further comprising:determining that a geographical location of the primary data center has changed while the first replication ...
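
The store-and-forward path can be pictured with the following sketch; the class names and the chunked replication loop are illustrative only:

from collections import deque

class AccessPoint:
    def __init__(self, name):
        self.name = name
        self.cache = deque()

    def receive(self, chunk):
        self.cache.append(chunk)          # fast link: primary DC -> AP cache

    def drain(self, secondary):
        while self.cache:
            secondary.append(self.cache.popleft())   # slow link: AP -> DR site

secondary_dc = []
ap1 = AccessPoint("ap1")
for part in [b"part-0", b"part-1"]:       # replication task, chunk by chunk
    ap1.receive(part)
ap1.drain(secondary_dc)
print(secondary_dc)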

29-06-2017 publication date

Technologies for inline network traffic performance tracing

Number: US20170187587A1
Assignee:

Technologies for tracing network performance in a high performance computing (HPC) network include a network computing device configured to receive a network packet from a source endpoint node and store the header and trace data of the received network packet to a trace buffer of the network computing device. The network computing device is further configured to retrieve updated trace data from the trace buffer and update the trace data portion of the network packet to include the retrieved updated trace data from the trace buffer. Additionally, the network computing device is configured to transmit the updated network packet to a target endpoint node, in which the trace data of the updated network packet is usable by the target endpoint node to determine inline performance of the network relative to a flow of the network packet. Other embodiments are described and claimed herein. 1. A network computing device for tracing network performance , the network computing device comprising:one or more processors; and receive a network packet generated by a source endpoint node, wherein the network packet includes a header and a trace data portion;', 'generate trace data corresponding to the received network packet;', 'update the trace data portion of the network packet to include the trace data generated for the received network packet; and', 'transmit the updated network packet towards a target endpoint node, wherein the updated network packet is usable to determine one or more network performance characteristics., 'one or more data storage devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the network computing device to2. The network computing device of claim 1 , wherein the plurality of instructions further cause the network computing device to:extract at least a portion of the trace data from the trace data portion of the network packet;store the extracted portion of trace data to a trace buffer of the ...
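
A compact sketch of per-hop trace stamping, with invented record fields (node, queue_ns) standing in for whatever trace format the application contemplates:

import time

def stamp_trace(packet, node_id, ingress_ns, egress_ns):
    # Each hop appends its own record to the packet's trace data portion,
    # so the target endpoint can reconstruct the path and per-hop latency.
    packet.setdefault("trace", []).append({
        "node": node_id,
        "queue_ns": egress_ns - ingress_ns,
    })
    return packet

pkt = {"header": {"dst": "endpoint-b"}, "payload": b"..."}
for hop in ["sw1", "sw2", "sw3"]:
    t0 = time.monotonic_ns()
    t1 = t0 + 1500                             # pretend forwarding delay
    pkt = stamp_trace(pkt, hop, t0, t1)
print([rec["node"] for rec in pkt["trace"]])   # path seen by the target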

13-06-2019 publication date

METHODS AND APPARATUSES FOR DYNAMIC RESOURCE AND SCHEDULE MANAGEMENT IN TIME SLOTTED CHANNEL HOPPING NETWORKS

Number: US20190182854A1
Assignee:

The present application is at least directed to an apparatus operating on a network. The apparatus includes a non-transitory memory including an interface queue designated for a neighboring device and having instructions stored thereon for enqueuing a received packet. The apparatus also includes a processor, operably coupled to the non-transitory memory, configured to perform a set of instructions. The instructions include receiving the packet in a cell from the neighboring device. The instructions also include checking whether a track ID is in the received packet. The instructions also include checking a table stored in the memory to find a next hop address. Further, the instructions include inserting the packet into a subqueue of the interface queue. The application is also directed to a computer-implemented apparatus configured to dequeue a packet. The application is also directed to a computer-implemented apparatus configured to adjust a bundle of a device. The application is further directed to a computer-implemented apparatus configured to process a bundle adjustment request from a device. 1. An apparatus operating on a network comprising: a non-transitory memory including an interface queue that stores a packet for a neighbor device, the interface queue having subqueues including a high priority subqueue, a track subqueue, and a best effort subqueue; and a processor, operably coupled to the non-transitory memory, configured to perform the instructions of determining which of the subqueues to store the packet. 2. The apparatus of claim 1, wherein the track subqueue includes an allocated queue with a maximum size equal to the number of cells reserved on a track in the network. 3. The apparatus of claim 2, wherein the track subqueue includes an overflow queue to hold the packet when the allocated queue is full. 4. An apparatus operating on a network comprising: a non-transitory memory including an interface queue designated for a neighboring device and having ...
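
The enqueue rule for the three subqueues might look like this sketch; the "priority" and "track_id" fields and the sizing are assumptions:

from collections import deque

class InterfaceQueue:
    def __init__(self, reserved_cells):
        self.high = deque()
        self.track = deque()                  # max size = reserved track cells
        self.track_overflow = deque()         # holds spill when track is full
        self.best_effort = deque()
        self.reserved_cells = reserved_cells

    def enqueue(self, pkt):
        if pkt.get("priority") == "high":
            self.high.append(pkt)
        elif "track_id" in pkt:
            if len(self.track) < self.reserved_cells:
                self.track.append(pkt)
            else:
                self.track_overflow.append(pkt)
        else:
            self.best_effort.append(pkt)

q = InterfaceQueue(reserved_cells=2)
for p in [{"track_id": 5}, {"track_id": 5}, {"track_id": 5}, {"seq": 1}]:
    q.enqueue(p)
print(len(q.track), len(q.track_overflow), len(q.best_effort))  # 2 1 1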

04-06-2020 publication date

USE OF STASHING BUFFERS TO IMPROVE THE EFFICIENCY OF CROSSBAR SWITCHES

Number: US20200177521A1
Assignee: NVIDIA Corp.

A switch architecture enables ports to stash packets in unused buffers on other ports, exploiting excess internal bandwidth that may exist, for example, in a tiled switch. This architecture leverages unused port buffer memory to improve features such as congestion handling and error recovery. 1. A crossbar switch comprising:a switching fabric; anda plurality of stash partitions coupled to the switching fabric, the stash partitions forming a stash storage pool from both of input buffers and output buffers of the crossbar switch.2. The switch of claim 1 , further comprising one or more storage virtual channels coupled to the stash partitions.3. The switch of claim 2 , the storage virtual channels coupling the input buffers to the stash partitions.4. The switch of claim 1 , further comprising logic to select packets from either the input buffers or from the stash partitions to rows of the switching fabric.5. The switch of claim 1 , further comprising one or more retrieval virtual channels.6. The switch of claim 5 , the retrieval virtual channel coupling the stash partitions to the output buffers.7. The switch of claim 1 , further comprising logic to implement one or more storage virtual channels and one or more retrieval virtual channels on each column of the switching fabric.8. The switch of claim 1 , further comprising logic to route packets to the stash partitions based on a join-shortest-queue algorithm.9. The switch of claim 1 , further comprising logic to route packets to the stash partitions based on a credit-based flow control algorithm.10. The switch of claim 1 , wherein the switch is a crossbar switch.11. The switch of claim 1 , wherein the switch is part of a dragonfly network.12. A crossbar switch comprising:a plurality of input ports and output ports; anda switching fabric coupling the input ports to the output ports; anda plurality of stash partitions in a packet recirculating data path interposed between the input ports and the output ports through the ...
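
The claims name a join-shortest-queue policy for routing packets to stash partitions; in sketch form, with a dict of lists standing in for per-port stash memory:

def stash(packet, stash_partitions):
    # A congested port parks a packet in whichever port's unused buffer
    # currently holds the least stashed data; it is retrieved later over
    # a retrieval virtual channel.
    target = min(stash_partitions, key=lambda p: len(stash_partitions[p]))
    stash_partitions[target].append(packet)
    return target

partitions = {"port0": [], "port1": ["p"], "port2": ["p", "q"]}
print(stash({"id": 42}, partitions))    # -> port0 (the shortest queue)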

15-07-2021 publication date

Vehicular micro clouds for on-demand vehicle queue analysis

Number: US20210218692A1

The disclosure includes embodiments for a connected vehicle to form a vehicular micro cloud. In some embodiments, a method includes determining, by an onboard vehicle computer, that a queue is present in a roadway environment and that a vehicle that includes the onboard vehicle computer is present in the queue. The method includes causing a set of member vehicles to form a vehicular micro cloud in the roadway environment responsive to determining that the queue is present in the roadway environment so that determining that the queue is present triggers a formation of the vehicular micro cloud, where the vehicular micro cloud includes a set of vehicles which each share all of their unused vehicular computing resources with one another to generate a pool of vehicular computing resources that exceeds a total vehicular computing resources of any single member vehicle and is used to benefit the set of member vehicles.

18-06-2020 publication date

REAL-TIME ON-CHIP DATA TRANSFER SYSTEM

Number: US20200195589A1
Assignee:

A system and method for real-time data transfer on a system-on-chip (SoC) allows MIPI-CSI (camera serial interface) data received on a first interface to be output on another MIPI-CSI interface without using system memory or delaying the loopback path. The system includes a CSI receiver, a loopback buffer, and a CSI transmitter. The loopback buffer is used for the data transfer between the CSI receiver and the CSI transmitter. The CSI transmitter receives a payload included in a data packet from the CSI receiver by way of the loopback buffer. The CSI receiver communicates a packet header of the data packet to the CSI transmitter. The CSI transmitter reads the payload from the loopback buffer based on the packet header and at least one of a buffer threshold capacity and payload size. 1. A system-on-chip (SoC) having two or more high-speed serial interfaces , the SoC comprising:a camera serial interface (CSI) receiver that receives sensor data via a first serial interface and generates a data packet that includes a payload;a loopback buffer connected to the CSI receiver, wherein the CSI receiver writes the payload into the loopback buffer; anda CSI transmitter connected between a second serial interface and the loopback buffer, wherein the CSI transmitter reads the payload from the loopback buffer when at least one of a threshold capacity of the loopback buffer is reached and the payload is received completely by the loopback buffer, and transmits the read payload over the second serial interface.2. The SoC of claim 1 , wherein the threshold capacity is based on a depth of the loopback buffer.3. The SoC of claim 1 , wherein the data packet further includes a packet header.4. The SoC of claim 3 , wherein the CSI receiver comprises:a first digital physical layer (DPHY) connected to an external camera sensor by way of the first interface for receiving the sensor data and a first clock signal, wherein the first DPHY outputs the sensor data, and generates active, sync, and ...
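
The read-start condition can be sketched as a single predicate; the 128-byte threshold and the argument names are invented for illustration:

def tx_may_start(buffer_fill, payload_size, payload_complete, threshold):
    # Start transmitting once the loopback buffer crosses the threshold,
    # or earlier if the whole (possibly small) payload has already landed,
    # so the path never detours through system memory.
    return payload_complete or buffer_fill >= min(threshold, payload_size)

print(tx_may_start(32, 4096, False, threshold=128))    # False: keep filling
print(tx_may_start(128, 4096, False, threshold=128))   # True: threshold hit
print(tx_may_start(32, 32, True, threshold=128))       # True: payload done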

29-07-2021 publication date

METHODS AND SYSTEMS FOR DATA TRANSMISSION

Number: US20210234808A1
Author: WU Huimin
Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.

A method for data transmission may be implemented on an electronic device having one or more processors. The one or more processors may include a master queue including a master queue head and a plurality of primary ports that are connected to each other using a serial link. The method may include operating the master queue head to obtain a message. The method may also include operating the master queue head to segment the message into a plurality of segments. The method may also include operating the master queue head to transmit the plurality of segments to a first primary port of the plurality of primary ports in the master queue. The method may also include operating the first primary port to transmit the plurality of segments to a second primary port of the plurality of primary ports in the master queue. 1. A method for data transmission implemented on an electronic device having one or more processors , comprising:receiving a message to be transmitted;determining a size of the message and a size of a message head of the message;storing the message in a total storage space, a size of the total storage space being greater than or equal to a sum of the size of the message and the size of the message head of the message, a tail of the message being aligned with a tail of the total storage space;determining whether the size of the message is greater than a message segment size (MSS);segmenting the message into a plurality of segments in response to a determination that the size of the message is greater than the MSS, a size of each of the plurality of segments being less than or equal to the MSS;determining a sequence number for each of the plurality of segments; anddividing the plurality of segments into two or more data groups, each of the two or more data groups including at least two of the plurality of segments, the sequence numbers of any two of the segments included in each of the two or more data groups being not adjacent.2. The method of claim 1 , further ...
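
A Python sketch of the segmentation and non-adjacent grouping; the 4-byte MSS and the even/odd stride-2 grouping are assumptions, since the claims only require that no group hold adjacent sequence numbers:

MSS = 4

def segment(message):
    # Slice the message at the MSS and number the segments.
    return [{"seq": i, "data": message[off:off + MSS]}
            for i, off in enumerate(range(0, len(message), MSS))]

def group_non_adjacent(segments):
    # Two groups, stride 2: no group contains consecutive sequence numbers.
    return [segments[0::2], segments[1::2]]

segs = segment(b"abcdefghij")
for group in group_non_adjacent(segs):
    print([s["seq"] for s in group])     # [0, 2] and [1]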

25-06-2020 publication date

PACKET PROCESSING SYSTEM, METHOD AND DEVICE HAVING REDUCED STATIC POWER CONSUMPTION

Number: US20200204504A1
Author: Musoll Enrique
Assignee:

A buffer logic unit of a packet processing device including a power gate controller. The buffer logic unit for organizing and/or allocating available pages to packets for storing the packet data based on which of a plurality of separately accessible physical memories that pages are associated with. As a result, the power gate controller is able to more efficiently cut off power from one or more of the physical memories. 1. A packet processing system comprising:a non-transitory computer-readable packet memory comprising a plurality of physical memory units logically divided into a plurality of pages such that each of the pages is associated with a separate portion of one or more of the physical memory units;a non-transitory computer-readable buffer memory comprising one or more page buffers, wherein each of the page buffers is filled with one or more of the pages; and allocate a page of the plurality of pages that was last added within one of the page buffers to store the portion of the packet data;', 'identify the allocated page as in use while the portion of the packet data is stored on the portion of the physical memory units associated with the allocated page; and', 'identify the allocated page as available when the portion of the packet data is no longer stored on the physical memory units associated with the allocated page., 'a buffer memory logic coupled with the buffer memory, wherein for each portion of packet data that needs to be stored, the buffer memory logic is configured to2. The system of claim 1 , wherein packet data of incoming packets is stored on the physical memory units at the separate portions of the memory units based on the pages.3. The system of claim 2 , wherein the buffer memory logic initially fills each of the page buffers with the pages such that the pages are grouped according to the portion of the plurality of the physical memory units associated with the pages.4. The system of claim 3 , further comprising a power gate controller ...
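
The last-added-first page allocation that enables power gating can be pictured like this; the unit names and pool sizes are invented:

free_pages = [("mem0", p) for p in range(4)] + [("mem1", p) for p in range(4)]
in_use = set()

def alloc_page():
    # Hand out the page added last, so allocations concentrate in as few
    # physical memory units as possible.
    page = free_pages.pop()
    in_use.add(page)
    return page

def free_page(page):
    in_use.discard(page)
    free_pages.append(page)

def gateable_units():
    # A unit with no in-use pages can have its power cut off.
    active = {unit for unit, _ in in_use}
    return {unit for unit, _ in free_pages} - active

a = alloc_page(); b = alloc_page()   # both land in mem1
print(a, b, gateable_units())        # mem0 holds no live pages -> gateable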

25-06-2020 publication date

DE-DUPLICATING REMOTE PROCEDURE CALLS

Number: US20200204650A1
Assignee:

A method, computer program product, and a computing system are provided for de-duplicating remote procedure calls at a client. In an implementation, the method may include generating a plurality of local pending remote procedure calls. The method may also include identifying a set of duplicate remote procedure calls among the plurality of remote procedure calls. The method may also include associating each remote procedure call within the set of duplicate remote procedure calls with one another. The method may also include executing a remote procedure call of the set of duplicate remote procedure calls. The method may further include providing a response for the remote procedure call of the set of duplicate remote procedure calls with the other remote procedure calls of the set of duplicate remote procedure calls. 1. A computer-implemented method comprising:generating, on a processor, a plurality of local pending remote procedure calls;identifying, on the processor, a set of duplicate remote procedure calls among the plurality of remote procedure calls, wherein at least one of the remote procedure calls of the set of duplicate remote procedure calls is a foreground request and at least one of the remote procedure calls of the set of duplicate remote procedure calls is a background request;associating, on the processor, each remote procedure call within the set of duplicate remote procedure calls with one another; 'wherein at least one of the foreground request is given priority over the background request even in the event where the background request was added earlier to the queue, and the background request is given priority over the foreground request even in the event where the foreground request was added earlier to the queue; and', 'executing, on the processor, a remote procedure call of the set of duplicate remote procedure calls,'}providing, on the processor, a response for the remote procedure call of the set of duplicate remote procedure calls to the other ...
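
A client-side sketch of the de-duplication flow, assuming calls are keyed by (method, args) and callbacks stand in for the callers awaiting a response:

pending = {}    # (method, args) -> callbacks of all duplicate callers

def submit(method, args, on_response):
    # Generate a local pending call; duplicates attach to the first one.
    pending.setdefault((method, args), []).append(on_response)

def run_all(execute):
    # Execute each distinct call once and fan its response out to every
    # duplicate that piled up behind it.
    for (method, args), callbacks in pending.items():
        result = execute(method, args)
        for cb in callbacks:
            cb(result)
    pending.clear()

submit("stat", ("/tmp",), lambda r: print("fg:", r))
submit("stat", ("/tmp",), lambda r: print("bg:", r))   # duplicate call
run_all(lambda m, a: {"ok": True})                     # one remote call total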

04-07-2019 publication date

SWITCH AND DATA ACCESSING METHOD THEREOF

Number: US20190207874A1
Assignee:

A data accessing method of a switch for transmitting data packets between a first source node and a first target node and between a second source node and a second target node includes: transmitting a data packet to the switch via at least one of the first communication link and the third communication link and configuring the control unit to store information contained in the data packet into the storage unit; and retrieving the information contained in the data packet from the storage unit via at least one of the second communication link and the fourth communication link. The first source node, the second source node, the first target node and the second target node share the same storage blocks. 1. A data accessing method of a switch for transmitting data packets between a first source node and a first target node and between a second source node and a second target node , and the data accessing method comprising:transmitting a data packet to the switch via at least one of the first communication link and the third communication link and configuring the control unit to store information contained in the data packet into the storage unit; andretrieving the information contained in the data packet from the storage unit via at least one of the second communication link and the fourth communication link,wherein the first source node, the second source node, the first target node and the second target node share the same storage blocks,wherein the switch comprises the storage unit, the control unit, a first port, a second port, a third port and a fourth port, the first communication link is established between the first source node and the control unit via the first port, the second communication link is established between the first target node and the control unit via the second port, the third communication link is established between the second source node and the control unit via the third port, and the fourth communication link being established between the ...

03-08-2017 publication date

METHOD, SERVER AND BASEBOARD MANAGEMENT CONTROLLER FOR INTERRUPTING A PACKET STORM

Number: US20170222955A1
Author: KUO Ming-I
Assignee:

A method for interrupting a packet storm in a server is implemented by a baseboard management controller (BMC) included in the server and includes the steps of: assigning a setting value included in firmware of the BMC to a first value so as to enable receipt of specific packets from a network, the specific packets being transmitted using a specific routing scheme; determining whether a packet storm has occurred according to a number of the specific packets that are received; and assigning the setting value to a second value so as to disable receipt of the specific packets when it is determined that the packet storm has occurred. 1. A method for interrupting a packet storm in a server , the method to be implemented by a baseboard management controller (BMC) included in the server and comprising the steps of:a) assigning a setting value included in firmware of the BMC regarding allowance for receipt of specific packets to a first value so as to enable receipt of specific packets from a network, the specific packets being transmitted using a specific routing scheme;b) determining whether a packet storm has occurred according to a number of the specific packets that are received after step a); andc) assigning the setting value to a second value so as to disable receipt of the specific packets when it is determined that the packet storm has occurred.2. The method of claim 1 , the BMC including a queue buffer claim 1 , wherein said method further comprises claim 1 , after step a) claim 1 , the step of storing network packets received by the server in the queue buffer.3. The method of claim 2 , wherein step b) includesidentifying the specific packets from the network packets which are stored in the queue buffer, based on an identification code included in each of the network packets;calculating a total number of the specific packets received within a predetermined time period; andwhen it is determined that the total number of the specific packets received within the ...
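
The storm detector might reduce to a windowed counter like the sketch below; the 1000-packets-per-second limit is an invented example, not a value from the application:

ENABLE, DISABLE = 1, 0
setting = ENABLE
window = []                     # arrival timestamps of specific packets

def on_specific_packet(now, limit=1000, period=1.0):
    global setting
    if setting != ENABLE:
        return                  # receipt already disabled by the firmware
    window.append(now)
    while window and window[0] < now - period:
        window.pop(0)           # keep only the last `period` seconds
    if len(window) > limit:     # storm detected
        setting = DISABLE       # flip the setting: stop receiving them

for t in range(1200):
    on_specific_packet(now=0.0005 * t)
print("receipt enabled?", setting == ENABLE)   # False after the burst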

16-08-2018 publication date

DISASTER RECOVERY OF MOBILE DATA CENTER VIA LOCATION-AWARE CLOUD CACHING

Number: US20180232429A1
Author: SINGH Rajesh
Assignee: VMWARE, INC.

A method for copying first data stored at a primary data center to a secondary data center is provided. The method includes initiating a first replication task to copy the first data from the primary data center to the secondary data center. The method also includes receiving a first portion of the first data from the primary data center via a first access point, wherein a first bandwidth between the primary data center and the first access point is greater than a second bandwidth between the primary data center and the secondary data center. The method further includes storing the first portion of data in a first cache associated with the first access point. The method also includes transmitting the first portion of data from the first cache to the secondary data center. A system and non-transitory computer-readable medium are also provided. 1. A method for copying data stored at a mobile data center to a secondary data center , the secondary data center located within a distributed computing system that includes multiple access points , the method comprising:at a location-aware agent within the distributed computing system, identifying a first access point to receive data for copying from the mobile data center to the secondary data center;beginning to copy the data from the mobile data center to the distributed computing system via the first access point;at the location-aware agent within the distributed computing system, determining that a geographical location of the mobile data center has changed during the copying and in response to determining that the geographical location of the mobile data center has changed during the copying, selecting a second access point to receive data for copying from the mobile data center to the secondary data center; andcontinuing to copy the data from the mobile data center to the secondary data center via the second access point.2. The method of claim 1 , wherein the second access point is selected based on the ability of an ...

18-08-2016 publication date

Communication Nodes, Methods Therein, Computer Programs and a Computer-Readable Storage Medium

Number: US20160241483A1
Assignee:

Embodiments herein relate to a method in a first communication node (′) for transmitting a packet in a first packet network operated by a first network operator towards a destination node (). The first communication node (′) is comprised in the first packet network. The first communication node (′) receives a packet with a first value related to resource sharing in a second packet network operated by a second network operator, wherein the first value indicates a level of importance of the packet relative importance of another packet along a scale of the second packet network. The first communication node (′) remarks the packet with a second value related to resource sharing in the first packet network, wherein the second value indicates a level of importance of the packet relative importance of another packet along a scale of the first packet network. The first communication node (′) transmits, over the first packet network, the remarked packet towards the destination node (). 123-. (canceled)24. A method in a first communication node for transmitting a packet in a first packet network operated by a first network operator towards a destination node , wherein the first communication node is part of the first packet network , the method comprising:receiving a packet with a first value related to resource sharing in a second packet network operated by a second network operator, wherein the first value indicates a level of importance of the packet relative to importance of another packet, along a scale of the second packet network;remarking the packet with a second value related to resource sharing in the first packet network, wherein the second value indicates a level of importance of the packet relative to importance of another packet, along a scale of the first packet network; andtransmitting, over the first packet network, the remarked packet towards the destination node.25. The method of claim 24 , further comprising encapsulating the first value in the packet or a ...
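
The remarking step can be pictured as a rescaling between the two operators' importance scales; the linear mapping below is an assumption, since the application does not fix how values translate:

def remark(value, src_scale_max, dst_scale_max):
    # Map a value on [0, src_scale_max] onto [0, dst_scale_max],
    # preserving relative importance across the operator boundary.
    return round(value * dst_scale_max / src_scale_max)

incoming = {"payload": b"...", "importance": 12}   # operator B: scale 0..15
incoming["importance"] = remark(12, src_scale_max=15, dst_scale_max=63)
print(incoming["importance"])                      # 50 on operator A's 0..63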

25-08-2016 publication date

APPARATUS AND METHOD FOR USE IN A SPACEWIRE-BASED NETWORK

Number: US20160248705A9
Assignee: ASTRIUM LIMITED

An apparatus for use in a SpaceWire-based network is configured to send and receive data packets, and process data included in a received data packet. A header of the received data packet is stored in a buffer whilst the data is being processed, a processed data packet including the stored header and the processed data is generated, and the processed data packet is transmitted. The header of the received data packet may be modified, and the modified header attached to the processed data to generate the processed data packet. When the data packet is received via a first port, the processed data packet may be transmitted via the first port, or may be transmitted via a second port. 1. Apparatus for use in a SpaceWire-based network , the apparatus comprising:an input-output IO module configured to send and receive data packets;a processing module configured to process data included in a received data packet; anda buffer for storing a header of the received data packet whilst the data is being processed by the processing module,wherein the apparatus is configured to generate a processed data packet including the stored header and the processed data, and transmit the processed data packet.2. The apparatus of claim 1 , wherein the apparatus is further configured to modify the header of the received data packet and attach the modified header to the processed data to generate the processed data packet.3. The apparatus of claim 1 , wherein the IO module is configured to receive the received data packet via a first port and transmit the processed data packet via the first port.4. The apparatus of claim 1 , wherein the IO module is configured to receive the received data packet via a first port claim 1 ,wherein the apparatus is configured to select a second port from a plurality of available ports based on address information included in the header of the received data packet, andwherein the IO module is configured to transmit the processed data packet via the second port.5. ...
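
In sketch form, the header-parking flow might look like this; field names such as addr and length, and routing by address modulo port count, are illustrative assumptions:

def handle(packet, process, ports):
    header, data = packet["header"], packet["data"]   # split on receipt
    stashed = dict(header)                            # buffer the header
    result = process(data)                            # payload processing
    stashed["length"] = len(result)                   # modify the header
    out = {"header": stashed, "data": result}         # re-attach and emit
    return ports[stashed["addr"] % len(ports)], out   # route by address

port, out = handle({"header": {"addr": 5, "length": 3}, "data": b"abc"},
                   process=bytes.upper, ports=["p0", "p1"])
print(port, out["header"]["length"])                  # p1 3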

01-08-2019 publication date

TRANSMITTING CREDITS BETWEEN ACCOUNTING CHANNELS

Number: US20190238485A1
Author: Seely Jonathan M.
Assignee:

Example implementations relate to transmitting credits between accounting channels. A first number of credits may be transmitted to a source accounting buffer over a first accounting channel that is inactive. A second accounting channel may be inactivated and the first accounting channel may be activated. Any remaining credits received via the second accounting channel may be transmitted from the source accounting buffer to a destination accounting buffer. 1. A method comprising:transmitting a first number of credits to a source accounting buffer over a first accounting channel that is inactive, wherein each credit defines an amount of data available to a source to transmit to a destination memory, the source accounting buffer to track credits available to the source and a destination accounting buffer to track credits available to the destination memory;inactivating a second accounting channel for transmitting credits between the source accounting buffer and the destination accounting buffer;activating the first accounting channel for the transmission of credits between the source accounting buffer and the destination accounting buffer; andtransmitting any remaining credits received via the second accounting channel from the source accounting buffer to the destination accounting buffer.2. The method of claim 1 , further comprising adding the remaining credits transmitted from the source accounting buffer to a credit pool of the destination accounting buffer.3. The method of claim 2 , further comprising comparing a number of credits initially allocated to the source through the second accounting channel to a number of the remaining credits transmitted claim 2 , and adding a number of credits to the credit pool equal to the difference between the number of credits initially allocated to the source over the second accounting channel and the number of remaining credits transmitted.4. The method of claim 1 , further comprising waiting a predetermined period of time for ...

09-09-2021 publication date

COMMUNICATION APPARATUS, SYSTEM, ROLLBACK METHOD, AND NON-TRANSITORY MEDIUM

Number: US20210281482A1
Assignee: NEC Corporation

A communication apparatus comprises a rollback control unit to create a second process to roll back a currently working first process thereto; a storage to store states shared by the first and the second processes, the second process taking over a state(s) stored in the storage unit; a buffer; and a timing control unit that controls of timing of rollback. The rollback control unit starts event buffering to store in the buffer all of an event(s) received during when the first process is processing and destined to the first process, and upon completion of the processing of the event by the first process, the rollback control unit performs switching of a working process from the first process to the second process, sends the event(s) stored therein from start of the event buffering to the second process and stop event buffering. 1. A communication apparatus , comprising:a rollback control unit that creates a second process to roll back a currently working first process thereto;a storage unit that stores one or more states shared by the first process and the second process, the storage unit enabling the second process to take over the one or more states stored therein;a buffer; anda timing control unit configured to control timing of rollback,wherein the rollback control unit controls to start event buffering such that the buffer is set to store all of one or more events received and destined to the first process, when the first process is processing an event during the rollback,wherein, under a control of the timing control unit, upon completion of the processing of the event by the first process, the rollback control unit performs switching of a working process from the first process to the second process, andthe rollback control unit controls to send the all of one or more events stored from start of the event buffering in the buffer to the second process switched from the first process and to stop the event buffering.2. The communication apparatus according to claim ...
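
A minimal sketch of the buffered switch-over, with a callable standing in for each process and a dict for the shared state; the timing-control logic is simplified away:

from collections import deque

class RollbackController:
    def __init__(self, first, shared_state):
        self.working = first
        self.shared_state = shared_state    # states both processes share
        self.buffer = deque()
        self.buffering = False

    def on_event(self, event):
        if self.buffering:
            self.buffer.append(event)       # park events during the rollback
        else:
            self.working(event, self.shared_state)

    def rollback_to(self, second):
        self.buffering = True               # start event buffering
        # ... the first process finishes its in-flight event here ...
        self.working = second               # switch the working process
        while self.buffer:                  # replay parked events in order
            self.working(self.buffer.popleft(), self.shared_state)
        self.buffering = False              # stop event buffering

state = {"count": 0}
def proc(event, st): st["count"] += 1
ctl = RollbackController(proc, state)
ctl.buffering = True                        # simulate: rollback in progress
ctl.on_event("e1"); ctl.on_event("e2")      # both events get buffered
ctl.rollback_to(proc)
print(state["count"])                       # 2: nothing was lost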

30-07-2020 publication date

MULTI-PORT QUEUE GROUP SYSTEM

Number: US20200244601A1
Assignee:

A multi-port queue group system includes a Network Processing Unit (NPU) coupled to ingress port(s) and an egress port group having a first egress port and a second egress port. The NPU includes an egress queue group having a first egress queue associated with the first egress port and a second egress queue associated with the second egress port. The NPU receives data packets that are each directed to the egress port group via the ingress port(s), and buffers a first subset of the data packets in the first egress queue included in the egress queue group, and a second subset of the data packets in the second egress queue included in the egress queue group. The NPU then transmits at least one of the data packets via at least one of the first egress port and the second egress port included in the egress port group. 1. A multi-port queue group system, comprising: at least one ingress port; an egress port group including a first egress port and a second egress port; and a Network Processing Unit (NPU) that is coupled to the at least one ingress port and each of the first egress port and the second egress port, wherein the NPU includes an egress queue group including a first egress queue associated with the first egress port and a second egress queue associated with the second egress port, and a packet processing engine that is configured to: receive, via the at least one ingress port, a plurality of data packets; determine that each of the plurality of data packets are directed to the egress port group; buffer a first subset of the plurality of data packets in the first egress queue included in the egress queue group; buffer a second subset of the plurality of data packets in the second egress queue included in the egress queue group; and transmit at least one of the plurality of data packets via at least one of the first egress port and the second egress port included in the egress port group. 2. The system of claim 1, wherein the NPU is configured to ...

15-08-2019 publication date

De-duplicating remote procedure calls

Number: US20190253522A1
Assignee: International Business Machines Corp

A method, computer program product, and a computing system are provided for de-duplicating remote procedure calls at a client. In an implementation, the method may include generating a plurality of local pending remote procedure calls. The method may also include identifying a set of duplicate remote procedure calls among the plurality of remote procedure calls. The method may also include associating each remote procedure call within the set of duplicate remote procedure calls with one another. The method may also include executing a remote procedure call of the set of duplicate remote procedure calls. The method may further include providing a response for the remote procedure call of the set of duplicate remote procedure calls with the other remote procedure calls of the set of duplicate remote procedure calls.

13-09-2018 publication date

PROCESSING PACKETS ACCORDING TO HIERARCHY OF FLOW ENTRY STORAGES

Number: US20180262434A1
Assignee:

Some embodiments provide a method for processing a packet received by a managed forwarding element. The method performs a series of packet classification operations based on header values of the received packet. The packet classifications operations determine a next destination of the received packet. When the series of packet classification operations specifies to send the packet to a network service that performs payload transformations on the packet, the method (1) assigns a service operation identifier to the packet that identifies the service operations for the network service to perform on the packet, (2) sends the packet to the network service with the service operation identifier, and (3) stores a cache entry for processing subsequent packets without the series of packet classification operations. The cache entry includes the assigned service operation identifier. The network service uses the assigned service operation identifier to process packets without performing its own classification operations. 120-. (canceled)21. A non-transitory machine readable medium storing a program for execution by at least one hardware processing unit , the program for implementing a managed forwarding element , the program comprising sets of instructions for:storing, in a flow-entry first storage, a first set of flow entries provided by a network controller;after processing a first packet by reference to a first flow entry in the first set of flow entries stored in the flow-entry first storage, generating second and third flow entries for processing packets sharing first and second sets of attributes with the first packet, storing the second flow entry with a second set of flow entries in an aggregate-cache second storage, and storing the third flow entry with a third set of flow entries in an exact-match third storage; andprocessing a subsequent, second packet by first examining the exact-match third storage, then examining the aggregate-cache second storage, and then ...
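
The tiered lookup can be sketched as follows; the cache keys and the stand-in classifier are assumptions, chosen only to show how a miss in the upper tiers seeds them for later packets:

exact_cache, aggregate_cache = {}, {}

def classify_slow(packet):
    # Stand-in for the full flow-table classification.
    return "output:2" if packet["dst"].startswith("10.") else "drop"

def lookup(packet):
    ekey = (packet["src"], packet["dst"], packet["port"])   # all fields
    akey = packet["dst"][:3]                                # wildcarded key
    if ekey in exact_cache:                  # 1st tier: exact match
        return exact_cache[ekey]
    if akey in aggregate_cache:              # 2nd tier: aggregate cache
        action = aggregate_cache[akey]
    else:                                    # 3rd tier: full classification
        action = classify_slow(packet)
        aggregate_cache[akey] = action       # install aggregate entry
    exact_cache[ekey] = action               # install exact-match entry
    return action

print(lookup({"src": "10.0.0.1", "dst": "10.0.0.2", "port": 80}))
print(len(exact_cache), len(aggregate_cache))   # 1 1: both tiers now seeded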

01-10-2015 publication date

Flow Cache Hierarchy

Number: US20150281098A1
Assignee: Nicira Inc

Some embodiments provide a managed forwarding element (MFE) that includes a set of flow tables including a first set of flow entries for processing packets received by the MFE. The MFE includes an aggregate cache including a second set of flow entries for processing packets received by the MFE. Each of the flow entries of the second set is for processing packets of multiple data flows. At least a subset of packet header fields of the packets of the multiple data flows have a same set of packet header field values, and a same set of operations is applied to said packets. The MFE includes an exact-match cache including a third set of flow entries for processing packets received by the MFE. Each of the flow entries of the third set is for processing packets for a single data flow having a unique set of packet header field values.

01-10-2015 publication date

CACHING OF SERVICE DECISIONS

Number: US20150281125A1
Assignee:

Some embodiments provide a method for processing a packet received by a managed forwarding element. The method performs a series of packet classification operations based on header values of the received packet. The packet classifications operations determine a next destination of the received packet. When the series of packet classification operations specifies to send the packet to a network service that performs payload transformations on the packet, the method (1) assigns a service operation identifier to the packet that identifies the service operations for the network service to perform on the packet, (2) sends the packet to the network service with the service operation identifier, and (3) stores a cache entry for processing subsequent packets without the series of packet classification operations. The cache entry includes the assigned service operation identifier. The network service uses the assigned service operation identifier to process packets without performing its own classification operations. 1. A method for processing a packet received by a managed forwarding element , the method comprising:performing a series of packet classification operations based on header values of the received packet, the packet classifications operations for determining a next destination of the received packet; and assigning a service operation identifier to the packet that identifies the service operations for the network service to perform on the packet;', 'sending the packet to the network service with the service operation identifier; and', 'storing a cache entry for processing subsequent packets without the series of packet classification operations, the cache entry comprising the assigned service operation identifier,, 'when the series of packet classification operations specifies to send the packet to a network service that performs payload transformations on the packetwherein the network service uses the assigned service operation identifier to process packets ...

22-09-2016 publication date

PROTOCOL DATA UNIT INTERFACE

Number: US20160277544A1
Assignee: NETAPP, INC.

An interface can be designed that efficiently constructs descriptors for streams of protocol data units (PDUs) and provides coherent views of the PDUs and the PDU stream for a requesting application regardless of location within a buffer pool for PDUs. The interface creates a descriptor for each PDU written into the buffer pool and links the descriptors in accordance with the appropriate order of the corresponding PDUs. The interface can create PDU descriptors hierarchically. For instance, a PDU descriptor for a PDU of a layer N protocol can refer to one or more PDU descriptors of a layer N−1 protocol. 1. A non-transitory computer readable storage medium storing instructions that , when executed by one or more processors , cause the one or more processors to:create a plurality of descriptors linking a plurality of protocol data units together in sequential order, wherein each of the plurality of descriptors comprises a respective data structure;create one or more higher-level descriptors, wherein each higher level descriptor of the one or more higher-level descriptors comprises a respective data structure, wherein each higher-level descriptor of the one or more higher-level descriptors comprises one or more references to one or more descriptors of the plurality of descriptors; andreceive one or more requests for data, wherein the requests for data specify at least one of data from one or more protocol data units of the plurality of protocol data units or the data representing one or more higher-level protocol data units.2. The non-transitory computer-readable medium of claim 1 , further storing instructions configured to cause the one or more processors to:in response to receiving the one or more requests for data, send at least one of:a descriptor of the plurality of descriptors,a reference to a descriptor of the plurality of descriptors,a message descriptor of the one or more message descriptors,a reference to a higher-level descriptor of the one or more higher- ...
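
A toy rendering of the descriptor hierarchy, with tuples as layer N-1 descriptors and a dict as the higher-level one; the actual descriptor layout is not specified by this abstract:

buffer_pool = [b"HTTP/1.1 200", b" OK\r\n", b"body"]

# Layer N-1 descriptors: (pool index, offset, length), linked in PDU order.
pdu_descrs = [(0, 0, 12), (1, 0, 5), (2, 0, 4)]

# Layer N descriptor: just references to the N-1 descriptors it spans.
msg_descr = {"parts": pdu_descrs}

def read(descr):
    # Materialize the bytes a (possibly hierarchical) descriptor covers,
    # giving the application a coherent view without copying the pool.
    return b"".join(buffer_pool[i][off:off + ln]
                    for i, off, ln in descr["parts"])

print(read(msg_descr))   # -> b'HTTP/1.1 200 OK\r\nbody'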

Publication date: 11-12-2014

APPARATUS AND METHOD FOR USE IN A SPACEWIRE-BASED NETWORK

Number: US20140362861A1
Assignee: ASTRIUM LIMITED

An apparatus for use in a SpaceWire-based network is configured to send and receive data packets, and to process data included in a received data packet. A header of the received data packet is stored in a buffer whilst the data is being processed, a processed data packet including the stored header and the processed data is generated, and the processed data packet is transmitted. The header of the received data packet may be modified, and the modified header attached to the processed data to generate the processed data packet. When the data packet is received via a first port, the processed data packet may be transmitted via the first port, or may be transmitted via a second port.

1. Apparatus for use in a SpaceWire-based network, the apparatus comprising:
   an input-output (IO) module configured to send and receive data packets;
   a processing module configured to process data included in a received data packet; and
   a buffer for storing a header of the received data packet whilst the data is being processed by the processing module,
   wherein the apparatus is configured to generate a processed data packet including the stored header and the processed data, and transmit the processed data packet.

2. The apparatus of claim 1, wherein the apparatus is further configured to modify the header of the received data packet and attach the modified header to the processed data to generate the processed data packet.

3. The apparatus of claim 1, wherein the IO module is configured to receive the received data packet via a first port and transmit the processed data packet via the first port.

4. The apparatus of claim 1, wherein the IO module is configured to receive the received data packet via a first port,
   wherein the apparatus is configured to select a second port from a plurality of available ports based on address information included in the header of the received data packet, and
   wherein the IO module is configured to transmit the processed data packet via the second port.

5. ...
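A minimal sketch of the header-buffering flow, assuming a toy 4-byte header whose first byte acts as a SpaceWire-style path address (all names, sizes, and the payload transformation are hypothetical):

def process_payload(payload: bytes) -> bytes:
    return payload.upper()          # stand-in for the processing module

def select_port(header: bytes, num_ports: int) -> int:
    return header[0] % num_ports    # address information taken from the header

def handle(packet: bytes, header_len: int = 4, num_ports: int = 8) -> tuple[int, bytes]:
    header_buffer = packet[:header_len]          # header parked while data is processed
    processed = process_payload(packet[header_len:])
    out_packet = header_buffer + processed       # stored header reattached
    return select_port(header_buffer, num_ports), out_packet

port, out = handle(b"\x05ABCdata-to-process")
print(port, out)   # port 5, header intact, payload transformed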

Publication date: 04-11-2021

USE OF STASHING BUFFERS TO IMPROVE THE EFFICIENCY OF CROSSBAR SWITCHES

Number: US20210344616A1
Assignee: NVIDIA Corp.

A switch architecture enables ports to stash packets in unused buffers on other ports, exploiting excess internal bandwidth that may exist, for example, in a tiled switch. This architecture leverages unused port buffer memory to improve features such as congestion handling and error recovery.

1.-21. (canceled)

22. A switch comprising:
   a plurality of input ports receiving packets into an input port buffer;
   a plurality of output ports receiving packets from an output port buffer;
   a stash partition comprising memory addresses allocated from both the input port buffer and the output port buffer;
   a switching fabric interposed between the input port buffer and the output port buffer; and
   a packet path from the stash partition through the switching fabric and back to the stash partition, the packet path bypassing the input ports and the output ports.

23. The switch of claim 22, the packet path comprising a plurality of virtual channels.

24. The switch of claim 22, the packet recirculating data path comprising a storage virtual channel coupling the stash partition to inputs of the switching fabric.

25. The switch of claim 24, the packet recirculating data path comprising a retrieval virtual channel coupling outputs of the switching fabric to the stash partition.

26. The switch of claim 22, further comprising logic to route the packets from the input port buffer to the stash partition based on a join-shortest-queue algorithm.

27. The switch of claim 22, wherein the stash partition comprises a plurality of dual-ported memory banks.

28. The switch of claim 22, further comprising logic to manage the stash partition using a heap algorithm.

29. A switch comprising:
   a first switch fabric coupling a plurality of packet sources to a plurality of packet destinations;
   a second switch fabric also coupling the plurality of packet sources to the plurality of packet destinations; and
   a stash partition coupled to the first switch fabric and to the second switch fabric, the stash partition ...
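The join-shortest-queue routing of claim 26 can be sketched in a few lines. The toy Python below (hypothetical structures; a real switch would implement this in hardware) picks, among the ports that still have free stash space, the port whose stash queue is currently shortest.

from collections import deque

NUM_PORTS = 4
STASH_CAPACITY = 8
stash = [deque() for _ in range(NUM_PORTS)]   # per-port stash partitions

def stash_packet(packet: bytes) -> int:
    """Return the port whose stash received the packet, or -1 if all are full."""
    candidates = [p for p in range(NUM_PORTS) if len(stash[p]) < STASH_CAPACITY]
    if not candidates:
        return -1                                      # no free stash space anywhere
    p = min(candidates, key=lambda q: len(stash[q]))   # join the shortest queue
    stash[p].append(packet)
    return p

for i in range(6):
    print(stash_packet(f"pkt{i}".encode()))    # packets spread across port stashes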

Publication date: 13-08-2020

PACKET PROCESSING

Number: US20200259766A1
Author: Tian Hao
Assignee:

A memory of a network device is divided into first blocks, each first block being divided into second blocks, and each second block including a first storage space and a second storage space. When a packet is stored, the second blocks occupied by the packet are determined based on the packet length and the length of the first storage space, and the packet is stored into the first storage space of each of those second blocks. For each of the second blocks, a packet descriptor (PD) corresponding to the second block is generated and stored into the second storage space of the second block. When a packet is read, the second blocks to be read are determined based on a start address of the packet. A packet fragment is read from the first storage space of each second block to be read, and the read packet fragments are composed into the second packet to be sent.

1. A method of processing a packet, wherein a memory is divided into a plurality of first blocks, each of the first blocks being divided into a plurality of second blocks, and each of the second blocks including a first storage space and a second storage space, the method comprising:
   obtaining a first packet to be stored;
   determining one or more second blocks to be occupied by the first packet based on a length of the first packet and a length of the first storage space;
   storing the first packet in a first storage space of each of the determined second blocks;
   for each of the determined second blocks, generating a packet descriptor (PD) corresponding to the second block, and storing the PD in a second storage space of the second block;
   determining one or more second blocks to be read based on a start address of a second packet to be read;
   for each of the determined second blocks to be read, reading a packet fragment from a first storage space of the second block to be read, and reading the PD from a second storage space of the second block to be read;
   obtaining the second packet by composing the read packet fragments based on the read PDs, and sending the second ...
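A toy model of the block layout may help. Assuming a hypothetical 4-byte first storage space per second block (the constant DATA_LEN and all names below are illustrative), the sketch stores a packet across ceil(len / DATA_LEN) second blocks, each with its own PD, then reassembles the packet from its start address:

import math

DATA_LEN = 4            # payload bytes per second block ("first storage space")
memory: dict[int, tuple[bytes, dict]] = {}   # block addr -> (fragment, PD)

def store(addr: int, packet: bytes) -> None:
    n = math.ceil(len(packet) / DATA_LEN)    # second blocks the packet occupies
    for i in range(n):
        frag = packet[i * DATA_LEN:(i + 1) * DATA_LEN]
        pd = {"index": i, "total": n, "frag_len": len(frag)}  # per-block PD
        memory[addr + i] = (frag, pd)        # fragment + PD in one second block

def read(start_addr: int) -> bytes:
    _, first_pd = memory[start_addr]         # PD tells us how many blocks to walk
    n = first_pd["total"]
    return b"".join(memory[start_addr + i][0] for i in range(n))

store(100, b"hello, packet!")
print(read(100))        # b'hello, packet!'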

Publication date: 11-11-2021

UTILIZING COHERENTLY ATTACHED INTERFACES IN A NETWORK STACK FRAMEWORK

Number: US20210352023A1

Embodiments for implementing an enhanced network stack framework in a computing environment. A plurality of network buffers coherently attached between one or more applications and a network interface may be shared, while bypassing one or more drivers and an operating system, using an application buffer, a circular buffer, and a queuing and pooling operation.

1. A method, by a processor, for utilizing an enhanced network stack framework in a computing environment, comprising:
   sharing a plurality of network buffers coherently attached between one or more applications and a network interface while bypassing one or more drivers and an operating system using an application buffer, a circular buffer and a queuing and pooling operation.

2. The method of claim 1, further including controlling the plurality of network buffers by a shared library.

3. The method of claim 1, further including sharing one or more address spaces of the plurality of network buffers between the one or more applications using the network interface, wherein the plurality of network buffers are used for input/output (I/O) control.

4. The method of claim 1, further including exchanging memory pointers with coherently attached devices using the circular buffer.

5. The method of claim 1, further including executing the queuing and pooling operation for the plurality of network buffers for network buffer transmission, reception, and manipulation.

6. The method of claim 1, wherein the queuing and pooling operation further includes moving, assigning, or reassigning one of the plurality of network buffers from one or more queues and one or more pools.

7. The method of claim 1, further including establishing a shared memory region and a private memory region using the plurality of network buffers.

8. A system for utilizing an enhanced network stack framework, comprising:
   share a plurality of network buffers coherently attached between one or more ...
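The circular buffer of claim 4 is essentially a pointer-passing ring. The sketch below (hypothetical names; buffer indices stand in for memory pointers, and a real implementation would need memory barriers or atomics) shows an application posting a shared buffer to the network interface side without copying the payload:

from typing import Optional

RING_SIZE = 8
ring: list[Optional[int]] = [None] * RING_SIZE   # shared slots holding buffer pointers
head = 0                        # producer (application) position
tail = 0                        # consumer (network interface) position

buffer_pool = [bytearray(2048) for _ in range(16)]   # shared network buffers

def post_buffer(buf_index: int) -> bool:
    """Application hands a filled buffer to the NIC side by index."""
    global head
    if (head + 1) % RING_SIZE == tail:
        return False                      # ring full, caller retries later
    ring[head] = buf_index
    head = (head + 1) % RING_SIZE
    return True

def poll_buffer() -> Optional[int]:
    """NIC side takes the next posted buffer index, if any."""
    global tail
    if tail == head:
        return None                       # ring empty
    idx = ring[tail]
    tail = (tail + 1) % RING_SIZE
    return idx

buffer_pool[3][:5] = b"hello"
post_buffer(3)
print(poll_buffer())   # 3 -> consumer reads buffer_pool[3] with no data copy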
