Total found: 8728. Displayed: 200.

22-08-2023 publication date

Accessing an external storage through a NIC

Number: US0011736565B2
Assignee: VMWARE, INC., VMware, Inc.

Some embodiments provide a method of providing distributed storage services to a host computer from a network interface card (NIC) of the host computer. At the NIC, the method accesses a set of one or more external storages operating outside of the host computer through a shared port of the NIC. In some embodiments, the method accesses the external storage set by using a network fabric storage driver that employs a network fabric storage protocol to access the external storage set. The method presents the external storage as a local storage of the host computer to a set of programs executing on the host computer. In some embodiments, the method presents the local storage by using a storage emulation layer on the NIC to create a local storage construct that presents the set of external storages as a local storage of the host computer.

14-12-2023 publication date

Computer and Network Interface Controller Securely Offloading Encryption Keys and RDMA Encryption Processing to the Network Interface Controller

Number: US20230403148A1
Assignee:

Encryption operations are securely offloaded to a network interface controller (NIC). Encryption keys are securely transferred from a virtual machine (VM) to the NIC and data is securely transferred from encrypted VM memory to secure buffers in the NIC. The NIC handles the encryption and decryption operations in hardware, greatly increasing encryption performance while not reducing security. This is especially useful in cloud server environments, so the cloud service provider does not have access to the encryption keys or the unencrypted data. The offloaded operations are performed with numerous different communication protocols, including RDMA, QUIC, IPsec underlay and WireGuard.

01-06-2023 publication date

METHOD FOR ACCESSING SYSTEM MEMORY AND ASSOCIATED PROCESSING CIRCUIT WITHIN A NETWORK CARD

Number: US20230171207A1
Author: Chia-Hung Lin
Assignee: Realtek Semiconductor Corp.

The present invention provides a method for accessing a system memory, wherein the method includes the steps of: reading a descriptor from the system memory, where the descriptor includes a buffer start address field and a buffer size field, wherein the buffer start address field includes a start address of a buffer in the system memory, and the buffer size field indicates a size of the buffer; receiving multiple packets, and writing the multiple packets into the buffer; modifying the descriptor according to the multiple packets stored in the buffer to generate a modified descriptor, wherein the modified descriptor only comprises information of part of the multiple packets or does not comprise information of any one of the multiple packets; and writing the modified descriptor into the system memory.
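A minimal sketch of the read/modify/write descriptor cycle described above, with system memory modeled as a `bytearray`. The two-byte descriptor layout (`[start, size]`) and the rule that the written-back descriptor describes only the remaining free space are assumptions for illustration; the abstract does not fix either.

```python
# System memory modeled as a bytearray; a descriptor records where the
# buffer starts and how large it is. The layout below is assumed.
memory = bytearray(64)

def read_descriptor(mem, offset):
    # assumed descriptor layout: [start, size]
    return {"start": mem[offset], "size": mem[offset + 1]}

def write_descriptor(mem, offset, desc):
    mem[offset] = desc["start"]
    mem[offset + 1] = desc["size"]

write_descriptor(memory, 0, {"start": 8, "size": 32})

def receive_packets(mem, desc_offset, packets):
    """Write packets into the descriptor's buffer, then write back a modified
    descriptor that no longer describes the packets already consumed."""
    desc = read_descriptor(mem, desc_offset)
    pos = desc["start"]
    for pkt in packets:
        mem[pos:pos + len(pkt)] = pkt
        pos += len(pkt)
    used = pos - desc["start"]
    write_descriptor(mem, desc_offset, {"start": pos, "size": desc["size"] - used})

receive_packets(memory, 0, [b"\x01\x02", b"\x03\x04\x05"])
assert read_descriptor(memory, 0) == {"start": 13, "size": 27}
assert memory[8:13] == bytearray(b"\x01\x02\x03\x04\x05")
```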

14-04-2022 publication date

PLATFORM AGNOSTIC ABSTRACTION FOR FORWARDING EQUIVALENCE CLASSES WITH HIERARCHY

Number: US20220116488A1
Assignee:

Methods, systems, and computer-readable mediums for managing forwarding equivalence class (FEC) hierarchies, including obtaining a forwarding equivalence class (FEC) hierarchy; making a first determination that a first hardware component supports a maximum levels of indirection (MLI) quantity; making a second determination that the FEC hierarchy has a hierarchy height; based on the first determination and the second determination, performing a comparison between the MLI quantity and the hierarchy height to obtain a comparison result; and based on the comparison result, performing a FEC hierarchy action set.

14-09-2023 publication date

METHOD AND APPARATUS FOR MANAGING BUFFERING OF DATA PACKET OF NETWORK CARD, TERMINAL AND STORAGE MEDIUM

Number: US20230291696A1
Author: Xu MA
Assignee:

A method and apparatus for managing buffering of data packets of a network card, a terminal and a storage medium are provided. The method includes: setting ring buffer queues, setting a length of each ring buffer queue according to a size of a total buffer space and the number of threads of an upper-layer application, then setting a buffer pool formed by two ring buffer queues, and setting the two ring buffer queues in the buffer pool as a busy queue and an idle queue, respectively; a network card driver receiving data packets from a data link, classifying the data packets, sequentially buffering the classified data packets into the busy queue by using a write pointer of the busy queue, and then sequentially mapping addresses of the buffered data packets in the busy queue into the idle queue; acquiring latest addresses of the buffered data packets in the busy queue by using a read pointer of the idle queue; and the upper-layer application successively acquiring and processing the buffered ...
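The busy/idle ring pair can be sketched as below. The pool API, the derivation of queue length from total space and thread count, and the use of slot indices as the "addresses" mapped into the idle queue are simplifying assumptions, not the patented design:

```python
class RingQueue:
    """Minimal fixed-size ring with separate read/write pointers (illustrative)."""
    def __init__(self, length):
        self.slots = [None] * length
        self.read = 0
        self.write = 0

    def put(self, item):
        slot = self.write % len(self.slots)
        self.slots[slot] = item
        self.write += 1
        return slot                       # slot index stands in for an "address"

    def get(self):
        item = self.slots[self.read % len(self.slots)]
        self.read += 1
        return item

class BufferPool:
    """Two rings per pool: the busy queue buffers classified packets via its
    write pointer; their addresses are mapped into the idle queue, whose read
    pointer the upper-layer application uses to fetch data."""
    def __init__(self, total_space, n_threads):
        length = total_space // n_threads  # length from pool size and thread count
        self.busy = RingQueue(length)
        self.idle = RingQueue(length)

    def driver_receive(self, packet):
        addr = self.busy.put(packet)       # buffer packet into the busy queue
        self.idle.put(addr)                # map its address into the idle queue

    def app_read(self):
        addr = self.idle.get()             # latest buffered-packet address
        return self.busy.slots[addr]

pool = BufferPool(total_space=16, n_threads=2)
pool.driver_receive("pktA")
pool.driver_receive("pktB")
assert pool.app_read() == "pktA"
assert pool.app_read() == "pktB"
```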

28-12-2023 publication date

PACKET PROCESSING WITH REDUCED LATENCY

Number: US20230421512A1
Assignee: Intel Corporation

Generally, this disclosure provides devices, methods, and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation.
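The coalescing rule above (interrupt only on timer expiry AND non-empty queue, with a driver-writable register that postpones the timer) can be sketched as follows. Class and method names are illustrative, and time is abstracted to integer ticks:

```python
from collections import deque

class CoalescingNic:
    """Sketch: an interrupt is generated only when the delay timer has expired
    AND the descriptor queue is non-empty; writing the interrupt delay
    register resets the timer, postponing the interrupt."""

    def __init__(self, delay_ticks):
        self.delay_ticks = delay_ticks
        self.queue = deque()
        self.timer_deadline = None

    def enqueue_descriptor(self, desc, now):
        self.queue.append(desc)
        if self.timer_deadline is None:      # start the delay timer on first data
            self.timer_deadline = now + self.delay_ticks

    def driver_reset_timer(self, now):
        """Models the interrupt delay register write by the driver circuit."""
        self.timer_deadline = now + self.delay_ticks

    def should_interrupt(self, now):
        return (self.timer_deadline is not None
                and now >= self.timer_deadline
                and len(self.queue) > 0)

nic = CoalescingNic(delay_ticks=5)
nic.enqueue_descriptor("pkt0", now=0)
assert not nic.should_interrupt(now=3)   # timer not yet expired
nic.driver_reset_timer(now=3)            # driver postpones the interrupt
assert not nic.should_interrupt(now=6)   # reset moved the deadline to tick 8
assert nic.should_interrupt(now=8)       # expired and queue non-empty
```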

18-04-2023 publication date

Flow processing system and method

Number: CN115987925A
Assignee:

The embodiment of the invention provides a flow processing system and method relating to the field of communication. A receiving module receives data, an on-chip cache module stores the data, a queue number generation module generates a queue number for the data, and the data is sent to the corresponding queue. Based on the determined quantity of descriptor information sent to each queue and a length threshold corresponding to the queue, descriptor information exceeding the length threshold is added to the DMA queue; the queue scheduling module schedules the descriptor information in different queues based on a scheduling rule, and the data output module receives the descriptor information and outputs it to the DMA queue. Corresponding to-be-processed data is acquired from the on-chip cache module based on the descriptor information and output, and the data corresponding to the descriptor information is temporarily stored in the ...

06-06-2023 publication date

Data request servicing using multiple paths of smart network interface cards

Number: US0011671350B1
Assignee: RED HAT, INC.

Data requests can be serviced by multiple paths of smart network interface cards (NICs). For example, a system can receive a request for data at a first path of a smart NIC. The first path can be a hardware-implemented path. The system can send one or more parameters of the request to a second path of the smart NIC. The second path can be a slower path than the first path and configured to execute a routing algorithm for the request. The system can receive routing information for the request from the second path based on the routing algorithm and transmit the request to a storage node based on the routing information.

20-04-2022 publication date

MESSAGE PROCESSING

Number: EP3657744B1
Author: TIAN, Hao
Assignee: New H3C Technologies Co., Ltd.

20-09-2022 publication date

Connection management in a network adapter

Number: US0011451493B2
Assignee: MELLANOX TECHNOLOGIES, LTD.

A network adapter includes a network interface, a host interface and processing circuitry. The network interface connects to a communication network for communicating with remote targets. The host interface connects to a host that accesses a Multi-Channel Send Queue (MCSQ) storing Work Requests (WRs) originating from client processes running on the host. The processing circuitry is configured to retrieve WRs from the MCSQ and distribute the WRs among multiple Send Queues (SQs) accessible by the processing circuitry.

05-07-2022 publication date

Internet small computer interface systems extension for remote direct memory access (RDMA) for distributed hyper-converged storage systems

Number: US0011379405B2
Assignee: VMware, Inc.

Certain embodiments described herein relate to configuring the network-storage stack of two devices (e.g., physical or virtual) communicating together (e.g., an initiator and a target, as defined below) with the Internet Small Computer Systems Interface (iSCSI) extension for remote direct memory access (RDMA), iSER, which is a protocol designed to utilize RDMA to accelerate iSCSI data transfer. The iSER protocol is implemented as an iSER datamover layer that acts as an interface between an iSCSI layer and an RDMA layer of the network-storage stacks of the two devices. Using iSER in conjunction with RDMA allows for bypassing the existing traditional network protocol layers (e.g., TCP/IP protocol layers) of the devices and permits data to be transferred directly, between the two devices, using certain memory buffers, thereby avoiding memory copies taking place when the existing network protocol layers are used.

02-11-2023 publication date

METHOD OF REDUCING LATENCY IN COMMUNICATION OF DATA PACKETS

Number: US20230353510A1
Assignee: CYBERSTORM PTE. LTD.

A method of reducing latency in communication of data packets, comprising the steps of: receiving a data packet from a first device; determining whether adding the data packet would cause the number of data packets stored in all data queues to exceed a global threshold limit; if it would not, then (i) allocating the data packet to a queueing group based on a predetermined factor, and (ii) determining whether adding the data packet would cause the number of data packets stored in the data queues of the allocated queueing group to exceed a local threshold limit. If the local threshold limit would not be exceeded either, the data packet is allocated to a data queue in the allocated queueing group based on the predetermined factor. The data packets stored in each data queue of a queueing group thereafter ...
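The two-level admission check can be sketched as below. Using a hash of a flow key as the "predetermined factor" for group and queue selection is an assumption for illustration:

```python
from collections import deque

class ThresholdQueues:
    """Sketch: a packet is admitted only if it would not push the total count
    past a global limit, nor its queueing group past a local limit."""
    def __init__(self, n_groups, queues_per_group, global_limit, local_limit):
        self.groups = [[deque() for _ in range(queues_per_group)]
                       for _ in range(n_groups)]
        self.global_limit = global_limit
        self.local_limit = local_limit

    def total(self):
        return sum(len(q) for g in self.groups for q in g)

    def group_total(self, g):
        return sum(len(q) for q in self.groups[g])

    def receive(self, packet, flow_key):
        if self.total() + 1 > self.global_limit:
            return False                             # drop: global threshold
        g = hash(flow_key) % len(self.groups)        # allocate to a queueing group
        if self.group_total(g) + 1 > self.local_limit:
            return False                             # drop: local threshold
        q = hash(flow_key) % len(self.groups[g])     # queue within the group
        self.groups[g][q].append(packet)
        return True

tq = ThresholdQueues(n_groups=2, queues_per_group=2, global_limit=3, local_limit=2)
assert tq.receive("p1", flow_key="a")
assert tq.receive("p2", flow_key="a")
assert not tq.receive("p3", flow_key="a")   # local limit for that group reached
```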

02-11-2023 publication date

SYNCHRONOUS COMMUNICATION APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM

Number: US20230354100A1
Author: MOTOHARU SUZUKI
Assignee:

A synchronous communication apparatus includes an output unit configured to output a reception packet and a reception time stamp indicating a time when the reception packet is received, a first storage unit configured to store the reception packet, a second storage unit configured to store the reception time stamp, a determination unit configured to determine propriety of storage of the reception packet in the first storage unit, based on a free capacity of the first storage unit, and a processing unit configured to perform time synchronous processing using the reception packet stored in the first storage unit and the reception time stamp stored in the second storage unit, the processing unit being configured not to use the reception time stamp corresponding to the reception packet in a case where the determination unit determines that the reception packet is not storable.

25-01-2023 publication date

PRIORITY QUEUE SORTING SYSTEM AND METHOD WITH DETERMINISTIC AND BOUNDED LATENCY

Number: EP4123992A1
Assignee:

A priority queue sorting system including a priority queue and a message storage. The priority queue includes multiple priority blocks that are cascaded in order from a lowest priority block to a highest priority block. Each priority block includes a register block storing an address and an identifier, compare circuitry that compares a new identifier with the stored identifier for determining relative priority, and select circuitry that determines whether to keep or shift and replace the stored address and identifier within the priority queue based on the relative priority. The message storage stores message payloads, each pointed to by a corresponding stored address of a corresponding priority block. Each priority block contains its own compare and select circuitry and determines a keep, shift, or store operation. Thus, sorting is independent of the length of the priority queue thereby achieving deterministic sorting latency that is independent of the queue length.
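The keep/shift decision made by each cascaded priority block can be sketched as below. The sketch runs the per-block compares sequentially, whereas the point of the hardware design is that every block decides in the same cycle, which is what makes sorting latency independent of queue length; numerically smaller identifier means higher priority here by assumption:

```python
class PriorityQueueBlocks:
    """Sketch of cascaded priority blocks: each block compares the incoming
    identifier with its stored one and decides keep, or store-and-shift."""
    EMPTY = (float("inf"), None)

    def __init__(self, n_blocks):
        self.blocks = [self.EMPTY] * n_blocks   # index 0 = highest priority

    def insert(self, identifier, address):
        incoming = (identifier, address)
        for i in range(len(self.blocks)):
            stored = self.blocks[i]
            if incoming[0] < stored[0]:          # higher priority than stored entry
                self.blocks[i] = incoming        # store here, shift old entry down
                incoming = stored
        # whatever falls off the last block is dropped (queue full)

    def pop_highest(self):
        top = self.blocks[0]
        self.blocks = self.blocks[1:] + [self.EMPTY]
        return top

pq = PriorityQueueBlocks(n_blocks=4)
pq.insert(30, "msg30")
pq.insert(10, "msg10")
pq.insert(20, "msg20")
assert pq.pop_highest() == (10, "msg10")
assert pq.pop_highest() == (20, "msg20")
```

The stored address would point into the separate message storage holding the payload; only identifiers and addresses move through the sorting structure.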

23-06-2023 publication date

SDN network data forwarding method and device based on DPDK

Number: CN116319628A
Assignee:

The invention provides an SDN network data forwarding method and device based on DPDK, and belongs to the technical field of network communication. The method comprises the steps that a scheduling static graph is created, the scheduling static graph comprises a plurality of nodes with fixed positions, the nodes comprise a source node, a common node and a general node, and the general node is located between the source node and the common node; setting node positions in the scheduling static graph to construct a data scheduling static graph; and scheduling network data through the BitMap, and sending the network data according to the strategy of the data scheduling static graph. According to the method provided by the invention, the data processing and forwarding performance can be improved.

28-04-2023 publication date

Multi-PCIe channel network card and single-network-port network card driving method for sending messages

Number: CN116028426A
Author: ZHU MIN, WANG YUFENG
Assignee:

The invention relates to the technical field of computer network cards, and discloses a multi-PCIe (Peripheral Component Interconnect Express) channel network card and a single-network-port network card driving method for uploading messages. The multi-PCIe channel network card comprises a plurality of PCIe IP cores; one end of each PCIe IP core is electrically connected with a master clock module and a MAC IP core through an AHB bus, the other end is electrically connected with a PHY module through PCIe slots with the same or different PCIe channels, and the PCIe IP cores are used for configuring a plurality of PCIe channels. Related functions of the network card are realized through an FPGA, and the PCIe channels are divided into a master PCIe channel and a plurality of slave PCIe channels; this resolves the physical bandwidth bottleneck of interaction between the uplink path of the network port and the CPU, and the performance of the network port is improved; the method can be suitable ...

22-08-2023 publication date

Storage edge controller with a metadata computational engine

Number: US0011734363B2
Assignee: Marvell Asia Pte, Ltd.

Embodiments described herein provide improved methods and systems for generating metadata for media objects at a computational engine (such as an artificial intelligence engine) within the storage edge controller, and for storing and using such metadata, in data processing systems.

15-08-2023 publication date

Performing computations during idle periods at the storage edge

Number: US0011727064B2
Author: Noam Mizrahi
Assignee: MARVELL ASIA PTE LTD, Marvell Asia Pte Ltd

A controller, for use in a storage device of a data processing system, includes a host interface, a memory interface and one or more processors. The host interface is configured to communicate over a computer network with one or more remote hosts of a data processing system. The memory interface is configured to communicate locally with a non-volatile memory of the storage device. The one or more processors are configured to manage local storage or retrieval of media objects at the non-volatile memory, and to perform additional tasks that are not associated with management of storage or retrieval of the objects.

22-03-2022 publication date

Technologies for managing single-producer and single-consumer rings

Number: US0011283723B2
Assignee: Intel Corporation

Technologies for managing a single-producer and single-consumer ring include a producer of a compute node that is configured to allocate data buffers, produce work, and indicate that work has been produced. The compute node is configured to insert reference information for each of the allocated data buffers into respective elements of the ring and store the produced work into the data buffers. The compute node includes a consumer configured to request the produced work from the ring. The compute node is further configured to dequeue the reference information from each of the elements of the ring that correspond to the portion of data buffers in which the produced work has been stored, and set each of the elements of the ring for which the reference information has been dequeued to an empty (i.e., NULL) value. Other embodiments are described herein.
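The produce/consume protocol described above, including setting a dequeued element back to an empty (NULL) value so the producer can detect a free slot, can be sketched as follows (a simplified model; names and the full/empty checks are assumptions):

```python
class SpscRing:
    """Single-producer/single-consumer ring sketch: elements hold references
    to data buffers; the consumer NULLs an element after dequeuing it, which
    is how the producer knows the slot is free again."""
    def __init__(self, size):
        self.ring = [None] * size
        self.head = 0   # consumer position
        self.tail = 0   # producer position

    def produce(self, buffer_ref):
        slot = self.tail % len(self.ring)
        if self.ring[slot] is not None:
            return False                 # ring full: slot not yet consumed
        self.ring[slot] = buffer_ref     # insert reference info for a data buffer
        self.tail += 1
        return True

    def consume(self):
        slot = self.head % len(self.ring)
        ref = self.ring[slot]
        if ref is None:
            return None                  # ring empty
        self.ring[slot] = None           # set the dequeued element to NULL
        self.head += 1
        return ref

ring = SpscRing(size=2)
assert ring.produce("buf0") and ring.produce("buf1")
assert not ring.produce("buf2")          # full until the consumer frees a slot
assert ring.consume() == "buf0"
assert ring.produce("buf2")              # NULL'd slot can be reused
```

Because each slot is written by exactly one side at a time (producer writes a reference, consumer writes NULL), the real design needs no lock between the single producer and single consumer.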

01-08-2023 publication date

Accessing multiple external storages to present an emulated local storage through a NIC

Number: US0011716383B2
Assignee: VMWARE, INC., VMware, Inc.

Some embodiments provide a method of providing distributed storage services to a host computer from a network interface card (NIC) of the host computer. At the NIC, the method accesses a set of one or more external storages operating outside of the host computer through a shared port of the NIC that is not only used to access the set of external storages but also for forwarding packets not related to an external storage. In some embodiments, the method accesses the external storage set by using a network fabric storage driver that employs a network fabric storage protocol to access the external storage set. The method presents the external storage as a local storage of the host computer to a set of programs executing on the host computer. In some embodiments, the method presents the local storage by using a storage emulation layer on the NIC to create a local storage construct that presents the set of external storages as a local storage of the host computer.

28-11-2023 publication date

Distributed link descriptor memory

Number: US0011831567B1
Assignee: Marvell Asia Pte, Ltd.

Link data is stored in a distributed link descriptor memory (“DLDM”) including memory instances storing protocol data unit (“PDU”) link descriptors (“PLDs”) or cell link descriptors (“CLDs”). Responsive to receiving a request for buffering a current transfer data unit (“TDU”) in a current PDU, a current PLD is accessed in a first memory instance in the DLDM. It is determined whether any data field designated to store address information in connection with a TDU is currently unoccupied within the current PLD. If no data field designated to store address information in connection with a TDU is currently unoccupied within the current PLD, a current CLD is accessed in a second memory instance in the plurality of memory instances of the same DLDM. Current address information in connection with the current TDU is stored in an address data field within the current CLD.

15-06-2023 publication date

STREAMING PLATFORM READER

Number: US20230188486A1
Assignee: Chicago Mercantile Exchange Inc.

A streaming platform reader includes: a plurality of reader threads configured to retrieve messages from a plurality of partitions of a streaming platform, wherein each message in the plurality of partitions is associated with a unique identifier; a plurality of queues coupled to the plurality of reader threads configured to store messages or an end of partition signal from the reader threads, wherein each queue includes a first position that stores the earliest message stored by a queue; a writer thread controlled by gate control logic that: compares the identifiers of all of the messages in the first positions of the queues of the plurality of queues, and forwards, to a memory, the message associated with the earliest identifier; and wherein the gate control logic blocks the writer thread unless each of the queues contains a message or an end of partition signal.
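The gate control logic can be sketched as a single writer-thread step: the writer stays blocked unless every queue holds a message or an end-of-partition signal, and otherwise forwards the first-position message with the earliest identifier. The sentinel object and the use of messages as their own identifiers are simplifications:

```python
from collections import deque

END_OF_PARTITION = object()   # sentinel standing in for the end-of-partition signal

def gate_step(queues, output):
    """One writer-thread step under the gate control logic.
    Returns True if a message was forwarded."""
    if any(len(q) == 0 for q in queues):
        return False                           # gate blocks the writer thread
    live = [q for q in queues if q[0] is not END_OF_PARTITION]
    if not live:
        return False                           # every partition is exhausted
    earliest = min(live, key=lambda q: q[0])   # compare first-position identifiers
    output.append(earliest.popleft())
    return True

q1 = deque([1, 4])        # messages are their own (unique) identifiers here
q2 = deque([2])
out = []
assert gate_step([q1, q2], out) and out == [1]
assert gate_step([q1, q2], out) and out == [1, 2]
assert not gate_step([q1, q2], out)        # q2 empty: writer is blocked
q2.append(END_OF_PARTITION)
assert gate_step([q1, q2], out) and out == [1, 2, 4]
```

Blocking until every queue has something is what guarantees the forwarded message really is globally earliest: a still-empty queue might later yield a smaller identifier.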

29-08-2023 publication date

Method and system for handling of data packet/frames using an adapted bloom filter

Number: US0011743186B2
Assignee: OVH

A method and system are disclosed for handling a received content word in a system comprising a memory of memory words, wherein: each memory word comprises Bloom Filter structures. The method comprises hashing the content word into a fixed-size word, pointing to the memory word corresponding to an address of the fixed-size word, pointing to, and reading, the Bloom Filter structure in the pointed memory word corresponding to an address in the fixed-size word, and reading and writing the content of the Bloom Filter structures so as to keep track of a number of occurrences of the received content word over a sliding window of time.

12-01-2023 publication date

Data Sequence Amendment Method, Packet Monitoring Apparatus, Data Sequence Amendment Device, and Data Sequence Amendment Program

Number: US20230009530A1
Assignee:

An embodiment is a data sequence correction method. The data sequence correction method includes temporarily saving data with sequence information imparted thereto in a ring buffer, the ring buffer having a predetermined number of storage regions corresponding to the sequence information, and being provided with a monitoring section made up of one, or two or more consecutive sequence numbers, and an acceptance section in which a start or a second sequence number of the monitoring section is a start sequence number, and the sequence number ahead by a count of storage regions of the ring buffer including the start of the monitoring section is an end sequence number.

22-02-2022 publication date

Systems for building data structures with highly scalable algorithms for a distributed LPM implementation

Number: US0011258707B1
Assignee: PENSANDO SYSTEMS INC.

Described are programmable IO devices configured to perform operations. These operations comprise: determining a set of range-based elements for a network; sorting the set of range-based elements according to a global order among the range-based elements; generating an interval table from the sorted range-based elements; generating an interval binary search tree from the interval table; propagating data stored in subtrees of interior stages of the interval binary search tree to subtrees of a last stage of the interval binary search tree such that the interior stages do not comprise data; converting the interval binary search tree to a Pensando Tree; compressing multiple levels of the Pensando Tree into cache-lines; and assembling the cache-lines in the memory unit such that each stage can compute an address of a next-cache line to be fetched by a next stage.

19-09-2023 publication date

Socket replication between nodes of a network device without operating system kernel modification

Number: US0011765257B1
Assignee: Juniper Networks, Inc.

An example network device includes a primary node and a standby node. The primary node includes one or more processors implemented in circuitry and configured to execute an operating system providing an application space and a kernel space, execute a replication application in the application space to receive a write function call including data to be written to a socket of the operating system and to send a representation of the data to a replication driver executed in the kernel space, execute the replication driver to send the representation of the data to a replication module executed in the kernel space, and execute the replication module to send the representation of the data to the standby node and, after receiving an acknowledgement from the standby node, to send the data to the socket.

15-11-2023 publication date

METHOD FOR PROCESSING TCP MESSAGE, TOE ASSEMBLY, AND NETWORK DEVICE

Number: EP3846405B1
Assignee: HUAWEI TECHNOLOGIES CO., LTD.

08-03-2022 publication date

Link aggregation group failover for multicast

Number: US0011271869B1
Assignee: Barefoot Networks, Inc.

A method of multicasting packets by a forwarding element that includes several packet replicators and several egress pipelines. Each packet replicator receives a data structure associated with a multicast packet that identifies a multicast group. Each packet replicator identifies a first physical egress port of a first egress pipeline for sending the multicast packet to a member of the multicast group. The first physical egress port is a member of a link aggregation group (LAG). Each packet replicator determines that the first physical egress port is not operational and identifies a second physical port in the LAG for sending the multicast packet to the member of the multicast group. When a packet replicator is connected to the same egress pipeline as the second physical egress port, the packet replicator provides the identification of the second physical egress port to the egress pipeline to send the packet to the multicast member. Otherwise the packet replicator drops the packet.
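The core failover decision, picking another operational LAG member when the hash-selected port is down, can be sketched as below. The hash-then-probe selection policy is an assumption; the abstract only says a second member port is identified:

```python
def resolve_egress_port(lag_members, port_status, flow_hash):
    """LAG failover sketch: pick a member port by hash; if it is not
    operational, fall back to the next live member, or None if all are down."""
    n = len(lag_members)
    start = flow_hash % n
    for offset in range(n):
        port = lag_members[(start + offset) % n]
        if port_status.get(port, False):   # True = operational
            return port
    return None                            # no live member: drop the packet

members = ["eth1", "eth2", "eth3"]
status = {"eth1": True, "eth2": False, "eth3": True}
assert resolve_egress_port(members, status, flow_hash=0) == "eth1"
assert resolve_egress_port(members, status, flow_hash=1) == "eth3"   # eth2 down
assert resolve_egress_port(members, {m: False for m in members}, 0) is None
```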

30-06-2022 publication date

Redundant Media Packet Streams

Number: US20220210210A1
Assignee:

This invention concerns the transmitting and receiving of digital media packets, such as audio and video channels and lighting instructions. In particular, the invention concerns the transmitting and receiving of redundant media packet streams. Samples are extracted from a first and a second media packet stream. The extracted samples are written to a buffer based on the output time of each sample; extracted samples having the same output time are written to the same location in the buffer, where a location is the storage space allocated to store one sample. Both media packet streams are simply processed all the way to the buffer without any particular knowledge that one of the packet streams is actually redundant. This simplifies the management of the redundant packet streams, for example by eliminating the need for a "fail-over" switch and the concept of an "active stream". The extracted sample written to a location may be written over another extracted sample from a different packet stream previously ...

17-10-2023 publication date

Systems and methods for enhancing or affecting neural stimulation efficiency and/or efficacy

Number: US0011786729B2
Assignee: Advanced Neuromodulation Systems, Inc.

Systems and methods for enhancing or affecting neural stimulation efficiency and/or efficacy are disclosed. In one embodiment, a system and/or method may apply electromagnetic stimulation to a patient's nervous system over a first time domain according to a first set of stimulation parameters, and over a second time domain according to a second set of stimulation parameters. The first and second time domains may be sequential, simultaneous, or nested. Stimulation parameters may vary in accordance with one or more types of duty cycle, amplitude, pulse repetition frequency, pulse width, spatiotemporal, and/or polarity variations. Stimulation may be applied at subthreshold, threshold, and/or suprathreshold levels in one or more periodic, aperiodic (e.g., chaotic), and/or pseudo-random manners. In some embodiments stimulation may comprise a burst pattern having an interburst frequency corresponding to an intrinsic brainwave frequency, and regular and/or varying intraburst stimulation parameters ...

06-12-2022 publication date

Multi-stride packet payload mapping for robust transmission of data

Number: US0011522816B2
Assignee: MIXHalo Corp., Mixhalo Corp.

Systems and methods for packet payload mapping for robust transmission of data are described. For example, methods may include receiving, using a network interface, packets that each respectively include a primary frame and one or more preceding frames from the sequence of frames of data that are separated from the primary frame in the sequence of frames by a respective multiple of a stride parameter; storing the frames of the packets in a buffer with entries that each hold the primary frame and the one or more preceding frames of a packet; reading a first frame from the buffer as the primary frame from one of the entries; determining that a packet with a primary frame that is a next frame in the sequence has been lost; and, responsive to the determination, reading the next frame from the buffer as a preceding frame from one of the entries.
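The multi-stride mapping and the loss-recovery read described above can be sketched as follows. The packet/dictionary layout and a redundancy depth of one preceding frame per packet are assumptions for illustration:

```python
def build_packet(frames, i, stride, depth):
    """Packet for primary frame i also carries frames i-stride, i-2*stride, ...
    (multi-stride payload mapping; indices below 0 are skipped)."""
    preceding = [frames[i - m * stride]
                 for m in range(1, depth + 1) if i - m * stride >= 0]
    return {"seq": i, "primary": frames[i], "preceding": preceding}

def receive(packets, n_frames, stride):
    """Buffer arriving packets by sequence number, then read each frame as a
    primary if its packet arrived, else recover it as a preceding frame from
    the packet 'stride' positions later in the sequence."""
    buffer = {p["seq"]: p for p in packets}
    out = []
    for i in range(n_frames):
        if i in buffer:
            out.append(buffer[i]["primary"])
        elif i + stride in buffer:                 # primary lost: use redundancy
            out.append(buffer[i + stride]["preceding"][0])
        else:
            out.append(None)                       # unrecoverable
    return out

frames = ["f0", "f1", "f2", "f3", "f4"]
packets = [build_packet(frames, i, stride=2, depth=1) for i in range(5)]
survivors = [p for p in packets if p["seq"] != 1]  # packet for frame 1 is lost
assert receive(survivors, 5, stride=2) == ["f0", "f1", "f2", "f3", "f4"]
```

Separating the redundant copies by a stride means a burst of consecutive packet losses shorter than the stride can still be fully recovered.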

27-10-2022 publication date

GENERATION OF DESCRIPTIVE DATA FOR PACKET FIELDS

Number: US20220345423A1
Assignee: Barefoot Networks, Inc.

Some embodiments provide a method for a parser of a processing pipeline. The method receives a packet for processing by a set of match-action stages of the processing pipeline. The method stores packet header field (PHF) values from a first set of PHFs of the packet in a set of data containers. The first set of PHFs are for use by the match-action stages. For a second set of PHFs not used by the match-action stages, the method generates descriptive data that identifies locations of the PHFs of the second set within the packet. The method sends (i) the set of data containers to the match-action stages and (ii) the packet data and the generated descriptive data outside of the match-action stages to a deparser that uses the packet data, generated descriptive data, and the set of data containers as modified by the match-action stages to reconstruct a modified packet.

04-05-2023 publication date

MULTI-STRIDE PACKET PAYLOAD MAPPING FOR ROBUST TRANSMISSION OF DATA

Number: US20230138713A1
Assignee:

Systems and methods for packet payload mapping for robust transmission of data are described. For example, methods may include receiving, using a network interface, packets that each respectively include a primary frame and one or more preceding frames from the sequence of frames of data that are separated from the primary frame in the sequence of frames by a respective multiple of a stride parameter; storing the frames of the packets in a buffer with entries that each hold the primary frame and the one or more preceding frames of a packet; reading a first frame from the buffer as the primary frame from one of the entries; determining that a packet with a primary frame that is a next frame in the sequence has been lost; and, responsive to the determination, reading the next frame from the buffer as a preceding frame from one of the entries.

25-10-2022 publication date

Non-disruptive implementation of policy configuration changes

Number: US0011483206B2
Assignee: Cisco Technology, Inc.

Techniques for non-disruptive configuration changes are provided. A packet is received at a network device, and the packet is buffered in a common pool shared by a first processing pipeline and a second processing pipeline, where the first processing pipeline corresponds to a first policy and the second processing pipeline corresponds to a second policy. A first copy of a packet descriptor for the packet is queued in a first scheduler based on processing the first copy of the packet descriptor with the first processing pipeline. A second copy of the packet descriptor is queued in a second scheduler based on processing the second copy of the packet descriptor with the second processing pipeline. Upon determining that the first policy is currently active on the network device, the first copy of the packet descriptor is dequeued from the first scheduler.

Подробнее
05-04-2022 дата публикации

Metadata generation for multiple object types

Номер: US0011294965B2
Автор: Noam Mizrahi

Metadata computation apparatus includes a host interface, a storage interface and one or more processors. The host interface is configured to communicate over a computer network with one or more remote hosts. The storage interface is configured to communicate with one or more non-volatile memories of one or more storage devices. The processors are configured to manage local storage or retrieval of media objects in the non-volatile memories, to compute metadata for a plurality of media objects that are stored, or are en-route for storage, on the storage devices, wherein the media objects are of multiple media types, wherein the computed metadata tags a target feature in the media objects of at least two different media types among the multiple media types, and to store, in the non-volatile memories, the metadata tagging the target feature found in the at least two different media types, for use by the hosts.

Подробнее
05-09-2023 дата публикации

Storage aggregator controller with metadata computation control

Номер: US0011748418B2

This disclosure describes a storage aggregator controller with metadata computation control. The storage aggregator controller communicates, via a host interface, over a computer network with one or more remote hosts, and also communicates, via a storage device interface, with a plurality of local storage devices, which are separate from the remote host(s) and which have respective non-volatile memories. The storage aggregator controller manages the local storage devices for storage or retrieval of media objects. The storage aggregator controller also governs a selective computation, at aggregator control circuitry or at a storage device controller of one or more of the storage devices, of metadata that defines content characteristics of the media objects that are retrieved from the plurality of storage devices or that are received from the one or more hosts over the computer network for storage in the plurality of storage devices.

Подробнее
24-05-2022 дата публикации

Multi-connectivity communication

Номер: US0011343241B2

A method for multi-connectivity communication in an application layer of a communication network. The method includes generating a plurality of decision elements and repeating a first iterative process. An i-th iteration of the first iterative process includes generating an i-th transmit message set by executing an application, transmitting the i-th transmit message set from a transmitter to a receiver, receiving an i-th receive message set of a plurality of receive message sets, and updating the plurality of decision elements. The i-th transmit message set is transmitted over a plurality of networks. The i-th transmit message set is transmitted based on the plurality of decision elements. Each transmit message in the i-th transmit message set is associated with at least one respective network of the plurality of networks. The plurality of decision elements are updated based on a j-th receive message set in the plurality of receive message sets.

Подробнее
12-10-2023 дата публикации

OPPORTUNISTIC CONTENT DELIVERY USING DELTA CODING

Номер: US20230328131A1
Автор: David LERNER
Принадлежит:

Systems and methods are described for avoiding redundant data transfers using delta coding techniques when reliably and opportunistically communicating data to multiple user systems. According to embodiments, user systems track received block sequences for locally stored content blocks. An intermediate server intercepts content requests between user systems and target hosts, and deterministically chunks and fingerprints content data received in response to those requests. A fingerprint of a received content block is communicated to the requesting user system, and the user system determines based on the fingerprint whether the corresponding content block matches a content block that is already locally stored. If so, the user system returns a set of fingerprints representing a sequence of next content blocks that were previously stored after the matching content block. The intermediate server can then send only those content data blocks that are not already locally stored at the user system ...

Подробнее
01-11-2023 дата публикации

FINE GRAIN TRAFFIC SHAPING OFFLOAD FOR A NETWORK INTERFACE CARD

Номер: EP4006735B1
Принадлежит: Google LLC

Подробнее
01-11-2023 дата публикации

COMBINED INPUT AND OUTPUT QUEUE FOR PACKET FORWARDING IN NETWORK DEVICES

Номер: EP4239976A3
Принадлежит:

An apparatus for switching network traffic includes an ingress packet forwarding engine and an egress packet forwarding engine. The ingress packet forwarding engine is configured to determine, in response to receiving a network packet, an egress packet forwarding engine for outputting the network packet and enqueue the network packet in a virtual output queue. The egress packet forwarding engine is configured to output, in response to a first scheduling event and to the ingress packet forwarding engine, information indicating the network packet in the virtual output queue and that the network packet is to be enqueued at an output queue for an output port of the egress packet forwarding engine. The ingress packet forwarding engine is further configured to dequeue, in response to receiving the information, the network packet from the virtual output queue and enqueue the network packet to the output queue.
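The VOQ handoff described above can be modeled with a short sketch. The class and method names are invented for illustration; the patent abstract only fixes the behavior: ingress enqueues per destination, and on a scheduling event the egress engine tells ingress to move the packet from the virtual output queue to its output-port queue.

```python
from collections import deque

class IngressEngine:
    def __init__(self, num_egress):
        # One virtual output queue (VOQ) per destination egress engine.
        self.voqs = [deque() for _ in range(num_egress)]

    def enqueue(self, packet, egress_id):
        self.voqs[egress_id].append(packet)

    def dequeue_to(self, egress_id, output_queue):
        # Invoked when the egress engine signals the packet should move.
        output_queue.append(self.voqs[egress_id].popleft())

class EgressEngine:
    def __init__(self, egress_id):
        self.egress_id = egress_id
        self.output_queue = deque()

    def schedule(self, ingress):
        # Scheduling event: pull one packet from the matching VOQ.
        ingress.dequeue_to(self.egress_id, self.output_queue)

ingress = IngressEngine(num_egress=2)
egress1 = EgressEngine(egress_id=1)
ingress.enqueue("pkt-A", egress_id=1)
egress1.schedule(ingress)
assert list(egress1.output_queue) == ["pkt-A"]
assert not ingress.voqs[1]   # the packet has left the virtual output queue
```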

Подробнее
18-07-2023 дата публикации

Concurrent communication method and device for many-core processor, equipment and medium

Номер: CN116455849A
Принадлежит:

The invention relates to a concurrent communication method and device for a many-core processor, as well as equipment and a medium. The method comprises the following steps: setting a message sending buffer area and a message receiving buffer area in main memory for each virtual port when sending and receiving message data, accessed through a lock mechanism; and setting a counter and a read pointer or a write pointer for the buffer areas in the network card chip, so that sender-side or receiver-side software and the network card hardware cooperatively form a virtual port queue state management mechanism. With this method, each process can be given a programming view in which it exclusively occupies the hardware, and multiple processes are allowed to concurrently access communication hardware resources from user space in a protected manner, ensuring the atomicity of communication operation request processing during concurrent communication. In addition, based on a software ...

Подробнее
24-03-2022 дата публикации

METHOD AND SYSTEM FOR CENTRAL PROCESSING UNIT EFFICIENT STORING OF DATA IN A DATA CENTER

Номер: US20220094646A1
Принадлежит:

A method and network interface card (NIC) provide central-processing-unit-efficient storing of data. The NIC receives a request for registering a memory address range in the NIC, the request comprising a rewrite protection granularity for the memory address range. When receiving data from a client process, subsequent to registering said memory address range, said data having an address within the memory address range, the NIC determines whether the rewrite protection granularity of the NIC is reached. In the event that the rewrite protection granularity is reached, the NIC inactivates the memory address range according to said reached rewrite protection granularity. The auto-inactivated memory address range also provides rewrite protection of data when storing data. Remote logging or monitoring of data is also enabled, wherein the logging or monitoring may be regarded as becoming server-less.

Подробнее
11-07-2023 дата публикации

Multi-path packet descriptor delivery scheme

Номер: US0011700209B2
Принадлежит: Intel Corporation

Examples describe use of multiple meta-data delivery schemes to provide tags that describe packets to an egress port group. A tag that is smaller than a packet can be associated with a packet. The tag can be stored in a memory, as a group with other tags, and the tag can be delivered to a queue associated with an egress port. Packets received at an ingress port can be stored as non-interleaved to reduce underrun and to provide cut-through to an egress port. A shared memory can be allocated to store packets received at a single ingress port or shared to store packets from multiple ingress ports.

Подробнее
01-06-2022 дата публикации

FINE GRAIN TRAFFIC SHAPING OFFLOAD FOR A NETWORK INTERFACE CARD

Номер: EP4006735A1
Принадлежит:

A network interface card (140) with traffic shaping capabilities and methods of network traffic shaping with a network interface card (140) are provided. The network interface card (140) and method can shape traffic originating from one or more applications (150a-c) executing on a host network device. The applications (150a-c) can execute in a virtual machine or containerized computing environment. The network interface card (140) and method can perform or include several traffic shaping mechanisms including, for example and without limitation, a delayed completion mechanism, a time-indexed data structure (130), a packet builder (147), and a memory manager (147).

Подробнее
03-01-2023 дата публикации

Communication input-output device

Номер: US0011546276B2

In a recording device, a data memory including a DRAM having a write pointer for each of its banks, and a queue control memory that stores an active flag, are provided. When frame data is written into a write-target queue, a bank for which an active flag indicates an activated state is selected as a write-target bank among the banks to write the frame data, and if there is no bank for which an active flag indicates an activated state, a bank for which an active flag indicates a deactivated state is selected as a write-target bank, a row address of a write pointer of the bank is activated, and thereafter the frame data is written.

Подробнее
30-11-2023 дата публикации

PACKET FORWARDING SYSTEM AND ASSOCIATED PACKET FORWARDING METHOD

Номер: US20230388253A1
Принадлежит: Realtek Semiconductor Corp.

The present invention provides a packet forwarding system including a packet buffer, a packet analyzer and a DMA module. The packet buffer is configured to receive a packet and store the packet. The packet analyzer is configured to read the packet from the packet buffer, and analyze the packet to extract part of content of the packet to generate specific data. The DMA module is configured to write the specific data into a first buffer of a storage device, and write the packet into a second buffer of the storage device.

Подробнее
07-06-2022 дата публикации

Channelized rate adaptation

Номер: US0011356379B1
Автор: Junjie Yan
Принадлежит: XILINX, INC., Xilinx, Inc.

Apparatus and method relating generally to a channelized communication system is disclosed. In such a method, a read signal and a switch control signal are generated by a controller. Received by channelized buffers are data words from multiple channels associated with groups of information and the read signal. The data words are read out from the channelized buffers responsive to the read signal. A switch receives the data words from the channelized buffers responsive to the read signal. A gap is inserted between the groups of information by the switch. One or more control words are selectively inserted in the gap by the switch responsive to the switch control signal. The switch control signal has indexes for selection of the data words and the control words.

Подробнее
16-01-2024 дата публикации

System and method to perform lossless data packet transmissions

Номер: US0011876735B1
Автор: Harish Ramachandra

A system may include a primary memory, a secondary memory, and a processor that may be communicatively coupled to one another. The processor may be configured to control data packet transmissions received via an input to the primary memory and the secondary memory. Further, the processor may be configured to monitor a current buffering level of the primary memory and compare the current buffering level to a first buffering threshold. The first buffering threshold may be indicative of a buffering capacity difference between a first buffering capacity of the primary memory and a second buffering capacity of the secondary memory. In response to determining that the current buffering level is equal to or greater than the first buffering threshold, the processor may pause the data packet transmissions via the input to the primary memory and the secondary memory.
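The pause rule in the abstract above reduces to a simple comparison. The sketch below is a toy model with assumed capacity values; the names are illustrative, not from the patent.

```python
# Pause decision for lossless transmission: input is paused once the primary
# memory's fill level reaches a threshold equal to the capacity difference
# between the primary and secondary memories.

PRIMARY_CAP = 100     # assumed first buffering capacity
SECONDARY_CAP = 60    # assumed second buffering capacity
THRESHOLD = PRIMARY_CAP - SECONDARY_CAP   # buffering-capacity difference

def should_pause(primary_level: int) -> bool:
    """True when the current level is equal to or greater than the threshold."""
    return primary_level >= THRESHOLD

assert not should_pause(30)   # still below threshold, keep receiving
assert should_pause(40)       # level equals threshold -> pause the input
assert should_pause(75)
```

Pausing at the capacity difference leaves the secondary memory enough headroom to absorb everything still in flight, which is what makes the scheme lossless.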

Подробнее
03-01-2023 дата публикации

Resource sharing in a telecommunications environment

Номер: US0011543979B2
Принадлежит: TQ DELTA, LLC

A transceiver is designed to share memory and processing power amongst a plurality of transmitter and/or receiver latency paths, in a communications transceiver that carries or supports multiple applications. For example, the transmitter and/or receiver latency paths of the transceiver can share an interleaver/deinterleaver memory. This allocation can be done based on the data rate, latency, BER, impulse noise protection requirements of the application, data or information being transported over each latency path, or in general any parameter associated with the communications system.

Подробнее
02-05-2023 дата публикации

Programmable congestion control engine

Номер: US0011641323B1
Принадлежит: XILINX, INC.

Examples herein describe an acceleration framework that includes a hybrid congestion control (CC) engine where some components are implemented in software (e.g., a CC algorithm) while other components are implemented in hardware (e.g., measurement and enforcement modules and a flexible processing unit). The hardware components can be designed to provide measurements that can be used by multiple different types of CC algorithms. Depending on which CC algorithms are currently enabled, the hardware components can be programmed to perform measurement, processing, and enforcement tasks, thereby freeing the CPUs in the host to perform other tasks. In this manner, the hybrid CC engine can have the flexibility of a pure software CC algorithm with the advantage of performing many of the operations associated with the CC algorithm in hardware.

Подробнее
06-12-2023 дата публикации

COMMUNICATION APPARATUS, COMMUNICATION METHOD AND COMPUTER-READABLE MEDIUM

Номер: EP3461086B1
Принадлежит: KABUSHIKI KAISHA TOSHIBA

Подробнее
30-05-2023 дата публикации

System and method for facilitating dynamic triggered operation management in a network interface controller (NIC)

Номер: US0011665113B2

A system for facilitating efficient command management in a network interface controller (NIC) is provided. During operation, the system can determine, at the NIC, a trigger condition and a location in a command queue for a set of commands corresponding to the trigger condition. The command queue can be external to the NIC. The location can correspond to an end of the set of commands in the command queue. The system can then determine, at the NIC, whether the trigger condition has been satisfied. If the trigger condition is satisfied, the system can fetch a respective command of the set of commands from the command queue and issue the command from the NIC until the location is reached, thereby bypassing locally storing the set of commands prior to the trigger condition being satisfied.

Подробнее
12-12-2023 дата публикации

Packet processing with reduced latency

Номер: US0011843550B2
Принадлежит: Intel Corporation

Generally, this disclosure provides devices, methods, and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation.
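The interrupt condition described above, an interrupt fires only when the delay timer has expired and the descriptor queue is non-empty, and the driver may reset the timer to postpone it, can be sketched as a small state machine. Class and method names are invented for illustration.

```python
class InterruptModerator:
    """Toy model of delay-timer-plus-non-empty-queue interrupt generation."""

    def __init__(self, delay):
        self.delay = delay
        self.deadline = delay     # time at which the delay timer expires
        self.queue = []           # data descriptors awaiting processing

    def tick(self, now):
        """Return True if an interrupt should be raised at time `now`."""
        return now >= self.deadline and bool(self.queue)

    def reset_timer(self, now):
        # The driver writes the interrupt delay register to postpone.
        self.deadline = now + self.delay

mod = InterruptModerator(delay=10)
mod.queue.append("desc")
assert not mod.tick(now=5)     # queue non-empty but timer not expired
assert mod.tick(now=10)        # expired + non-empty queue -> interrupt
mod.reset_timer(now=10)
assert not mod.tick(now=15)    # driver reset postponed the interrupt
```

Requiring both conditions avoids spurious interrupts on an empty queue while the driver's reset capability lets it batch more work per interrupt.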

Подробнее
07-11-2023 дата публикации

Redundant media packet streams

Номер: US0011811837B2
Принадлежит: Audinate Holdings Pty Limited

This invention concerns the transmitting and receiving of digital media packets, such as audio and video channels and lighting instructions. In particular, the invention concerns the transmitting and receiving of redundant media packet streams. Samples are extracted from a first and second media packet stream. The extracted samples are written to a buffer based on the output time of each sample. Extracted samples having the same output time are written to the same location in the buffer, where a location is the storage space allocated to store one sample. Both media packet streams are simply processed all the way to the buffer without any particular knowledge that one of the packet streams is actually redundant. This simplifies the management of the redundant packet streams, for example by eliminating the need for a "fail-over" switch and the concept of an "active stream". The extracted sample written to the location may be written over another extracted sample from a different packet stream previously ...

Подробнее
21-12-2023 дата публикации

MULTI-STRIDE PACKET PAYLOAD MAPPING FOR ROBUST TRANSMISSION OF DATA

Номер: US20230412528A1
Принадлежит:

Systems and methods for packet payload mapping for robust transmission of data are described. For example, methods may include receiving, using a network interface, packets that each respectively include a primary frame and one or more preceding frames from the sequence of frames of data that are separated from the primary frame in the sequence of frames by a respective multiple of a stride parameter; storing the frames of the packets in a buffer with entries that each hold the primary frame and the one or more preceding frames of a packet; reading a first frame from the buffer as the primary frame from one of the entries; determining that a packet with a primary frame that is a next frame in the sequence has been lost; and, responsive to the determination, reading the next frame from the buffer as a preceding frame from one of the entries.

Подробнее
06-06-2023 дата публикации

Routing table item transmission method and device and storage medium

Номер: CN116232993A
Автор: ZHANG YI, SUN XIANGDONG
Принадлежит:

The embodiment of the invention provides a routing table item transmission method and device and a storage medium, and relates to the field of data processing. A mapping relation based on a preset memory mapping protocol is established between a preset memory space accessible by a main control CPU and a window in a memory block of an NP (network processor) chip, where the window is a continuous memory space of the NP chip. The mapping relation is a one-to-one mapping relation between the address of the preset memory space and the address of the window, and the method comprises the following steps: determining a first target address field of the NP chip to which the routing table item is to be written; if the first target address field is located in the window, determining a second target address field in the preset memory space according to the mapping relation; writing the routing table item into the second target address field; and writing the routing table item into the first ...

Подробнее
22-08-2023 дата публикации

Network packet receiving timing method based on clock counting and system timestamp mapping relation

Номер: CN116633828A
Автор: HAN SHAOHUA, CHEN ZHIMING
Принадлежит:

The invention provides a network packet receiving timing method based on a clock counting and system timestamp mapping relation. The method comprises the following steps: firstly, setting a CPU (Central Processing Unit) to run at a fixed frequency, and regularly maintaining a TSC reference count value corresponding to the whole second of a system timestamp within a range of several seconds before and after the current time; and when the timestamp corresponding to the TSC count value is queried, the numerical values are sequentially compared with the maintained TSC reference count value to obtain the reference count value which is not greater than and closest to the query count value, and then the second number of the timestamp corresponding to the reference count value is read to obtain the second number of the timestamp corresponding to the TSC count value. The calculation method provided by the invention can be applied to bypass or serial deployment network flow analysis equipment, can ...

Подробнее
02-08-2022 дата публикации

Transport protocol and interface for efficient data transfer over RDMA fabric

Номер: US0011403253B2
Принадлежит: Microsoft Technology Licensing, LLC

Described herein is a system and method for utilizing a protocol over RDMA network fabric between a first computing node and a second computing node. The protocol identifies a first threshold and a second threshold. A transfer request is received, and a data size associated with the transfer request is determined. Based upon the data size associated with the transfer request, one of at least three transfer modes is selected to perform the transfer request in accordance with the first threshold and the second threshold. Each transfer mode utilizes flow control and at least one RDMA operation. The selected transfer mode is utilized to perform the transfer request.
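The two-threshold selection above partitions transfers into three modes. The sketch below assumes threshold values and mode names for illustration; the patent does not name the modes, so the labels here are stand-ins for whatever small-, medium-, and large-transfer paths an implementation uses.

```python
# Threshold-based transfer-mode selection (all constants and names assumed).

FIRST_THRESHOLD = 4 * 1024        # small/medium boundary, assumed
SECOND_THRESHOLD = 64 * 1024      # medium/large boundary, assumed

def select_mode(size: int) -> str:
    if size <= FIRST_THRESHOLD:
        return "inline-send"      # small payload rides with the message
    if size <= SECOND_THRESHOLD:
        return "rdma-write"       # medium payload pushed by the sender
    return "rdma-read"            # large payload pulled by the receiver

assert select_mode(512) == "inline-send"
assert select_mode(16 * 1024) == "rdma-write"
assert select_mode(1 << 20) == "rdma-read"
```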

Подробнее
21-09-2022 дата публикации

DATA TRANSMISSION

Номер: EP3675439B1
Автор: GUO, Daorong
Принадлежит: New H3C Technologies Co., Ltd.

Подробнее
02-05-2023 дата публикации

Low latency small message communication channel using remote write

Номер: US0011641288B2
Принадлежит: Telefonaktiebolaget LM Ericsson (publ)

A method implemented by an electronic device for sending data on low latency communication includes remote writing a sequence number of a message to be sent next to a receiver, determining whether there is an open position in a receive buffer of the receiver using a local tracking mechanism, writing data of the message to an area of an address space of a sending application that is mapped onto the receive buffer as a result of determining there is the open position in the receive buffer, incrementing a local sequence counter, and updating position information in the local tracking mechanism.
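The sender-side steps in the abstract above, remote-write the next sequence number, check locally for a free receive-buffer slot, write the message into the mapped region, then advance the local counter, can be sketched with a ring buffer standing in for the remotely mapped memory. The structure and names are assumptions for illustration.

```python
class Sender:
    """Toy model of the remote-write send path with local slot tracking."""

    def __init__(self, buffer_size):
        self.seq = 0                    # local sequence counter
        self.buffer_size = buffer_size
        self.acked = 0                  # local tracking of freed positions
        # Stands in for the receiver's buffer mapped into our address space.
        self.remote_buffer = [None] * buffer_size

    def send(self, data):
        if self.seq - self.acked >= self.buffer_size:
            return False                # no open position in receive buffer
        # Writing here models a remote write into the mapped receive buffer.
        self.remote_buffer[self.seq % self.buffer_size] = (self.seq, data)
        self.seq += 1                   # increment the local sequence counter
        return True

s = Sender(buffer_size=2)
assert s.send("a") and s.send("b")
assert not s.send("c")      # buffer full until the receiver frees a slot
s.acked += 1                # local tracking updated: one position opened
assert s.send("c")
```

Because the open-position check uses only local state, the sender never has to round-trip to the receiver before writing, which is where the latency saving comes from.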

Подробнее
01-08-2023 дата публикации

Multi-stride packet payload mapping for robust transmission of data

Номер: US0011716294B2
Принадлежит: Mixhalo Corp.

Systems and methods for packet payload mapping for robust transmission of data are described. For example, methods may include receiving, using a network interface, packets that each respectively include a primary frame and one or more preceding frames from the sequence of frames of data that are separated from the primary frame in the sequence of frames by a respective multiple of a stride parameter; storing the frames of the packets in a buffer with entries that each hold the primary frame and the one or more preceding frames of a packet; reading a first frame from the buffer as the primary frame from one of the entries; determining that a packet with a primary frame that is a next frame in the sequence has been lost; and, responsive to the determination, reading the next frame from the buffer as a preceding frame from one of the entries.

Подробнее
13-12-2022 дата публикации

System and method for providing a dynamic cloud with subnet administration (SA) query caching

Номер: US0011528238B2
Принадлежит: ORACLE INTERNATIONAL CORPORATION

A system and method can support subnet management in a cloud environment. During a virtual machine migration in a cloud environment, a subnet manager can become a bottleneck point that delays efficient service. A system and method can alleviate this bottleneck point by ensuring a virtual machine retains a plurality of addresses after migration. The system and method can further allow for each host node within the cloud environment to be associated with a local cache that virtual machines can utilize when re-establishing communication with a migrated virtual machine.

Подробнее
27-09-2022 дата публикации

Method for transferring transmission data from a transmitter to a receiver for processing the transmission data and means for carrying out the method

Номер: US0011456974B2
Автор: Wolfgang Röhrl

A method involves transferring a transmittal data block from a transmitting device via an Ethernet connection to a receiving device which has a storage for storing a transferred transmittal data block, and a processor for at least partially processing the transferred transmittal data block stored in the storage. The transmitting device forms from the data of the transmittal data block a sequence of Ethernet packets, comprising respectively management data and a transmittal data sub-block. The receiving device receives the Ethernet packets of the respective sequence and, while employing at least a part of the management data, writes the transmittal data sub-blocks of the received Ethernet packets of the sequence of Ethernet packets for the transmittal data block to the storage, wherein an interrupt is not sent to the processor upon or after the writing of each of the transmittal data sub-blocks.

Подробнее
24-01-2023 дата публикации

Low latency queuing system

Номер: US0011563690B2
Автор: Alexander Gallego
Принадлежит: Redpanda Data, Inc., VECTORIZED, INC.

Disclosed herein are methods and apparatuses for processing network traffic by a queuing system which may include: receiving pointers to chunks of memory allocated responsive to receipt of network traffic, the chunks of memory each including a portion of a queue batch, wherein the queue batch includes a plurality of queue requests; generating a data structure including the pointers and a reference count; assigning a first queue request of the plurality of queue requests to a second core; generating a first structured message for the first queue request; and storing the first structured message in a structured message passing queue associated with the second core, wherein a second processing thread associated with the second core, responsive to receiving the structured message, processes the first queue request by retrieving the first queue request from at least one of the chunks of memory.

Подробнее
25-05-2023 дата публикации

Low Latency Queuing System

Номер: US20230164088A1
Автор: Alexander Gallego
Принадлежит:

Disclosed herein are methods and apparatuses for processing network traffic by a queuing system which may include: receiving pointers to chunks of memory allocated responsive to receipt of network traffic, the chunks of memory each including a portion of a queue batch, wherein the queue batch includes a plurality of queue requests; generating a data structure including the pointers and a reference count; assigning a first queue request of the plurality of queue requests to a second core; generating a first structured message for the first queue request; and storing the first structured message in a structured message passing queue associated with the second core, wherein a second processing thread associated with the second core, responsive to receiving the structured message, processes the first queue request by retrieving the first queue request from at least one of the chunks of memory.

Подробнее
22-08-2023 дата публикации

Data transmission and network interface controller

Номер: US0011736567B2
Принадлежит: Advanced New Technologies Co., Ltd.

Implementations of this disclosure provide data transmission operations and network interface controllers. An example method performed by a first RDMA network interface controller includes obtaining m data packets from a host memory of a first host; sending the m data packets to a second RDMA network interface controller of a second host; backing up the m data packets to a network interface controller memory integrated into the first RDMA network interface controller; determining that the second RDMA network interface controller does not receive n data packets of the m data packets; and in response, obtaining the n data packets from the m data packets that have been backed up to the network interface controller memory integrated into the first RDMA network interface controller, and retransmitting the n data packets to the second RDMA network interface controller.

Подробнее
21-11-2023 дата публикации

Using physical and virtual functions associated with a NIC to access an external storage through network fabric driver

Номер: US0011824931B2
Принадлежит: VMWARE, INC., VMware, Inc.

Some embodiments provide a method of providing distributed storage services to a host computer from a network interface card (NIC) of the host computer. At the NIC, the method accesses a set of one or more external storages operating outside of the host computer through a shared port of the NIC that is not only used to access the set of external storages but also for forwarding packets not related to an external storage. In some embodiments, the method accesses the external storage set by using a network fabric storage driver that employs a network fabric storage protocol to access the external storage set. The method presents the external storage as a local storage of the host computer to a set of programs executing on the host computer. In some embodiments, the method presents the local storage by using a storage emulation layer on the NIC to create a local storage construct that presents the set of external storages as a local storage of the host computer.

Подробнее
17-05-2022 дата публикации

Data link layer device and packet encapsulation method thereof

Номер: US0011336593B2
Автор: Ranyue Li, Jie Jin, Junping Li

A data link layer device and a packet encapsulation method are provided. The data link layer device includes a first and a second first-in-first-out (FIFO) module. The first FIFO module receives and stores multiple first data from an upper-layer module, and removes data gaps from the first data to store the first data in a continuous form. When the first FIFO module is not empty, the first FIFO module generates data of different lengths based on the current amount of data stored temporarily in the first FIFO module and a preset data length. When the data queue of the second FIFO module has enough space to receive the first data, the first FIFO module transfers the first data to the second FIFO module, and the first FIFO module transfers a header including the data length to a header queue of the second FIFO module.

Подробнее
23-05-2023 дата публикации

Multicast packet replication method

Номер: US0011658837B2
Принадлежит: REALTEK SEMICONDUCTOR CORP.

A replication list table structure for multicast packet replication is provided. The replication list table structure includes a plurality of entries. Each one of the plurality of entries includes a first field, a second field, a third field and a fourth field. For each one of the plurality of entries, the first field is used to declare whether the entry is an end of a program execution, the second field is used to declare the fourth field as a first type field for indicating a switch how to modify a header of a multicast packet, or as a second type field for indicating the switch, while reading the list, to jump to another one of the plurality of entries, and the third field is preset to the first type field for indicating the switch how to modify the header of the multicast packet.
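The table walk described above can be illustrated with a toy interpreter: each entry either supplies a header modification or jumps to another entry, and an end flag stops the program. The field names and table contents below are invented; the patent encodes these as numbered fields within each entry.

```python
def replicate(table, start=0):
    """Walk a replication list, collecting one header modification per replica."""
    idx = start
    mods = []
    while True:
        entry = table[idx]
        if entry.get("jump") is not None:     # second-type field: jump target
            idx = entry["jump"]
            continue
        mods.append(entry["modify"])          # first-type field: header rewrite
        if entry["end"]:                      # end of program execution
            return mods
        idx += 1

table = [
    {"end": False, "jump": None, "modify": "vlan=10"},
    {"end": False, "jump": 3,    "modify": None},
    {"end": False, "jump": None, "modify": "unused"},  # skipped by the jump
    {"end": True,  "jump": None, "modify": "vlan=20"},
]
assert replicate(table) == ["vlan=10", "vlan=20"]
```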

07-02-2023 publication date

Opportunistic content delivery using delta coding

Number: US0011575738B2
Author: David Lerner
Assignee: VIASAT, INC.

Systems and methods are described for avoiding redundant data transfers using delta coding techniques when reliably and opportunistically communicating data to multiple user systems. According to embodiments, user systems track received block sequences for locally stored content blocks. An intermediate server intercepts content requests between user systems and target hosts, and deterministically chunks and fingerprints content data received in response to those requests. A fingerprint of a received content block is communicated to the requesting user system, and the user system determines based on the fingerprint whether the corresponding content block matches a content block that is already locally stored. If so, the user system returns a set of fingerprints representing a sequence of next content blocks that were previously stored after the matching content block. The intermediate server can then send only those content data blocks that are not already locally stored at the user system ...
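The server-side decision this abstract describes, sending only those blocks whose fingerprints the client did not report as locally stored, reduces to a set test. A simplified sketch, with a truncated SHA-256 standing in for whatever fingerprint function the product actually uses:

```python
import hashlib

def fingerprint(block):
    """Deterministic short fingerprint of a content block (stand-in)."""
    return hashlib.sha256(block).hexdigest()[:16]

def blocks_to_send(response_blocks, client_fingerprints):
    """Server side of the exchange: transmit only the blocks whose
    fingerprints the user system did not report as locally stored."""
    return [b for b in response_blocks
            if fingerprint(b) not in client_fingerprints]
```

The sequence-of-next-blocks optimization in the abstract simply pre-populates `client_fingerprints` with the fingerprints the client returned for blocks it already holds.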

05-04-2022 publication date

Data transmission method and communications device

Number: US0011297011B2
Author: Hua Wei, Qin Zheng, Wenhua Du

A data transmission method includes obtaining dequeue information that indicates a queue which requests to output data in a communications device and a target data volume that is output from each queue at a time, and the communications device manages the target data volume based on a burst value, reading, based on the queue, a sub-packet descriptor (PD) that is obtained by segmenting the first PD, the sub-PD includes target description information indicating a target data packet, the first PD includes first description information indicating a first data packet set including the target data packet, the first data packet set and the sub-PD are stored in a packet cache including a dynamic random access memory (DRAM), the first PD is stored in a control cache including a static random access memory (SRAM), and determining, the target data packet based on the sub-PD, and sending the target data packet.

22-08-2023 publication date

Using a NIC as a network accelerator to allow VM access to an external storage via a PF module, bus, and VF module

Number: US0011736566B2
Assignee: VMware, Inc.

Some embodiments provide a method of providing distributed storage services to a host computer from a network interface card (NIC) of the host computer. At the NIC, the method accesses a set of one or more external storages operating outside of the host computer through a shared port of the NIC that is not only used to access the set of external storages but also for forwarding packets not related to an external storage. In some embodiments, the method accesses the external storage set by using a network fabric storage driver that employs a network fabric storage protocol to access the external storage set. The method presents the external storage as a local storage of the host computer to a set of programs executing on the host computer. In some embodiments, the method presents the local storage by using a storage emulation layer on the NIC to create a local storage construct that presents the set of external storages as a local storage of the host computer.

12-09-2023 publication date

Line monitor device and network switch

Number: US0011757799B2
Assignee: DENSO CORPORATION

A network switch includes a plurality of ports each connected to a network or a terminal. The network switch performs routing between the plurality of ports. A control device is apart from the network switch. The control device controls the network switch. The network switch includes a command storage unit. The command storage unit stores a plurality of commands acquired from the control device for physical devices.

08-08-2023 publication date

Message processing method, gateway equipment and storage system

Number: CN116566933A

The invention provides a message processing method, a gateway device and a storage system, belonging to the technical field of storage. In the method, an access request addressed to an NVMe node is converted into an access request addressed to an RDMA node. The storage medium of the NVMe node is a hard disk, while the storage medium of the RDMA node is memory, which offers a higher read-write speed than a hard disk, so storage performance is improved. At the same time, this lets a traditional NOF storage system be extended with an RDMA memory pool, improving storage efficiency as well as the networking and capacity-expansion flexibility of the storage system.

11-01-2024 publication date

METADATA BASED EFFICIENT PACKET PROCESSING

Number: US20240015109A1
Author: Oren Markovitz

A method and device are presented for decreasing processing cycles spent forwarding packets of a communication from receive queues to at least one transmit queue of a network interface controller. When received, packets are placed into a receive queue based on property(ies) of a leading packet. Buffer metadata including transmit information is associated with each communication. Processor circuitry transfers the packets from each of the receive queues to a transmit queue and the buffer metadata is used to determine how to transmit the packet and how to process the packet before transmission.

07-11-2023 publication date

Platform agnostic abstraction for forwarding equivalence classes with hierarchy

Number: US0011811901B2
Assignee: Arista Networks, Inc.

Methods, systems, and computer-readable mediums for managing forwarding equivalence class (FEC) hierarchies, including obtaining a forwarding equivalence class (FEC) hierarchy; making a first determination that a first hardware component supports a maximum levels of indirection (MLI) quantity; making a second determination that the FEC hierarchy has a hierarchy height; based on the first determination and the second determination, performing a comparison between the MLI quantity and the hierarchy height to obtain a comparison result; and based on the comparison result, performing a FEC hierarchy action set.

03-10-2023 publication date

Method and apparatus for managing buffering of data packet of network card, terminal and storage medium

Number: US0011777873B1
Author: Xu Ma

A method and apparatus for managing buffering of data packets of a network card, a terminal and a storage medium are provided. The method includes: setting ring buffer queues, setting a length of each ring buffer queue, then setting a buffer pool formed by two ring buffer queues, and setting the two ring buffer queues in the buffer pool as a busy queue and an idle queue, respectively; a network card driver receiving data packets from a data link, classifying the data packets, sequentially buffering the classified data packets into the busy queue, and then sequentially mapping addresses of the buffered data packets in the busy queue into the idle queue; acquiring latest addresses of the buffered data packets in the busy queue; and the upper-layer application successively acquiring and processing the buffered data packets, and successively releasing the addresses of the processed buffered data packets in the busy queue.
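The busy/idle queue pairing can be modeled roughly as follows. This is a simplification of the driver's real DMA buffers; the class name and the address/packet representation are assumptions for illustration:

```python
from collections import deque

class BufferPool:
    """Two ring queues: 'busy' holds buffered packets, 'idle' holds the
    addresses mapped from the busy queue for the upper layer to consume."""

    def __init__(self, length=8):
        self.length = length
        self.busy = deque()
        self.idle = deque()

    def enqueue(self, addr, packet):
        # Driver side: buffer a classified packet into the busy queue and
        # map its address into the idle queue for the application.
        if len(self.busy) == self.length:
            return False                 # ring full: drop or back-pressure
        self.busy.append((addr, packet))
        self.idle.append(addr)
        return True

    def consume(self):
        # Application side: take the next mapped address, process the
        # packet, and release its slot in the busy queue.
        addr = self.idle.popleft()
        buffered_addr, packet = self.busy.popleft()
        assert buffered_addr == addr     # queues advance in lockstep
        return packet
```

The fixed `length` mirrors the configured ring-buffer size; a full busy queue signals the driver to back-pressure rather than overwrite unprocessed packets.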

20-06-2023 publication date

Communication device including plurality of clients

Number: US0011683270B2
Author: Ho Lim, Yong Kim
Assignee: Samsung Electronics Co., Ltd.

A communication device includes a first client group in a first region; a second client group in a second region different from the first region; a first data hub configured to generate first burst data and a first control packet based on first client data received from the first client group; a second data hub configured to generate second burst data and a second control packet based on second client data received from the second client group; and a data transfer unit connected to the first data hub and the second data hub via a control protocol, the data transfer unit configured to, store the first burst data in a target memory based on the first control packet, and store the second burst data in the target memory based on the second control packet.

13-06-2023 publication date

Shared traffic manager

Number: US0011677676B1
Assignee: Innovium, Inc.

A traffic manager is shared amongst two or more egress blocks of a network device, thereby allowing traffic management resources to be shared between the egress blocks. Schedulers within a traffic manager may generate and queue read instructions for reading buffered portions of data units that are ready to be sent to the egress blocks. The traffic manager may be configured to select a read instruction for a given buffer bank from the read instruction queues based on a scoring mechanism or other selection logic. To avoid sending too much data to an egress block during a given time slot, once a data unit portion has been read from the buffer, it may be temporarily stored in a shallow read data cache. Alternatively, a single, non-bank specific controller may determine all of the read instructions and write operations that should be executed in a given time slot.

09-05-2023 publication date

Packet payload mapping for robust transmission of data

Number: US0011646979B2
Assignee: MIXHalo Corp.

Systems and methods for packet payload mapping for robust transmission of data are described. For example, methods may include receiving, using a network interface, packets that each respectively include a primary frame and one or more preceding frames from the sequence of frames of data that are separated from the primary frame in the sequence of frames by a respective multiple of a stride parameter; storing the frames of the packets in a buffer with entries that each hold the primary frame and the one or more preceding frames of a packet; reading a first frame from the buffer as the primary frame from one of the entries; determining that a packet with a primary frame that is a next frame in the sequence has been lost; and, responsive to the determination, reading the next frame from the buffer as a preceding frame from one of the entries.
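The stride-based payload mapping and the loss-recovery step can be illustrated with a small sketch. Function names and the dictionary packet format are assumptions for illustration, not the patented wire format:

```python
def build_packets(frames, stride=2, depth=2):
    """Map each packet payload to its primary frame plus up to `depth`
    preceding frames spaced `stride` apart in the sequence."""
    pkts = []
    for i, frame in enumerate(frames):
        preceding = [frames[i - k * stride]
                     for k in range(1, depth + 1) if i - k * stride >= 0]
        pkts.append({"seq": i, "primary": frame, "preceding": preceding})
    return pkts

def receive(pkts, lost, stride=2):
    """Rebuild the frame sequence; a lost primary frame is recovered from
    a later packet that carries it as a preceding frame."""
    buf = {}
    for p in pkts:
        if p["seq"] in lost:
            continue                       # this packet never arrived
        buf[p["seq"]] = p["primary"]
        for k, frame in enumerate(p["preceding"], start=1):
            buf.setdefault(p["seq"] - k * stride, frame)
    return buf
```

With stride 2 and depth 2, losing any single packet is recoverable as long as a packet two or four positions later arrives.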

04-07-2023 publication date

Multifunctional data acquisition and conversion device

Number: CN116389608A

The invention relates to the technical field of the industrial Internet of Things, and in particular to a multifunctional data acquisition and conversion device. The device receives an instruction sent by a master-end device, judges whether data needs to be returned, and converts the instruction according to the protocol-conversion format. The device sends the converted data frame to the device to be acquired, which returns data or executes an action according to its communication protocol; the acquisition device then converts the data returned by the acquired device according to the protocol-conversion format and replies to the acquisition master end. The multifunctional data acquisition and conversion device provided by the invention has the beneficial effect of enabling rapid access of data ...

15-02-2022 publication date

Redundant media packet streams

Number: US0011252212B2
Assignee: AUDINATE HOLDINGS PTY LIMITED

This invention concerns the transmitting and receiving of digital media packets, such as audio and video channels and lighting instructions. In particular, the invention concerns the transmitting and receiving of redundant media packet streams. Samples are extracted from a first and second media packet stream. The extracted samples are written to a buffer based on the output time of each sample. Extracted samples having the same output time are written to the same location in the buffer. Both media packet streams are simply processed all the way to the buffer without any particular knowledge that one of the packet streams is actually redundant. This simplifies the management of the redundant packet streams, such as eliminating the need for a “fail-over” switch and the concept of an “active stream”. The location is the storage space allocated to store one sample. The extracted sample written to the location may be written over another extracted sample from a different packet stream previously ...
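The same-output-time, same-slot rule is the core of the scheme: a redundant copy simply overwrites an identical sample, so no fail-over logic is needed. A toy buffer makes it concrete (class name and modular slot indexing are illustrative assumptions):

```python
class RedundantRxBuffer:
    """Output-time-indexed sample buffer: samples from the primary and
    the redundant stream that share an output time land in the same
    slot, so a duplicate write harmlessly overwrites an identical
    sample."""

    def __init__(self, size=16):
        self.size = size
        self.slots = [None] * size

    def write(self, output_time, sample):
        # Both streams call this; neither is marked "active".
        self.slots[output_time % self.size] = sample

    def read(self, output_time):
        return self.slots[output_time % self.size]
```

Either stream alone is sufficient to fill every slot, which is why losing one entire stream goes unnoticed at the output.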

22-12-2022 publication date

Connection management in a network adapter

Number: US20220407824A1

A network adapter includes a network interface, a host interface and processing circuitry. The network interface connects to a communication network for communicating with remote targets. The host interface connects to a host that accesses a Multi-Channel Send Queue (MCSQ) storing Work Requests (WRs) originating from client processes running on the host. The processing circuitry is configured to retrieve WRs from the MCSQ and distribute the WRs among multiple Send Queues (SQs) accessible by the processing circuitry, and retrieve WRs from the multiple SQs and execute data transmission operations specified in the WRs retrieved from the multiple SQs.

14-04-2022 publication date

MULTI-ACCESS MANAGEMENT SERVICE QUEUEING AND REORDERING TECHNIQUES

Number: US20220116334A1
Author: Jing Zhu, Menglei Zhang

The present disclosure is related to multi-queue management techniques and packet reordering techniques for inter-radio access technology (RAT) and intra-RAT traffic steering. The multi-queue management and packet reordering techniques may be used in Multi-Access Management Services (MAMS) framework, which is a programmable framework that provides mechanisms for the flexible selection of network paths in a multi-access (MX) communication environment, based on an application's needs. Other embodiments may be described and/or claimed.

23-06-2023 publication date

Satellite-borne multi-channel multi-rate high-speed data interface system based on request link

Number: CN116318340A

The invention relates to a request-link-based satellite-borne multi-channel multi-rate high-speed data interface system, which comprises a first-stage FIFO (first-in, first-out) cache and a plurality of second-stage FIFO caches. The first-stage FIFO cache transmits data to the second-stage FIFO caches in a time-division manner, with each time slot corresponding to one second-stage FIFO cache. In each time slot, whether data is written to the corresponding second-stage FIFO cache is determined according to its almost-full state. According to the invention, under the request link, the two-stage FIFO caches are used for data buffering, and a state detection module checks the almost-full flag of each second-stage FIFO cache in a time-shared manner, so that functions such as channel forwarding, baseband processing and digital-to-analog conversion on single-channel input data are realized ...

25-04-2023 publication date

Multi-port access processing method and device, electronic equipment and storage medium

Number: CN116016398A

The embodiment of the invention provides a multi-port access processing method and device, an electronic device and a storage medium. Schedulers are configured according to the number N of ports, and the storage is divided into X storage areas. Each scheduler receives the N access requests sent by all ports and matches all the access requests against a target access request to obtain N matching results. When at least two matching results are the same, the two identical matching results are arbitrated: the at least two corresponding schedulers authorize the same port, which accesses the corresponding storage area; schedulers with different matching results authorize their corresponding ports, which access the corresponding storage areas. Therefore, when a plurality of ports access the same storage area, the scheduler grants the right to access that storage area to a single port by arbitration, so that the number of schedulers is reduced and the physical ...

22-02-2022 publication date

Payload cache

Number: US0011258887B2
Assignee: MELLANOX TECHNOLOGIES, LTD.

In one embodiment, a computer system includes a payload sub-system including interfaces to connect with respective devices, transfer data with the respective devices, and receive write transactions from the respective devices, a classifier to classify the received write transactions into payload data and control data, and a payload cache to store the classified payload data, and a processing unit (PU) sub-system including a local PU cache to store the classified control data, wherein the payload cache and the local PU cache are different physical caches in respective different physical locations in the computer system, and processing core circuitry configured to execute software program instructions to perform control and packet processing responsively to the control data stored in the local PU cache.

23-05-2023 publication date

Table item operation information management method and device of forwarding chip, equipment and medium

Number: CN116155706A

The embodiment of the invention provides a method, apparatus, device and medium for managing table-entry operation information of a forwarding chip. The method comprises: receiving table-entry operation information issued by a network operating system, and recording it, via a table-entry issuing thread, into the currently active operation-information container, wherein the table-entry operation information is stored hierarchically and comprises the operation type, table-entry number, forwarding chip ID, table-entry ID, operation result, the content and result of each table entry, operation start time, operation end time, operation efficiency and the like; after the currently active operation-information container is full, switching the active operation-information container, and having the consumer output the content of the full container in an agreed format; and clearing the ...

01-08-2023 publication date

Link aggregation group failover for multicast

Number: US0011716291B1
Assignee: Barefoot Networks, Inc.

A method of multicasting packets by a forwarding element that includes several packet replicators and several egress pipelines. Each packet replicator receives a data structure associated with a multicast packet that identifies a multicast group. Each packet replicator identifies a first physical egress port of a first egress pipeline for sending the multicast packet to a member of the multicast group. The first physical egress port is a member of a link aggregation group (LAG). Each packet replicator determines that the first physical egress port is not operational and identifies a second physical port in the LAG for sending the multicast packet to the member of the multicast group. When a packet replicator is connected to the same egress pipeline as the second physical egress port, the packet replicator provides the identification of the second physical egress port to the egress pipeline to send the packet to the multicast member. Otherwise the packet replicator drops the packet.

07-06-2022 publication date

Delayed processing for electronic data messages in a distributed computer system

Number: US0011354178B2
Assignee: Nasdaq, Inc.

A distributed computer system is provided. The distributed computer system includes at least one sequencer computing node and at least one matcher computing node. Electronic data messages are sequenced by the sequencer and sent to at least one matcher computing node. The matcher computing node receives the electronic data messages and a reference value from an external computing source. New electronic data messages are put into a pending list before they can be acted upon by the matcher. A timer is started based on a comparison of the reference value (or a calculation based thereon) to at least one attribute or value of a new electronic data message. When the timer expires, the electronic data message is moved from the pending list to another list, where it is eligible to be matched against other, contra-side electronic data messages.

25-10-2022 publication date

System and method to control latency of serially-replicated multi-destination flows

Number: US0011483171B2
Assignee: Cisco Technology, Inc.

Exemplified systems and methods facilitate multicasting latency optimization operations for router, switches, and other network devices, for routed Layer-3 multicast packets to provide even distribution latency and/or selective prioritized distribution of latency among multicast destinations. A list of network destinations for serially-replicated packets is traversed in different sequences from one packet to the next, to provide delay fairness among the listed destinations. The list of network destinations are mapped to physical network ports, virtual ports, or logical ports of the router, switches, or other network devices and, thus, the different sequences are also traversed from these physical network ports, virtual ports, or logical ports. The exemplified systems and methods facilitates the management of traffic that is particularly beneficial in in a data center.
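Traversing the destination list in a different sequence from one packet to the next can be as simple as a rotation; this sketch shows the fairness idea, not the patented selection logic:

```python
def replication_orders(destinations, num_packets):
    """Rotate the serial replication sequence packet by packet so each
    destination takes the high-latency tail position equally often."""
    n = len(destinations)
    return [destinations[i % n:] + destinations[:i % n]
            for i in range(num_packets)]
```

Over a full cycle of `len(destinations)` packets, every destination is replicated first (lowest latency) exactly once and last (highest latency) exactly once, evening out the average serial-replication delay.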

08-03-2022 publication date

Leaderless, parallel, and topology-aware protocol for achieving consensus with recovery from failure of all nodes in a group

Number: US0011271800B1

Methods are provided for achieving consensus among an order in which write requests are received by various ones of a plurality of nodes in a distributed system using a shared data structure. The plurality of nodes are organized into groups of nodes and successively larger groupings of groups, based on physical proximity. A consensus protocol is used to achieve consensus among groups of nodes, and then among the groupings of groups of nodes in a logical tree structure up to a root level virtual node. Recovery from failure of all nodes in a group is supported.

27-09-2022 publication date

Methods and arrangements to accelerate array searches

Number: US0011456972B2
Assignee: INTEL CORPORATION

Logic may store at least a portion of an incoming packet at a memory location in a host device in response to a communication from the host device. Logic may compare the incoming packet to a digest in an entry of a primary array. When the incoming packet matches the digest, logic may retrieve a full entry from the secondary array and compare the full entry with the first incoming packet. When the full entry matches the first incoming packet, logic may store at least a portion of the first incoming packet at the memory location. And, in the absence of a match between the first incoming packet and the digest or full entry, logic may compare the first incoming packet to subsequent entries in the primary array to identify a full entry in the secondary array that matches the first incoming packet.
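The two-level search, a compact digest filter in the primary array confirmed by a full comparison against the secondary array, can be sketched as follows. The 2-byte BLAKE2 digest is an arbitrary stand-in for the hardware's digest function:

```python
import hashlib

def digest(key):
    """Short digest stored in the primary array (stand-in function)."""
    return hashlib.blake2b(key, digest_size=2).digest()

def lookup(primary, secondary, key):
    """Compare the incoming key against digests in the primary array; on
    a digest hit, confirm with the full entry from the secondary array,
    continuing the scan past any false-positive digest matches."""
    want = digest(key)
    for i, entry_digest in enumerate(primary):
        if entry_digest == want and secondary[i] == key:
            return i
    return None
```

The small digests keep the hot search loop in a compact array; the larger secondary array is touched only on digest hits.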

13-09-2022 publication date

Method and device for improving bandwidth utilization in a communication network

Number: US0011444890B2
Assignee: DRIVENETS LTD.

A communication system comprising at least one smart network interface card (“NIC”) provided with a logic/programmable processor and a local memory, and a computing element, wherein a communication bus is used to connect said smart NIC and said computing element to enable forwarding data there-between, wherein the system is characterized in that said smart NIC is configured to receive data packets, to extract data therefrom and to forward less than all data comprised in the received data packets, to said computing element along said communication bus, and wherein the forwarded data comprises data which is preferably required for making networking decisions that relate to that respective data packet.

31-10-2023 publication date

Apparatus and method for buffer management for receive segment coalescing

Number: US0011805081B2
Assignee: Intel Corporation

Packets received non-contiguously from a network are processed by a network interface controller by coalescing received packet payload into receive buffers on a receive buffer queue and writing descriptors associated with the receive buffers for a same flow consecutively in a receive completion queue. System performance is optimized by reusing a small working set of provisioned receive buffers to minimize the memory footprint of memory allocated to store packet data. The remainder of the provisioned buffers are in an overflow queue and can be assigned to the network interface controller if the small working set of receive buffers is not sufficient to keep up with the received packet rate. The receive buffer queue can be refilled based on either timers or when the number of buffers in the receive buffer queue is below a configurable low watermark.
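The low-watermark refill of the receive buffer queue from the overflow queue might look like this sketch; queue types, the `target` level and threshold names are assumptions:

```python
from collections import deque

def refill_rx_queue(rx_queue, overflow_queue, low_watermark, target):
    """Move provisioned buffers from the overflow queue back to the
    receive buffer queue once occupancy falls below the low watermark
    (invoked from a timer or on buffer consumption)."""
    if len(rx_queue) >= low_watermark:
        return 0                       # working set still sufficient
    moved = 0
    while len(rx_queue) < target and overflow_queue:
        rx_queue.append(overflow_queue.popleft())
        moved += 1
    return moved
```

Keeping the working set small minimizes the memory footprint; the overflow queue only contributes buffers when the NIC cannot keep up at the current level.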

27-09-2022 publication date

Circuit for a buffered transmission of data

Number: US0011456973B2
Assignee: WAGO Verwaltungsgesellschaft mbH

A circuit with a first buffer, a second buffer, a third buffer, a fourth buffer, a first data input for first data, a second data input for second data, a data output, and control logic is disclosed. The control logic connects the first data input to one of the buffers, connects the second data input to one of the buffers, and connects the data output to one of the buffers; it swaps the buffer currently connected to the first data input for a non-connected buffer when first data have been validly written through the first data input into that buffer, and swaps the buffer currently connected to the second data input for the non-connected buffer when second data have been validly written through the second data input into that buffer.

02-11-2023 publication date

Packet Payload Mapping for Robust Transmission of Data

Number: US20230353511A1

Systems and methods for packet payload mapping for robust transmission of data are described. For example, methods may include receiving, using a network interface, packets that each respectively include a primary frame and one or more preceding frames from the sequence of frames of data that are separated from the primary frame in the sequence of frames by a respective multiple of a stride parameter; storing the frames of the packets in a buffer with entries that each hold the primary frame and the one or more preceding frames of a packet; reading a first frame from the buffer as the primary frame from one of the entries; determining that a packet with a primary frame that is a next frame in the sequence has been lost; and, responsive to the determination, reading the next frame from the buffer as a preceding frame from one of the entries.

30-05-2023 publication date

Multi-partition communication method based on FC device, FC device and storage medium

Number: CN116185887A

The invention provides a multi-partition communication method based on an FC (Fibre Channel) device, an FC device and a storage medium. The method comprises the following steps: step S100, establishing a first cache region in the DDR (Double Data Rate) memory connected to the FPGA (Field Programmable Gate Array); step S200, dividing the first cache region into blocks according to the size of the dmabuffer of the corresponding destination partition system, and putting the DDR address corresponding to each block and the corresponding dmabuffer address into a configuration list of the FPGA; and step S300, establishing an address cycle release framework in the FPGA, storing the received small-queue IU data block by block in the first cache region in sequence according to the configuration list, and, after one small-queue IU is completely stored, immediately initiating dma through dmap, reading the configuration list, and obtaining addresses ...

30-06-2023 publication date

Method and device for deleting address resolution protocol

Number: CN116366544A
Author: WANG DAN, WU LEYI

The invention relates to a method and device for deleting address resolution protocol entries. The method comprises the following steps: acquiring a deletion instruction comprising a message with a preset structure; initializing a message queue and a pointer when the message meets a preset condition; writing the content of the message into the message queue entry by entry through the pointer; and calling a driver chip interface based on the message queue to delete the related content of the address resolution protocol. With this method and device, the related table-entry contents of the address resolution protocol can be cleared with one instruction, the performance of the clearing queue is optimized, the working efficiency of the switch is improved, and the utilization rate of address-resolution-protocol table-entry resources is improved.

02-02-2012 publication date

Backplane Interface Adapter

Number: US20120026868A1
Assignee: Foundry Networks LLC

A backplane interface adapter for a network switch. The backplane interface adapter includes at least one receiver that receives input cells carrying packets of data; at least one cell generator that generates encoded cells which include the packets of data from the input cells; and at least one transmitter that transmits the generated cells to a switching fabric. The cell includes a destination slot identifier that identifies a slot of the switching fabric towards which the respective input cell is being sent. The generated cells include in-band control information.

02-02-2012 publication date

Maintaining packet order using hash-based linked-list queues

Number: US20120027019A1
Assignee: Juniper Networks Inc

Ordering logic ensures that data items being processed by a number of parallel processing units are unloaded from the processing units in the original per-flow order that the data items were loaded into the parallel processing units. The ordering logic includes a pointer memory, a tail vector, and a head vector. Through these three elements, the ordering logic keeps track of a number of “virtual queues” corresponding to the data flows. A round robin arbiter unloads data items from the processing units only when a data item is at the head of its virtual queue.
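The pointer-memory/head-vector/tail-vector bookkeeping can be modeled as per-flow linked lists threaded through the processing-unit slots. This is an illustrative sketch; the hardware uses fixed-width vectors where this model uses Python dicts:

```python
class VirtualQueues:
    """Per-flow FIFO order across parallel processing units, kept as
    linked lists threaded through a pointer memory."""

    def __init__(self, num_slots):
        self.next = [None] * num_slots   # pointer memory: slot -> next slot
        self.head = {}                   # flow -> slot at head of its queue
        self.tail = {}                   # flow -> slot at tail of its queue

    def load(self, slot, flow):
        # A data item of `flow` enters processing unit `slot`.
        if flow in self.tail:
            self.next[self.tail[flow]] = slot   # link behind current tail
        else:
            self.head[flow] = slot              # first item of this flow
        self.tail[flow] = slot

    def can_unload(self, slot, flow):
        # The arbiter unloads an item only at the head of its virtual queue.
        return self.head.get(flow) == slot

    def unload(self, slot, flow):
        assert self.can_unload(slot, flow)
        nxt = self.next[slot]
        self.next[slot] = None
        if nxt is None:
            del self.head[flow], self.tail[flow]  # flow's queue now empty
        else:
            self.head[flow] = nxt                 # advance the head pointer
```

A round-robin arbiter would poll finished slots and call `unload` only where `can_unload` holds, guaranteeing per-flow order regardless of which unit finishes first.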

17-05-2012 publication date

Apparatus, electronic apparatus and method for adjusting jitter buffer

Number: US20120123774A1

An apparatus, electronic apparatus and method for adjusting a jitter buffer are provided. A previous jitter buffer size, based on a jitter buffer size determined according to an adaptive jitter-buffer-size calculation algorithm, is applied in predicting a jitter buffer size for a future time, and the predicted jitter buffer size is then applied to obtain a jitter buffer size for a valid time. The audio quality of speech transmitted over a packet-switched network is thereby enhanced.

31-05-2012 publication date

DMA (Direct Memory Access) coalescing

Number: US20120137029A9
Assignee: Individual

In general, in one aspect, a method includes determining a repeated, periodic DMA (Direct Memory Access) coalescing interval based, at least in part, on a power sleep state of a host platform. The method also includes buffering data received at the device in a FIFO (First-In-First-Out) queue during the interval and DMA-ing the data enqueued in the FIFO to a memory external to the device after expiration of the repeated, periodic DMA coalescing interval.
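The buffer-then-flush behavior can be simulated with a tick counter standing in for the coalescing timer; this illustrative sketch omits how the interval is derived from the platform's sleep state:

```python
class DmaCoalescer:
    """Buffer received data in a FIFO and DMA the whole batch to host
    memory only when the periodic coalescing interval expires."""

    def __init__(self, interval_ticks):
        self.interval = interval_ticks   # chosen from the host sleep state
        self.fifo = []                   # device-side FIFO queue
        self.host_memory = []            # memory external to the device
        self.ticks = 0

    def receive(self, data):
        self.fifo.append(data)           # no DMA yet: let the host sleep

    def tick(self):
        self.ticks += 1
        if self.ticks % self.interval == 0:
            self.host_memory.extend(self.fifo)   # one coalesced DMA burst
            self.fifo.clear()
```

Batching DMAs this way lets the host stay in its power sleep state between interval expirations instead of waking for every received buffer.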

More details
16-08-2012 publication date

Software Pipelining On A Network On Chip

Number: US20120209944A1
Assignee: International Business Machines Corp

Memory sharing in a software pipeline on a network on chip ('NOC'), the NOC including integrated processor ('IP') blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers. The memory sharing includes segmenting a computer software application into stages of a software pipeline, the software pipeline comprising one or more paths of execution; allocating memory to be shared among at least two stages, including creating a smart pointer, the smart pointer including data elements for determining when the shared memory can be deallocated; determining, in dependence upon those data elements, that the shared memory can be deallocated; and deallocating the shared memory.

More details
27-12-2012 publication date

Compact load balanced switching structures for packet based communication networks

Number: US20120327771A1
Assignee: Individual

A switching node is disclosed for the routing of packetized data employing a multi-stage packet-based routing fabric combined with a plurality of memory switches employing memory queues. The switching node allows reduced throughput delays, dynamic provisioning of bandwidth, and packet prioritization.

More details
31-01-2013 publication date

System and method for prioritizing requests to a SIM

Number: US20130029726A1
Assignee: Qualcomm Inc

The method and system relate to prioritizing access and shaping traffic to the SIM such that the requests to the SIM that pertain to registering the wireless mobile device on a network are given a higher priority than other requests to the SIM. The higher priority requests that relate to registering the mobile device on a network may be processed by the SIM prior to at least one other request that is not related to registering the mobile device on the network.
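The traffic-shaping idea above amounts to a priority queue in which registration-related SIM requests always outrank other requests. A minimal sketch (the priority constants and request strings are invented for illustration):

```python
import heapq
import itertools

REGISTRATION = 0   # highest priority: requests needed to register on a network
OTHER = 1

_counter = itertools.count()

def submit(queue, priority, request):
    # Tie-break with a monotonic counter so equal-priority requests stay FIFO.
    heapq.heappush(queue, (priority, next(_counter), request))

def next_request(queue):
    return heapq.heappop(queue)[2]

q = []
submit(q, OTHER, "read phonebook")
submit(q, REGISTRATION, "read IMSI")
submit(q, OTHER, "read SMS")
order = [next_request(q) for _ in range(3)]
# Registration-related access is served before the other SIM requests.
```

Even though "read phonebook" arrived first, the registration request jumps ahead, which is exactly the behavior the abstract describes.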

More details
04-04-2013 publication date

System and method for supporting a complex message header in a transactional middleware machine environment

Number: US20130086149A1
Author: Peizhi SHI, Yongshun JIN
Assignee: Oracle International Corp

A flexible transactional data structure can be used to store message header in a transactional middleware machine environment. The flexible transactional data structure can have dynamic numbers of fields and is accessible via specified IDs. The message header can include a first data structure that stores address information for accessing a client using a first message queue, and a second data structure that stores address information for accessing a client using a second message queue. The first type of server operates to use only the first data structure to obtain the address information for accessing the client using the first message queue. The second type of server operates to obtain a key from the first data structure first, and then use the key to obtain from the second data structure the address information for accessing the client using the second message queue.

More details
27-06-2013 publication date

Alignment circuit and receiving apparatus

Number: US20130163599A1
Author: Akihiro Nozaki
Assignee: Renesas Electronics Corp

A control circuit generates a selection signal indicating a head area of an alignment buffer when the area is an unwritten area, and when the head area is a written area, successively performs comparison between a sequence number stored in the area and a sequence number of a target packet from a head to a tail to search a boundary area and generates a selection signal indicating the detected boundary area. When the boundary area could not be detected even when the search reaches the last written area, the control circuit generates a selection signal indicating the next area of the last written area. The writing circuit shifts data stored in each area by one area from the area indicated by the selection signal in a direction of the tail of the alignment buffer, and writes packet information of the target packet into the area indicated by the selection signal.

More details
11-07-2013 publication date

Managing message transmission and reception

Number: US20130179505A1
Assignee: International Business Machines Corp

Various systems, processes, and products may be used to manage the transmission and reception of messages. In particular implementations, a system, process, and product for managing message transmission and reception may include the ability to receive a plurality of messages to be transmitted over a communication network, wherein some of the messages have a higher priority and some of the messages have a lower priority, and enqueue descriptors for the messages in a direct memory access queue. The system, process, and product may also include the ability to determine whether an overrun of the queue has occurred, analyze the queue if an overrun has occurred to determine if lower priority messages are associated with any of the descriptors in the queue, and replace, if descriptors for lower priority messages are in the queue, the descriptors for the lower priority messages with descriptors for higher priority messages.
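The overrun-handling step above — scanning the DMA queue and replacing descriptors of lower-priority messages with descriptors of higher-priority ones — can be sketched with a fixed-capacity queue. Class and priority names are illustrative assumptions.

```python
from collections import deque

HIGH, LOW = 0, 1

class DmaDescriptorQueue:
    """Sketch: on overrun, a descriptor of a lower-priority message already
    in the DMA queue is replaced by the higher-priority message's descriptor."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()              # (priority, message) descriptors

    def enqueue(self, priority, msg):
        if len(self.queue) < self.capacity:
            self.queue.append((priority, msg))
            return True
        # Overrun: a high-priority message may displace a low-priority one.
        if priority == HIGH:
            for i, (p, _) in enumerate(self.queue):
                if p == LOW:
                    self.queue[i] = (priority, msg)
                    return True
        return False                      # dropped

q = DmaDescriptorQueue(capacity=2)
q.enqueue(LOW, "telemetry")
q.enqueue(LOW, "log")
q.enqueue(HIGH, "alarm")                  # displaces the first low-priority entry
contents = [m for _, m in q.queue]
```

A further low-priority enqueue while the queue is full simply fails, matching the asymmetry in the abstract: only higher-priority messages displace existing descriptors.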

More details
05-09-2013 publication date

CONTEXT-SENSITIVE OVERHEAD PROCESSOR

Number: US20130230055A1
Author: Haas Wally
Assignee: ALTERA CANADA CO.

An overhead processor for data transmission in digital communications is disclosed. Incoming data is transmitted along a datapath. If there are two or more groups of incoming data, arriving separately, the initial group(s) of received data can be held in an elastic store until the arrival of additional group(s) of data; upon their arrival, all received data are combined and transmitted into flip-flop(s). The data is transmitted from said flip-flop(s) to a logic element to determine the new data context of imminent incoming data prior to any additional incoming bytes arriving along the datapath. Therefore, the number of overhead processors required for multi-byte data transmission is reduced, potentially to one.

1.-7. (canceled)
8. A context-sensitive overhead processor for transmitting data, the processor comprising instructions enabling the processor to: receive, on a datapath, data associated with a state machine; receive, from storage circuitry, a first state associated with the state machine; and compute, based on the received data and the first state, a second state associated with the state machine.
9. The processor of claim 8, wherein the received data and the received first state are received by the processor in a single clock cycle.
10. The processor of claim 8, wherein the storage circuitry comprises an elastic store.
11. The processor of claim 8, wherein the processor further comprises instructions enabling the processor to store the second state using the storage circuitry.
12. The processor of claim 8, wherein: the first state comprises a context; and the received data comprises overhead bytes associated with the context.
13. The processor of claim 10, wherein the storage circuitry comprises a flip-flop.
14. The processor of claim 13, wherein the elastic store stores a first group of data; and the first group of data is read by the flip-flop.
15. The ...

More details
12-09-2013 publication date

Data block output apparatus, communication system, data block output method, and communication method

Number: US20130235878A1
Author: Masaki Hirota
Assignee: Fujitsu Ltd

A data block output apparatus includes a first queue that stores data blocks of first traffic; a second queue that stores data blocks of second traffic and is read preferentially over the first queue; a monitoring unit that monitors for occurrence of data blocks read out of the second queue after reading of a data block from the first queue is completed; and a control unit that controls a data block interval between completion of reading of one data block in the first traffic and a start of reading of a next data block in the first traffic when occurrence frequency of the data blocks read out of the second queue after the reading of one data block from the first queue is completed is equal to or higher than a predetermined value.

More details
19-09-2013 publication date

System and method for efficient shared buffer management

Number: US20130247071A1
Assignee: Juniper Networks Inc

A method for managing a shared buffer between a data processing system and a network. The method provides a communication interface unit for managing bandwidth of data between the data processing system and an external communicating interface connecting to the network. The method performs, by the communication interface unit, a combined de-queue and head drop operation on at least one data packet queue within a predefined number of clock cycles. The method also performs, by the communication interface unit, an en-queue operation on the at least one data packet queue in parallel with the combined de-queue operation and head drop operation within the predefined number of clock cycles.

More details
26-09-2013 publication date

Reducing Headroom

Number: US20130250757A1
Assignee: Broadcom Corp

The various embodiments of the invention provide mechanisms to reduce headroom size while minimizing dropped packets. In general, this is done by using a shared headroom space between all ports, and providing a randomized delay in transmitting a flow-control message.

More details
10-10-2013 publication date

BUFFER MANAGEMENT SCHEME FOR A NETWORK PROCESSOR

Number: US20130266021A1
Assignee:

The invention provides a method for adding specific hardware on both receive and transmit sides that hides from the software most of the effort related to buffer and pointer management. At initialization, a set of pointers and buffers is provided by software, in a quantity large enough to support expected traffic. A Send Queue Replenisher (SQR) and a Receive Queue Replenisher (RQR) hide RQ and SQ management from software. The RQR and SQR fully monitor the pointer queues and recirculate pointers from the transmit side to the receive side.

1.-10. (canceled)
11. A network processor for managing packets, the network processor comprising: a receive queue replenisher (RQR) for maintaining a hardware-managed receive queue, the receive queue being suitable for handling a first pointer to a memory location for storing a packet which has been received; a send queue replenisher (SQR) for maintaining a hardware-managed send queue, the send queue being suitable for handling a first send element, the first send element comprising a second pointer to the memory location where the packet has been processed and is ready to be sent; a queue manager for, in response to the packet having been sent, receiving the first send element from the send queue and sending the first send element to the RQR, for the RQR to add the second pointer to the receive queue so that the memory location can be reused for storing another packet.
12. The network processor of claim 11, wherein the first send element in the send queue further comprises an identifier of the receive queue, so as to indicate to the RQR to which receive queue the second pointer should be added.
13. The network processor of claim 11, wherein the receive queue and the send queue belong to different queue pairs, and wherein the receive queue identifier further comprises information for determining the queue pair to which the receive queue belongs.
14. The network processor of claim 11, wherein multiple software threads can run ...

More details
17-10-2013 publication date

Efficient multiple filter packet statistics generation

Number: US20130275613A1
Assignee: Verisign Inc

Incoming data streams are managed by receiving a data stream on at least one network interface card (NIC) and performing operations on the data stream using a first process running several first threads for each network interface card and at least one group of second multiple processes, each with an optional group of second threads. The first process and the one or more groups of second multiple processes are independent and communicate via shared memory. The first threads for each network interface card are different than the group of second threads. The system includes at least one network interface card that receives a data stream, a first processor that runs a first process that uses a plurality of first threads for each network interface card, and a second processor that runs at least one group of second multiple processes, each with an optional group of second threads.

More details
24-10-2013 publication date

Method and Transmitting Unit for Reducing a Risk of Transmission Stalling

Number: US20130279396A1
Assignee:

The present disclosure relates to a method and a transmitting unit for reducing a risk of transmission stalling between a transmitting unit and a receiving unit in a communication network system comprising said transmitting unit arranged to transmit data blocks to said receiving unit. Each data block comprises a block sequence number, and transmitted data blocks are stored in a transmission buffer. A transmission buffer window is arranged to control the flow of retransmission of said transmitted data blocks. When the block sequence number has been acknowledged in a piggybacked acknowledgement/negative acknowledgement field, it is only set as acknowledged upon receipt of a packet uplink acknowledgement/negative acknowledgement message or a packet downlink acknowledgement/negative acknowledgement message comprising an acknowledgement for said block sequence number.

1. (canceled)
2. A method for reducing a risk of transmission stalling between a transmitting unit and a receiving unit in a communication network system, wherein the method comprises: transmitting data blocks to the receiving unit from the transmitting unit, wherein each data block comprises a block sequence number; storing the transmitted data blocks in a transmission buffer; controlling flow of retransmission of the transmitted data blocks by using a transmission buffer window comprising an acknowledge state variable that contains the block sequence number value of the oldest data block that has not been positively acknowledged by its peer; responsive to a block sequence number corresponding to the acknowledge state variable being acknowledged in a first type of acknowledgement message, setting the block sequence number as tentatively acknowledged; responsive to the block sequence number being acknowledged in a second type of acknowledgement message: setting the status of the block sequence number as acknowledged; and advancing the transmission buffer window to the acknowledge state variable; wherein the ...
More details
24-10-2013 publication date

Apparatus and method for receiving and forwarding data

Number: US20130279509A1
Author: Søren Kragh
Assignee: Napatech AS

A method and apparatus adapted to prevent Head-Of-Line blocking by forwarding dummy packets to queues which have not received data for a predetermined period of time. This prevention of HOL may be on an input where data is forwarded to each of a number of FIFOs or an output where data is de-queued from FIFOs. The dummy packets may be provided with a time stamp derived from a recently queued or de-queued packet.
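The dummy-packet scheme above can be sketched as: any FIFO that has been idle for too many scheduling rounds receives a dummy packet stamped with a recently observed timestamp, so a timestamp-ordered de-queue stage is never stalled waiting on an empty queue. Names (`fill_idle_queues`, the idle-round limit) are illustrative assumptions.

```python
import collections

DUMMY = object()   # sentinel payload for dummy packets

def fill_idle_queues(queues, last_timestamp, idle_counts, idle_limit=3):
    """Forward a dummy packet, stamped with a recently seen timestamp, to
    any FIFO that has received no data for `idle_limit` rounds, preventing
    Head-Of-Line blocking at the ordering stage (hypothetical sketch)."""
    for name, q in queues.items():
        if idle_counts[name] >= idle_limit:
            q.append((last_timestamp, DUMMY))
            idle_counts[name] = 0

queues = {"q0": collections.deque(), "q1": collections.deque()}
idle = {"q0": 0, "q1": 3}                   # q1 saw no data for 3 rounds
queues["q0"].append((100, "payload"))
fill_idle_queues(queues, last_timestamp=100, idle_counts=idle)
has_dummy = queues["q1"][0][1] is DUMMY     # q1 can now participate in ordering
```

The consumer discards dummy packets on de-queue; their only job is to carry a timestamp that lets the merge logic make progress.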

More details
21-11-2013 publication date

TECHNIQUES FOR SENDING AND RELAYING INFORMATION OVER BROADCAST AND NON-BROADCAST COMMUNICATIONS MEDIA

Number: US20130308652A1
Assignee: TV Band Service, LLC

Sending and relaying of information includes: a direct receiver receives messages from a server over a broadcast communications medium, each message having associated targeter data attributes; the direct receiver selects messages from the server for storage in a message store of the first receiver device based on the targeter data attributes associated with each message from the server; the direct receiver connects with an indirect receiver over the non-broadcast communications medium; the direct receiver receives a message request from the indirect receiver for messages in the message store of the direct receiver over a non-broadcast communications medium; and in response to the message request, the direct receiver sends messages in the message store of the direct receiver to the indirect receiver over the non-broadcast communications medium. The direct receiver receives data from the server, and the indirect receiver receives data via another receiver.

1. A method, comprising: (a) receiving, by a first receiver, one or more messages from a server over a broadcast communications medium, each message comprising associated targeter data attributes; (b) selecting, by the first receiver, one or more of the messages from the server for storage in a message store of the first receiver device based on the targeter data attributes associated with each message from the server; (c) receiving, by the first receiver, a message request from a second receiver device over a non-broadcast communications medium; and (d) in response to the message request, sending, by the first receiver, one or more messages in the message store of the first receiver device to the second receiver device over the non-broadcast communications medium.
2. The method of claim 1, wherein the first receiver comprises a direct receiver, wherein the direct receiver is a receiver that receives data from the server, and wherein the receiving (a) and the selecting (b) comprise: (a1) receiving, by the ...

More details
05-12-2013 publication date

System for performing data cut-through

Number: US20130322271A1
Assignee: Broadcom Corp

A system transfers data. The system includes an ingress node transferring data at a determined bandwidth. The ingress node includes a buffer and operates based on a monitored node parameter. The system includes a controller in communication with the ingress node. The controller is configured to allocate, based on the monitored node parameter, an amount of the determined bandwidth for directly transferring data to bypass the buffer of the ingress node.

More details
05-12-2013 publication date

Router and many-core system

Number: US20130322459A1
Author: HUI XU
Assignee: Toshiba Corp

According to one embodiment, a router includes a plurality of input ports and a plurality of output ports. The input ports receive a packet including control information indicating a type of access. Each of the input ports includes a first buffer and a second buffer which store the packet. The output ports output the packet. Each of the input ports selects at least one of the first buffer and the second buffer as a buffer in which the packet is stored on the basis of the control information and a state of the output port serving as a destination port of the packet.

More details
26-12-2013 publication date

Offloading virtual machine flows to physical queues

Number: US20130343399A1
Assignee: Microsoft Corp

The present invention extends to methods, systems, and computer program products for offloading virtual machine flows to physical queues. A computer system executes one or more virtual machines, and programs a physical network device with one or more rules that manage network traffic for the virtual machines. The computer system also programs the network device to manage network traffic using the rules. In particular, the network device is programmed to determine availability of one or more physical queues at the network device that are usable for processing network flows for the virtual machines. The network device is also programmed to identify network flows for the virtual machines, including identifying characteristics of each network flow. The network device is also programmed to, based on the characteristics of the network flows and based on the rules, assign one or more of the network flows to at least one of the physical queues.

More details
23-01-2014 publication date

PACKET ROUTER HAVING A HIERARCHICAL BUFFER STRUCTURE

Number: US20140023085A1
Assignee: LSI Corporation

A packet-router architecture in which buffer modules are interconnected by one or more interconnect fabrics and arranged to form a plurality of hierarchical buffer levels, with each higher buffer level having more buffer modules than a corresponding lower buffer level. An interconnect fabric is configured to connect three or more respective buffer modules, with one of these buffer modules belonging to one buffer level and the other two or more buffer modules belonging to a next higher buffer level. A buffer module is configured to implement a packet queue that (i) enqueues received packets at the end of the queue in the order of their arrival to the buffer module, (ii) dequeues packets from the head of the queue, and (iii) advances packets toward the head of the queue when the buffer module transmits one or more packets to the higher buffer level or to a respective set of output ports connected to the buffer module.

1. An apparatus comprising a packet router that includes: an input buffer module configured to receive packets from a set of input ports; a set of output buffer modules, each configured to direct packets stored therein to a respective set of output ports; and one or more interconnect fabrics, wherein each of said one or more interconnect fabrics is disposed to couple a respective first buffer module, a respective second buffer module, and a respective third buffer module and is configured to transport packets from said respective first buffer module to at least one of said respective second and third buffer modules; said respective first buffer module being configured to: enqueue packets at an end of a queue therein in an order of their arrival to said respective first buffer module; dequeue packets from a head of the queue; and advance packets toward the head of the queue when the first buffer module dequeues one or more packets from the head of the queue and transmits the one or more dequeued packets, via the interconnect fabric, to at least one of said respective second buffer module and said respective third buffer module; said ...
More details
23-01-2014 publication date

Backplane Interface Adapter with Error Control and Redundant Fabric

Number: US20140023086A1
Assignee:

A backplane interface adapter with error control and redundant fabric for a high-performance network switch. The error control may be provided by an administrative module that includes a level monitor, a stripe synchronization error detector, a flow controller, and a control character presence tracker. The redundant fabric transceiver of the backplane interface adapter improves the adapter's ability to properly and consistently receive narrow input cells carrying packets of data and output wide striped cells to a switching fabric.

1. A method comprising: receiving blocks of data through a first set of lanes; striping the blocks of data among a second set of lanes; adding a control character to the striped blocks of data on each of the lanes in the second set of lanes; synchronizing the striped blocks of data on the second set of lanes according to the control character; wherein the first set of lanes has a different number of lanes than the second set of lanes.
2. A system comprising: a first packet processor that receives blocks of data through a first set of lanes, stripes the blocks of data among a second set of lanes, and adds a control character to the striped blocks of data on each of the lanes in the second set of lanes; a second packet processor that receives the striped blocks of data along with the control character on each of the lanes in the second set of lanes, and synchronizes the striped blocks of data on the second set of lanes according to the control character; wherein the first set of lanes has a different number of lanes than the second set of lanes.

This application is a continuation application of U.S. Ser. No. 12/400,594, filed Mar. 9, 2009, which is a continuation application of U.S. application Ser. No. 09/988,066, filed Nov. 16, 2001, which is a continuation-in-part application of U.S. application Ser. No. 09/855,038, filed May 15, 2001; U.S. application Ser. No. 09/988,066 claims the benefit of provisional U.S. Application No. 60/249,871, filed ...

More details
30-01-2014 publication date

Fractional threshold encoding and aggregation

Number: US20140029630A1
Author: Aron B. Hall, Jared Go
Assignee: Hobnob Inc

Fractional encoding of a packet into fractional packets and reconstruction of fractional packets into an original packet is disclosed. A packet is received. A plurality fractional packets is constructed from the received packet such that the received packet is fully reconstructable from a portion of the fractional packets. The portion is fewer than all of the fractional packets. At least one fractional packet is transmitted.
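The key property above — full reconstruction from a portion of the fractional packets — can be illustrated with the simplest erasure code: split the packet into two halves plus an XOR parity fragment, so any two of the three fragments recover the original. This 2-of-3 XOR scheme is a stand-in illustration, not the patent's actual encoding.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packet):
    """Split an even-length packet into two halves plus an XOR parity
    fragment; any two of the three fractional packets recover the original
    (simple 2-of-3 illustration)."""
    assert len(packet) % 2 == 0
    half = len(packet) // 2
    a, b = packet[:half], packet[half:]
    return [("a", a), ("b", b), ("p", xor_bytes(a, b))]

def decode(fragments):
    """Reconstruct the packet from any two fragments."""
    got = dict(fragments)
    if "a" in got and "b" in got:
        return got["a"] + got["b"]
    if "a" in got:                                    # b lost: b = a XOR p
        return got["a"] + xor_bytes(got["a"], got["p"])
    return xor_bytes(got["b"], got["p"]) + got["b"]   # a lost

pkt = b"fractional!!"                 # even length
frags = encode(pkt)
recovered = decode([frags[0], frags[2]])   # fragment "b" was lost in transit
```

Real deployments would use a k-of-n code (e.g. Reed-Solomon) for finer thresholds, but the reconstruction-from-a-portion property is the same.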

More details
06-02-2014 publication date

Phase-Based Packet Prioritization

Number: US20140036929A1
Assignee: Futurewei Technologies, Inc.

A network node comprises a receiver configured to receive a first packet; a processor coupled to the receiver and configured to process the first packet and prioritize the first packet according to a scheme, wherein the scheme assigns priority to packets based on phase; and a transmitter coupled to the processor and configured to transmit the first packet. An apparatus comprises a processor coupled to a memory and configured to generate instructions for a packet prioritization scheme, wherein the scheme assigns priority to packet transactions based on closeness to completion, and the memory is configured to store the instructions. A method comprises receiving a first packet, processing the first packet, prioritizing the first packet according to a scheme, wherein the scheme assigns priority to packets based on phase, and transmitting the first packet.

1. A network node comprising: a receiver configured to receive a first packet; a processor coupled to the receiver and configured to process the first packet and prioritize the first packet according to a scheme, wherein the scheme assigns priority to packets based on phase; and a transmitter coupled to the processor and configured to transmit the first packet.
2. The node of claim 1, wherein the scheme assigns highest priority to packets in their last phase.
3. The node of claim 1, wherein the scheme assigns lowest priority to packets in their first phase.
4. The node of claim 1, wherein the scheme assigns highest priority to packets in their last phase and assigns lowest priority to packets in their first phase.
5. The node of claim 1, wherein the scheme assigns one of a plurality of intermediate priorities to packets in neither their first phase nor their last phase.
6. The node of claim 1, wherein the scheme favors packets that have relatively fewer phases and disfavors packets that have relatively more phases.
7. The node of claim 1, wherein the scheme favors packets that have ...
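The "closeness to completion" scheme can be sketched as a priority key: packets with fewer remaining phases rank first, and packets with fewer total phases are favored on ties. The exact key shape is an illustrative assumption.

```python
def phase_priority(current_phase, total_phases):
    """Lower tuple = higher priority. Packets in their last phase get the
    highest priority; on ties, packets with fewer total phases are favored
    (illustrative mapping of the closeness-to-completion scheme)."""
    remaining = total_phases - current_phase
    return (remaining, total_phases)

# (name, current phase, total phases) for three in-flight transactions.
transactions = [("t1", 1, 4), ("t2", 3, 3), ("t3", 2, 3)]
order = [name for name, cur, tot in
         sorted(transactions, key=lambda t: phase_priority(t[1], t[2]))]
```

Here "t2", already in its last phase, is served first, while "t1", just starting a four-phase transaction, is served last.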

More details
06-02-2014 publication date

Priority Driven Channel Allocation for Packet Transferring

Number: US20140036930A1
Assignee: FutureWei Technologies Inc

A method comprising advertising to a second node a total allocation of storage space of a buffer, wherein the total allocation is less than the capacity of the buffer, wherein the total allocation is partitioned into a plurality of allocations, wherein each of the plurality of allocations is advertised as being dedicated to a different packet type, and wherein a credit status for each packet type is used to manage the plurality of allocations, receiving a packet of a first packet type from the second node, and storing the packet to the buffer, wherein the space in the buffer occupied by the first packet type exceeds the advertised space for the first packet type due to the packet.
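The mechanism above advertises per-type credits summing to less than the physical buffer, so one packet type can temporarily overrun its advertised share using the unadvertised slack. A minimal sketch (class and type names are invented for illustration):

```python
class CreditBuffer:
    """Sketch of priority-driven allocation: advertised per-type credits sum
    to less than the physical capacity; a type may overrun its advertised
    share as long as the physical buffer has room."""

    def __init__(self, capacity, advertised):
        assert sum(advertised.values()) < capacity   # keep unadvertised slack
        self.capacity = capacity
        self.credits = dict(advertised)   # remaining advertised credits per type
        self.used = 0

    def receive(self, pkt_type, size=1):
        if self.used + size > self.capacity:
            return False                  # physical buffer truly full
        self.used += size
        self.credits[pkt_type] -= size    # may go negative: an overrun of the
        return True                       # advertised share, absorbed by slack

buf = CreditBuffer(capacity=10, advertised={"posted": 4, "completion": 4})
for _ in range(5):                        # 5 posted packets: 1 over the advert
    buf.receive("posted")
overran = buf.credits["posted"] < 0
```

A negative credit count is precisely the "space occupied exceeds the advertised space" state in the abstract; the two credit slack units cover it.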

More details
06-02-2014 publication date

Relay device and recovery method

Number: US20140040679A1
Assignee: Fujitsu Ltd

A relay device for dividing a storage device into a plurality of unit areas, assigning an unused unit area from among the plurality of unit areas to received channel-specified data, and performing at least one of adjustment of a transmission timing of the data and conversion of the data by using the assigned unit area, is disclosed. The relay device includes an error detector configured to detect an error where the unit area from which the data is to be read is not specified; and an error controller configured to recognize the channel of the data stored in the unit area that is not specified due to the detected error as a target channel, and to invalidate the assignment of the unit area to the recognized target channel.

More details
27-02-2014 publication date

System and method for centralized virtual interface card driver logging in a network environment

Number: US20140059195A1
Author: Prabhath Sajeepa
Assignee: Cisco Technology Inc

A method is provided in one example and includes creating a staging queue in a virtual interface card (VIC) adapter firmware of a server based on a log policy; receiving a log message from a VIC driver in the server; copying the log message to the staging queue; generating a VIC control message comprising the log message from the staging queue; and sending the VIC control message to a switch.

More details
27-02-2014 publication date

Method and Apparatus for Probabilistic Allocation in a Switch Packet Buffer

Number: US20140059303A1
Assignee: Broadcom Corp

Systems and methods of writing data to a buffer during a buffer cycle are described. The buffer has a plurality of buffer banks having various fill levels. The buffer determines a first portion of banks from the plurality of buffer banks; the first portion consists of unfilled banks. A rank can be assigned to each of the first portion of banks and a candidate set of banks chosen from the first portion of banks. A target bank is then chosen from the candidate set and the data is written to that bank. The ranking may be random. Furthermore, the target bank can be chosen based on ranking, fill level, or both.
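The selection pipeline described above — restrict to unfilled banks, rank randomly, draw a small candidate set, then pick by fill level — resembles the classic "power of two choices" policy. A minimal sketch (candidate-set size and function name are illustrative assumptions):

```python
import random

def choose_bank(fill_levels, capacity, candidates=2, rng=random):
    """Restrict to unfilled banks, rank them randomly, draw a candidate set,
    then pick the least-filled candidate (hypothetical sketch of the
    probabilistic write-allocation scheme)."""
    unfilled = [b for b, fill in enumerate(fill_levels) if fill < capacity]
    ranked = rng.sample(unfilled, k=min(candidates, len(unfilled)))
    return min(ranked, key=lambda b: fill_levels[b])

rng = random.Random(7)                 # seeded for reproducibility
fills = [3, 9, 10, 1]                  # bank 2 is full (capacity 10)
bank = choose_bank(fills, capacity=10, rng=rng)
```

Sampling only a couple of candidates keeps the hardware comparison cheap while still steering writes away from nearly full banks on average.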

More details
06-03-2014 publication date

THROTTLING FOR FAST DATA PACKET TRANSFER OPERATIONS

Number: US20140064294A1
Assignee:

A fast send method may be selectively implemented for certain data packets received from an application for transmission through a network interface. When the fast send method is triggered for a data packet, the application requesting transmission of the data packet may be provided a completion notice nearly immediately after the data packet is received. The fast send method may be used for data packets similar to previously-transmitted data packets for which the information in the data packet is already vetted. For example, a data packet with a similar source address, destination address, source port, destination port, application identifier, and/or activity identifier may have already been vetted. Data packets sent through the fast send method may be throttled to prevent one communication stream from blocking out other communication streams. For example, every nth data packet queued for the fast send method may be transmitted by a slow send method.

1. A method, comprising: receiving, from an application, a first data packet for transmission; identifying a fast send method for transmitting the first data packet; determining whether to throttle the first data packet; and transmitting, if the first data packet is throttled, the first data packet with a slow send method.
2. The method of claim 1, further comprising transmitting, if the first data packet is not throttled, the first data packet with a fast send method.
3. The method of claim 2, in which the step of identifying the fast send method for transmitting the first data packet comprises determining a similar packet to the first data packet was previously transmitted successfully.
4. The method of claim 2, in which the step of determining whether to throttle the first data packet comprises determining whether a certain number of data packets were transmitted with the fast send method prior to the first data packet.
5. The method of claim 2, in which the step of determining whether to throttle the ...
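The throttling rule in the claims — count packets sent by the fast path and periodically force one through the slow path so no stream monopolizes the interface — can be sketched in Python. The class name, the interval of 8, and the string return values are illustrative assumptions, not taken from the application:

```python
class FastSendThrottler:
    """Illustrative throttle: packets normally take the fast send path,
    but every nth packet is forced through the slow path so one
    communication stream cannot block out the others."""

    def __init__(self, throttle_interval=8):
        self.throttle_interval = throttle_interval
        self.fast_sent = 0  # fast-path packets since the last slow send

    def select_path(self):
        # Force the slow path once the fast-path run reaches the interval.
        if self.fast_sent >= self.throttle_interval - 1:
            self.fast_sent = 0
            return "slow"
        self.fast_sent += 1
        return "fast"
```

With an interval of 8, every eighth packet takes the slow path while the other seven use the fast path.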

More
Publication date: 06-03-2014

SOCKET TABLES FOR FAST DATA PACKET TRANSFER OPERATIONS

Number: US20140064295A1
Assignee:

A fast send method may be selectively implemented for certain data packets received from an application for transmission through a network interface. When the fast send method is triggered for a data packet, the application requesting transmission of the data packet may be provided a completion notice nearly immediately after the data packet is received. The fast send method may be used for data packets similar to previously-transmitted data packets for which the information in the data packet is already vetted. For example, a data packet with a similar source address, destination address, source port, destination port, application identifier, and/or activity identifier may have already been vetted. A socket table may be maintained listing previously-transmitted data packets and an instruction for handling additional data packets similar to the data packet entered in the socket table.

1. A method, comprising: receiving, from an application, a first data packet for transmission; identifying a similar packet in a socket table; and determining from the socket table, a transmission method for the first data packet.
2. The method of claim 1, in which the step of identifying the similar packet in the socket table comprises matching at least one of a local address, a local port, a remote address, a remote port, an application identifier, and an activity identifier of the first data packet in the socket table.
3. The method of claim 2, in which the step of identifying the similar packet comprises calculating a hash value for the first data packet.
4. The method of claim 1, further comprising: queuing, when no similar packet is identified in the socket table, the first data packet for transmission according to a slow send method; and adding information corresponding to the first data packet to the socket table.
5. The method of claim 4, further comprising: queuing, when the similar packet is identified in the socket table, the first ...
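A minimal sketch of the socket-table idea from the claims: a key hashes the fields claim 2 enumerates, the first (unvetted) packet of a flow goes out by the slow path and is recorded, and later packets with the same key reuse the recorded instruction. Python's built-in hash() and the "fast"/"slow" strings are stand-ins for whatever the implementation would use:

```python
def flow_key(local_addr, local_port, remote_addr, remote_port, app_id):
    # Hash over the fields named in claim 2; hash() is a stand-in.
    return hash((local_addr, local_port, remote_addr, remote_port, app_id))

class SocketTable:
    """Illustrative socket table: unseen flows take the slow (fully
    vetted) path and are recorded; similar later packets take the
    recorded fast path."""

    def __init__(self):
        self._table = {}

    def transmission_method(self, key):
        if key in self._table:
            return self._table[key]   # similar packet already vetted
        self._table[key] = "fast"     # vet once, then allow fast send
        return "slow"
```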

More
Publication date: 06-03-2014

Method And Apparatus For Performing Finite Memory Network Coding In An Arbitrary Network

Number: US20140064296A1
Assignee: Massachusetts Institute of Technology

Techniques for performing finite memory network coding in an arbitrary network limit an amount of memory that is provided within a node of the network for the performance of network coding operations during data relay operations. When a new data packet is received by a node, the data stored within the limited amount of memory may be updated by linearly combining the new packet with the stored data. In some implementations, different storage buffers may be provided within a node for the performance of network coding operations and decoding operations.

1. A machine implemented method for operating a network node in a memory efficient manner in a network having a plurality of nodes that does not use pre-established routing to direct packets through the network, the method comprising: receiving a new packet at the network node from an arbitrary direction; modifying contents of a coding buffer of the network node using the new packet, the coding buffer for use in performing network coding for the network node, the coding buffer to store no more than S packets for use in network coding, where S is a positive integer, wherein modifying the contents of the coding buffer includes linearly combining the new packet with packets already stored in the coding buffer to generate modified packets and storing the modified packets in the coding buffer; generating a coded packet to be transmitted from the network node after modifying contents of the coding buffer, wherein generating a coded packet includes linearly combining packets stored in the coding buffer using network coding; and transmitting the coded packet to one or more possibly unknown other nodes.
2. The method of claim 1, wherein: storing the modified packets in the coding buffer includes storing the modified packets in the coding buffer in place of packets already stored in the coding buffer.
3. The method of claim 1, wherein: modifying the contents of the coding buffer includes linearly combining the new packet with S ...
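The bounded coding buffer of the claims can be sketched with GF(2) arithmetic (byte-wise XOR), an illustrative simplification; a real coder would draw coefficients from a larger field. The buffer holds at most S packets, and a new packet is folded into the stored ones rather than appended, so memory stays fixed:

```python
def combine(a, b):
    # Linear combination over GF(2): byte-wise XOR of equal-length packets.
    return bytes(x ^ y for x, y in zip(a, b))

class CodingBuffer:
    """Illustrative finite-memory coding buffer holding at most `size`
    packets; a new packet modifies the stored contents in place."""

    def __init__(self, size):
        self.size = size
        self.slots = []

    def receive(self, packet):
        if len(self.slots) < self.size:
            self.slots.append(packet)
        else:
            # Buffer full: linearly combine the new packet with each
            # stored packet, replacing the stored contents.
            self.slots = [combine(s, packet) for s in self.slots]

    def emit_coded(self, coefficients):
        # coefficients: one 0/1 value per stored packet (GF(2)).
        coded = bytes(len(self.slots[0]))
        for coeff, s in zip(coefficients, self.slots):
            if coeff:
                coded = combine(coded, s)
        return coded
```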

More
Publication date: 06-03-2014

REFRESHING BLOCKED MEDIA PACKETS FOR A STREAMING MEDIA SESSION OVER A WIRELESS NETWORK IN A STALL CONDITION

Number: US20140064299A1
Assignee: Apple Inc.

A method for refreshing blocked media packets for a streaming media session over a wireless network in a stall condition is disclosed. The method can include a wireless communication device maintaining a buffer at an application layer. The buffer can contain at least a portion of media packets provided to a baseband layer by the application layer for transmission. Media packets provided to the baseband layer can be queued in a baseband queue prior to transmission. The method can further include the wireless communication device generating at least one new media packet for the streaming media session during the stall condition; flushing at least a portion of the media packets queued in the baseband queue; and replenishing the baseband queue by providing the baseband layer with at least a portion of the media packets contained in the buffer and at least one new media packet.

1. A method for refreshing blocked media packets for a streaming media session over a wireless network in a stall condition, the method comprising, at a wireless communication device that implements an ...: maintaining a buffer at the application layer, the buffer containing at least a portion of media packets provided to the baseband layer by the application layer for transmission in the streaming media session, wherein media packets provided to the baseband layer are queued by the baseband layer in a baseband queue prior to transmission; generating at least one new media packet for the streaming media session at the application layer during the stall condition, wherein a new media packet is a media packet that has not been previously provided to the baseband layer; flushing at least a portion of media packets queued in the baseband queue; and replenishing the baseband queue by providing the baseband layer with at least a portion of the media packets contained in the buffer and at least one new media packet of the at least one new media packet generated during the stall condition.
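A toy model of the flush-and-replenish sequence described above, assuming simple deques for the application-layer shadow buffer and the baseband queue (all names are illustrative):

```python
from collections import deque

class StreamingSession:
    """Sketch of stall recovery: the application layer keeps a copy of
    packets already handed to the baseband layer; on a stall the
    baseband queue is flushed, then refilled from that copy plus any
    packets generated during the stall."""

    def __init__(self):
        self.app_buffer = deque()      # application-layer copy
        self.baseband_queue = deque()  # packets awaiting transmission

    def send(self, packet):
        self.app_buffer.append(packet)
        self.baseband_queue.append(packet)

    def refresh_on_stall(self, new_packets):
        self.baseband_queue.clear()                  # flush blocked packets
        self.baseband_queue.extend(self.app_buffer)  # replenish from buffer
        self.baseband_queue.extend(new_packets)      # plus fresh media
```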

More
Publication date: 13-03-2014

Resource sharing in a telecommunications environment

Number: US20140075128A1
Assignee: TQ Delta LLC

A transceiver is designed to share memory and processing power amongst a plurality of transmitter and/or receiver latency paths, in a communications transceiver that carries or supports multiple applications. For example, the transmitter and/or receiver latency paths of the transceiver can share an interleaver/deinterleaver memory. This allocation can be done based on the data rate, latency, BER, impulse noise protection requirements of the application, data or information being transported over each latency path, or in general any parameter associated with the communications system.

More
Publication date: 20-03-2014

DYNAMIC POWER-SAVING APPARATUS AND METHOD FOR MULTI-LANE-BASED ETHERNET

Number: US20140078918A1

Provided are a power-saving apparatus and method for multi-lane-based Ethernet, which allow a multi-lane-based Ethernet apparatus to improve energy-conserving efficiency while minimizing the deterioration of the performance of a network. The power-saving apparatus includes: a multi-lane communication unit configured to distribute an input communication packet between a plurality of transmission lanes, photoelectrically convert the input communication packet and transmit the photoelectrically-converted communication packet; a multi-buffer unit configured to comprise one or more buffers, store the input communication packet in each active buffer and transmit the input communication packet to the multi-lane communication unit; and a control unit configured to monitor the multi-buffer unit, compare the size of memory space in use of the multi-buffer unit with a predefined threshold and switch the one or more buffers to an active or inactive state based on the results of the comparison.

1. A power-saving apparatus for multi-lane-based Ethernet, comprising: a multi-lane communication unit configured to distribute an input communication packet between a plurality of transmission lanes, photoelectrically convert the input communication packet and transmit the photoelectrically-converted communication packet; a multi-buffer unit configured to comprise one or more buffers, store the input communication packet in each active buffer and transmit the input communication packet to the multi-lane communication unit; and a control unit configured to monitor the multi-buffer unit, compare the size of memory space in use of the multi-buffer unit with a predefined threshold and switch the one or more buffers to an active or inactive state based on the results of the comparison.
2. The power-saving apparatus of claim 1, wherein the multi-buffer unit is further configured to switch the one or more buffers to the active or inactive state through an on-off control.
3. The power-saving ...
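The control unit's threshold comparison might look like the following sketch. The hysteresis margin is an added assumption to avoid toggling around the threshold, not something the abstract specifies:

```python
class MultiLaneController:
    """Illustrative control unit: compare buffer-memory occupancy with a
    threshold and switch buffers (and their lanes) active or inactive."""

    def __init__(self, total_buffers, threshold, margin=0.1):
        self.total = total_buffers
        self.active = 1               # at least one buffer stays active
        self.threshold = threshold
        self.margin = margin          # hysteresis to avoid flapping

    def adjust(self, occupancy):
        # occupancy: fraction of memory in use across active buffers
        if occupancy > self.threshold and self.active < self.total:
            self.active += 1          # wake another buffer/lane
        elif occupancy < self.threshold - self.margin and self.active > 1:
            self.active -= 1          # put a buffer/lane to sleep
        return self.active
```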

More
Publication date: 20-03-2014

Segmentation and reassembly of network packets for switched fabric networks

Number: US20140079075A1
Assignee: International Business Machines Corp

Reassembly of member cells into a packet comprises receiving an incoming member cell of a packet from a switching fabric wherein each member cell comprises a segment of the packet and a header, generating a reassembly key using selected information from the incoming member cell header wherein the selected information is the same for all member cells of the packet, checking a reassembly table in a content addressable memory to find an entry that includes a logic key matching the reassembly key, and using a content index in the found entry and a sequence number of the incoming member cell within the packet, to determine a location offset in a reassembly buffer area for storing the incoming member cell at said location offset in the reassembly buffer area for the packet for reassembly.
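The reassembly flow described above can be sketched with a dict standing in for the content-addressable memory; the 64-byte cell payload and the 1024-byte per-packet buffer region are illustrative sizing assumptions:

```python
CELL_PAYLOAD = 64     # illustrative segment size per member cell
PACKET_REGION = 1024  # illustrative buffer region reserved per packet

class Reassembler:
    """Sketch of cell reassembly: a key built from header fields shared
    by all member cells of a packet selects a table entry; the entry's
    content index plus the cell's sequence number give the offset in
    the reassembly buffer area."""

    def __init__(self):
        self.table = {}      # stands in for the content-addressable memory
        self.buffer = {}     # reassembly buffer area, keyed by offset
        self.next_index = 0  # next free content index

    def store_cell(self, src, packet_id, seq, payload):
        key = (src, packet_id)          # same for every member cell
        if key not in self.table:
            self.table[key] = self.next_index
            self.next_index += 1
        index = self.table[key]
        offset = index * PACKET_REGION + seq * CELL_PAYLOAD
        self.buffer[offset] = payload
        return offset
```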

More
Publication date: 01-01-2015

FLOW-BASED NETWORK SWITCHING SYSTEM

Number: US20150003248A1
Assignee:

A flow-based network switching system includes a memory having a flow table and a packet processor coupled to the memory. The packet processor includes a user-programmable flow-based rule storage that includes a plurality of flow-based rules. A flow-based handler and session manager in the packet processor is operable to retrieve application layer metadata from a first packet received over a network, determine a first flow session associated with the first packet using the application layer metadata from the first packet and the flow table, and retrieve at least one of the plurality of flow-based rules from the programmable flow-based rule storage using the application layer metadata from the first packet. A flow-based rule processing engine in the packet processor is operable to apply the at least one flow-based rule to the first packet. Packets with applied flow-based rules are forwarded through the network.

1. A flow-based network switching system, comprising: a memory including a flow table; and a packet processor coupled to the memory, wherein the packet processor includes: a user-programmable flow-based rule storage that includes a plurality of flow-based rules; a flow-based handler and session manager that is configured to retrieve transport layer metadata that includes a first transport protocol from a first packet, determine a first flow session associated with the first packet using the first transport protocol in the transport layer metadata from the first packet and the flow table, and retrieve at least one of the plurality of flow-based rules from the programmable flow-based rule storage using the first transport protocol in the transport layer metadata from the first packet; and a flow-based rule processing engine that is configured to apply the at least one flow-based rule to the first packet.
2. The system of claim 1, wherein the memory further includes: a queue that is configured to store packets.
3. The system of claim 1, wherein the flow-based ...

More
Publication date: 01-01-2015

SYSTEM AND METHOD FOR CREATING A SCALABLE MONOLITHIC PACKET PROCESSING ENGINE

Number: US20150003447A1
Author: Ekner Hartvig
Assignee:

A novel and efficient method is described that creates a monolithic high capacity Packet Engine (PE) by connecting N lower capacity Packet Engines (PEs) via a novel Chip-to-Chip (C2C) interface. The C2C interface is used to perform functions, such as memory bit slicing, and to communicate shared information and enqueue/dequeue operations between individual PEs.

1-20. (canceled)
21. A system comprising a plurality of packet engines, a first packet engine including: an interface coupled to a second packet engine; wherein the first packet engine slices a packet into bit slices and distributes the bit slices to the second packet engine without using switch fabric.
22. The system of claim 21, wherein the first packet engine distributes the bit slices to the second packet engine using the interface.
23. The system of claim 21, wherein the first packet engine distributes the bit slices to a buffer system of the second packet engine.
24. The system of claim 23, wherein the second packet engine reads the bit slices from its buffer system, and enqueues the packet using a network egress interface.
25. The system of claim 21, wherein the plurality of packet engines behave as a monolithic packet engine system, wherein the monolithic packet engine has the processing power that is substantially equal to the processing power of the number of packet engines included in the plurality of packet engines.
26. The system of claim 21, wherein each packet engine of the plurality of packet engines includes a network ingress interface for accepting packets coupled to a buffer system.
27. The system of claim 21, wherein each packet engine of the plurality of packet engines includes a plurality of egress queues having a first interface for enqueuing and dequeuing packets, and a second interface for transmitting packets via a network egress interface.
28. The system of claim 21, wherein the plurality of packet engines are configured to facilitate non-blocking ...

More
Publication date: 01-01-2015

APPLICATION-CONTROLLED NETWORK PACKET CLASSIFICATION

Number: US20150003467A1
Author: Biswas Anumita
Assignee:

Embodiments of the present invention provide a system, method, and computer program product that enables applications transferring data packets over a network to a multi-processing system to choose how the data packets are going to be processed, e.g., by allowing the applications to pre-assign connections to a particular network thread and migrate a connection from one network thread to another network thread without putting the connection into an inconsistent state.

1. A method for enabling applications to flexibly distribute connections in a multiprocessing system, the method comprising: maintaining a plurality of connection data structures, each data structure associated with a network thread executed in the multiprocessing system, each data structure storing connections queued to a network thread; receiving an indication that an application wants to migrate an established network connection to a new network context that queues connections processed by a network thread; halting processing of data packets arriving on the connection until the connection is migrated without putting the connection in an inconsistent state; and migrating the connection to the new network thread.
2. The method of claim 1, wherein migration of the connection to the new network thread is performed without halting execution of timer threads.
4. The method of claim 1, further comprising: maintaining a plurality of timer data structures accessed by network threads; removing a timer data structure corresponding to a connection in migration from the first data structure; placing the removed timer data structure to a temporary queue; and placing the timer data structure to the new data structure corresponding to a new network thread to which the connection is migrated after the connection is migrated.
5. The method of claim 1, further comprising: placing data packets arriving on the connection in migration in a temporary queue; placing the data packets into the new data structure for the migrated ...
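The migration steps in the claims — park packets arriving during migration in a temporary queue, then drain them into the new network thread's structure once migration completes, so the connection never enters an inconsistent state — can be sketched as follows (all names illustrative):

```python
class ConnectionManager:
    """Illustrative model of connection migration between network
    threads: each thread owns a connection->packets mapping; a
    migrating connection's packets are parked, not processed."""

    def __init__(self, num_threads):
        self.threads = [{} for _ in range(num_threads)]
        self.migrating = {}  # conn -> temporary queue of parked packets

    def begin_migration(self, conn):
        self.migrating[conn] = []    # halt processing for this connection

    def deliver(self, thread_id, conn, packet):
        if conn in self.migrating:
            self.migrating[conn].append(packet)   # park until migrated
        else:
            self.threads[thread_id].setdefault(conn, []).append(packet)

    def finish_migration(self, conn, new_thread_id):
        parked = self.migrating.pop(conn)
        # Drain parked packets into the new thread's data structure.
        self.threads[new_thread_id].setdefault(conn, []).extend(parked)
```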

More
Publication date: 02-01-2020

Network packet templating for gpu-initiated communication

Number: US20200004610A1
Assignee: Advanced Micro Devices Inc

Systems, apparatuses, and methods for performing network packet templating for graphics processing unit (GPU)-initiated communication are disclosed. A central processing unit (CPU) creates a network packet according to a template and populates a first subset of fields of the network packet with static data. Next, the CPU stores the network packet in a memory. A GPU initiates execution of a kernel and detects a network communication request within the kernel and prior to the kernel completing execution. Responsive to this determination, the GPU populates a second subset of fields of the network packet with runtime data. Then, the GPU generates a notification that the network packet is ready to be processed. A network interface controller (NIC) processes the network packet using data retrieved from the first subset of fields and from the second subset of fields responsive to detecting the notification.
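A host-side model of the split the abstract describes: the CPU populates the static subset of a templated packet's fields, the GPU populates the runtime subset during kernel execution and then raises the ready flag for the NIC. The field names and the dict representation are illustrative assumptions:

```python
TEMPLATE_FIELDS = ("src_mac", "dst_mac", "ethertype")  # static subset
RUNTIME_FIELDS = ("payload_addr", "payload_len")       # runtime subset

def cpu_create_template(static_values):
    """CPU side: build the packet with static fields populated and
    runtime fields left empty (the 'stored in memory' step)."""
    packet = dict.fromkeys(TEMPLATE_FIELDS + RUNTIME_FIELDS)
    packet.update(static_values)
    return packet

def gpu_populate_and_notify(packet, runtime_values):
    """GPU side (modeled on the host): fill the runtime subset during
    kernel execution and notify the NIC that the packet is ready."""
    packet.update(runtime_values)
    packet["ready"] = True
    return packet
```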

More
Publication date: 03-01-2019

Searching Varying Selectable Physical Blocks of Entries within a Content-Addressable Memory

Number: US20190005152A1
Assignee: CISCO TECHNOLOGY, INC.

In one embodiment, a content-addressable memory has multiple blocks of content-addressable memory entries, including different first and second sets of content-addressable memory blocks. One embodiment determines the first set of content-addressable memory blocks based on a content-addressable memory profile identifier and a search key and then performs a first content-addressable memory lookup operation in each of the first set of content-addressable memory blocks, but not in the second set of content-addressable memory blocks, based on the search key. If at least one entry is matched, a corresponding result is identified. Otherwise, in one embodiment, the second set of content-addressable memory blocks is determined based on the content-addressable memory profile identifier but not based on the search key, and a search is made therein to identify a matching result or that no match was determined. In one embodiment, a matching result determines how a packet is processed.

1. A method, comprising: determining a first one or more content-addressable memory blocks within a content-addressable memory based on a content-addressable memory profile identifier and a search key, with the first one or more content-addressable memory blocks being less than all of searchable content-addressable memory blocks within the content-addressable memory and not including a second one or more content-addressable memory blocks within the content-addressable memory, and with each of the first and second one or more content-addressable memory blocks including a plurality of content-addressable memory entries; and performing a first content-addressable memory lookup operation in each of the first one or more content-addressable memory blocks, but not in the second one or more content-addressable memory blocks, based on the search key resulting in the identification of one or more first content-addressable memory matching entries.
2. The method of claim 1, comprising: creating a hash key based ...
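A rough software model of the two-stage lookup: the first block set depends on both the profile identifier and the search key (here via a hash of the key), and only on a miss is the rest of the profile's block set searched. The dict-per-block representation is an illustrative stand-in for CAM hardware:

```python
class ProfiledCam:
    """Illustrative two-stage lookup over per-profile block sets."""

    def __init__(self, profiles):
        # profiles: profile_id -> list of blocks; each block maps
        # entry keys to results (a dict stands in for a CAM block).
        self.profiles = profiles

    def lookup(self, profile_id, key):
        blocks = self.profiles[profile_id]
        # First set: chosen from the profile id AND the search key.
        first = [blocks[hash(key) % len(blocks)]]
        for block in first:
            if key in block:
                return block[key]
        # Miss: fall back to the profile's remaining blocks,
        # chosen without reference to the search key.
        for block in blocks:
            if block not in first and key in block:
                return block[key]
        return None
```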

More
Publication date: 13-01-2022

Signalling of Dejittering Buffer Capabilities for TSN Integration

Number: US20220014485A1
Assignee:

A method is provided, including: determining a parameter of a data flow to pass through a non-deterministic network and a buffering device; and configuring the data flow in at least one of a network element of the non-deterministic network and the buffering device based on the parameter, wherein the parameter is determined based on a capability of the buffering device.

1. Apparatus, comprising: circuitry configured to determine a parameter of a data flow to pass through a non-deterministic network and a buffering device; circuitry configured to configure the data flow in at least one of a network element of the non-deterministic network and the buffering device based on the parameter, wherein the circuitry configured to determine the parameter is configured to determine the parameter based on a capability of the buffering device.
2. The apparatus according to claim 1, further comprising: circuitry configured to obtain the capability from a notification received from the buffering device.
3. The apparatus according to claim 1, further comprising: circuitry configured to retrieve the capability from a storage device.
4. The apparatus according to claim 1, wherein the capability comprises at least one of a memory available to buffer data of the data flow in the buffering device and a processing time per data volume of the data flow to be buffered in the buffering device.
5. The apparatus according to claim 1, wherein the parameter comprises at least one of an egress time window for the data flow, a minimum delay within the non-deterministic network, and a quality of service within the non-deterministic network.
6. The apparatus according to claim 1, further comprising: circuitry configured to check if the data flow is configured to carry deterministic traffic; circuitry configured to inhibit the determining circuitry to determine the parameter if the data flow is not configured to carry deterministic traffic.
7. The apparatus according to claim 1, wherein a ...

More
Publication date: 05-01-2017

VIRTUAL NETWORK INTERFACE CONTROLLER PERFORMANCE USING PHYSICAL NETWORK INTERFACE CONTROLLER RECEIVE SIDE SCALING OFFLOADS

Number: US20170005931A1
Assignee:

Techniques disclosed herein provide an approach for using receive side scaling (RSS) offloads from a physical network interface controller (PNIC) to improve the performance of a virtual network interface controller (VNIC). In one embodiment, the PNIC is configured to write hash values it computes for RSS purposes to packets themselves. The VNIC then reads the hash values from the packets and places the packets into VNIC RSS queues, which are processed by respective CPUs, based on the hash values. CPU overhead is thereby reduced, as RSS processing by the VNIC no longer requires computing hash values. In another embodiment in which the number of PNIC RSS queues and VNIC RSS queues are identical, the VNIC may map packets from PNIC RSS queues to VNIC RSS queues using the PNIC RSS queue ID numbers, which also does not require computing RSS hash values.

1. A method of delivering packets from queues of a physical network interface controller (PNIC) to queues of a virtual network interface controller (VNIC), comprising: storing received packets in the PNIC queues based on hash values computed by the PNIC from header attributes of the received packets; and forwarding the packets stored in the PNIC queues to the VNIC queues based on the hash values computed by the PNIC if the number of PNIC queues is different from the number of VNIC queues.
2. The method of claim 1, further comprising storing the hash value computed by the PNIC for each packet in the packet itself, wherein the packets are forwarded to the VNIC queues based on the hash values stored therein.
3. The method of claim 1, further comprising, for each of the PNIC queues, splitting a packet list from the PNIC queue into multiple packet lists corresponding to each of the VNIC queues.
4. The method of claim 3, further comprising cloning the packet list from the PNIC queue, wherein the cloned packet list is split into the multiple packet lists.
5. The method of ...
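The forwarding step in the claims — reuse the hash the PNIC already wrote into each packet instead of recomputing one — can be sketched as follows; the dict-based packet representation and the `rss_hash` field name are illustrative:

```python
def deliver_to_vnic(pnic_queues, num_vnic_queues):
    """Forward packets from PNIC queues to VNIC queues using the RSS
    hash the PNIC stored in each packet (no hash is recomputed here)."""
    vnic_queues = [[] for _ in range(num_vnic_queues)]
    for queue in pnic_queues:
        for packet in queue:
            # The stored hash, modulo the VNIC queue count, picks the
            # VNIC RSS queue (and hence the CPU that processes it).
            vnic_queues[packet["rss_hash"] % num_vnic_queues].append(packet)
    return vnic_queues
```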

More
Publication date: 05-01-2017

APPARATUS AND METHOD FOR STORING DATA TRAFFIC ON FLOW BASIS

Number: US20170005952A1
Assignee:

An apparatus and method for storing data traffic on a flow basis. The apparatus for storing data traffic on a flow basis includes a packet storage unit, a flow generation unit, and a metadata generation unit. The packet storage unit receives packets corresponding to data traffic, and temporarily stores the packets using queues. The flow generation unit generates flows by grouping the packets by means of a hash function using information about each of the packets as input, and stores the flows. The metadata generation unit generates metadata and index data corresponding to each of the flows, and stores the metadata and the index data.

1. An apparatus for storing data traffic on a flow basis, comprising: a packet storage unit configured to receive packets corresponding to data traffic, and to temporarily store the packets using queues; a flow generation unit configured to generate flows by grouping the packets by means of a hash function using information about each of the packets as input, and to store the flows; and a metadata generation unit configured to generate metadata and index data corresponding to each of the flows, and to store the metadata and the index data.
2. The apparatus of claim 1, wherein the flow generation unit comprises: a hash value generation unit configured to generate a hash value based on an IP address of each sender, an IP address of each recipient, a port address of the sender, and a port address of the recipient, which correspond to the packets; a generation unit configured to sort the packets according to their flows based on the hash values, to generate flows by grouping the packets, and to store the flows in flow buffers; and a flow storage unit configured to store the flows, stored in the flow buffers, on hard disks.
3. The apparatus of claim 2, wherein the flow storage unit stores each of the flows on the hard disks when a size of the flow stored in the flow buffers exceeds a specific value or the flow is terminated.
4. The apparatus of ...
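A compact sketch of the flow path in the claims: packets are grouped by a hash of the sender/recipient addresses and ports, buffered per flow, and flushed to disk when the buffer exceeds a threshold. The threshold of 3 packets, Python's built-in hash(), and the list standing in for the hard disks are illustrative:

```python
def flow_hash(src_ip, dst_ip, src_port, dst_port, buckets=1024):
    # Hash over the 4-tuple the claims enumerate; hash() is a stand-in.
    return hash((src_ip, dst_ip, src_port, dst_port)) % buckets

class FlowStore:
    """Illustrative flow buffering: per-flow buffers spill to 'disk'
    once they exceed a specific size."""

    FLUSH_SIZE = 3  # illustrative 'exceeds a specific value' threshold

    def __init__(self):
        self.buffers = {}  # flow hash -> buffered packets
        self.disk = []     # stands in for the hard disks

    def add_packet(self, key, packet):
        bucket = self.buffers.setdefault(key, [])
        bucket.append(packet)
        if len(bucket) >= self.FLUSH_SIZE:
            self.disk.append((key, list(bucket)))  # flush flow to disk
            bucket.clear()
```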

More
Publication date: 07-01-2016

Method And Apparatus For Performing Finite Memory Network Coding In An Arbitrary Network

Number: US20160006676A1
Assignee:

Techniques for performing finite memory network coding in an arbitrary network limit an amount of memory that is provided within a node of the network for the performance of network coding operations during data relay operations. When a new data packet is received by a node, the data stored within the limited amount of memory may be updated by linearly combining the new packet with the stored data. In some implementations, different storage buffers may be provided within a node for the performance of network coding operations and decoding operations.

1-32. (canceled)
33. A method for operating a network node in a network having a plurality of nodes, where the network does not use pre-established network routing to direct packets through the network from a source node to a destination node, the method comprising: receiving a new packet at the network node from an arbitrary direction; if a maximum number of packets is not already stored in the coding buffer, storing the new packet in the coding buffer; in response to a maximum number of packets being stored in the coding buffer, generating one or more modified packets by linearly combining the new packet with one or more of the packets already stored in the coding buffer and storing the modified packets in the coding buffer in the place of the one or more packets already stored in the coding buffer; generating one or more coded packets to be transmitted from the network node by linearly combining one or more packets stored in the coding buffer using network coding; and transmitting the coded packet to one or more destination nodes without using pre-established network routing.
34. The method of claim 33, wherein: linearly combining the new packet with one or more packets already stored in the coding buffer comprises linearly combining the new packet with S packets stored in the coding buffer and generating S modified packets, and storing the S modified packets in the coding buffer, wherein S is a positive integer.
...

More
Publication date: 05-01-2017

Systems and methods for air-ground message prioritization

Number: US20170006619A1
Assignee: Honeywell International Inc

Systems and methods for air-ground message prioritization are provided. In one embodiment, a message communication system comprises: a first Class-of-Service and Priority Tagging Module configured to tag messages with a message tag, the message tag comprising a Class-of-Service tag and a Priority tag; a queue broker that includes a plurality of message queues, wherein each message queue is associated with a Class-of-Service defined by at least one datalink technology, wherein the queue broker assigns each of the messages to one of the plurality of message queues based on a Class-of-Service indicated by the Class-of-Service tag; and an on-board message broker that monitors datalink availability and current state indicators, wherein the on-board message broker communicates to the queue broker when to transition one or more of the message queues between a Prioritize-and-Store operating state and a Prioritize-and-Forward operating state based on the datalink availability and the current state indicators.

More
Publication date: 04-01-2018

ACK CLOCK COMPENSATION FOR HIGH-SPEED SERIAL COMMUNICATION INTERFACES

Number: US20180006768A1
Assignee:

In a serial communication interface with transceivers that run on different clocks, an ACK transmit FIFO is used to track packets transmitted, and an ACK receive queue is used to track ACK bits for received packets. The ACK receive queue contains a number of entries, and training for the transceivers begins transmitting ACK bits from the ACK receive queue once the ACK receive queue has multiple valid ACK bits. When the ACK receive queue is less than a lower threshold, an ACK compensation mechanism sends one or more packets that make the ACK receive queue grow. When the ACK receive queue is more than an upper threshold, the ACK compensation mechanism sends one or more packets that make the ACK receive queue shrink. The combination of the ACK receive queue and the ACK compensation mechanism allow dynamically compensating for the different clocks of the two transceivers.

1. A transceiver for sending and receiving network packets, the transceiver comprising: packet processing logic that receives the network packets, the packet processing logic comprising an acknowledge (ACK) receive queue that includes a plurality of entries, wherein the packet processing logic generates an ACK bit for each received network packet, stores each ACK bit in the ACK receive queue, and waits until multiple of the plurality of entries in the ACK receive queue contain valid ACK bits before transmitting one of the valid ACK bits from the ACK receive queue; and an ACK compensation mechanism that monitors the ACK receive queue, and when a number of valid ACK bits in the ACK receive queue is less than a predetermined lower threshold, the ACK compensation mechanism sends at least one network packet that increases the number of valid ACK bits in the ACK receive queue, and when the number of valid ACK bits in the ACK receive queue is greater than a predetermined upper threshold, the ACK compensation mechanism sends at least one network packet that decreases the number of valid ACK bits in the ACK ...
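The compensation mechanism's threshold logic reduces to a small decision function; the threshold values and the action names are illustrative assumptions, not taken from the application:

```python
class AckCompensator:
    """Illustrative ACK clock compensation: grow the ACK receive queue
    when it falls below the lower threshold, shrink it when it rises
    above the upper threshold, otherwise leave it alone."""

    def __init__(self, lower=4, upper=12):
        self.lower = lower   # predetermined lower threshold
        self.upper = upper   # predetermined upper threshold

    def action(self, valid_ack_bits):
        if valid_ack_bits < self.lower:
            return "send_grow_packet"    # packet that adds valid ACK bits
        if valid_ack_bits > self.upper:
            return "send_shrink_packet"  # packet that consumes ACK bits
        return "no_compensation"
```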

04-01-2018 publication date

AUTOMATICALLY TUNING MIDDLEWARE IN A MOBILEFIRST PLATFORM RUNNING IN A DOCKER CONTAINER INFRASTRUCTURE

Number: US20180006886A1
Assignee:

An approach is provided for tuning middleware. Performance-related settings are loaded. Performance data of the middleware of a MobileFirst Platform (MFP) running in a docker container infrastructure is received. The performance data is collected by agents installed in container groups. Based on the performance data, a performance issue in one of the container groups is identified and a server included in the one container group is identified as a source of the performance issue. Recommendations are generated for tuning the middleware by modifying one or more of the performance-related settings. While the middleware is running in the docker container infrastructure, one of the recommendations is applied to modify one of the performance-related settings which dynamically tunes the middleware, thereby resolving the performance issue. 1. A method of tuning middleware , the method comprising the steps of:loading, by a computer, performance-related settings;receiving, by the computer, performance data specifying a performance of the middleware of a MobileFirst platform (MFP) running in a docker container infrastructure, the performance data having been collected by agents installed in container groups included in the docker container infrastructure, and the agents having collected the performance data from multiple servers included in the container groups;based on the received performance data, identifying, by the computer, a performance issue in one of the container groups and identifying, by the computer, a server included in the one container group as being a source of the identified performance issue;generating, by the computer, a set of recommendations for tuning the middleware by modifying one or more of the performance-related settings; andwhile the middleware is running in the docker container infrastructure, applying, by the computer, one of the recommendations in the set of recommendations, which modifies one of the performance-related settings which is ...

04-01-2018 publication date

TECHNOLOGIES FOR SCALABLE PACKET RECEPTION AND TRANSMISSION

Number: US20180006970A1
Assignee:

Technologies for scalable packet reception and transmission include a network device. The network device is to establish a ring that is defined as a circular buffer and includes a plurality of slots to store entries representative of packets. The network device is also to generate and assign receive descriptors to the slots in the ring. Each receive descriptor includes a pointer to a corresponding memory buffer to store packet data. The network device is further to determine whether the NIC has received one or more packets and copy, with direct memory access (DMA) and in response to a determination that the NIC has received one or more packets, packet data of the received one or more packets from the NIC to the memory buffers associated with the receive descriptors assigned to the slots in the ring. 1. A network device to process packets, the network device comprising: one or more processors that include a plurality of cores; a network interface controller (NIC) coupled to the one or more processors; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the network device to: establish a ring in a memory of the one or more memory devices, wherein the ring is defined as a circular buffer and includes a plurality of slots to store entries representative of packets; generate and assign receive descriptors to the slots in the ring, wherein each receive descriptor includes a pointer to a corresponding memory buffer to store packet data; determine whether the NIC has received one or more packets; and copy, with direct memory access (DMA) and in response to a determination that the NIC has received one or more packets, packet data of the received one or more packets from the NIC to the memory buffers associated with the receive descriptors assigned to the slots in the ring. 2. The network device of claim 1, wherein to generate and assign the receive descriptors to the slots in the ...
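The descriptor ring this abstract describes — a circular buffer of slots whose descriptors point at packet buffers that DMA fills — can be sketched as follows. The DMA copy is simulated with a plain byte copy and all names are illustrative:

```python
class RxRing:
    """Circular buffer of receive descriptors; each slot owns a memory
    buffer that DMA would fill with packet data (simulated here)."""
    def __init__(self, slots, buf_size):
        self.descriptors = [bytearray(buf_size) for _ in range(slots)]
        self.head = 0          # next slot the NIC writes
        self.tail = 0          # next slot software consumes
        self.slots = slots

    def dma_write(self, packet):
        """Stand-in for the NIC's DMA copy into the next free buffer."""
        if (self.head + 1) % self.slots == self.tail:
            return False       # ring full: no free descriptor
        buf = self.descriptors[self.head]
        buf[:len(packet)] = packet
        self.head = (self.head + 1) % self.slots
        return True

    def consume(self):
        """Software side: take the oldest filled buffer, free its slot."""
        if self.tail == self.head:
            return None        # ring empty
        data = bytes(self.descriptors[self.tail])
        self.tail = (self.tail + 1) % self.slots
        return data
```

Note the classic one-slot-reserved convention: a ring with N slots holds at most N-1 packets, so head == tail can unambiguously mean "empty".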

Подробнее
04-01-2018 дата публикации

Ic card, portable electronic apparatus, and ic card processing apparatus

Номер: US20180006971A1
Автор: Aki Fukuda
Принадлежит: Toshiba Corp

An IC card has a communication unit, a storage unit, and a controller. The communication unit communicates with an external apparatus. A communication buffer for communication between the communication unit and the external apparatus is set in the storage unit. If the size of a buffer used in communication is designated by the external apparatus, the controller sets a receive buffer that stores reception data and a transmit buffer that stores transmission data in the communication buffer, and notifies the external apparatus of the size of the set receive buffer and the size of the set transmit buffer.
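The buffer negotiation this abstract describes — split one communication buffer into a receive part and a transmit part when the external apparatus designates a size, then report both sizes back — can be sketched as follows. The clamping policy is an assumption for illustration, not taken from the patent:

```python
class CommBuffer:
    """Splits a fixed communication buffer into receive and transmit parts
    when the external apparatus designates a requested receive size."""
    def __init__(self, total_size):
        self.total = total_size

    def configure(self, requested_rx):
        # Assumption: never grant more receive space than leaves at least
        # one byte for the transmit buffer.
        rx = min(requested_rx, self.total - 1)
        tx = self.total - rx
        return rx, tx   # sizes notified back to the external apparatus
```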

07-01-2021 publication date

Transaction based scheduling

Number: US20210006513A1
Assignee:

One embodiment includes a communication apparatus, including multiple interfaces including at least one egress interface to transmit packets belonging to multiple flows to a network, and control circuitry to queue packets belonging to the flows in respective flow-specific queues for transmission via a given egress interface, and to arbitrate among the flow-specific queues so as to select packets for transmission responsively to dynamically changing priorities that are assigned such that all packets in a first flow-specific queue, which is assigned a highest priority among the queues, are transmitted through the given egress interface until the first flow-specific queue is empty, after which the control circuitry assigns the highest priority to a second flow-specific queue, such that all packets in the second flow-specific queue are transmitted through the given egress interface until the second flow-specific queue is empty, after which the control circuitry assigns the highest priority to another flow-specific queue. 1. A communication apparatus , comprising:multiple interfaces including at least one egress interface, which is configured to transmit packets belonging to multiple flows to a packet data network; andcontrol circuitry, which is configured to queue the packets belonging to a plurality of the flows in respective flow-specific queues for transmission via a given egress interface, including at least first and second flow-specific queues, and to arbitrate among the flow-specific queues so as to select the packets for transmission responsively to dynamically changing priorities that are assigned to the flow-specific queues, and which is configured to assign the priorities to the flow-specific queues such that all the packets in the first flow-specific queue, which is assigned a highest priority among the flow-specific queues, are transmitted through the given egress interface until the first flow-specific queue is empty, after which the control circuitry ...
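The exhaustive arbitration scheme described here — serve the highest-priority flow queue until it is empty, then promote another queue to highest priority — is easy to model. A Python sketch with invented names; the real apparatus does this in hardware across an egress interface:

```python
from collections import deque

class FlowArbiter:
    """Serves the highest-priority flow-specific queue until it drains,
    then rotates priority to the next queue."""
    def __init__(self, flows):
        self.queues = {f: deque() for f in flows}
        self.order = list(flows)   # order[0] currently holds highest priority

    def enqueue(self, flow, pkt):
        self.queues[flow].append(pkt)

    def transmit_next(self):
        if all(not q for q in self.queues.values()):
            return None
        # rotate priority past any drained queues
        while not self.queues[self.order[0]]:
            self.order.append(self.order.pop(0))
        return self.queues[self.order[0]].popleft()
```

All packets of flow "a" go out before flow "b" gets its turn, matching the queue-until-empty behavior in the abstract.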

03-01-2019 publication date

NETWORK SYSTEM, COMMUNICATION DEVICE, AND COMMUNICATION METHOD

Number: US20190007299A1
Assignee:

A buffer capacity of a memory is reduced and the occurrence of useless communication is prevented. A network system can include a communication device as a transmission source, a communication device as a transmission destination, and an intermediate communication device disposed between the transmission-source communication device and the transmission-destination communication device, the intermediate communication device having a function of connecting sub-networks which construct communication routes based on route construction protocols independent of each other, the network system characterized in that the intermediate communication device includes: a route establishing means (control part) that establishes a route to the transmission-destination communication device when a packet addressed to the transmission-destination communication device is received from the transmission-source communication device; and a proxy reply means (control part) that replies as a proxy for the transmission-destination communication device to the transmission-source communication device when the route is established by the route establishing means. 1. 
A network system which includes at least a communication device as a transmission source , a communication device as a transmission destination , and an intermediate communication device disposed between the transmission-source communication device and the transmission-destination communication device , the intermediate communication device having a function of connecting sub-networks which construct communication routes based on route construction protocols independent of each other ,wherein the intermediate communication device comprises:a route establishing means that establishes a route to the transmission-destination communication device when a packet addressed to the transmission-destination communication device is received from the transmission-source communication device; anda proxy reply means that replies as a proxy for the ...

03-01-2019 publication date

Technologies for scalable network packet processing with lock-free rings

Number: US20190007330A1
Assignee: Intel Corp

Technologies for network packet processing include a computing device that receives incoming network packets. The computing device adds the incoming network packets to an input lockless shared ring, and then classifies the network packets. After classification, the computing device adds the network packets to multiple lockless shared traffic class rings, with each ring associated with a traffic class and output port. The computing device may allocate bandwidth between network packets active during a scheduling quantum in the traffic class rings associated with an output port, schedule the network packets in the traffic class rings for transmission, and then transmit the network packets in response to scheduling. The computing device may perform traffic class separation in parallel with bandwidth allocation and traffic scheduling. In some embodiments, the computing device may perform bandwidth allocation and/or traffic scheduling on each traffic class ring in parallel. Other embodiments are described and claimed.
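The traffic-class separation step described here — pull packets from an input ring, classify them, and push them onto per-(traffic class, output port) rings — can be sketched in Python. Plain deques stand in for the lock-free shared rings, and the classifier and field names are invented:

```python
from collections import deque

def classify(pkt):
    """Illustrative classifier: traffic class and output port from headers."""
    return pkt["tc"], pkt["port"]

class PacketPipeline:
    def __init__(self, ports, classes):
        # deques stand in for the lockless shared rings of the abstract
        self.input_ring = deque()
        self.tc_rings = {(p, c): deque() for p in ports for c in classes}

    def receive(self, pkt):
        self.input_ring.append(pkt)

    def separate(self):
        """Move packets from the input ring into per-(port, class) rings."""
        while self.input_ring:
            pkt = self.input_ring.popleft()
            tc, port = classify(pkt)
            self.tc_rings[(port, tc)].append(pkt)
```

In the patented design this separation stage runs in parallel with bandwidth allocation and scheduling; the sketch only shows the data movement.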

03-01-2019 publication date

TECHNOLOGIES FOR EXTRACTING EXTRINSIC ENTROPY FOR WORKLOAD DISTRIBUTION

Number: US20190007347A1
Assignee:

Technologies for distributing network packet workload are disclosed. A compute device may receive a network packet and determine network packet extrinsic entropy information that is based on information that is not part of the contents of the network packet, such as an arrival time of the network packet. The compute device may use the extrinsic entropy information to assign the network packet to one of several packet processing queues. Since the assignment of network packets to the packet processing queues depends at least in part on extrinsic entropy information, similar or even identical packets will not necessarily be assigned to the same packet processing queue. 1. A compute device for distributing network packet workload based on extrinsic entropy, the compute device comprising: a processor; a memory; and a network interface controller to: receive a network packet; determine network packet extrinsic entropy information, wherein the network packet extrinsic entropy information is not based on content of the network packet; select, based on the network packet extrinsic entropy information, a packet processing queue from a plurality of packet processing queues; and assign the network packet to the selected packet processing queue. 2. The compute device of claim 1, wherein the network interface controller is further to determine information associated with a temporal characteristic of the arrival of the network packet at the compute device, wherein to determine the network packet extrinsic entropy information comprises to determine the network packet extrinsic entropy information based on the temporal characteristic of the arrival of the network packet. 3. The compute device of claim 2, wherein the network interface controller is further to determine a timestamp of the arrival time of the network packet, wherein the temporal characteristic of the arrival of the network packet is the timestamp of the arrival time, wherein to select, ...
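The core idea — mixing something extrinsic to the packet, such as its arrival timestamp, into the queue choice so that identical packets do not always land in the same queue — can be sketched in a few lines. The mixing function here is an invented illustration, not the patented one:

```python
import time

def pick_queue(packet, n_queues, arrival_ns=None):
    """Select a processing queue from content plus extrinsic entropy.

    The arrival timestamp (extrinsic: not part of the packet bytes) is
    mixed with a content hash, so two identical packets arriving at
    different times can be assigned to different queues.
    """
    if arrival_ns is None:
        arrival_ns = time.monotonic_ns()
    return (hash(packet) ^ arrival_ns) % n_queues
```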

03-01-2019 publication date

METHOD FOR TRANSFERRING TRANSMISSION DATA FROM A TRANSMITTER TO A RECEIVER FOR PROCESSING THE TRANSMISSION DATA AND MEANS FOR CARRYING OUT THE METHOD

Number: US20190007348A1
Author: Röhrl Wolfgang
Assignee:

A method involves transferring a transmittal data block from a transmitting device via an Ethernet connection to a receiving device which has a storage for storing a transferred transmittal data block, and a processor for at least partially processing the transferred transmittal data block stored in the storage. The transmitting device forms from the data of the transmittal data block a sequence of Ethernet packets, comprising respectively management data and a transmittal data sub-block. The receiving device receives the Ethernet packets of the respective sequence and, while employing at least a part of the management data, writes the transmittal data sub-blocks of the received Ethernet packets of the sequence of Ethernet packets for the transmittal data block to the storage, wherein not upon or after the writing each of the transmittal data sub-blocks an interrupt is sent to the processor. 118.-. (canceled)19. A method for transferring a transmittal data block from a transmitting device , for example at least a part of a sensor or a part of an evaluation device , preferably for evaluating transmittal data , via an Ethernet connection to a receiving device , for example to an evaluating device for evaluating transmittal data , which has a storage for storing a transferred transmittal data block , and a processor for at least partially processing the transferred transmittal data block stored in the storage ,wherein the transmittal data in the transmittal data block are preferably sensor data of a sensor for the examination of value documents,in which the transmitting device forms from the data of the transmittal data block a sequence of Ethernet packets, which comprise respectively management data and a transmittal data sub-block, which is formed from at least a part of the data, so that the transmittal data sub-blocks of the Ethernet packets of the sequence comprise the data of the transmittal data block,wherein the management data comprise management data, from 
...

02-01-2020 publication date

OPTIMIZATION OF DATA QUEUE PRIORITY FOR REDUCING NETWORK DATA LOAD SPEEDS

Number: US20200007453A1
Assignee:

There are provided systems and methods for optimization of data queue priority for reducing network data load speeds. A user may utilize a communication device to access an online resource and request data, such as server data from an online server. The online resource may determine a user profile associated with the user and/or device, which may include previous online actions and completion information for electronic transaction processing with one or more online entities. Using the profile, the server may optimize a data queue for data delivery to multiple devices depending on the devices' data requests and priority. The server may deliver data to the devices based on the data queue. The server may also update the user profile based on additional device actions with the server. These techniques may be particularly useful for prioritizing requests during peak server resource usage. 1. A system comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to read the instructions from the non-transitory memory to cause the system to perform operations comprising: receiving, by the system from a device associated with a user profile, a data request for website data from a website; determining a device prioritization level for the device based on a history of server resource usage corresponding to the user profile; determining a processing load of the system based on one or more available resources for the system, wherein the one or more available resources are associated with servicing the data request for the website data; and providing the website data to the device based on the device prioritization level and the processing load. 2. The system of claim 1, wherein the user profile comprises a transaction history of transactions conducted by a user of the device through at least one of the website or a dedicated application associated with the website, and wherein ...
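The profile-driven queue described here — requests ordered by a prioritization level derived from each requester's usage history — maps naturally onto a priority queue. A Python sketch; the priority rule (penalize heavy historical usage under load) and the threshold are assumptions for illustration:

```python
import heapq

class RequestScheduler:
    """Orders pending data requests by a device-priority level derived
    from the requester's usage history."""
    def __init__(self):
        self._heap, self._seq = [], 0   # seq keeps FIFO order within a level

    @staticmethod
    def priority(past_requests):
        # Assumed rule: light historical usage gets the better (lower) level.
        return 0 if past_requests < 100 else 1

    def submit(self, device_id, past_requests):
        entry = (self.priority(past_requests), self._seq, device_id)
        heapq.heappush(self._heap, entry)
        self._seq += 1

    def next_device(self):
        """Pop the device to serve next, or None when the queue is empty."""
        return heapq.heappop(self._heap)[2] if self._heap else None
```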

08-01-2015 publication date

SYSTEM FOR PERMITTING CONTROL OF THE PURGING OF A NODE B BY THE SERVING RADIO NETWORK CONTROLLER

Number: US20150009938A1
Assignee: INTERDIGITAL TECHNOLOGY CORPORATION

A system and method which permit the RNC to control purging of data buffered in the Node B. The RNC monitors for a triggering event, which initiates the purging process. The RNC then informs the Node B of the need to purge data by transmitting a purge command, which prompts the Node B to delete at least a portion of buffered data. The purge command can include instructions for the Node B to purge all data for a particular UE, data in one or several user priority transmission queues or in one or more logical channels in the Node B, depending upon the particular data purge triggering event realized in the RNC. 1. A radio network controller (RNC) comprising: at least one device configured to determine a radio link control (RLC) reset associated with a user equipment (UE); and the at least one device further configured, in response to the RLC reset determination, to send a data frame associated with a high-speed downlink shared channel (HS-DSCH) to a Node B, wherein the data frame includes an information element that requests a purge of all packet data units (PDUs) associated with the UE in a transmission priority queue in the Node B. 2. The RNC of claim 1, wherein the information element is a single bit that requests the purge. 3. The RNC of claim 1, wherein the at least one device is further configured to receive an indication of the RLC reset from the UE. 4. A method for use by a radio network controller (RNC), the method comprising: receiving, by the RNC, an indication of a radio link control (RLC) reset associated with a user equipment (UE); and in response to receiving the RLC reset indication, sending a data frame associated with a high-speed downlink shared channel (HS-DSCH) to a Node B, wherein the data frame includes an information element that requests a purge of all packet data units (PDUs) associated with the UE in a transmission priority queue in the Node B. 5. The method of claim 4, wherein the information element is a single bit that requests the purge. 6. A Node B comprising: a ...
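The Node B side of this scheme — per-UE priority transmission queues that a purge command can empty — can be modeled directly. A Python sketch with invented names; the real purge is signaled in an HS-DSCH data frame, which this model skips:

```python
class NodeBBuffer:
    """Per-UE priority transmission queues; a purge command from the RNC
    deletes all buffered PDUs for the named UE."""
    def __init__(self):
        self.queues = {}   # (ue, priority) -> list of buffered PDUs

    def buffer_pdu(self, ue, priority, pdu):
        self.queues.setdefault((ue, priority), []).append(pdu)

    def purge(self, ue):
        """Drop every queued PDU for this UE; return how many were purged."""
        purged = 0
        for key in list(self.queues):
            if key[0] == ue:
                purged += len(self.queues.pop(key))
        return purged
```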

08-01-2015 publication date

COMMUNICATION DEVICE FOR WIDEBAND CODE DIVISION MULTIPLE ACCESS COMMUNICATION (W-CDMA) SYSTEM, A METHOD FOR USE THEREIN AND AN ASSOCIATED SEMICONDUCTOR DEVICE

Number: US20150009983A1
Assignee:

A communication device () for a wideband code division multiple access communication (W-CDMA) system is described. The communication device has an antenna interface (AIF), a front end processor (FE), a packet generator (PG), a packet writer (PW) and one or more digital signal processors (DSP DSP). The front end processor (FE) is configured to receive one or more antenna signals from the antenna interface (AIF) and to calculate soft symbols representing symbols transmitted by a UE (UE0, UE1) using descrambling and despreading of the one or more antenna signals using a plurality of fingers assigned to the UE. The packet generator (PG) is configured to organize the soft symbols into packets, each packet comprising the soft symbols from the plurality of fingers assigned to a respective UE associated with one physical channel of the one of more physical channels and with the same symbol index. The packet writer (PW) is configured to write the packets into a system memory (SYSMEM). The one or more digital signal processors (DSP DSP) are configured to access the packets from the system memory (SYSMEM) and to process the packets. 1. 
A communication device for a wideband code division multiple access communication (W-CDMA) system , the communication device comprising:an antenna interface, a controller, a front end processor, a packet generator, a packet writer and one or more digital signal processors;wherein the antenna interface is configured to receive one or more antenna signals comprising a superposition of a plurality of reflections of a signal transmitted by one or more UEs, the signal comprising scrambled and spreaded symbols associated with one or more physical channels per UE, and to provide the one or more antenna signals to the front end processor;wherein the controller configured to determine a number of fingers corresponding to the plurality of reflections and to assign a plurality of fingers to a UE in accordance with the number,wherein the front end processor ...

12-01-2017 publication date

NETWORK PROCESSOR, COMMUNICATION DEVICE, PACKET TRANSFER METHOD, AND COMPUTER-READABLE RECORDING MEDIUM

Number: US20170012855A1
Assignee: FUJITSU LIMITED

A managing unit adds update information to an entry to be updated of a table updated prior to a change of a network configuration, and deletes the update information when the update of the table caused by the change of the network configuration is completed. A packet processing unit executes a plurality of pipeline processes using the table sequentially, and suspends executing the pipeline processes when the update information is added to any entry of the table. A reprocessing control unit stores an input packet in a reprocessing queue when the pipeline processes executed by the packet processing unit is suspended, and transfers the packet stored in the reprocessing queue to the input queue when update of the table to which the update information is added is all completed. 1. A network processor comprising:a managing unit that adds update information to an entry to be updated of a table updated prior to a change of a network configuration, and that deletes the update information when the update of the table caused by the change of the network configuration is completed;a packet processing unit that executes a plurality of pipeline processes using the table in sequence for a packet input to an input queue, and that suspends executing the pipeline processes when the update information is added to any entry of the table; anda reprocessing control unit that stores an input packet in a reprocessing queue when the pipeline processing executed by the packet processing unit is suspended, and that transfers the packet stored in the reprocessing queue to the input queue when update of the table to which the update information is added is all completed.2. 
The network processor according to claim 1, wherein the managing unit informs stop of dequeue of the reprocessing queue to the reprocessing control unit when the update of the table is performed, and informs start of dequeue of the reprocessing queue to the reprocessing control unit when the update of the table to which the ...
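The suspend-and-replay behavior described here — park packets in a reprocessing queue while any table entry carries update information, then feed them back to the input queue once the update completes — can be sketched in Python. All names are illustrative and the pipeline is collapsed to a single table lookup:

```python
from collections import deque

class PacketProcessor:
    """While any table entry is marked as updating, incoming packets are
    parked in a reprocessing queue and replayed after the update."""
    def __init__(self, table):
        self.table = dict(table)
        self.updating = set()          # keys carrying 'update information'
        self.input_q, self.reproc_q = deque(), deque()
        self.processed = []

    def begin_update(self, key):
        self.updating.add(key)

    def finish_update(self, key, value):
        self.table[key] = value
        self.updating.discard(key)
        if not self.updating:          # all updates done: replay parked packets
            self.input_q.extendleft(reversed(self.reproc_q))
            self.reproc_q.clear()

    def process(self):
        while self.input_q:
            pkt = self.input_q.popleft()
            if self.updating:
                self.reproc_q.append(pkt)          # suspend the pipeline
            else:
                self.processed.append((pkt, self.table[pkt["dst"]]))
```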

12-01-2017 publication date

MESSAGE REORDERING TIMERS

Number: US20170012884A1
Author: Ho Tracey, Meng Chun
Assignee:

A method for data communication from a first node to a second node over a data channel coupling the first node and the second node includes receiving data messages at the second node, the messages belonging to a set of data messages transmitted in a sequential order from the first node, sending feedback messages from the second node to the first node, the feedback messages characterizing a delivery status of the set of data messages at the second node, including maintaining a set of one or more timers according to occurrences of a number of delivery order events, the maintaining including modifying a status of one or more timers of the set of timers based on occurrences of the number of delivery order events, and deferring sending of said feedback messages until expiry of one or more of the set of one or more timers. 1. A method for data communication from a first node to a second node over a data channel coupling the first node and the second node, the method comprising: receiving data messages at the second node, the messages belonging to a set of data messages transmitted in a sequential order from the first node; and sending feedback messages from the second node to the first node, the feedback messages characterizing a delivery status of the set of data messages at the second node, including: maintaining a set of one or more timers according to occurrences of a plurality of delivery order events, the maintaining including modifying a status of one or more timers of the set of timers based on occurrences of the plurality of delivery order events, and deferring sending of said feedback messages until expiry of one or more of the set of one or more timers. 2. The method of claim 1, wherein the set of one or more timers includes a first timer and the first timer is started upon detection of a first delivery order event, the first delivery order event being associated with receipt of a first data message associated with a first position in the sequential order ...
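The deferral mechanism — start a timer when a reordering gap appears, cancel it if the gap closes, and only send feedback once the timer expires — can be sketched with explicit clock values. All names and the single-timer simplification are illustrative:

```python
class FeedbackTimer:
    """Defers feedback (e.g. a NACK) until a reordering timer expires,
    giving out-of-order messages a chance to arrive first."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.expected = 0          # next in-order sequence number
        self.received = set()      # out-of-order messages held back
        self.timer_deadline = None # pending reordering timer, if any

    def on_message(self, seq, now):
        self.received.add(seq)
        while self.expected in self.received:   # advance past closed gaps
            self.received.remove(self.expected)
            self.expected += 1
        if self.received and self.timer_deadline is None:
            self.timer_deadline = now + self.timeout   # gap detected
        elif not self.received:
            self.timer_deadline = None                 # gap closed: cancel

    def feedback_due(self, now):
        return self.timer_deadline is not None and now >= self.timer_deadline
```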

12-01-2017 publication date

Method and Device for Controlling Output Arbitration

Number: US20170012890A1
Author: Huarui LIU, Yanming QIAO
Assignee: Sanechips Technology Co Ltd, ZTE Corp

Provided is a method and device for controlling output arbitration, comprising: a received data stream is stored in a corresponding data cache queue according to a de-multiplexing filter condition, and data address information of the corresponding data cache queue is updated; when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or when it is determined that the length of the cache data is less than the fixed length but the cache data contains an End Of Packet (EOP), the data cache queue is controlled to apply for output arbitration and the state of the data cache queue is updated; and the cache data in the data cache queue which applies for the output arbitration is outputted according to a preset scheduling rule and the state of the data cache queue.
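The arbitration condition in this abstract — a cache queue applies for output arbitration once it holds at least a fixed length of data, or earlier if it already contains an End Of Packet — reduces to a two-clause predicate. A Python sketch; the fixed length and names are invented:

```python
from collections import deque

FIXED_LEN = 64  # illustrative arbitration unit, in bytes

class CacheQueue:
    """Applies for output arbitration when at least FIXED_LEN bytes are
    cached, or when less is cached but an End Of Packet (EOP) arrived."""
    def __init__(self):
        self.chunks = deque()
        self.bytes_cached = 0
        self.has_eop = False

    def store(self, data, eop=False):
        self.chunks.append(data)
        self.bytes_cached += len(data)
        self.has_eop = self.has_eop or eop

    def applies_for_arbitration(self):
        return self.bytes_cached >= FIXED_LEN or self.has_eop
```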

12-01-2017 publication date

ERROR CORRECTION OPTIMIZATION

Number: US20170012905A1
Author: Ho Tracey, Meng Chun
Assignee:

A method for data communication between a first node and a second node over a data path coupling the first node and the second node includes transmitting a segment of data from the first node to the second node over the data path as a number of messages, the number of messages being transmitted according to a transmission order. A degree of redundancy associated with each message of the number of messages is determined based on a position of said message in the transmission order. 1. A method for data communication between a first node and a second node over a data path coupling the first node and the second node, the method comprising: transmitting a segment of data from the first node to the second node over the data path as a plurality of messages, the plurality of messages being transmitted according to a transmission order; wherein a degree of redundancy associated with each message of the plurality of messages is determined based on a position of said message in the transmission order. 2. The method of claim 1, wherein the degree of redundancy associated with each message of the plurality of messages increases as the position of the message in the transmission order is non-decreasing. 3. The method of claim 1, wherein determining the degree of redundancy associated with each message of the plurality of messages based on the position (i) of the message in the transmission order is further based on one or more of: application delay requirements; a round trip time associated with the data path; a smoothed loss rate (P) associated with the channel; a size (N) of the data associated with the plurality of messages; a number (a_i) of acknowledgement messages received from the second node corresponding to messages from the plurality of messages; a number (f_i) of in-flight messages of the plurality of messages; and an increasing function (g(i)) based on the index of the data associated with the plurality of messages. 4. The method of claim 1, wherein the degree of redundancy ...
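A position-dependent redundancy rule of this kind — more redundancy for messages later in the transmission order, scaled by loss rate and in-flight state — can be sketched as a simple function. The formula below is an invented illustration combining the quantities the claim lists (loss rate, in-flight and acknowledged counts, an increasing g(i)), not the patented formula:

```python
def redundancy(i, loss_rate, in_flight, acked, total):
    """Degree of redundancy for the message at position i (0-based) of a
    segment of `total` messages; grows with i so the tail of the segment,
    which has fewer later messages to repair it, is protected more."""
    base = loss_rate * (in_flight - acked + 1)
    g = 1 + i / max(total - 1, 1)   # an increasing function g(i)
    return base * g
```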

12-01-2017 publication date

OPTICAL DELAY LINE AND ELECTRONIC BUFFER MERGED-TYPE OPTICAL PACKET BUFFER CONTROL DEVICE

Number: US20170013332A1
Assignee:

[Problem] To provide an optical packet buffer control device, without making device construction large in scale, that is capable of dynamically responding to traffic and suppressing power consumption. 1. An optical delay line and an electronic buffer merged-type optical packet buffer control device comprising: N (N is an integer of 2 or larger) input terminals (11) in which an optical packet is input; an optical packet information acquisition unit (13) connected to the N input terminals (11) and that acquires packet information related to the optical packet; a plurality of switches (15) connected to the N input terminals (11); a plurality of delay lines (17) respectively connected to the plurality of switches (15); an electronic buffer (19) connected to the N input terminals (11); an output terminal (21) connected to the plurality of delay lines (17) and the electronic buffer (19); and an optical buffer control unit (23) connected to the optical packet information acquisition unit (13) and that controls a connection relationship between the N input terminals (11) and the plurality of switches (15), a connection relationship between the N input terminals (11) and the electronic buffer (19), and a connection relationship between the plurality of switches (15) and the plurality of delay lines (17), wherein amounts of delay of the plurality of delay lines (17) are different from each other, and the buffer control unit (23) receives the packet information, analyzes packet traffic, and performs control in such a manner that the electronic buffer (19) is not used in a case where the packet traffic is a first threshold related to traffic or less or in a case where a use rate of the delay lines is a first threshold related to a use rate or lower. 2. The device according to claim 1, wherein the electronic buffer (19) ...

11-01-2018 publication date

SYSTEMS AND METHODS FOR THE DESIGN AND IMPLEMENTATION OF AN INPUT AND OUTPUT PORTS FOR CIRCUIT DESIGN

Number: US20180013540A1
Assignee:

Systems and methods for providing input and output ports to connect to channels are provided. Input and output ports are the basic building blocks used to create more complex data-routing IP blocks. By aggregating these modular ports in different ways, different implementations of a crossbar or Network on Chip (NoC) can be realized, allowing a flexible routing structure while maintaining all the benefits of channels, such as robustness against delay variation, data compression, and simplified timing assumptions. 1. An input port configured to accept a bundle of channels at an input and to convert and route the bundle of channels to a plurality of outputs, the input port comprising: a converter coupled with the bundle of channels at the input and configured to convert input encoding associated with data streams provided via the bundle of input channels to the encoding desired within an associated IP block; a buffer stage coupled with the converter configured to improve throughput for the data-path; a router configured to decompress addresses and then forward the data streams to the appropriate output of the plurality of outputs; a Quality of Service (QoS)/Fault Tolerant (FT) block configured to influence the routing selection for the data streams based on routing priority for resource sharing so that QoS is maintained, to avoid faulty link paths, or both; and output buffers configured to improve throughput. 2. The input port of claim 1, wherein the router is further configured to decompress QoS information when the compression ratio used in the bundle of input channels is greater than 1. 3. An output port configured to accept multiple bundles of channels at an input and to arbitrate and convert one of the bundles of channels to an associated output, the output port comprising: buffer stages configured to improve data throughput for data streams associated with the multiple bundles of input channels; an arbiter configured to select which of the multiple bundles of channels ...

Publication date: 11-01-2018

REDUCING NETWORK LATENCY DURING LOW POWER OPERATION

Number: US20180013676A1
Assignee:

In an embodiment, a method includes identifying a core of a multicore processor to which an incoming packet that is received in a packet buffer is to be directed, and, if the core is powered down, transmitting a first message to cause the core to be powered up prior to arrival of the incoming packet at the head of the packet buffer. Other embodiments are described and claimed. 1. A system comprising: a processor including a plurality of first cores and a plurality of second, lower power cores; and a non-transitory machine-accessible storage medium including instructions that when executed cause the system to: perform a look-up in a table for task requests from at least one buffer for the plurality of first cores and the plurality of second, lower power cores, wherein the table comprises data indicating which of the plurality of first cores and which of the plurality of second, lower power cores are to be in an active power state based on task requests from the at least one buffer; and cause each of the plurality of first cores and the plurality of second, lower power cores to be in a respective power state indicated by the data from the look-up in the table. 2. The system of claim 1, wherein the non-transitory machine-accessible storage medium includes instructions that when executed cause the system to utilize the plurality of second, lower power cores to handle processing tasks at reduced power consumption. 3. The system of claim 1, wherein the table includes data indicating which of the plurality of first cores and which of the plurality of second, lower power cores are to be in an active power state based on at least eight task requests from the at least one buffer. 4. The system of claim 1, wherein the non-transitory machine-accessible storage medium includes instructions that when executed cause the system to cause each of the plurality of first cores and the plurality of second, lower power cores to be in a respective power state ...

Publication date: 14-01-2016

Data Matching Using Flow Based Packet Data Storage

Number: US20160014051A1
Assignee:

A system for matching data using flow-based packet data storage includes a communications interface and a processor. The communications interface receives a packet between a source and a destination. The processor identifies a flow between the source and the destination based on the packet. The processor determines whether some of the packet data indicates a potential match to data in storage, using hashes. The processor then stores the data from the most likely and second most likely data matches, without a packet header, in a block of memory in the storage based on the flow. 1. A system for matching data using flow-based packet data storage, the system comprising: a communications interface that receives at least one data packet at a network device between a source and a destination, the at least one data packet including data and flow information; and a processor that: identifies a flow between the source and the destination based on the flow information in the at least one data packet; determines whether at least a portion of the data from the received at least one data packet indicates one or more potential matches to data in storage; retrieves a list of possible data matches; determines match sizes of at least two likely data matches by directly comparing packet bytes and matched data bytes; and stores the data from the at least one data packet without a packet header in a block of memory allocated for the flow, or generates a retrieve instruction for the data match, depending on the match sizes. 2. The system of claim 1, wherein the processor moves the storage data between a fast memory and a slow memory. 3. The system of claim 1, wherein the flow comprises a session between the source and the destination. 4. The system of claim 1, wherein the processor allocates the block of the memory for the identified flow. 5. The system of claim 1, wherein the processor transmits the packet data. 6. The system of claim 1, wherein the block of ...
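The hash-based match lookup described in this abstract can be sketched as follows. This is an illustrative toy model, not the patented implementation: the class name, the fixed chunk size, and the use of BLAKE2 hashes are all invented for the example; the patent itself does not publish code.

```python
import hashlib
from collections import defaultdict

class FlowStore:
    """Toy sketch of flow-based packet data storage with hash-indexed matching."""

    def __init__(self, chunk=8):
        self.chunk = chunk
        self.blocks = defaultdict(bytearray)   # per-flow blocks of stored payload
        self.index = {}                        # chunk hash -> (flow, offset)

    def _hashes(self, data):
        # Hash fixed-size chunks of the payload; real systems use rolling hashes.
        for off in range(0, len(data) - self.chunk + 1, self.chunk):
            piece = bytes(data[off:off + self.chunk])
            yield hashlib.blake2b(piece, digest_size=8).digest(), off

    def store(self, flow, payload):
        # Payload is stored without headers, in a block allocated for the flow.
        block = self.blocks[flow]
        base = len(block)
        block.extend(payload)
        for h, off in self._hashes(payload):
            self.index[h] = (flow, base + off)

    def potential_match(self, payload):
        """Return the first stored (flow, offset) whose chunk hash matches, else None."""
        for h, _ in self._hashes(payload):
            if h in self.index:
                return self.index[h]
        return None
```

A caller would store each packet's payload under its flow key and then probe incoming payloads for potential matches before deciding whether to store fresh data or emit a retrieve instruction.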

Publication date: 10-01-2019

Application and network aware adaptive compression for better QoE of latency sensitive applications

Number: US20190014055A1
Assignee: Citrix Systems Inc

This disclosure is directed to embodiments of systems and methods for performing compression of data in a queue. A device intermediary between a client and a server may determine that a length of time to move existing data maintained in a queue from the queue exceeds a predefined threshold. The device may identify, responsive to the determination, a first quantity of the existing data to undergo compression, and a second quantity of the existing data according to a compression ratio of the compression. The device may reserve, according to the second quantity, a first portion of the queue that maintained the first quantity of the existing data, to place compressed data obtained from applying the compression on the first quantity of the existing. The device may place incoming data into the queue beyond the reserved first portion of the queue.

Publication date: 15-01-2015

PORT PACKET QUEUING

Number: US20150016467A1
Author: WYATT Richard M.
Assignee:

A port queue includes a first memory portion having a first memory access time and a second memory portion having a second memory access time. The first memory portion includes a cache row. The cache row includes a plurality of queue entries. A packet pointer is enqueued in the port queue by writing the packet pointer into a queue entry in the cache row in the first memory. The cache row is transferred to a packet vector in the second memory. A packet pointer is dequeued from the port queue by reading a queue entry from the packet vector stored in the second memory. 1. A queue, the queue having an egress port queue associated with an egress port of a switch, the egress port queue comprising: a first queue memory configured to receive and enqueue a pointer corresponding to a received data packet, the first queue memory having a first access time; a second queue memory in communication with the first queue memory, the second queue memory having a second access time, the first access time being shorter than the second access time; and control logic configured to transfer a plurality of pointers from the first queue memory to the second queue memory in a single transfer cycle, and dequeue each pointer of a plurality of pointers from the second queue memory. 2. The queue of claim 1, wherein the first access time is shorter than the second access time such that the pointer corresponding to the received data packet may additionally be enqueued in one or more other egress port queues associated with one or more other egress ports of the switch during a single port cycle. 3. The queue of claim 1, wherein the control logic is further configured to dequeue each pointer of the plurality of pointers from the second queue memory. 4. The queue of claim 3, wherein each pointer of the plurality of pointers is dequeued by reading the pointer from the second queue memory. 5. The queue of claim 1, wherein the first queue memory comprises a plurality of cache rows. 6. The queue of claim ...
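The two-tier enqueue/dequeue path described in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented hardware: the row size, the class name, and the fallback of draining a partially filled cache row on dequeue are assumptions added for the example.

```python
from collections import deque

class PortQueue:
    """Sketch of a two-tier egress port queue: pointers accumulate in a small
    fast 'cache row', a full row is flushed to slower bulk memory in a single
    transfer, and dequeues read from the bulk-memory packet vectors."""

    def __init__(self, row_size=4):
        self.row_size = row_size
        self.cache_row = []            # fast memory: one cache row of queue entries
        self.packet_vectors = deque()  # slow memory: rows transferred in bulk

    def enqueue(self, packet_ptr):
        self.cache_row.append(packet_ptr)
        if len(self.cache_row) == self.row_size:
            # One transfer cycle moves the whole row into slow memory.
            self.packet_vectors.append(self.cache_row)
            self.cache_row = []

    def dequeue(self):
        # FIFO order: oldest pointers live in the oldest packet vector.
        if self.packet_vectors:
            vector = self.packet_vectors[0]
            ptr = vector.pop(0)
            if not vector:
                self.packet_vectors.popleft()
            return ptr
        # Simplification: drain a partially filled cache row directly.
        if self.cache_row:
            return self.cache_row.pop(0)
        return None
```

The point of the split is that the cheap per-packet write hits only fast memory, while the slow memory sees one bulk transfer per row instead of one access per packet.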

Publication date: 15-01-2015

Maintaining Data Stored with a Packet

Number: US20150016469A1
Assignee: NICIRA, INC.

Some embodiments provide a method for a managed forwarding element that operates on a host machine to process packets for at least one logical network. The method receives a packet that includes a particular piece of data to maintain with the packet. The particular piece of data is not stored in a payload of the packet and is not protocol-specific data. The method stores the particular piece of data in a register while processing the packet. The method identifies a next destination of the packet that operates on the host machine. The method generates an object to represent the packet for the identified destination. The particular piece of data is stored in a field of the generated object. 1. For a managed forwarding element that operates on a host machine to process packets for at least one logical network, a method comprising: receiving a packet comprising a particular piece of data to maintain with the packet, wherein the particular piece of data is not stored in a payload of the packet and is not protocol-specific data; storing the particular piece of data in a register while processing the packet; identifying a next destination of the packet that operates on the host machine; and generating an object to represent the packet for the identified destination, wherein the particular piece of data is stored in a field of the generated object. 2. The method of claim 1, wherein the particular piece of data comprises an indicator that the packet is for a trace operation. 3. The method of claim 1, wherein receiving the packet comprises receiving the packet via a message from a controller, wherein the message commands the managed forwarding element to store the particular piece of data in the register. 4. The method of claim 1, wherein the register is a particular type of register that stores data for maintaining with the packet after processing by the managed forwarding element. 5. The method of claim 1, wherein the identified destination is a namespace in which a ...

Publication date: 14-01-2021

DETERMINISTIC PACKET SCHEDULING AND DMA FOR TIME SENSITIVE NETWORKING

Number: US20210014177A1
Author: Kasichainula Kishore
Assignee: Intel Corporation

In one embodiment, a network interface controller (NIC) includes multiple packet transmission queues to queue data packets for transmission. The data packets are assigned to multiple traffic classes. The NIC also includes multiple input/output (I/O) interfaces for retrieving the data packets from memory. Each I/O interface is assigned to a subset of the traffic classes. The NIC also includes scheduler circuitry to select a first data packet to be retrieved from memory, and direct memory access (DMA) engine circuitry to retrieve the first data packet from memory via one of the I/O interfaces based on the traffic class of the first data packet, and store the first data packet in one of the packet transmission queues. The NIC also includes a transmission interface to transmit the first data packet over a network at a corresponding launch time indicated by the scheduler circuitry. 1. A network interface controller , comprising:a plurality of packet transmission queues to queue a plurality of data packets for transmission, wherein the plurality of data packets are assigned to a plurality of traffic classes;a plurality of input/output (I/O) interfaces for retrieving the plurality of data packets from a memory of a host computing system, wherein each I/O interface of the plurality of I/O interfaces is assigned to one or more of the plurality of traffic classes;scheduler circuitry to select a first data packet to be retrieved from the memory, wherein the first data packet is to be selected from the plurality of data packets;direct memory access (DMA) engine circuitry to retrieve the first data packet from the memory via one of the plurality of I/O interfaces based on a corresponding traffic class of the first data packet, wherein the DMA engine circuitry is to store the first data packet in a corresponding packet transmission queue of the plurality of packet transmission queues; anda transmission interface to transmit the first data packet over a network at a corresponding 
...
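The scheduling mechanism in this abstract (launch-time-ordered transmission, with the traffic class selecting which I/O interface fetches the packet) can be sketched as follows. This is an illustrative model only: the class name, the heap-based ordering, and the interface labels are assumptions, not the NIC's actual design.

```python
import heapq

class TsnScheduler:
    """Sketch of launch-time packet scheduling: each queued descriptor carries
    a launch time and a traffic class; the scheduler releases the packet with
    the earliest launch time, and the traffic class picks the (hypothetical)
    I/O interface used to DMA it from host memory."""

    def __init__(self, class_to_interface):
        # e.g. {0: "io0", 1: "io1"} - each interface serves a subset of classes
        self.class_to_interface = class_to_interface
        self.heap = []

    def submit(self, launch_time, traffic_class, packet_id):
        heapq.heappush(self.heap, (launch_time, traffic_class, packet_id))

    def next_transmission(self):
        """Pop the earliest-deadline packet and resolve its fetch interface."""
        launch_time, tc, packet_id = heapq.heappop(self.heap)
        return launch_time, self.class_to_interface[tc], packet_id
```

Binding each traffic class to its own I/O interface is what keeps a burst of best-effort DMA traffic from delaying the fetch of a time-critical packet.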

Publication date: 09-01-2020

OPEN REAL-TIME ETHERNET PROTOCOL

Number: US20200014479A1
Assignee:

A real-time Ethernet (RTE) protocol includes start-up frames originated by a master device for network initialization, including a preamble, destination address (DA), source address (SA), a type field, and a status field including state information that indicates the current protocol state that the Ethernet network is in, for the slave devices to translate for dynamically switching to one of a plurality of provided frame forwarding modes. The start-up frames include device Discovery frames at power up, Parameterization frames that distribute network parameters, and Time Synchronization frames including the master's time and unique assigned communication time slots for each slave device. After the initialization, at least one data exchange frame is transmitted, exclusive of SA and DA, including a preamble that comprises a header that differentiates between master and slave, a type field, a status field excluding the current protocol state, and a data payload. 1-24. (canceled) 25. An apparatus, comprising: a transceiver; and a processor coupled to said transceiver and an associated memory which stores code for implementing start-up frames for network initialization including a preamble, destination address (DA), source address (SA), a type field which includes a frame type selected from a plurality of said frame types, and a status field including state information that indicates a current protocol state that an Ethernet network is in, selected from a plurality of said protocol states, for a plurality of slave devices to translate for dynamically switching to one of a list of different frame forwarding modes, said start-up frames including device Discovery frames at power up, Parameterization frames that distribute network parameters including an Inter-frame Gap (IFG), and Time Synchronization frames including timing information of said master device (TM) and unique assigned communication time slots for each of said slave devices. 26. The apparatus of claim 25, wherein said ...

Publication date: 14-01-2021

DATA TRANSMISSION AND NETWORK INTERFACE CONTROLLER

Number: US20210014307A1
Author: Li Changqing
Assignee:

Implementations of this disclosure provide data transmission operations and network interface controllers. An example method performed by a first RDMA network interface controller includes obtaining m data packets from a host memory of a first host; sending the m data packets to a second RDMA network interface controller of a second host; backing up the m data packets to a network interface controller memory integrated into the first RDMA network interface controller; determining that the second RDMA network interface controller does not receive n data packets of the m data packets; and in response, obtaining the n data packets from the m data packets that have been backed up to the network interface controller memory integrated into the first RDMA network interface controller, and retransmitting the n data packets to the second RDMA network interface controller. 1. A computer-implemented method , comprising:communicating, by a second RDMA network interface controller of a second host with a first RDMA network interface controller of a first host, to receive m data packets from the first RDMA network interface controller, wherein the m data packets have been backed up by the first RDMA network interface controller to a first network interface controller memory integrated into the first RDMA network interface controller, m being a positive integer; and storing, by the second RDMA network interface controller, received data packets of the m data packets into a second network interface controller memory integrated into the second RDMA network interface controller;', 'waiting, by the second RDMA network interface controller, to receive the n data packets having been retransmitted by the first RDMA network interface controller, wherein the retransmitted n data packets are obtained by the first RDMA network interface controller from the first network interface controller memory; and', 'after the retransmitted n data packets have been received by the second RDMA network 
...

Publication date: 09-01-2020

METHODS, DEVICES AND SYSTEMS FOR A DISTRIBUTED COORDINATION ENGINE-BASED EXCHANGE THAT IMPLEMENTS A BLOCKCHAIN DISTRIBUTED LEDGER

Number: US20200014745A1
Assignee:

A distributed system that implements an online exchange may comprise a plurality of server nodes, each of which being configured to receive exchange transaction proposals from customers of the online exchange over a computer network and each being configured to store a copy of a blockchain distributed ledger of completed exchange transactions. A distributed coordination engine may be coupled, over the computer network, to the plurality of server nodes and may receive a plurality of exchange transaction proposals from the plurality of server nodes. The distributed coordination engine may be being further configured to achieve consensus on the plurality of exchange transaction proposals and to generate, in response, an ordering of agreed-upon exchange transaction proposals that includes the plurality of exchange transaction proposals on which consensus has been reached. This ordering of agreed-upon exchange transaction proposals is identically provided to each of the server nodes and specifies the order in which the server nodes are to execute exchange transactions and to update their copy of the distributed ledger. The ordering of agreed-upon exchange transaction proposals may optionally be re-ordered and identically provided to each server node to conform to the local orderings at the exchange transaction proposal's node server of origin. 1. A distributed system that implements an online exchange and implements a blockchain distributed ledger , comprising:a plurality of server nodes, each server node of the plurality of server nodes being configured to receive exchange transaction proposals from customers of the online exchange over a computer network and each being configured to store a copy of a distributed ledger of completed exchange transactions; anda distributed coordination engine, the distributed coordination engine being coupled, over the computer network, to the plurality of server nodes and configured to receive a plurality of exchange transaction ...

Publication date: 21-01-2016

Method For Calculating Statistic Data of Traffic Flows in Data Network And Probe Thereof

Number: US20160020968A1
Assignee: CELLOS SOFTWARE LTD

The disclosure provides a probe and a method for calculating statistic data of traffic flows. The probe comprises at least one link processor (LP) and a correlation processor (CP). Each LP includes two buffers, receives packets from directional traffic flows, generates information of bi-directional traffic flows based on the received packets, stores the generated information in one buffer within a reporting period and, reports the stored information to CP when the reporting period boundary is reached. The information of each bi-directional traffic flow includes the relevant identification information and statistic data. The CP calculates statistic data of a particular group of traffic flows with a predetermined characteristic based on the reported information, and the other buffer stores information of bi-directional traffic flows to be generated within a next reporting period and the stored information is to be reported to the correlation processor when the next reporting period boundary is reached.
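The link processor's per-period double buffering described in this abstract can be sketched as below. This is an illustrative model: the class name, the dict-based stats, and the field names are invented for the example; the actual probe reports richer identification information per bi-directional flow.

```python
class FlowProbe:
    """Sketch of a link processor's double buffering: statistics for the
    current reporting period accumulate in the active buffer; at the period
    boundary the buffers swap, and the finished buffer is handed to the
    correlation processor while the other collects the next period."""

    def __init__(self):
        self.buffers = [{}, {}]
        self.active = 0

    def record_packet(self, flow_id, nbytes):
        stats = self.buffers[self.active].setdefault(
            flow_id, {"packets": 0, "bytes": 0})
        stats["packets"] += 1
        stats["bytes"] += nbytes

    def period_boundary(self):
        """Swap buffers and return the finished period's stats for reporting."""
        report = self.buffers[self.active]
        self.active ^= 1
        self.buffers[self.active] = {}
        return report
```

The swap is what lets the probe keep counting new packets without blocking while the previous period's figures are being reported and aggregated.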

Publication date: 19-01-2017

Packet reception apparatus

Number: US20170019352A1
Assignee: NTT Electronics Corp

A reception buffer of a packet reception apparatus includes a plurality of storage addresses. A packet determination unit receives a packet from a plurality of lines including a main system and an auxiliary system. The packet determination unit obtains a storage address corresponding to a unique number assigned to the packet, and overwrites and stores data of the packet onto the storage address. A packet extraction/transmission unit extracts and transmits the data stored in the reception buffer.

Publication date: 03-02-2022

Programmatically configured switches and distributed buffering across fabric interconnect

Number: US20220038391A1
Assignee:

Programmable switches and routers are described herein for enabling their internal network fabric to be configured with a topology. In one implementation, a programmable switch is arranged in a network having a plurality of switches and an internal fabric. The programmable switch includes a plurality of programmable interfaces and a buffer memory component. Also, the programmable switch includes a processing component configured to establish each of the plurality of programmable interfaces to operate as one of a user-facing interface and a fabric-facing interface. Based on one or more programmable interfaces being established as one or more fabric-facing interfaces, the buffer memory component is configured to store packets received from a user-facing interface of an interconnected switch of the plurality of switches via one or more hops into the internal fabric. 1. A programmable switch arranged in a network having a plurality of switches and an internal fabric, the programmable switch comprising: a plurality of programmable interfaces; a buffer memory component; and a processing component configured to establish each of the plurality of programmable interfaces to operate as one of a user-facing interface and a fabric-facing interface, wherein, based on one or more programmable interfaces being established as one or more fabric-facing interfaces, the buffer memory component is configured to store packets received from a user-facing interface of an interconnected switch of the plurality of switches via one or more hops into the internal fabric. 2. The programmable switch of claim 1, wherein the network is arranged with a flat internal fabric and full-mesh configuration. 3. The programmable switch of claim 2, wherein the flat internal fabric includes one or more of Direct Attach Cables (DACs), Active Electrical Cables (AECs), Active Optical Cables (AOCs), passive optical cables, silicon photonics, and Printed Circuit Board ( ...

Publication date: 03-02-2022

PACKET PROCESSING WITH REDUCED LATENCY

Number: US20220038395A1
Assignee: Intel Corporation

Generally, this disclosure provides devices, methods, and computer-readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation. 1-26. (canceled) 27. At least one non-transitory storage medium storing instructions for being executed by programmable circuitry, the programmable circuitry for being used in association with network interface circuitry, the instructions, when executed by the programmable circuitry, resulting in performance of operations comprising: subjecting access to at least one queue to at least one spin lock, the at least one queue being for use in processing of packet data received via the network interface circuitry, the at least one spin lock to be provided in response to at least one request of at least one entity while the at least one requesting entity is in a polling state of the at least one requesting entity; determining whether to indicate occurrence of at least one other request based at least in part upon whether the at least one other request is made while the access to the at least one queue is subject to the at least one spin lock, the at least one other request being for obtaining of the at least one spin lock, the at least one other request to be made by at least one other entity while the at least one other entity is in a polling state of the at least one other entity; and releasing the at least one spin lock; the at least one entity is associated with at least one ...
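The interrupt condition in the abstract above (delay timer expired AND queue non-empty, with the driver able to rearm the timer via the delay register) can be sketched as follows. This is a toy model: the class name, explicit time parameters, and method names are assumptions for illustration only.

```python
class InterruptModerator:
    """Sketch of delay-timer interrupt coalescing: an interrupt fires only
    when the delay timer has expired AND the descriptor queue is non-empty.
    The driver may reset (rearm) the timer to postpone the interrupt, e.g.
    while it is already polling. Time is passed in explicitly for clarity."""

    def __init__(self, delay):
        self.delay = delay
        self.deadline = None  # when the pending interrupt would fire
        self.queue = []

    def enqueue(self, descriptor, now):
        # First descriptor in an empty queue arms the delay timer.
        self.queue.append(descriptor)
        if self.deadline is None:
            self.deadline = now + self.delay

    def driver_reset(self, now):
        # Models a driver write to the interrupt delay register.
        if self.deadline is not None:
            self.deadline = now + self.delay

    def should_interrupt(self, now):
        return bool(self.queue) and self.deadline is not None and now >= self.deadline
```

Coalescing trades a bounded extra latency (at most one delay interval) for far fewer interrupts under load, and the driver reset lets a polling driver suppress interrupts entirely while it is keeping up.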

Publication date: 18-01-2018

UTILIZING REALLOCATION VIA A DECENTRALIZED, OR DISTRIBUTED, AGREEMENT PROTOCOL (DAP) FOR STORAGE UNIT (SU) REPLACEMENT

Number: US20180019916A1
Assignee:

Based on a system configuration change (e.g., of a Decentralized, or Distributed, Agreement Protocol (DAP)) within a dispersed storage network (DSN) (e.g., from a first to a second system configuration of the DAP), a computing device directs a storage unit to be replaced (SUTBR) to transfer encoded data slices (EDSs) stored therein to a replacement storage unit (RSU). During transfer of the EDSs (e.g., from SUTBR to RSU), the computing device directs the SUTBR to service read and/or write requests for EDS(s) stored therein to operate based on a first system configuration of the DAP. When the EDSs have been successfully transferred from the SUTBR to the RSU, the computing device directs the RSU to service read and/or write requests for the EDS(s) stored therein to operate based on a second system configuration of the DAP. 1. A computing device comprising:an interface configured to interface and communicate with a dispersed storage network (DSN);memory that stores operational instructions; and detect a change from a first system configuration of a Decentralized, or Distributed, Agreement Protocol (DAP) to a second system configuration of the DAP based on a storage unit to be replaced (SUTBR) within a plurality of storage units (SUs) within the DSN, wherein the first system configuration of the DAP and the second system configuration of the DAP respectively provide for deterministic calculation of locations of encoded data slice (EDS) sets that correspond respectively to a plurality of data segments of a data object that are distributedly stored across the plurality of storage units (SUs) within the DSN, wherein the data object is segmented into the plurality of data segments, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of EDSs of the EDS sets that is of pillar width having a plurality of EDS names, wherein a read threshold number of EDSs of the set of EDSs 
...

Publication date: 03-02-2022

Tag-based data packet prioritization in dual connectivity systems

Number: US20220038556A1
Assignee: T Mobile USA Inc

A component of a cellular communication system is configured to prioritize data packets based on packet tags that have been associated with the data packets. The packet tags may comprise an application identifier and a customer identifier, as examples. A Packet Data Convergence Protocol (PDCP) layer of a radio protocol stack receives a data packet and associated packet tags and assigns the data packet to a preferred transmission queue or a non-preferred transmission queue, based on the packet tags associated with the data packet. In order to manage queue overflows, data packets of the non-preferred transmission queue may be discarded when they have been queued for more than a predetermined length of time. Data packets of the preferred transmission queue, however, are retained regardless of how long they have been queued.
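The queueing policy in this abstract (tag-based assignment to a preferred or non-preferred queue, with only non-preferred packets aged out) can be sketched as below. This is an illustrative model: the class name, the tag set, and the timeout handling are assumptions; a real PDCP layer works on protocol data units, not Python strings.

```python
import collections

class PdcpQueues:
    """Sketch of tag-based prioritization at the PDCP layer: packets tagged
    with (app_id, customer_id) go to a preferred or non-preferred queue;
    stale non-preferred packets are discarded on overflow pressure, while
    preferred packets are retained regardless of how long they are queued."""

    def __init__(self, preferred_tags, max_age):
        self.preferred_tags = preferred_tags
        self.max_age = max_age
        self.preferred = collections.deque()
        self.non_preferred = collections.deque()  # entries: (enqueue_time, pkt)

    def enqueue(self, packet, tag, now):
        if tag in self.preferred_tags:
            self.preferred.append(packet)
        else:
            self.non_preferred.append((now, packet))

    def _drop_stale(self, now):
        # Only the non-preferred queue ages out.
        while self.non_preferred and now - self.non_preferred[0][0] > self.max_age:
            self.non_preferred.popleft()

    def dequeue(self, now):
        self._drop_stale(now)
        if self.preferred:
            return self.preferred.popleft()
        if self.non_preferred:
            return self.non_preferred.popleft()[1]
        return None
```

Discarding only stale non-preferred traffic keeps queue occupancy bounded without ever sacrificing the tagged high-priority flows.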

Publication date: 18-01-2018

METHODS AND APPARATUS FOR ENABLING COMMUNICATION BETWEEN NETWORK ELEMENTS THAT OPERATE AT DIFFERENT BIT RATES

Number: US20180019954A1
Author: Handelman Doron
Assignee: III Holdings 2, LLC

A method for enabling communication between a network element (NE) operating at a bit rate R1 and a NE operating at a bit rate R2 is disclosed. A ratio of R1 to R2 is represented by a ratio M:N, where M and N are positive integers and M>N. Information is received that is associated with distribution of electrical lanes of at least one of a number of M×K NEs operating at bit rate R1. A distribution of the electrical lanes of the at least one M×K NE based on the received information is determined. The electrical lanes of the at least one M×K NE are interconnected with lane ports of at least one M:N electrical interface of N×K transceivers based on the determined distribution. An indication is transmitted identifying the distribution to the at least one M×K NE. 1. A method for enabling communication between a network element (NE) operating at a bit rate R1 and a NE operating at a bit rate R2, wherein a ratio of R1 to R2 is represented by a ratio M:N, where M and N are positive integers and M>N, the method comprising: receiving information, wherein the information is associated with distribution of electrical lanes of at least one of a number of M×K NEs operating at bit rate R1, wherein each electrical lane operates at a lane bit rate R0; determining a distribution of the electrical lanes of the at least one M×K NE based on the received information; interconnecting the electrical lanes of the at least one M×K NE with lane ports of at least one M:N electrical interface of N×K transceivers based on the determined distribution, wherein the at least one M:N electrical interface operates at a bit rate R2, and wherein each lane port of the at least one M:N electrical interface operates at a lane bit rate R0; and transmitting an indication identifying the distribution to the at least one M×K NE. This application is a continuation of application Ser. No. 14/086,215, filed Nov. 21, 2013, which is a divisional of application Ser. No. 13/752,306, filed Jan. 28, 2013, ...

Publication date: 18-01-2018

METHOD FOR ADAPTIVELY STREAMING AN AUDIO/VISUAL MATERIAL

Number: US20180020031A1
Author: CHANG Wei-Chung
Assignee:

A method for adaptively streaming an audio/visual (AV) material includes: processing a plurality of current data packets stored in a data buffer to play media segments of the AV material at a current quality; during playback of a current one of the media segments, determining whether a new data packet for playing a candidate one of the media segments at an improved quality is able to be completely downloaded in time; when the determination is affirmative, downloading the new data packet and processing the new data packet to play the candidate one of the media segments of the AV material at the improved quality on the player interface when the new data packet is completely downloaded and stored in the data buffer. 1. A method for adaptively streaming an audio/visual (AV) material , the method being implemented by an electronic device that includes a processor , a memory device having a data buffer , and a display , the data buffer storing a plurality of current data packets that constitute a plurality of successive media segments of the AV material , respectively , the method comprising:generating, by the processor, a player interface on the display;processing, by the processor, the current data packets stored in the data buffer to play the media segments of the AV material at a current quality on the player interface one by one;during playback of a current one of the media segments, determining, by the processor for a candidate one of the media segments after the current one, whether a new data packet for playing the candidate one of the media segments at an improved quality higher than the current quality is able to be completely downloaded before the candidate one of the media segments is expected to be played, based on a current network bandwidth of a network between the electronic device and a server that provides the new data packet, and an amount of the current data packets currently stored in the data buffer;when the determination is affirmative, downloading 
...
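The feasibility test in the abstract above (whether a higher-quality packet can finish downloading before the candidate segment is due to play) can be sketched as a comparison of download time against buffered playback headroom. The function and parameter names below are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch of the upgrade-feasibility check; the playback
# deadline is approximated by the media already buffered ahead of the
# candidate segment.

def can_upgrade(new_packet_bytes, bandwidth_bps, buffered_segments,
                segment_duration_s):
    """Return True if the higher-quality packet can be completely
    downloaded before the candidate segment is expected to play."""
    download_time_s = (new_packet_bytes * 8) / bandwidth_bps
    time_until_needed_s = buffered_segments * segment_duration_s
    return download_time_s < time_until_needed_s
```

With three buffered 2-second segments (6 s of headroom) and 4 Mbit/s of bandwidth, a 2 MB packet needs 4 s, so the upgrade is attempted; halve the bandwidth and it is not.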

More details
18-01-2018 publication date

DYNAMIC HIERARCHY BASED MESSAGE DISTRIBUTION

Number: US20180020046A1
Assignee:

In one respect, there is provided a distributed database system. The distributed database system can include a plurality of nodes. A first node of the plurality of nodes can be configured to: group the plurality of nodes into at least a first cluster; select a second node to act as a gateway node for the first cluster; determine that at least one recipient node of a message is included in the first cluster; and route the message to the recipient node by at least sending the message to the second node. Related methods and articles of manufacture are also disclosed. 1. A distributed database system , comprising: group the plurality of nodes into at least a first cluster;', 'select a second node to act as a gateway node for the first cluster;', 'determine that at least one recipient node of a message is included in the first cluster; and', 'route the message to the recipient node by at least sending the message to the second node., 'a plurality of nodes, wherein a first node of the plurality of nodes is configured to2. The distributed database system of claim 1 , wherein the first node is further configured to generate the message for broadcasting to one or more recipient nodes.3. The distributed database system of claim 1 , wherein the first node receives the message from a third node.4. The distributed database system of claim 1 , wherein the second node is configured to:receive, from the first node, the message;determine, based at least in part on a transport mode and/or a recipient list of the message, not to reuse a path included with the message; and determine, based at least in part on the recipient list, that the second node is not a recipient node of the message; and', 'in response to determining that the second node is not a recipient node of the message, route the message to one or more recipient nodes., 'in response to determining to not reuse the path5. 
The distributed database system of claim 4 , wherein the second node routes the message to the one or ...
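The cluster-and-gateway routing described above can be modelled in a few lines: nodes are grouped into clusters, one member is selected as each cluster's gateway, and a message for a recipient is routed by sending it to that recipient's gateway. The data structures are assumptions for illustration:

```python
# Illustrative sketch of cluster-gateway routing, not the patent's
# actual node implementation.

def build_clusters(nodes, cluster_size):
    """Group nodes into fixed-size clusters and select the first
    member of each cluster as its gateway node."""
    clusters = [nodes[i:i + cluster_size]
                for i in range(0, len(nodes), cluster_size)]
    gateways = {tuple(c): c[0] for c in clusters}
    return clusters, gateways

def next_hop(clusters, gateways, recipient):
    """Route to a recipient by sending to its cluster's gateway."""
    for cluster in clusters:
        if recipient in cluster:
            return gateways[tuple(cluster)]
    raise KeyError(recipient)
```

A sender never needs the full membership of a remote cluster: the gateway fans the message out locally.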

More details
17-04-2014 publication date

MESSAGE HANDLING MULTIPLEXER

Number: US20140105220A1
Assignee: RED HAT INC.

A method and apparatus for processing messages is described. In one embodiment, an application programming interface is configured for receiving and sending messages. A multiplexer receives messages from different servers. A service name is coupled to each message with the corresponding destination service. A single shared channel is formed. The messages are processed over the single shared channel. 1. A method comprising:receiving messages from a plurality of servers by a multiplexer executing on a processing device, wherein the multiplexer comprises an application programming interface, a building block layer, a channel layer, and a concurrent transport protocol stack;coupling, by the multiplexer, a service name to each message with a corresponding destination service;maintaining, by the multiplexer, a set of queues, one queue per service name;adding, by the multiplexer, a message to a corresponding queue based on its service name, to form a single shared channel;maintaining, by the multiplexer, an out-of-band thread pool and a regular thread pool of the concurrent transport protocol stack, wherein the out-of-band thread pool and the regular thread pool are coupled to receive the messages from the set of queues;dispatching the messages marked as out-of-band from a sender to a thread of the out-of-band thread pool; anddispatching all other messages from the sender to a thread of the regular thread pool.2. The method of further comprising:multiplexing the messages onto the single shared channel; andsending the messages over the single shared channel.3. The method of further comprising:determining the service name for each message from the single shared channel; andchanneling each message to the corresponding destination service based on the service name of each message.4. The method of further comprising concurrently dispatching messages to different services from a same server using the concurrent transport protocol stack.5. The method of further comprising ...
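The per-service queueing and the split between out-of-band and regular dispatch can be sketched in a single-threaded model. Real thread pools are replaced by plain dispatch lists here for clarity; the class and its fields are illustrative assumptions:

```python
from collections import defaultdict, deque

# Simplified, single-threaded model of the multiplexer above.

class Multiplexer:
    def __init__(self):
        self.queues = defaultdict(deque)   # one queue per service name
        self.oob, self.regular = [], []    # stand-ins for thread pools

    def receive(self, service, payload, out_of_band=False):
        # Couple the service name to the message and enqueue it on the
        # per-service queue (the single shared channel).
        self.queues[service].append((service, payload))
        target = self.oob if out_of_band else self.regular
        target.append((service, payload))

    def demultiplex(self, service):
        # Channel each message back to its destination service.
        return [payload for name, payload in self.queues[service]]
```

Messages marked out-of-band bypass the regular pool, so slow regular traffic cannot delay them.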

More details
18-01-2018 publication date

Method and Apparatus for Reducing Processing Delay

Number: US20180020466A1
Assignee:

Embodiments of the present disclosure provide a method implemented in a first device for wireless communication. The method may comprise receiving a sequence of signals from a second device, the sequence of signals comprising a signal of a first type and a signal of a second type; assigning a first priority to the signal of the first type and a second priority lower than the first priority to the signal of the second type; and transmitting the sequence of signals via a transport network to a third device according to the assigned priorities. By virtue of this method, the processing delay at the third device may be reduced at low cost. 130-. (canceled)31. A method implemented in a first device for wireless communication , comprising:receiving a sequence of signals from a second device, the sequence of signals comprising a signal of a first type and a signal of a second type;assigning a first priority to the signal of the first type and a second priority lower than the first priority to the signal of the second type; andtransmitting the sequence of signals via a transport network to a third device according to the assigned first and second priorities.32. The method according to claim 31 , wherein the signal of the first type comprises a reference signal claim 31 , and the signal of the second type comprises a data signal.33. 
The method according to claim 31 , wherein the sequence of signals comprises a sequence of bits contained in a subframe claim 31 , and wherein assigning the first priority to the signal of the first type and the second priority lower than the first priority to the signal of the second type comprises:buffering bits corresponding to the signal of the first type in a queue with the first priority;buffering bits corresponding to the signal of the second type in a queue with the second priority; andwherein transmitting the sequence of signals via the transport network to the third device according to the assigned first and second priorities comprises ...
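The priority assignment above (reference signals ahead of data signals, FIFO within each priority) maps naturally onto a priority queue. The signal labels and priority values below are illustrative, not from the disclosure:

```python
import heapq

# Hedged sketch: signals of the first type (reference) get priority 0,
# signals of the second type (data) get priority 1; transmission
# drains the heap in priority order, FIFO within a priority.

REFERENCE, DATA = "ref", "data"

def transmit_order(signals):
    """signals: list of (signal_type, payload) in arrival order.
    Returns payloads in the order they would be transmitted."""
    heap = []
    for seq, (sig_type, payload) in enumerate(signals):
        priority = 0 if sig_type == REFERENCE else 1
        heapq.heappush(heap, (priority, seq, payload))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```

The arrival sequence number breaks ties, so equal-priority signals keep their original order.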

More details
17-01-2019 publication date

SYSTEMS AND METHODS FOR LOCATING APPLICATION-SPECIFIC DATA ON A REMOTE ENDPOINT COMPUTER

Number: US20190020604A1
Assignee: MAGNET FORENSICS INC.

According to one aspect, a system for locating application-specific data includes a server, a broker, and an agent. An operator may define a command using the server, and this command may be sent to the broker. The broker may then send the command to the agent operating on an end-point system. The agent may then conduct an application-specific data search on the end-point system in respect of the user command. Search results may then be sent to the broker. The broker may then send the search results to the server. 1. A system for locating application-specific data , comprising:a server for defining a command by an operator;an agent operating on an end-point system for conducting an application-specific data search on the end-point system, the search defined by the command; and,a broker for receiving the command from the server and relaying the command to the agent.2. The system of claim 1 , wherein the agent produces a real-time search result from the search.3. The system of claim 2 , wherein the agent sends the search result to the broker when the agent has a viable communications channel with the broker.4. A method for dispatching a message from a local computer system to locate application-specific data on a remote computer system claim 2 , comprising:(a) receiving a message from a first remote computer system, the message addressed using a name of a second remote computer system;(b) storing the message in a buffer on the local computer system for subsequent retrieval by the second remote computer system;(c) receiving a request from the second computer to send a message addressed using the name of the second remote computer system;(d) sending a corresponding message in the buffer addressed with the name of the second remote computer system to the second remote computer system, the corresponding message corresponding to the request.5. The method of claim 4 , wherein the buffer is a queue.6. The method of claim 4 , wherein the name of the second remote ...

More details
21-01-2021 publication date

APPARATUS AND METHOD FOR PROCESSING FLUSH REQUESTS WITHIN A PACKET NETWORK

Number: US20210021544A1
Assignee:

An apparatus and method are provided for processing flush requests within a packet network. The apparatus comprises a requester device within the packet network arranged to receive a flush request generated by a remote agent requesting that one or more data items be flushed to a point of persistence. The requester device translates the flush request into a packet-based flush command conforming to a packet protocol of the packet network. A completer device within the packet network that is coupled to a persistence domain incorporating the point of persistence is arranged to detect receipt of the packet-based flush command, and then trigger a flush operation within the persistence domain to flush the one or more data items to the point of persistence. This provides a fast, hardware-based, mechanism for performing a flush operation within a persistence domain without needing to trigger software in the persistence domain to handle the flush to the point of persistence. 1. An apparatus comprising:a requester device within a packet network to receive a flush request generated by a remote agent requesting that one or more data items be flushed to a point of persistence, and to translate the flush request into a packet-based flush command conforming to a packet protocol of the packet network; anda completer device within the packet network that is coupled to a persistence domain incorporating the point of persistence, the completer device being arranged to detect receipt of the packet-based flush command and to trigger a flush operation within the persistence domain to flush said one or more data items to the point of persistence.2. An apparatus as claimed in claim 1 , wherein the packet-based flush command forms a native command of the packet network that is distinguished by devices of the packet network from other native commands routed through the packet network.3. 
An apparatus as claimed in claim 1 , wherein the packet protocol is the Peripheral Component Interconnect ...

More details
17-04-2014 publication date

Method and system for an OS virtualization-aware network interface card

Number: US20140108676A1
Author: Kan F. Fan
Assignee: Broadcom Corp

Aspects of a method and system for an operating system (OS) virtualization-aware network interface card (NIC) are provided. A NIC may provide direct I/O capabilities for each of a plurality of concurrent guest operating systems (GOSs) in a host system. The NIC may comprise a GOS queue for each of the GOSs, where each GOS queue may comprise a transmit (TX) queue, a receive (RX) queue, and an event queue. The NIC may communicate data with a GOS via a corresponding TX queue and RX queue. The NIC may notify a GOS of events such as down link, up link, packet transmission, and packet reception via the corresponding event queue. The NIC may also support unicast, broadcast, and/or multicast communication between GOSs. The NIC may also validate a buffered address when the address corresponds to one of the GOSs operating in the host system.

More details
26-01-2017 publication date

FRAME PROCESSING DEVICE AND FRAME PROCESSING METHOD

Number: US20170026290A1
Assignee: FUJITSU LIMITED

There is provided a frame processing device that includes a plurality of output ports; a table in which a destination address is stored in association with an output port; a buffer configured to store a learned frame, an un-learned frame, and a copy frame generated by copying the un-learned frame; a transfer unit configured to read a second frame from the buffer in an order in which the second frame is stored and transfer the second frame to a predetermined output port; a storage configured to store the destination address of the learned frame; and a controller configured to discard the second frame to be transferred by the transfer unit, when the second frame is the un-learned frame and the destination address of the second frame is stored in the storage, wherein the second frame transferred to the plurality of output ports is output as the first frame. 1. A frame processing device comprising:a plurality of output ports each from which a first frame having a destination address is output to a network;a table in which the destination address is stored in association with an output port of the plurality of output ports;a buffer configured to store a learned frame for which it is determined that the destination address is registered in the table, an un-learned frame for which it is determined that the destination address is not registered in the table, and a copy frame generated by copying the un-learned frame;a transfer unit configured to read a second frame from the buffer in an order in which the second frame is stored and transfer the second frame to a predetermined output port of the plurality of output ports, the second frame being one of the learned frame, the un-learned frame, or the copy frame;a storage configured to store the destination address of the learned frame; anda controller configured to discard the second frame to be transferred by the transfer unit, when the second frame is the un-learned frame and the destination address of the second frame is stored
...
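The discard rule above (a queued flood copy of an un-learned frame is dropped once its destination address has been learned) can be modelled with a small switch object. Field and method names are assumptions for illustration:

```python
from collections import deque

# Illustrative model: frame copies queued for flooding are discarded
# at transfer time if the destination has since been learned.

class FrameSwitch:
    def __init__(self):
        self.table = {}          # destination address -> output port
        self.learned = set()     # destinations already learned
        self.buffer = deque()

    def enqueue(self, dst, frame, copy=False):
        self.buffer.append((dst, frame, copy))

    def learn(self, dst, port):
        self.table[dst] = port
        self.learned.add(dst)

    def transfer(self):
        """Drain the buffer in stored order, discarding copies whose
        destination address is now in the learned storage."""
        out = []
        while self.buffer:
            dst, frame, copy = self.buffer.popleft()
            if copy and dst in self.learned:
                continue  # discard: a unicast path now exists
            out.append(frame)
        return out
```

This keeps the output ports from emitting stale flood copies after the unicast entry appears.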

More details
26-01-2017 publication date

STATELESS NETWORK FUNCTIONS

Number: US20170026301A1
Assignee:

Systems and methods are described for stateless network function virtualization. Embodiments operate in context of a data network, in which network middleboxes are virtualized, for example, for added scalability and fault tolerance. The virtualized middleboxes can implement state-reliant network functions in a manner that decouples the state from the packet processing pipeline, while maintaining reliability and throughput even at very high data rates. Various embodiments include novel packet processing pipeline architectures, novel thread coordination structures (e.g., including batching and buffer pool sub-structures), novel remote state data store structures, and/or other novel features. 1. A stateless network middlebox system comprising: a plurality of state request inputs;', 'a coordinated request output; and', 'a coordinated reply input; and, 'a thread coordination subsystem comprising a thread input;', 'a state request output responsive to the thread input and coupled with a respective one of the state request inputs;', 'a state data input responsive to the coordinated reply input; and', 'a thread output responsive to application of the thread input and the state response input to the associated state-reliant network function,, 'a plurality of parallel threads, each associated with a state-reliant network function, and each havingwherein the coordinated reply input is responsive to state data received from a remote state data store in response to the coordinated request output, and the coordinated request output is generated as a coordinated batching of the state request inputs.2. The system of claim 1 , wherein thread coordination subsystem further comprises:a request batcher having the coordinated request output and the plurality of state request inputs; anda buffer pool coupled with the request batcher and comprising pre-allocated memory to store state requests received via the state request inputs for coordinated batching.3. 
The system of claim 1 , further ...
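The coordinated batching of state requests described above can be sketched as a batcher that collects per-thread requests and flushes them to the remote state store as one coordinated request. The class, batch-size trigger, and field names are assumptions, not the disclosed design:

```python
# Rough sketch of coordinated batching toward a remote state store.

class RequestBatcher:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.pending = []            # state requests awaiting batching
        self.flushed = []            # batches "sent" to the state store

    def submit(self, thread_id, key):
        self.pending.append((thread_id, key))
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        """Emit all pending requests as one coordinated batch."""
        if self.pending:
            self.flushed.append(list(self.pending))
            self.pending.clear()
```

Batching amortizes the round trip to the remote store across many packet-processing threads, which is what keeps throughput high despite the decoupled state.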

More details
26-01-2017 publication date

STREAMING MEDIA DELIVERY SYSTEM

Number: US20170026434A1
Author: Price Harold Edward
Assignee:

Streaming media, such as audio or video files, is sent via the Internet. The media are immediately played on a user's computer. Audio/video data is transmitted from the server under control of a transport mechanism. A server buffer is prefilled with a predetermined amount of the audio/video data. When the transport mechanism causes data to be sent to the user's computer, it is sent more rapidly than it is played out by the user system. The audio/video data in the user buffer accumulates; and interruptions in playback as well as temporary modem delays are avoided. 1. A method for distributing a live audio or video program over the Internet from a server system to a plurality of user systems , the method comprising:receiving at the server system a continuous digitally encoded stream for the audio or video program, via a data connection from a live source, in real time, the server system comprising at least one computer; supplying, at the server system, media data elements representing the program, each media data element comprising a digitally encoded portion of the program and having a playback rate,', 'serially identifying the media data elements, and', 'storing the media data elements in a data structure under the control of the server system;, 'upon receipt of the stream by the server system,'}receiving requests at the server system via one or more data connections over the Internet, for one or more of the media data elements stored in the data structure, each received request specifying one or more serial identifiers of the requested one or more media data elements, each received request originating from a requesting user system of a plurality of user systems; and the data connection between the server system and each requesting user system has a data rate more rapid than the playback rate of the one or more media data elements sent via that connection;', 'each sending is at a transmission rate as fast as the data connection between the server system and each ...
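The buffering principle above (data sent faster than it is played out, so the user buffer accumulates and absorbs stalls) can be shown with a toy occupancy simulation. Rates, units, and the prefill amount are illustrative:

```python
# Toy simulation: when the send rate exceeds the playback rate, the
# user buffer grows each second; when sending stalls, it drains.

def buffer_level_over_time(send_rate, playback_rate, prefill, seconds):
    """Return the user-buffer occupancy after each second."""
    level, history = prefill, []
    for _ in range(seconds):
        level += send_rate - playback_rate   # net fill per second
        level = max(level, 0)                # buffer cannot go negative
        history.append(level)
    return history
```

With a 16-unit prefill, a 12-unit/s send rate against 8-unit/s playback grows the buffer steadily; a complete stall then takes two seconds to exhaust it, which is the headroom that hides temporary modem delays.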

More details
26-01-2017 publication date

STREAMING MEDIA DELIVERY SYSTEM

Number: US20170026435A1
Author: Price Harold Edward
Assignee:

Streaming media, such as audio or video files, is sent via the Internet. The media are immediately played on a user's computer. Audio/video data is transmitted from the server under control of a transport mechanism. A server buffer is prefilled with a predetermined amount of the audio/video data. When the transport mechanism causes data to be sent to the user's computer, it is sent more rapidly than it is played out by the user system. The audio/video data in the user buffer accumulates; and interruptions in playback as well as temporary modem delays are avoided. 1. A method for distributing over the Internet , from a server system to one or more user systems , a pre-recorded audio or video program stored in digitally encoded form on computer-readable media , the method comprising:reading, by at least one computer of the server system, the pre-recorded audio or video program from the computer-readable media;supplying, at the server system, media data elements representing the program, each media data element comprising a digitally encoded portion of the program and having a playback rate;serially identifying the media data elements;storing the media data elements in a data structure under the control of the server system;receiving requests at the server system via one or more data connections over the Internet, for one or more of the media data elements stored in the data structure, each received request specifying one or more serial identifiers of the requested one or more media data elements, each received request originating from a requesting user system of the one or more user systems; and the data connection between the server system and each requesting user system has a data rate more rapid than the playback rate of the one or more media data elements sent via that connection;', 'each sending is at a transmission rate as fast as the data connection between the server system and each requesting user system allow;', 'the one or more media data element sent are 
...

More details
26-01-2017 publication date

STREAMING MEDIA DELIVERY SYSTEM

Number: US20170026436A1
Author: Price Harold Edward
Assignee:

Streaming media, such as audio or video files, is sent via the Internet. The media are immediately played on a user's computer. Audio/video data is transmitted from the server under control of a transport mechanism. A server buffer is prefilled with a predetermined amount of the audio/video data. When the transport mechanism causes data to be sent to the user's computer, it is sent more rapidly than it is played out by the user system. The audio/video data in the user buffer accumulates; and interruptions in playback as well as temporary modem delays are avoided. 1. A method for operating a media player to receive and play an audio or video program , from a remote media source via a data connection over the Internet , the method comprising:sending requests from the media player to the media source via the data connection for one or more serially identified media data elements representing the audio or video program, each media data element comprising a digitally encoded portion of the audio or video program and having a playback rate, each request specifying one or more serial identifiers of the media data elements requested;receiving each of the requested media data elements via the data connection, wherein the data connection has a data rate more rapid than the playback rate of the media data elements, and each received media data element is received at a rate as fast as the data connection between the media source and the media player allows;storing the received media data elements in a memory of the media player;playing the received media data elements in series from the memory of the media player; andas the received media data elements are played, sending additional requests for subsequent media data elements for storage in the memory of the media player as required to maintain about a predetermined number of media data elements in the memory of the media player during playing.2. 
The method of claim 1 , further comprising maintaining in the memory of the media ...
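The client-side refill rule above (as each element is played, request enough subsequent serial identifiers to keep about a predetermined number buffered) can be sketched as follows. The class models an ideal network where every requested element arrives immediately; names are assumptions:

```python
from collections import deque

# Sketch of the player's buffer-maintenance rule. Requests are by
# serial identifier; arrival is assumed instantaneous for clarity.

class PlayerBuffer:
    def __init__(self, target):
        self.target = target         # predetermined element count
        self.buffered = deque()
        self.next_id = 0
        self.requested = []          # serial ids requested so far

    def refill(self):
        while len(self.buffered) < self.target:
            self.requested.append(self.next_id)   # request by serial id
            self.buffered.append(self.next_id)    # assume it arrives
            self.next_id += 1

    def play_one(self):
        played = self.buffered.popleft()
        self.refill()                # top the buffer back up to target
        return played
```

Each play triggers exactly the requests needed to hold the buffer at its target depth.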

More details
24-01-2019 publication date

DELAYED PROCESSING FOR ELECTRONIC DATA MESSAGES IN A DISTRIBUTED COMPUTER SYSTEM

Number: US20190026170A1
Assignee:

A distributed computer system is provided. The distributed computer system includes at least one sequencer computing node and at least one matcher computing node. Electronic data messages are sequenced by the sequencer and sent to at least one matcher computing node. The matcher computing node receives the electronic data messages and a reference value from an external computing source. New electronic data messages are put into a pending list before they can be acted upon by the matcher. A timer is started based on a comparison of the reference value (or a calculation based thereon) to at least one attribute or value of a new electronic data message. When the timer expires, the electronic data message is moved from the pending list to another list, where it is eligible to be matched against other, contra-side electronic data messages. 1. A distributed computer system comprising:electronic memory configured to store a data structure that includes at least two different types of data transaction requests, wherein a first type of data transaction requests is contra-sided to a second type of transaction requests;a transceiver configured to receive data transaction requests for processing by the distributed computer system; receive, via the transceiver, a first data transaction request that is of the first type of data transaction requests, the first data transaction request including a first value for a first parameter;', 'based on reception of the first data transaction request, set a status of the first data transaction request to a first status;', 'based on reception of the first data transaction request, perform a comparison that compares the first value of the first data transaction request to a second value;', 'based on the performed comparison, activate a timer that indicates how long the first data transaction request will remain in the first status;', 'during a time period in which the first data transaction request is set to the first status, perform a matching ...
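The pending-then-eligible flow above can be modelled with a simulated clock: a new request whose value triggers the comparison waits out a timer in the pending list, then moves to the eligible list when the timer expires. The comparison rule, delay, and field names are illustrative assumptions:

```python
# Simplified model of the timer-gated pending list described above.

class PendingBook:
    def __init__(self, reference_value, delay):
        self.reference_value = reference_value
        self.delay = delay
        self.pending = []     # (release_time, request) pairs
        self.active = []      # eligible for matching

    def submit(self, request, now):
        # Assumed rule: a request whose value meets or beats the
        # reference waits out a timer before becoming eligible.
        if request["value"] >= self.reference_value:
            self.pending.append((now + self.delay, request))
        else:
            self.active.append(request)

    def tick(self, now):
        """Move requests whose timers have expired to the active list."""
        still_pending = [(t, r) for t, r in self.pending if t > now]
        for t, r in self.pending:
            if t <= now:
                self.active.append(r)
        self.pending = still_pending
```

Only requests on the active list participate in contra-side matching; the timer delays, but never drops, a pending request.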

More details
25-01-2018 publication date

ADAPTIVE AND DYNAMIC QOS/QOE ENFORCEMENT

Number: US20180026896A1
Assignee:

Methods and apparatus, including computer program products, are provided for an application scheduler. Related apparatus, systems, methods, and articles are also described. 1. An apparatus comprising:an application scheduler configured to at least monitor user plane traffic for a start of an application session, instantiate a buffer for a detected application session, and configure at least one service parameter at the buffer in accordance with a quality of service parameter and/or a quality of experience parameter.2. The apparatus of claim 1 , wherein the buffer provides additional buffering for a detected bottleneck at another network node.3. The apparatus of claim 1 , wherein the application scheduler provides congestion control by increasing or decreasing a rate of a scheduler comprising the buffer.4. The apparatus of claim 3 , wherein the increase is an additive increase.5. The apparatus of claim 3 , wherein the decrease is a multiplicative decrease.6. The apparatus of claim 1 , wherein the application scheduler correlates uplink and downlink user plane traffic to enforce scheduling at the buffer in accordance with the quality of service parameter and/or the quality of experience parameter.7. A method comprising:monitoring, by an application scheduler, user plane traffic for a start of an application session;instantiating, by the application scheduler, a buffer for a detected application session; andconfiguring, by the application scheduler, at least one service parameter at the buffer in accordance with a quality of service parameter and/or a quality of experience parameter.8. The method of claim 7 , wherein the buffer provides additional buffering for a detected bottleneck at another network node.9. The method of further comprising:controlling, by the application scheduler, congestion by increasing or decreasing a rate of a scheduler comprising the buffer.10. The method of claim 9 , wherein the increase is an additive increase.11.
The method of claim 9 , wherein ...
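The additive-increase, multiplicative-decrease rate control referenced in the claims above is a standard pattern and can be shown in one function. The step size, decrease factor, and floor are illustrative constants, not values from the disclosure:

```python
# Minimal AIMD (additive increase, multiplicative decrease) sketch of
# the scheduler rate control described above.

def aimd_step(rate, congested, increase=1.0, decrease_factor=0.5,
              floor=1.0):
    """Additive increase while the path is clear; multiplicative
    decrease, bounded below by `floor`, when congestion is detected."""
    if congested:
        return max(rate * decrease_factor, floor)
    return rate + increase
```

Applied per scheduling interval, this yields the familiar sawtooth: slow linear probing upward, sharp backoff on congestion.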

More details
25-01-2018 publication date

Packet buffering

Number: US20180026902A1
Author: Xiaohu Tang, Zhuxun Wang

A first device as a buffer server in an Ethernet transmits a first buffer client querying packet from a port of enabling a distributed buffer function of the first device, receives a first buffer client registering packet from a second device through the port, and adds the second device into a distributed buffer group of the port. When the first device detects that a sum of sizes of packets entering the port and not transmitted reaches a preset first flow-splitting threshold in a first preset time period, the first device forwards a packet entering the port and not transmitted to a buffer client selected from the distributed buffer group of the port.
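The flow-splitting decision above (once the untransmitted backlog on a port reaches the threshold, forward new packets to a buffer client from the port's distributed buffer group) can be sketched as follows. The round-robin client selection and all names are assumptions:

```python
from itertools import cycle

# Sketch of the buffer server's per-port flow-splitting decision.

class BufferServerPort:
    def __init__(self, threshold, clients):
        self.threshold = threshold       # first flow-splitting threshold
        self.backlog = 0                 # bytes queued, not yet sent
        self.clients = cycle(clients)    # distributed buffer group

    def on_packet(self, size):
        """Return 'local' if the packet is kept, or the buffer client
        it is offloaded to when the threshold would be reached."""
        if self.backlog + size >= self.threshold:
            return next(self.clients)    # split flow to a buffer client
        self.backlog += size
        return "local"
```

Packets below the threshold stay on the congested port; overflow is spread across the registered clients.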

More details
25-01-2018 publication date

MULTI-PROCESSOR COMPUTING SYSTEMS

Number: US20180026916A1

A multi-processor computing system comprising a second processing device to generate outgoing data packets and comprising a second network stack to save the outgoing data packets in a second outgoing packet buffer of the second processing device. A second network driver to save an outgoing buffer pointer in a second transmission ring of the second processing device, the outgoing buffer pointer corresponding to the second outgoing packet buffer. A first processing device comprising a first network driver to move the outgoing buffer pointer from the second transmission ring to a send ring in the first processing device. A network interface controller (NIC) to obtain the outgoing buffer pointer from the send ring. The NIC to copy the outgoing data packets from the second outgoing packet buffer to a transmission queue of the NIC. The NIC to transmit the outgoing data packets to another computing system over a communication network. 1. A multi-processor computing system comprising: a second network stack to save the outgoing data packets in a second outgoing packet buffer of the second processing device; and', 'a second network driver to save an outgoing buffer pointer in a second transmission ring of the second processing device, the outgoing buffer pointer corresponding to the second outgoing packet buffer;, 'a second processing device to generate outgoing data packets and comprisinga first processing device communicatively coupled to the second processing device, the first processing device comprising a first network driver to move the outgoing buffer pointer from the second transmission ring to a send ring in the first processing device; and obtain the outgoing buffer pointer from the send ring;', 'copy, using the outgoing buffer pointer, the outgoing data packets from the second outgoing packet buffer to a transmission queue of the NIC; and', 'transmit the outgoing data packets to another computing system over a communication network., 'a network interface ...
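The pointer hand-off above (second CPU saves a buffer pointer in its transmission ring, the first CPU's driver moves it to the send ring, and the NIC dereferences it to copy the packet into its transmit queue) can be modelled with plain queues. Dicts stand in for DMA-visible buffers; all names are illustrative:

```python
from collections import deque

# Toy model of the buffer-pointer hand-off between processing devices.

packet_buffers = {}          # buffer id -> packet bytes ("DMA memory")

def stack_save(buf_id, data, tx_ring):
    """Second CPU: save the packet, then its pointer, in the TX ring."""
    packet_buffers[buf_id] = data
    tx_ring.append(buf_id)

def move_to_send_ring(tx_ring, send_ring):
    """First CPU's driver: move pointers from the TX ring to the
    send ring without touching the packet data."""
    while tx_ring:
        send_ring.append(tx_ring.popleft())

def nic_transmit(send_ring):
    """NIC: dereference each pointer and copy the packet out."""
    out = []
    while send_ring:
        out.append(packet_buffers[send_ring.popleft()])
    return out
```

Only pointers cross between the processors; the packet payload is copied once, by the NIC.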

More details
29-01-2015 publication date

NETWORK INTERFACE FOR REDUCED HOST SLEEP INTERRUPTION

Number: US20150029915A1
Assignee:

Systems and techniques for reduced host sleep interruption are described herein. A first packet received via a receive chain may be placed into a buffer. The first packet may be of a first preliminary type. The first packet may be processed from the buffer without communication with the host machine. The first packet may also be of a first secondary type. Processing the first packet may include an operation chosen from the group of dropping the packet and responding to the packet. A second packet received via the receive chain may be placed into the buffer. The second packet may be of a first preliminary type and a second secondary type. The second packet may be communicated from the buffer to the machine. A third packet received via the receive chain may be communicated to the machine. The third packet may be of a second preliminary type. 130-. (canceled)31. A network interface device for reduced host sleep interruption , the network interface device comprising:a buffer;a first module configured to determine that a packet received via a wireless network does not need to be immediately processed by a sleeping machine and storing the packet in the buffer; anda second module configured to determine, during a period of predetermined receive inactivity, that the packet stored in the buffer can be processed by the network interface device without waking the machine and processing the packet without waking the machine, wherein the period of predetermined receive inactivity is a power save mode period.32. The network interface device of comprising a smart first-in first-out device claim 31 , known as a S-FIFO claim 31 , the S-FIFO configured to buffer one or more packets received from the wireless network until a predetermined condition is met prior to releasing the one or more packets to the first module; andwherein the first module is configured to determine that a second packet in the one or more packets needs to be immediately processed by the sleeping machine and ...
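The triage above distinguishes three outcomes: a packet the interface can answer or drop itself, a packet buffered for the host until it wakes, and a packet that must wake the host immediately. A minimal sketch, with the type encodings as stand-ins for whatever classification the interface applies:

```python
# Hedged sketch of packet triage while the host machine sleeps.

def handle_packet(pkt, buffer, woken):
    """Return the action a NIC takes for a packet while the host
    sleeps; `buffer` and `woken` record the side effects."""
    if pkt["preliminary"] == 1:
        if pkt["secondary"] == 1:
            return "answered-by-nic"    # drop or respond locally
        buffer.append(pkt)              # hold for the host
        return "buffered"
    woken.append(pkt)                   # needs the host now
    return "host-woken"
```

Only the third class interrupts the host's sleep, which is the point of the design.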

More details
29-01-2015 publication date

VOICE COMMUNICATION METHOD AND APPARATUS AND METHOD AND APPARATUS FOR OPERATING JITTER BUFFER

Number: US20150030017A1

Voice communication method and apparatus and method and apparatus for operating jitter buffer are described. Audio blocks are acquired in sequence. Each of the audio blocks includes one or more audio frames. Voice activity detection is performed on the audio blocks. In response to deciding voice onset for a present one of the audio blocks, a subsequence of the sequence of the acquired audio blocks is retrieved. The subsequence precedes the present audio block immediately. The subsequence has a predetermined length and non-voice is decided for each audio block in the subsequence. The present audio block and the audio blocks in the subsequence are transmitted to a receiving party. The audio blocks in the subsequence are identified as reprocessed audio blocks. In response to deciding non-voice for the present audio block, the present audio block is cached. 120-. (canceled)21. A method of performing voice communication based on voice activity detection , comprising:acquiring audio blocks in sequence, wherein each of the audio blocks includes one or more audio frames;performing voice activity detection on the audio blocks; and retrieving a subsequence of the sequence of the acquired audio blocks, including a number of audio blocks which precede the present audio block immediately, wherein the subsequence has a predetermined length and non-voice is decided for each audio block in the subsequence; and', 'transmitting the present audio block and the audio blocks in the subsequence to a receiving party, wherein the audio blocks in the subsequence are identified as reprocessed audio blocks to inform the receiving party that these audio blocks are different from the present audio block and reprocessed as including voice; and, 'in response to deciding voice onset for a present one of the audio blocks,'}in response to deciding non-voice for the present audio block, caching the present audio block.22. The method according to claim 21 , wherein before the step of transmitting ...
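The onset handling above (cache non-voice blocks; on voice onset, transmit the immediately preceding cached subsequence flagged as reprocessed, then the voiced block) can be sketched in a few lines. The look-back length `K` and the flag labels are assumptions:

```python
from collections import deque

# Illustrative sketch of VAD onset handling with a cached look-back.

K = 2  # predetermined length of the look-back subsequence (assumed)

def process_block(block, is_voice, cache, sent):
    if is_voice:
        # Re-send the most recent cached non-voice blocks, identified
        # as reprocessed, ahead of the voiced block.
        for cached in list(cache)[-K:]:
            sent.append((cached, "reprocessed"))
        sent.append((block, "voice"))
        cache.clear()
    else:
        cache.append(block)   # non-voice: cache instead of sending
```

Re-sending the look-back preserves the soft attack of speech that a hard VAD gate would otherwise clip.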

More details
10-02-2022 publication date

NON-DISRUPTIVE IMPLEMENTATION OF POLICY CONFIGURATION CHANGES

Number: US20220045907A1
Assignee:

Techniques for non-disruptive configuration changes are provided. A packet is received at a network device, and the packet is buffered in a common pool shared by a first processing pipeline and a second processing pipeline, where the first processing pipeline corresponds to a first policy and the second processing pipeline corresponds to a second policy. A first copy of a packet descriptor for the packet is queued in a first scheduler based on processing the first copy of the packet descriptor with the first processing pipeline. A second copy of the packet descriptor is queued in a second scheduler based on processing the second copy of the packet descriptor with the second processing pipeline. Upon determining that the first policy is currently active on the network device, the first copy of the packet descriptor is dequeued from the first scheduler.

1. A method, comprising:
receiving, at a network device, a packet;
buffering the packet in a common pool shared by a first processing pipeline and a second processing pipeline, wherein the first processing pipeline corresponds to a first policy and the second processing pipeline corresponds to a second policy;
queueing a first copy of a packet descriptor for the packet in a first scheduler based on processing the first copy of the packet descriptor with the first processing pipeline;
queueing a second copy of the packet descriptor in a second scheduler based on processing the second copy of the packet descriptor with the second processing pipeline; and
upon determining that the first policy is currently active on the network device, dequeueing the first copy of the packet descriptor from the first scheduler.

2. The method of claim 1, further comprising:
retrieving the packet from the common pool, based on the first copy of the packet descriptor; and
processing the packet based on the first policy.

3. The method of claim 1, further comprising:
receiving an instruction to activate the second policy on the ...
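The descriptor-duplication scheme in claim 1 can be sketched as a small in-memory model. This is a minimal Python sketch, assuming deques stand in for hardware schedulers; the class and method names (`DualPolicyDevice`, `receive`, `activate`, `dequeue`) are illustrative, not from the patent:

```python
from collections import deque


class DualPolicyDevice:
    """Sketch of the claimed scheme: one shared packet pool, two
    per-policy schedulers holding descriptor copies; only the scheduler
    of the currently active policy is drained, so the standby policy can
    be programmed without disrupting buffered traffic."""

    def __init__(self):
        self.pool = {}                               # common packet pool
        self.schedulers = {1: deque(), 2: deque()}   # per-policy queues
        self.active_policy = 1
        self._next_desc = 0

    def receive(self, packet):
        # Buffer the packet once in the common pool...
        desc = self._next_desc
        self._next_desc += 1
        self.pool[desc] = packet
        # ...and queue one descriptor copy per processing pipeline.
        self.schedulers[1].append(desc)
        self.schedulers[2].append(desc)

    def activate(self, policy):
        # A policy change only switches which scheduler is drained.
        self.active_policy = policy

    def dequeue(self):
        # Drain the active policy's scheduler, drop the mirrored copy
        # from the standby scheduler, and fetch the packet from the pool.
        desc = self.schedulers[self.active_policy].popleft()
        for policy, queue in self.schedulers.items():
            if policy != self.active_policy and desc in queue:
                queue.remove(desc)
        return self.pool.pop(desc)
```

Because both schedulers reference the same buffered packet, activating the second policy changes only which queue is drained; packets already in the common pool are still delivered, which is the non-disruptive property the abstract describes.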

More details