Total found: 8661. Displayed: 100.
02-02-2012 publication date

Backplane Interface Adapter

Number: US20120026868A1
Assignee: Foundry Networks LLC

A backplane interface adapter for a network switch. The backplane interface adapter includes at least one receiver that receives input cells carrying packets of data; at least one cell generator that generates encoded cells which include the packets of data from the input cells; and at least one transmitter that transmits the generated cells to a switching fabric. Each generated cell includes a destination slot identifier that identifies the slot of the switching fabric toward which the respective input cell is being sent. The generated cells also include in-band control information.

02-02-2012 publication date

Maintaining packet order using hash-based linked-list queues

Number: US20120027019A1
Assignee: Juniper Networks Inc

Ordering logic ensures that data items being processed by a number of parallel processing units are unloaded from the processing units in the original per-flow order that the data items were loaded into the parallel processing units. The ordering logic includes a pointer memory, a tail vector, and a head vector. Through these three elements, the ordering logic keeps track of a number of “virtual queues” corresponding to the data flows. A round robin arbiter unloads data items from the processing units only when a data item is at the head of its virtual queue.
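The virtual-queue bookkeeping described above (a pointer memory plus head and tail vectors per flow) can be sketched in a few lines. This is an illustrative toy model, not Juniper's hardware design; all names are invented, and Python dictionaries stand in for the pointer memory and the head/tail vectors.

```python
class VirtualQueues:
    """Toy model of per-flow 'virtual queues': items may finish processing
    out of order, but are unloaded in original per-flow arrival order."""

    def __init__(self):
        self.next_ptr = {}  # pointer memory: item -> next item in same flow
        self.head = {}      # flow -> oldest not-yet-unloaded item
        self.tail = {}      # flow -> newest item
        self.done = set()   # items whose processing has completed

    def load(self, flow, item):
        # Append the item to the tail of the flow's virtual queue.
        if flow in self.tail:
            self.next_ptr[self.tail[flow]] = item
        else:
            self.head[flow] = item
        self.tail[flow] = item

    def finish(self, item):
        # A parallel processing unit reports the item as processed.
        self.done.add(item)

    def unload(self, flow):
        # Arbiter rule: unload only the item at the head of its virtual
        # queue, and only once that item has finished processing.
        item = self.head.get(flow)
        if item is None or item not in self.done:
            return None
        self.done.discard(item)
        nxt = self.next_ptr.pop(item, None)
        if nxt is None:
            del self.head[flow], self.tail[flow]
        else:
            self.head[flow] = nxt
        return item
```

Even though item "b" finishes first, the arbiter refuses to release it until "a", loaded earlier on the same flow, has been unloaded.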

09-02-2012 publication date

Crossbar switch and recursive scheduling

Number: US20120033662A1
Author: Tadeusz H. Szymanski
Assignee: Individual

A crossbar switch has N input ports, M output ports, and a switching matrix with N×M crosspoints. In an embodiment, each crosspoint contains an internal queue, which can store one or more packets to be routed. Traffic rates to be realized between all Input/Output (IO) pairs of the switch are specified in an N×M traffic rate matrix, where each element equals a number of requested cell transmission opportunities between each IO pair within a scheduling frame of F time-slots. An efficient algorithm for scheduling N traffic flows, with traffic rates based upon a recursive and fair decomposition of a traffic rate vector with N elements, is proposed. To reduce memory requirements, a shared row queue (SRQ) may be embedded in each row of the switching matrix, allowing the size of all the crosspoint queues (XQs) to be reduced. To further reduce memory requirements, a shared column queue may be used in place of the XQs. The proposed buffered crossbar switches with shared row and column queues, in conjunction with the row scheduling algorithm and the DCS column scheduling algorithm, can achieve high throughput with reduced buffer and VLSI area requirements, while providing probabilistic guarantees on rate, delay and jitter for scheduled traffic flows.

16-02-2012 publication date

Traffic Management In A Multi-Channel System

Number: US20120039173A1
Assignee: Broadcom Corp

A method, system and computer program product in a downstream line card of a Cable Modem Termination System (CMTS) for managing downstream traffic for channels and bonded channel groups is provided herein. The method comprises the step of receiving packets for transmission to cable modems and classifying each packet to a flow based on the class of service associated with the packet. The method further includes the step of storing the packets in flow queues, wherein a flow queue is selected based on the flow a packet is associated with, and wherein each flow corresponds to a single flow queue. The method also includes transmitting the packets from the flow queues to channel queues or bonded channel queues, using corresponding channel nodes or bonded channel nodes, at a rate that is determined based on feedback data, and scheduling downstream transmission of packets on a single downstream channel if the packet is stored in a channel queue, or on multiple downstream channels that are bonded together to form a bonded channel group if the packet is stored in a bonded channel queue. The feedback data is adjusted for each channel node or bonded channel node based on a queue depth for a corresponding channel queue or bonded channel queue.

19-04-2012 publication date

Virtual switching ports on high-bandwidth links

Number: US20120093034A1
Assignee: International Business Machines Corp

Method and apparatus for managing traffic of a switch include logically partitioning a physical port of the switch into a plurality of virtual ports. One or more virtual output queues are uniquely associated with each virtual port. Switching resources of the switch are assigned to each of the virtual ports. A source virtual port is derived from a frame arriving at the physical port. The frame is placed in a given one of the one or more virtual output queues uniquely associated with the source virtual port derived from the frame. A destination virtual port for the frame is determined. The frame is transferred from the virtual output queue in which the frame is placed to an egress queue associated with the destination virtual port and forwarded from the egress queue to a destination physical port of the switch.

02-08-2012 publication date

Method and Apparatus for Achieving Fairness in Interconnect Using Age-Based Arbitration and Timestamping

Number: US20120195322A1
Assignee: FutureWei Technologies Inc

An apparatus comprising a chip comprising a plurality of nodes, wherein a first node from among the plurality of nodes is configured to receive a first flit comprising a first timestamp, receive a second flit comprising a second timestamp, determine whether the first flit is older than the second flit based on the first timestamp and the second timestamp, transmit the first flit before the second flit if the first flit is older than the second flit, and transmit the second flit before the first flit if the first flit is not older than the second flit.
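The timestamp comparison is simple to state precisely. A minimal sketch, with flits modeled as dictionaries (an assumption for illustration; the patent describes on-chip hardware):

```python
def arbitrate(flit_a, flit_b):
    """Age-based arbitration: transmit the older flit first, judged by
    timestamp (smaller = older). On a tie the first flit is 'not older',
    so the second flit goes first, mirroring the abstract's rule."""
    if flit_a["ts"] < flit_b["ts"]:
        return [flit_a, flit_b]
    return [flit_b, flit_a]
```

For example, a flit stamped at cycle 3 is transmitted ahead of one stamped at cycle 5, regardless of arrival order at the node.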

04-10-2012 publication date

Method and system for cooperative transmission in wireless multi-hop networks

Number: US20120250494A1
Assignee: University of Maryland at Baltimore

User cooperation in wireless networks implemented at the Network Protocol layer attains a higher stable throughput and improved transmission delay. The cooperation is designed between a set of source user nodes transmitting to a common destination, where users whose channels provide a higher successful delivery probability relay, in addition to their own traffic, packets of other source users whose transmissions to the destination fail. Each source user node is provided with an ample queue buffer having capacity to accumulate packets inadvertently received from other users in the system in addition to its own packets. A ranking mechanism facilitates determining the "quality" of wireless channels, and an acknowledgement mechanism facilitates coordination of the transmissions in the system. The nodes exchange information on queue status, and a scheduling controller decides the priority of transmission.

13-12-2012 publication date

Communication system and communication apparatus

Number: US20120314579A1
Assignee: Fujitsu Ltd

A communication system includes a first communication apparatus including one or more first processors that determine a first bandwidth variance for each flow, based on a requested bandwidth variance amount and a surplus bandwidth of a physical line, and a first transmitter that transmits the first bandwidth variance to an adjacent apparatus; and a second communication apparatus including one or more second processors that set the received first bandwidth variance as a requested bandwidth variance amount for the second communication apparatus and determine a second bandwidth variance for each flow from the first bandwidth variance and the surplus bandwidth, and a second transmitter that transmits the second bandwidth variance to an adjacent apparatus.
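One plausible reading of the hop-by-hop scheme is that each apparatus can grant at most its own surplus bandwidth, and the value it grants becomes the requested amount at the next hop. A hedged sketch under that assumption only (function names and the min-capping rule are invented, not taken from the patent):

```python
def cap_by_surplus(requested_delta, surplus):
    # A hop can grant no more extra bandwidth than its line has spare.
    return min(requested_delta, surplus)

def propagate(requested_delta, surpluses):
    """Pass a requested bandwidth change hop by hop along adjacent
    apparatuses; each hop treats the value granted upstream as its own
    requested bandwidth variance amount."""
    granted = requested_delta
    for surplus in surpluses:
        granted = cap_by_surplus(granted, surplus)
    return granted
```

Under this reading, the flow ends up with the minimum of the request and every surplus along the chain.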

14-02-2013 publication date

Identifiers in a Communication System

Number: US20130039310A1
Assignee: Vringo Infrastructure Inc

A process and system for controlling selection of which mobile station (MS) is to receive the next packet data transmission on a forward channel, and selection of which of plural modulation and coding scheme (MCS) methods is to be used for the packet data transmissions on the forward channel. A process for controlling selection of the MCS method to be used by a base transceiver station (BTS) to transmit data packets over a forward shared channel to an MS stores information at the BTS, the information containing MCS methods which may be selected to transmit data packets over the forward shared channel to the MS; receives from the MS at the BTS a quality indication of transmission of data packets over the forward channel to the MS; and selects an MCS method, from a plurality of MCS methods which may be used to transmit data packets on the forward channel, dependent upon the received quality indication.

07-03-2013 publication date

User-controlled download duration time

Number: US20130060904A1
Author: Shmuel Ur
Assignee: Individual

A method, apparatus and computer program product useful for communicating media content, over a computerized network, in accordance with download duration time. One exemplary method may comprise obtaining a download duration time from a client with respect to a media content; determining a quality of the media content so as to be provided within the download duration time; and transmitting to the client a version of the media content having the quality. Another exemplary embodiment may be a computer program product for enabling a user of a client to select download duration time, the computer program product comprising: program code stored on a non-transitory computer readable medium; wherein the program code is operative to display a Graphical User Interface (GUI) widget on a display of the client, wherein the GUI widget comprises: a download duration time input component, wherein the GUI widget is operative to provide the download duration time to a content delivery apparatus for providing a media content within the download duration time.
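The first exemplary method reduces to a small calculation: given a bandwidth estimate and the sizes of the available versions, pick the best quality whose download fits the user's time budget. A sketch under assumed inputs (the patent does not prescribe this data layout or selection rule):

```python
def pick_quality(versions, bandwidth_bps, max_duration_s):
    """versions: list of (quality_label, size_bits), best quality first.
    Return the best-quality version whose estimated download time,
    size / bandwidth, fits within the requested download duration."""
    for label, size_bits in versions:
        if size_bits / bandwidth_bps <= max_duration_s:
            return label
    return None  # no version fits the time budget
```

With a 1 Mbit/s link and a 5-second budget, an 8 Mbit 1080p file (8 s) is skipped in favor of a 4 Mbit 720p file (4 s).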

02-05-2013 publication date

Packet traffic control in a network processor

Number: US20130107711A1
Assignee: Cavium LLC

A network processor controls packet traffic in a network by maintaining a count of pending packets. In the network processor, a pipe identifier (ID) is assigned to each of a number of paths connecting a packet output to respective network interfaces receiving those packets. A corresponding pipe ID is attached to each packet as it is transmitted. A counter employs the pipe ID to maintain a count of packets to be transmitted by a network interface. As a result, the network processor manages traffic on a per-pipe ID basis to ensure that traffic thresholds are not exceeded.
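The per-pipe-ID accounting can be sketched as a counter map with a threshold gate. This is an illustrative model, not Cavium's hardware; the class name and the single shared limit are assumptions.

```python
class PipeTraffic:
    """Per-pipe-ID count of in-flight packets, with a threshold gate
    so that per-pipe traffic limits are not exceeded."""

    def __init__(self, limit):
        self.limit = limit
        self.pending = {}  # pipe ID -> packets sent but not yet drained

    def try_send(self, pipe_id):
        n = self.pending.get(pipe_id, 0)
        if n >= self.limit:
            return False  # threshold reached: hold traffic on this pipe
        self.pending[pipe_id] = n + 1
        return True

    def drained(self, pipe_id):
        # The network interface consumed one packet on this pipe.
        self.pending[pipe_id] -= 1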

16-05-2013 publication date

MULTI-BANK QUEUING ARCHITECTURE FOR HIGHER BANDWIDTH ON-CHIP MEMORY BUFFER

Number: US20130121341A1
Assignee: JUNIPER NETWORKS, INC.

A network device includes a main storage memory and a queue handling component. The main storage memory includes multiple memory banks which store a plurality of packets for multiple output queues. The queue handling component controls write operations to the multiple memory banks and controls read operations from the multiple memory banks, where the read operations for at least one of the multiple output queues alternate sequentially between each of the multiple memory banks, and where the read operations and the write operations occur during a same clock period on different ones of the multiple memory banks.

1-20. (canceled)

21. A method comprising:
performing, by a network device, a write operation to move a first packet to a first bank of a memory;
selecting, by the network device and while performing at least a portion of the write operation, an output queue to perform a read operation to remove a second packet from a second bank of the memory;
receiving, by the network device, a bank indicator for the output queue;
reading, by the network device, information identifying a pointer memory based on the bank indicator;
retrieving, by the network device and from a data structure, an address of the second packet based on the bank indicator;
providing, by the network device and to the memory, the address of the second packet;
writing, by the network device and to the pointer memory, an updated pointer for the data structure; and
forwarding, by the network device, the second packet.

22. The method of claim 21, where performing the write operation includes:
receiving a different bank indicator for the write operation to the first bank,
writing, based on the different bank indicator and to the data structure, a free address for the first packet, and
assigning the free address to the first packet.

23. The method of claim 21, where the write operation and the read operation are performed on a single chip.

24. The method of claim 21, where the first bank includes a first single- ...

23-05-2013 publication date

WIRELESS NETWORK COMMUNICATION SYSTEM AND METHOD

Number: US20130128814A1

A communication system comprising one or more wireless stations programmed to wait for an authorizing signal before initiating wireless communications with a network controller or access point. The network controller maintains identification information in different queues, said queues based upon the anticipated wireless station activity. The wireless station identification information is moved between the different queues in response to this predicted activity. Between polls, each mobile station aggregates data for the next opportunity to transmit. Multi-polling may be employed such that more than a single station is polled at a time. Polling is accomplished by polling one of the more active stations along with a less active station. The less active station is unlikely to transmit, so collisions are avoided to a certain degree. If a less active station becomes active, it is moved into the more active queue and consequently will be polled more often.

1. A system comprising:
a network controller, said controller operable to:
communicate with one or more wireless stations, said stations operative to wait for an authorizing signal before initiating wireless communications;
maintain at least two queues, each of said queues including information associated with one of said wireless stations;
determine a likelihood that a wireless station will have traffic to transmit; and
assign a wireless station to a queue in response to said determination.

2. The system of claim 1, wherein the network controller is an access point.

3. The system of claim 1, wherein the network controller is further operable to poll two or more wireless stations at substantially the same time.

4. The system of claim 1, wherein the likelihood that a wireless station will have traffic to transmit is determined by historical transmission volume.

5. The system of claim 1, wherein the likelihood that a wireless station will have traffic to transmit is determined by a rolling average of recent activity over a predetermined amount of time.

6. The system of ...

27-06-2013 publication date

Packet transport system and traffic management method thereof

Number: US20130163418A1
Author: Won Kyoung Lee

A method of managing traffic of a packet transport system according to some embodiments of the inventive concept may include calculating an average queue size of input traffic with reference to a link capacity, and applying different allowable lengths and drop probabilities with respect to the calculated average queue size, according to marking information of the packets of the input traffic. The input traffic includes a CCM packet for OAM.
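A hedged sketch of the two ingredients, an averaged queue size and marking-dependent drop parameters, using an exponentially weighted moving average and a RED-style linear drop curve as stand-ins for the patent's unspecified functions (all names and the curve shape are assumptions):

```python
def ewma(avg, sample, weight=0.2):
    # Exponentially weighted moving average of the instantaneous queue size.
    return (1 - weight) * avg + weight * sample

def drop_probability(avg_queue, profile):
    """profile: (min_th, max_th, max_p), chosen by the packet's marking
    (e.g. green/yellow/red), so each marking gets its own allowable
    length and drop probability. RED-style linear ramp between thresholds."""
    min_th, max_th, max_p = profile
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

A "green" marking might use a profile of (20, 40, 0.1) while a "red" marking uses tighter thresholds, so low-priority packets are dropped earlier as the averaged queue grows.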

04-07-2013 publication date

ROUTING METHOD AND NODE EQUIPMENT

Number: US20130170504A1
Assignee: FUJITSU LIMITED

A routing method performed by node equipment includes: receiving a first frame including a wait number, incrementing the wait number, and storing the incremented wait number as a local wait number; receiving a second frame including a wait number of a destination node equipment, and comparing the wait number in the second frame and the local wait number; transmitting the second frame to an adjacent node equipment having a larger wait number than the local wait number, when the wait number in the second frame is larger than the local wait number; and returning the second frame to a source node equipment of the second frame, when the wait number in the second frame is larger than the local wait number but there is no adjacent node equipment having a larger wait number than the local wait number.

1. A routing method performed by node equipment in a network including a plurality of node equipments, the method comprising:
receiving a first frame including a wait number, incrementing the wait number, and storing the incremented wait number as a local wait number;
transmitting the first frame including the local wait number;
receiving a second frame including a wait number of a destination node equipment, and comparing the wait number in the second frame and the local wait number;
transmitting the second frame to an adjacent node equipment having a larger wait number than the local wait number, when the wait number in the second frame is larger than the local wait number;
returning the second frame to a source node equipment of the second frame, when the wait number in the second frame is larger than the local wait number but there is no adjacent node equipment having a larger wait number than the local wait number;
transmitting the second frame to an adjacent node equipment having a smaller wait number than the local wait number, when the wait number in the second frame is smaller than the local wait number; and
returning the second frame to the source node equipment of the ...

15-08-2013 publication date

Scheduling distribution of logical forwarding plane data

Number: US20130212243A1
Assignee: Nicira Inc

A controller for managing several managed switching elements that forward data in a network is described. The controller includes an interface for receiving input logical control plane data in terms of input events data. The controller includes a converter for converting the input logical control plane data to output logical forwarding plane data by processing the input events data. The logical forwarding plane data is for subsequent translation into physical control plane data. The controller includes an input scheduler for (1) categorizing the input events data into different groups based on certain criteria and (2) supplying the input events data into the converter in a manner that each different group of input events data is processed separately by the converter.
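The input scheduler's behavior, categorizing events into groups and handing each group to the converter separately, can be sketched as a simple batching step. The function names, the string-key criterion, and the sorted group order are assumptions for illustration, not Nicira's design.

```python
def schedule_inputs(events, classify):
    """Categorize input events data into groups by a criterion, then
    emit one batch per group so the converter processes each group
    separately rather than interleaved."""
    buckets = {}
    for ev in events:
        buckets.setdefault(classify(ev), []).append(ev)
    # Supply groups one at a time (sorted order is arbitrary but stable).
    return [(key, buckets[key]) for key in sorted(buckets)]
```

Events arriving interleaved ("port_up", "acl_add", "port_down") come out as one "acl" batch and one "port" batch, each preserving arrival order within the group.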

15-08-2013 publication date

Communication channel for distributed network control system

Number: US20130212244A1
Assignee: Nicira Inc

For a particular controller for managing managed forwarding elements that forward data in a network, a method for computing forwarding state using a set of inputs from a first controller and a second controller that is a backup controller for the first controller is described. The method receives a first subset of the set of inputs from the first controller. After failure of the first controller, the method receives a second subset of the set of inputs from the second controller. At least one input of the second subset is duplicative of an input in the first subset. The method computes forwarding state using the first and second subsets of the inputs but without using the duplicative input.

22-08-2013 publication date

SCHEDULING DISTRIBUTION OF PHYSICAL CONTROL PLANE DATA

Number: US20130219037A1
Assignee: NICIRA, INC.

A controller for managing several managed switching elements that forward data in a network is described. The controller includes an interface for receiving input logical forwarding plane data in terms of input events data. The controller includes a converter for converting the input logical forwarding plane data to output physical control plane data by processing the input events data. The physical control plane data is for subsequent translation into physical forwarding plane data. The controller includes an input scheduler for (1) categorizing the input events data into different groups based on certain criteria and (2) supplying the input events data into the converter in a manner that each different group of input events data is processed separately by the converter.

1. A controller for managing a plurality of managed switching elements that forward data in a network, the controller comprising:
an interface for receiving input logical forwarding plane data in terms of input events data;
a converter for converting the input logical forwarding plane data to output physical control plane data by processing the input events data, said physical control plane data for subsequent translation into physical forwarding plane data; and
an input scheduler for (i) categorizing the input events data into different groups based on certain criteria and (ii) supplying the input events data into the converter in a manner that each different group of input events data is processed separately by the converter.

2. The controller of claim 1, wherein a set of managed switching elements perform the translation of the physical control plane data to the physical forwarding plane data.

3. The controller of claim 1, wherein the input logical forwarding plane data are at least partially supplied by a controller that translates logical control plane data to logical forwarding plane data.

4. The controller of claim 1, wherein the certain criteria comprise whether data for an input event is ...

29-08-2013 publication date

PACKET SPRAYING FOR LOAD BALANCING ACROSS MULTIPLE PACKET PROCESSORS

Number: US20130223224A1
Assignee: JUNIPER NETWORKS, INC.

A network device includes multiple packet processing engines implemented in parallel with one another. A spraying component distributes incoming packets to the packet processing engines using a spraying technique that load balances the packet processing engines. In particular, the spraying component distributes the incoming packets based on queue lengths associated with the packet processing engines and based on a random component. In one implementation, the random component is a random selection from all the candidate processing engines. In another implementation, the random component is a weighted random selection in which the weights are inversely proportional to the queue lengths.

1-31. (canceled)

32. A method comprising:
determining, by a device, an amount of space being used in a first buffer of a plurality of buffers;
determining, by the device, a first value based on the amount of space being used in the first buffer;
determining, by the device, that the first value exceeds a second value;
selecting, by the device and after determining that the first value exceeds the second value, a second buffer, of the plurality of buffers, by using a random selection process; and
transmitting, by the device, a particular packet to the second buffer after selecting the second buffer by using the random selection process.

33. The method of claim 32, where determining the first value comprises:
determining a length of the particular packet, and
determining the first value based on the amount of space being used in the first buffer and the length of the particular packet.

34. The method of claim 33, where the first value is a sum of the amount of space and the length of the particular packet.

35. The method of claim 32, where the second value is a threshold value selected by an operator associated with the device.

36. The method of claim 32, where selecting the second buffer includes:
removing, based on the first value exceeding the second value, the first buffer from being one of a ...
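The weighted random selection described in the abstract, with weights inversely proportional to queue lengths, can be sketched directly. The +1 smoothing (to avoid division by zero on empty queues) and the explicit random-draw parameter are assumptions for illustration, not the patented design.

```python
import random

def spray(queue_lengths, r=None):
    """Pick a packet processing engine index with probability inversely
    proportional to its queue length (longer queue -> less likely).
    r: optional uniform draw in [0, 1), exposed for reproducible tests."""
    if r is None:
        r = random.random()
    weights = [1.0 / (qlen + 1) for qlen in queue_lengths]  # +1 avoids /0
    target = r * sum(weights)
    for engine, w in enumerate(weights):
        target -= w
        if target < 0:
            return engine
    return len(queue_lengths) - 1  # guard against float rounding
```

With queue lengths [5, 1, 1] the first engine's weight is 1/6 versus 1/2 for the others, so the loaded engine receives proportionally less of the spray.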

19-09-2013 publication date

LOW-POWER POLICY FOR PORT

Number: US20130243007A1
Author: Ding Jin, Kwan Bruce
Assignee: BROADCOM CORPORATION

Various example embodiments are disclosed. According to an example embodiment, a method may include determining, by a port processor, a buffer length based on an amount of data stored in a port controlled by the port processor, comparing the buffer length to a low-power buffer threshold, determining a link utilization based on a number of packets transmitted by the port, comparing the link utilization to a link utilization threshold, and placing the port into a low-power state based on the comparison of the buffer length to the low-power buffer threshold and the comparison of the link utilization to the link utilization threshold.

1. An apparatus comprising:
a port processor, the port processor being configured to:
compare a buffer length to a low-power buffer threshold, the buffer length being based on an amount of data stored for a port controlled by the port processor;
place the port into a low-power state based on the comparison of the buffer length to the low-power threshold; and
when the port is in the low-power state, remove the port from the low-power state based on either of the following conditions:
comparing the buffer length to a timer expiration buffer threshold upon expiration of an aging timer; and
comparing the buffer length to an active buffer threshold.

2. The apparatus of claim 1, wherein the placing the port into the low-power state includes placing the port into the low-power state based on the buffer length being less than the low-power buffer threshold.

3. The apparatus of claim 1, wherein the placing the port into the low-power state includes instructing the port to cease transmission of packets.

4. The apparatus of claim 1, wherein the timer expiration buffer threshold is less than the active buffer threshold.

5. The apparatus of claim 1, wherein the port processor is included in the port, the port being configured to receive packets from a memory management unit.

6. The apparatus of claim 1, wherein the apparatus ...
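The entry condition combines two comparisons. A minimal sketch of that decision: the "buffer length below threshold" direction follows claim 2, while treating low link utilization as the other entry condition is an assumption consistent with the abstract but not spelled out there.

```python
def should_enter_low_power(buffer_len, low_power_th, link_util, util_th):
    """Enter the low-power state only when the port's buffer is nearly
    empty (claim 2's 'less than' comparison) AND the link is lightly
    utilized (assumed direction for the utilization comparison)."""
    return buffer_len < low_power_th and link_util < util_th
```

A port with 10 bytes buffered against a threshold of 100, and 2% utilization against a 10% threshold, would be placed into low power; a busy or backlogged port would not.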

17-10-2013 publication date

Electronic devices for sending a message and buffering a bitstream

Number: US20130273945A1
Author: Sachin G. Deshpande
Assignee: Sharp Laboratories of America Inc

An electronic device for sending a message is described. The electronic device includes a processor and instructions stored in memory that is in electronic communication with the processor. The electronic device determines whether a first picture is a Clean Random Access (CRA) picture. The electronic device also determines whether a leading picture is present if the first picture is a CRA picture. The electronic device further generates a message including a CRA discard flag and an initial CRA Coded Picture Buffer (CPB) removal delay parameter if a leading picture is present. The electronic device additionally sends the message.

24-10-2013 publication date

Allocating Bandwidth in a Resilient Packet Ring Network by PI Controller

Number: US20130279333A1
Author: Aharbi Fahd, Ansari Nirwan

Implementations and techniques for allocating bandwidth in a resilient packet ring network by a PI-type controller are generally disclosed.

1-24. (canceled)

26. The method of claim 25, wherein determining the fair rate comprises determining a rate of change of a difference between the target queue length and the current transit queue length.

27. The method of claim 25, wherein determining the fair rate is based at least in part on a round trip delay between a bottleneck link of the resilient packet ring network and the at least one node of the resilient packet ring network.

28. The method of claim 25, further comprising stabilizing an end-to-end delay associated with one or more transit queues under unbalanced traffic scenarios, based at least in part on the allocated bandwidth.

30. The article of claim 29, wherein the determination of the fair rate is based at least in part on a rate of change of a difference between the target queue length and the current transit queue length.

31. The article of claim 29, wherein the determination of the fair rate is based at least in part on a round trip delay between a bottleneck link of the resilient packet ring network and the at least one node of the resilient packet ring network.

32. The article of claim 29, further comprising machine-readable instructions stored thereon which, if executed by the one or more processors, operatively enable the computing device to stabilize an end-to-end delay associated with one or more transit queues under unbalanced traffic scenarios, based at least in part on the allocated bandwidth.

34. The resilient packet ring network of claim 33, wherein the determination of the fair rate is based at least in part on a rate of change of a difference between the target queue length and the current transit queue length.

35. The resilient packet ring network of claim 33, wherein the determination of the fair rate is based at least in part on a round trip ...

24-10-2013 publication date

Apparatus and method for receiving and forwarding data

Number: US20130279509A1
Author: Søren Kragh
Assignee: Napatech AS

A method and apparatus adapted to prevent Head-Of-Line blocking by forwarding dummy packets to queues which have not received data for a predetermined period of time. This prevention of HOL may be on an input where data is forwarded to each of a number of FIFOs or an output where data is de-queued from FIFOs. The dummy packets may be provided with a time stamp derived from a recently queued or de-queued packet.
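The dummy-packet idea can be sketched as a periodic sweep over the FIFOs. The data layout, the `idle_limit` parameter, and the sweep structure are assumptions for illustration; the timestamp carried by the dummy is borrowed from a recently queued packet, as the abstract suggests.

```python
def fill_idle_queues(queues, last_rx_time, now, idle_limit, last_stamp):
    """Forward a dummy packet, carrying a recent timestamp, to any FIFO
    that has not received data for idle_limit seconds, so a sorter
    draining the FIFOs in timestamp order is never blocked (HOL) waiting
    on an idle queue."""
    for name, fifo in queues.items():
        if now - last_rx_time[name] >= idle_limit:
            fifo.append({"dummy": True, "ts": last_stamp})
            last_rx_time[name] = now  # dummy counts as received data
```

Without the dummy, a merge that waits for the oldest timestamp across all FIFOs would stall on an empty queue; the dummy proves the idle queue holds nothing older than `last_stamp`.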

31-10-2013 publication date

Feed-forward arbitration

Number: US20130286825A1
Assignee: Hewlett Packard Development Co LP

Feed-forward arbitration is disclosed. An example method of feed-forward arbitration includes determining an aggregated measure of urgency of packets waiting in a queue. The method also includes sending the aggregated measure to switching-node arbiters along the path that an urgent packet will take, reducing backpressure along that path by biasing the arbiters in favor of the packet.
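A hedged sketch of the two steps: summing per-packet urgencies is one plausible aggregation, and the arbiter bias is modeled as granting the request whose queue reports the highest feed-forward value. Both choices, and all names, are assumptions rather than the method's exact scheme.

```python
def aggregate_urgency(waiting_packets):
    """One plausible aggregated measure of urgency for a queue:
    the sum of per-packet urgency values."""
    return sum(p["urgency"] for p in waiting_packets)

def biased_grant(requests, feedforward):
    """Arbiter at a switching node: grant the requesting input whose
    queue reported the highest aggregated urgency, biasing arbitration
    in favor of the urgent packet's path."""
    return max(requests, key=lambda inp: feedforward.get(inp, 0))
```

An input that has not sent a feed-forward measure defaults to urgency 0, so urgent traffic wins ties against silent inputs.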

14-11-2013 publication date

Providing a Quality of Service for Various Classes of Service for Transfer of Electronic Data Packets

Number: US20130301412A1
Assignee: AT&T Intellectual Property I, L.P.

A quality of service for various classes of services for the transfer of electronic data packets is provided by establishing classes of packets for a customer and assigning bandwidths to those classes. Accordingly, the amount of bandwidth for one type of service may vary from the bandwidth for another type of service over the same data connection. A device, such as an edge router of a network, may police the data packets being transferred by a customer to maintain the bandwidth being utilized by a given class of packets of the customer to within the assigned bandwidth for that class of the customer. The data packets may further be policed by core routers of the network to maintain the bandwidth being utilized by a given class of packets to within the assigned bandwidth for that class as specified by the service provider.

1. A non-transitory computer readable storage device having instructions encoded thereon which, when executed by a processor, cause the processor to perform operations comprising:
receiving packets via a plurality of ports, wherein the packets are classified according to markings that identify a class of service to which each packet belongs;
detecting the marking of each packet;
detecting at a port whether there is an attempt to transfer packets of a particular class through the port at a bandwidth greater than a bandwidth assigned for the particular class at the port;
responsive to an attempt to transfer packets of the particular class through the port at the bandwidth greater than the bandwidth assigned for the particular class, holding in a queue dedicated to the port the packets of the particular class until time for transmission; and
acting upon the packets in accordance with a bandwidth assigned for each class of service to forward the packets with the bandwidth assigned for each class of service, wherein classes of service comprise a first class, a second class, and a third class, wherein the first class has a ...

Publication date: 21-11-2013

System And Method For Implementing Active Queue Management Enhancements For Variable Bottleneck Rates

Number: US20130308458A1
Author: Francini Andrea
Assigned to: Alcatel-Lucent USA Inc.

An advance over the prior art is made in accordance with the principles of the present invention, which is directed to a new approach to a system and method for a buffer management scheme. Certain embodiments of the invention improve the response of AQM schemes with controllable parameters to variations of the output rate of the bottleneck buffer. The impact on TCP performance can be substantial in most cases where the bottleneck rate is not guaranteed to be fixed. The new solution allows AQM schemes to achieve queue stability despite continuous variations of the bottleneck rate. 1. A method of operating a packet buffer, the packet buffer operable to accept multiple flows of packets, wherein an average queue length (AQL) value of said packet buffer is calculated, said method comprising: comparing the AQL with a first threshold, wherein a packet drop rate remains unchanged as long as the AQL is less than the first threshold; tracking a first timer representative of a time since a latest buffer overflow event, a second timer representative of a time since the buffer was last empty, and a third timer representative of a time since a packet loss was last triggered by an active queue management (AQM) decision; and updating the packet drop rate if said AQL is greater than said first threshold and comparison with said first timer indicates that within a given time no buffer overflow has occurred, and comparison with said second timer indicates that within a given time the buffer has not been empty or comparison with said third timer indicates that within a given time there has been at least one packet loss triggered by said AQM decision. 2. The method of claim 1, wherein an instantaneous queue length (IQL) value of said buffer is calculated, further including: comparing the IQL with a second threshold; and triggering packet losses when the IQL is greater than said second threshold, at time intervals whose duration depends on a distance between said IQL and said ...
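The gating conditions in claim 1 read as a guarded update of the drop probability. A hedged sketch of that logic follows; the `window` and `step` parameters are assumptions, not values from the patent:

```python
def update_drop_rate(drop_rate, aql, threshold,
                     since_overflow, since_empty, since_aqm_loss,
                     window, step=0.01):
    """One guarded update of an AQM packet-drop rate.

    The rate is left unchanged while the average queue length (AQL) is
    below the threshold, or while a buffer overflow occurred within the
    last `window` seconds. It is raised only when the queue is long and
    either the buffer has not drained empty recently or an AQM-triggered
    loss happened recently.
    """
    if aql <= threshold:
        return drop_rate                  # below threshold: no change
    if since_overflow <= window:
        return drop_rate                  # recent overflow: leave as is
    if since_empty > window or since_aqm_loss <= window:
        return min(1.0, drop_rate + step) # queue persistently long: drop more
    return drop_rate
```

Keeping the three timers outside the function mirrors the claim's separation between tracking and updating.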

Publication date: 28-11-2013

FLEXIBLE QUEUES IN A NETWORK SWITCH

Number: US20130315054A1
Assigned to: MARVELL WORLD TRADE LTD.

In an apparatus for receiving and forwarding data packets on a network, a network device includes a plurality of ports for coupling to the network and for transmitting packets to devices disposed in or coupled to the network. At least one processor configured to process packets received via the network processes packets by selectively forwarding processed packets to one or more of the ports. A plurality of queues are defined in a memory, each configured to store packets to be transmitted by ports in the plurality of ports. A queue manager is configured to selectively assign a subset of the plurality of queues to a subset of the plurality of ports. 1. A network device comprising:a plurality of ports for coupling to a network and for transmitting packets to devices disposed in or coupled to the network;at least one processor configured to process packets received via the network, the processing including selectively forwarding processed packets to one or more of the ports;a plurality of queues defined in a memory, the plurality of queues configured to store packets to be transmitted by ports in the plurality of ports; anda queue manager configured to selectively assign a subset of the plurality of queues to a subset of the plurality of ports.2. The network device of claim 1 , wherein the queue manager comprises at least one of the following structures used by the queue manager to direct data packets to queues and/or ports of the network device:(i) a port-to-queue table to configurably define which queues are assigned to each port;(ii) a queue-to-port table to configurably define which port is the destination for each queue;(iii) a queue-to-port-group table to configurably define which port group is the destination for each queue; and(iv) a queue priority table to configurably define a priority associated with each queue.3. 
The network device of claim 1, wherein the queue manager is configured to select the subset of ports to exclude one or more defective ones of the ...
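The configurable queue-to-port mapping at the heart of this device can be pictured as a small table the queue manager consults; the class and method names below are illustrative:

```python
class QueueManager:
    """Maps a shared pool of egress queues onto ports via a
    configurable queue-to-port table (names are illustrative)."""

    def __init__(self, num_queues):
        self.queues = {q: [] for q in range(num_queues)}
        self.queue_to_port = {}   # queue id -> destination port

    def assign(self, queue_ids, port):
        """Assign a subset of the queue pool to a single port."""
        for q in queue_ids:
            self.queue_to_port[q] = port

    def enqueue(self, queue_id, packet):
        self.queues[queue_id].append(packet)

    def port_to_queues(self, port):
        """Inverse view: which queues currently feed this port."""
        return sorted(q for q, p in self.queue_to_port.items() if p == port)
```

Because the mapping is a table rather than a fixed wiring, queues can be remapped away from a defective port without touching the queue memory itself.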

Publication date: 28-11-2013

METHOD FOR IMPROVING THE QUALITY OF DATA TRANSMISSION IN A PACKET-BASED COMMUNICATION NETWORK

Number: US20130315062A1
Assigned to:

The invention relates to a method for improving the quality of data transmission in a packet-based communication network comprising a plurality of network nodes (K). Each of the network nodes (K) has a number of ports (P), with each of which at least one queue (Q) is associated and via which a communication connection (KV) to another network node (K) can be produced. According to the method of the invention, at least the queues (Q) of those ports which are arranged in the network nodes (K) along respective communication paths formed in the communication network are monitored for their queue length. In addition, a degree of overload of the affected port(s) (P) is determined from the queue length, and from the degree of overload a runtime delay (delay) and/or a delay variation (jitter) in the data transmission across the communication path(s) (PF, PF, PF) running over the affected overloaded port (P) can be inferred. Finally, if the overload rises above a predetermined threshold value for at least one of the communication paths (PF, PF, PF) running across an overloaded port (P), an alternative communication path (PF′) is configured, the overloaded ports (P) thus being bypassed. 1.
A method for improving the quality of data transmission in a packet-based communication network, which comprises a plurality of network nodes (K), wherein each of the network nodes (K) has a number of ports (P), with each of which at least one queue (Q) is associated and via which a communication connection (KV) to another network node (K) can be produced, in which at least the queues (Q) of those ports (P) which are disposed in the network nodes (K) along respective communication paths that are formed in the communication network are monitored for their queue length; a degree of overload for the port(s) in question (P) is determined from the queue length, wherein a runtime delay (delay) and/or delay variation (jitter) in ...
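The monitor-and-bypass loop described above can be condensed to two steps: derive an overload level from each port's queue fill, then keep only paths that avoid overloaded ports. A minimal sketch; all names and the fill-ratio metric are assumptions:

```python
def overloaded_ports(queue_len, capacity, threshold=0.8):
    """Ports whose queue fill ratio (a proxy for added delay and
    jitter) exceeds the overload threshold."""
    return {p for p, q in queue_len.items() if q / capacity[p] > threshold}

def alternative_paths(paths, queue_len, capacity, threshold=0.8):
    """Keep only communication paths that bypass every overloaded port."""
    bad = overloaded_ports(queue_len, capacity, threshold)
    return [path for path in paths if not bad.intersection(path)]
```

A path is modeled simply as the list of ports it traverses; a surviving path plays the role of the alternative path PF′.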

Publication date: 05-12-2013

System for performing data cut-through

Number: US20130322271A1
Assigned to: Broadcom Corp

A system transfers data. The system includes an ingress node transferring data at a determined bandwidth. The ingress node includes a buffer and operates based on a monitored node parameter. The system includes a controller in communication with the ingress node. The controller is configured to allocate, based on the monitored node parameter, an amount of the determined bandwidth for directly transferring data to bypass the buffer of the ingress node.

Publication date: 05-12-2013

Router and many-core system

Number: US20130322459A1
Author: HUI XU
Assigned to: Toshiba Corp

According to one embodiment, a router includes a plurality of input ports and a plurality of output ports. The input ports receive a packet including control information indicating a type of access. Each of the input ports includes a first buffer and a second buffer which store the packet. The output ports output the packet. Each of the input ports selects at least one of the first buffer and the second buffer as a buffer in which the packet is stored on the basis of the control information and a state of the output port serving as a destination port of the packet.

Publication date: 26-12-2013

Systems, methods, and apparatuses for implementing frame aggregation with screen sharing

Number: US20130346499A1
Author: Barry Spencer
Assigned to: Salesforce.com Inc

In accordance with disclosed embodiments, there are provided methods, systems, and apparatuses for implementing frame aggregation with screen sharing including, for example, means for receiving, at a server, a stream of delta frames from a publishing client as part of a screen sharing session with one or more viewing clients; establishing a FIFO buffer for each of the respective one or more viewing clients on a 1:1 basis; queuing a copy of the stream of delta frames into each of the FIFO buffers corresponding to the one or more viewing clients, wherein the stream of delta frames is transmitted from the respective FIFO buffers to the corresponding one or more client viewers; monitoring each of the respective FIFO buffers for each of the one or more viewing clients to determine if two or more delta frames are concurrently queued in any single one of the respective FIFO buffers at any given time; aggregating the two or more delta frames into a single aggregated delta frame; re-queuing the aggregated delta frame; and transmitting the aggregated delta frame to the respective viewing client. Other related embodiments are disclosed.
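The aggregation rule, paraphrased: whenever a viewer's FIFO holds two or more delta frames, merge them into one and send that instead. A sketch in which a delta frame is modeled as a dict of changed-region updates (that representation is an assumption):

```python
from collections import deque

class ViewerFIFO:
    """Per-viewing-client FIFO (1:1) of delta frames from the publisher."""

    def __init__(self):
        self.frames = deque()

    def publish(self, delta):
        self.frames.append(delta)

    def next_to_send(self):
        # If two or more delta frames are queued concurrently, aggregate
        # them into a single delta frame; later regions overwrite earlier.
        if len(self.frames) >= 2:
            merged = {}
            while self.frames:
                merged.update(self.frames.popleft())
            return merged
        return self.frames.popleft() if self.frames else None
```

A slow viewer thus receives one merged frame covering everything it missed, instead of a backlog of individual deltas.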

Publication date: 02-01-2014

PACKET SCHEDULING METHOD AND APPARATUS CONSIDERING VIRTUAL PORT

Number: US20140003435A1

In a scheduling apparatus, a packet start time with respect to an input packet is calculated, and a slot corresponding to the packet start time is selected from a scheduler including a plurality of slots. Whether to store the packet in the selected slot is determined in consideration of the number of packets stored in the selected slot and the number of packets corresponding to virtual ports corresponding to the input packet; here, a virtual port is an output port of an externally connected switch device that does not have a scheduling function. 1. A scheduling method comprising: obtaining a session identifier from a header of an input packet and storing the packet in a packet memory; calculating a packet start time of the packet; selecting a slot corresponding to the packet start time from a scheduler including a plurality of slots; determining whether to store the session identifier of the input packet in the selected slot in consideration of the number of session identifiers already stored in the selected slot and the number of session identifiers representing virtual ports corresponding to the input packet, the virtual port being an output port of a switch device which is connected to the outside and does not have a scheduling function; and when the session identifier is determined to be stored with respect to the selected slot, storing the session identifier in the selected slot, and when the start time of the packet arrives, reading the packet stored in the packet memory by using the stored session identifier and outputting the same. 2. The method of claim 1, wherein the storing of the packet comprises: reading the corresponding session identifier present in the header of the input packet; obtaining a characteristic parameter corresponding to the session identifier; and storing the packet in the packet memory by using the characteristic parameter. 3. The method of claim 2, wherein the characteristic parameter comprises a service bandwidth as an ...

Publication date: 06-02-2014

Coherent data forwarding when link congestion occurs in a multi-node coherent system

Number: US20140040526A1
Assigned to: Oracle International Corp

Systems and methods for efficient data transport across multiple processors when link utilization is congested. In a multi-node system, each of the nodes measures a congestion level for each of the one or more links connected to it. A source node indicates when each of one or more links to a destination node is congested or each non-congested link is unable to send a particular packet type. In response, the source node sets an indication that it is a candidate for seeking a data forwarding path to send a packet of the particular packet type to the destination node. The source node uses measured congestion levels received from other nodes to search for one or more intermediate nodes. An intermediate node in a data forwarding path has non-congested links for data transport. The source node reroutes data to the destination node through the data forwarding path.
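Using the congestion measurements exchanged between nodes, a source can look for a detour through an intermediate node whose links are not congested. A minimal two-hop sketch, with the topology given as adjacency sets; all names are illustrative:

```python
def find_forwarding_path(neighbors, congested, src, dst):
    """Return [src, mid, dst] for the first intermediate node whose links
    to both endpoints are non-congested, or None if no detour exists.

    `neighbors` maps node -> set of directly linked nodes;
    `congested` is a set of (from_node, to_node) links under congestion.
    """
    for mid in sorted(neighbors.get(src, ())):
        if dst in neighbors.get(mid, ()):
            if (src, mid) not in congested and (mid, dst) not in congested:
                return [src, mid, dst]
    return None
```

Returning None corresponds to the case where the source must keep waiting on its direct, congested links rather than reroute.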

Publication date: 20-02-2014

PROVIDING A BUFFERLESS TRANSPORT METHOD FOR MULTI-DIMENSIONAL MESH TOPOLOGY

Number: US20140050224A1
Assigned to:

In one embodiment, the present invention includes a method for determining whether a packet received in an input/output (I/O) circuit of a node is destined for the node and if so, providing the packet to an egress queue of the I/O circuit and determining whether one or more packets are present in an ingress queue of the I/O circuit and if so, providing a selected packet to a first or second output register according to a global schedule that is independent of traffic flow. Other embodiments are described and claimed. 1. A system comprising: an n×m mesh system including a plurality of nodes, the plurality of nodes arranged in a first dimension and a second dimension; and a plurality of interconnects each to couple a pair of the plurality of nodes, wherein the mesh system is configured in a first traffic independent connection state for a first cycle of a traffic schedule period and in a second traffic independent connection state for a second cycle of the traffic schedule period. 2. The system of claim 1, wherein the first traffic independent connection state comprises a pass-through state in which packets are communicated between neighboring nodes in the first dimension. 3. The system of claim 2, wherein the second traffic independent connection state comprises a turn state in which packets are communicated between neighboring nodes in the second dimension. 4. The system of claim 1, wherein the mesh system comprises a bufferless transport medium. 5. The system of claim 1, wherein the traffic schedule period is a fixed cyclic schedule of S clocks, where S is max(n, m), and the mesh system is configured in the first traffic independent connection state for S−x cycles of the fixed cyclic schedule and in the second traffic independent connection state for x cycles of the fixed cyclic schedule. 6.
The system of claim 1, wherein each of the plurality of nodes includes a plurality of input ports and a plurality of output ports, each of the plurality ...

Publication date: 27-02-2014

PACKET VALIDATION IN VIRTUAL NETWORK INTERFACE ARCHITECTURE

Number: US20140059221A1
Assigned to: Solarflare Communications, Inc.

Roughly described, a network interface device receiving data packets from a computing device for transmission onto a network, the data packets having a certain characteristic, transmits the packet only if the sending queue has authority to send packets having that characteristic. The data packet characteristics can include transport protocol number, source and destination port numbers, source and destination IP addresses, for example. Authorizations can be programmed into the NIC by a kernel routine upon establishment of the transmit queue, based on the privilege level of the process for which the queue is being established. In this way, a user process can use an untrusted user-level protocol stack to initiate data transmission onto the network, while the NIC protects the remainder of the system or network from certain kinds of compromise. 1. A method comprising:establishing, by a privileged mode process, a first virtual address space resource for a first user-level process;programming, by the privileged mode process, first authorizations into a network interface device indicating one or more first particular characteristics of data packets the first user-level process is authorized to transmit via the network interface device onto a network;subsequently enqueueing a first data packet in the first virtual address space resource by the first user-level process, without involving the privileged mode process; andsubsequently determining, by the network interface device and without involving the privileged mode process, whether said first data packet has any of the one or more first particular characteristics indicated in the first authorizations, and only if so, transmitting, by the network interface device and without involving the privileged mode process, said first data packet onto the network.2. A method according to claim 1 , wherein one of the one or more particular first characteristics comprises at least one characteristic selected from:a particular network ...
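The check the NIC applies can be sketched as a lookup of a packet's characteristics against the authorizations a privileged (kernel) routine programmed when the transmit queue was established. Field and method names are illustrative:

```python
class Nic:
    """Transmit-side filter: a queue may send only packets whose
    characteristics match an authorization programmed by a privileged
    process (field names are illustrative)."""

    def __init__(self):
        self.auth = {}   # queue id -> set of (proto, src_ip, src_port)

    def authorize(self, queue, proto, src_ip, src_port):
        """Called by the kernel when the transmit queue is established."""
        self.auth.setdefault(queue, set()).add((proto, src_ip, src_port))

    def transmit(self, queue, pkt):
        key = (pkt["proto"], pkt["src_ip"], pkt["src_port"])
        if key in self.auth.get(queue, ()):
            return "transmitted"
        return "dropped"     # unauthorized characteristics: never sent
```

Because the authorization table is written only by the privileged process, a compromised user-level stack cannot widen what its own queue may send.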

Publication date: 06-03-2014

Fast data packet transfer operations

Number: US20140064293A1
Assigned to: Individual

A fast send method may be selectively implemented for certain data packets received from an application for transmission through a network interface. When the fast send method is triggered for a data packet, the application requesting transmission of the data packet may be provided a completion notice nearly immediately after the data packet is received. The fast send method may be used for data packets similar to previously-transmitted data packets for which the information in the data packet is already vetted. For example, a data packet with a similar source address, destination address, source port, destination port, application identifier, and/or activity identifier may have already been vetted.

Publication date: 06-03-2014

DATA TRANSMISSION DEVICE AND DATA TRANSMISSION METHOD

Number: US20140064298A1
Author: Ukai Megumi
Assigned to:

A data transmission device includes a packet storing unit that temporarily retains therein multiple data packets. The data transmission device includes a top location instructing unit that indicates a location in the packet storing unit to retain a newly created data packet. The data transmission device includes a location information storing unit that has a plurality of entries storing therein a top location of the data packets stored in the data packet storing unit. 1. A data transmission device comprising: a packet storing unit that temporarily retains therein multiple data packets; a top location instructing unit that indicates a location in the packet storing unit to retain a newly created data packet; and a location information storing unit that has a plurality of entries storing therein a top location of the data packets stored in the data packet storing unit. 2. The data transmission device according to claim 1, further comprising: a write instructing unit that instructs an entry of the location information storing unit for storing the top location of a new data packet stored in the data packet storing unit; and a read instructing unit that instructs a readable entry in the location information storing unit. 3.
The data transmission device according to claim 2, wherein the data transmission device issues one or more data packets, and the data transmission device further comprises: a checked number storing unit that retains therein sequence numbers given to response packets transmitted from a data receiving device that is a destination of the data packets; and a calculation updating unit that calculates, by using a first sequence number retained in the checked number storing unit and a second sequence number that is retained in the checked number storing unit and that is given to a response packet received immediately after the response packet given the first sequence number, the number of the data packets normally received by the data receiving ...

Publication date: 06-03-2014

Detecting and recovering from a transmission channel change during a streaming media session

Number: US20140068084A1
Assigned to: Apple Inc

A method for detecting and recovering from a transmission channel change during a streaming media session is disclosed. The method can include a wireless communication device detecting a stall condition resulting from a transmission channel change. The method can further include the wireless communication device capturing a snapshot of a current transmission parameter state of the streaming media session in response to detecting the stall condition. The method can also include the wireless communication device using the snapshot to restore the streaming media session to the transmission parameter state captured by the snapshot following completion of the transmission channel change.

Publication date: 13-03-2014

TRANSFER DEVICE AND TRANSFER METHOD

Number: US20140071993A1
Assigned to:

A transfer device increments a value of a phase ID at predetermined time intervals, and registers a packet ID of a transmitted data packet and a phase ID on a determination table in an associated manner. When having received a response packet from a receiving-side transfer device, the transfer device determines an unarrived packet on the basis of received packet IDs contained in the received response packet and packet IDs of transmitted data packets. Then, the transfer device determines whether a data packet corresponding to the unarrived packet is lost or on-the-fly from a relationship between a phase ID of the unarrived packet and the maximum phase contained in the received response packet, and retransmits the corresponding data packet only if it is lost. 1. A transfer device comprising:a transmitting unit that sequentially transmits packets with assigned unique first identification numbers for identifying the packets to a destination, and sequentially transmits second identification numbers incremented at predetermined time intervals to the destination;a registering unit that registers a first identification number of a packet which has been transmitted by the transmitting unit and a second identification number at the point of time when the packet has been transmitted from the transmitting unit on a table in an associated manner;a determining unit that receives first identification numbers of packets having arrived at the destination and the latest second identification number having arrived at the destination, and determines a first identification number of a packet to be retransmitted to the destination on the basis of the first identification numbers of the packets having arrived at the destination, the latest second identification number having arrived at the destination, and the table; anda retransmitting unit that retransmits a packet on the basis of a result of determination by the determining unit.2. 
The transfer device according to claim 1, wherein the ...
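The lost/on-the-fly decision can be sketched as follows: each transmitted packet is tagged with the phase counter at send time, and an unacknowledged packet is retransmitted only when the receiver's newest phase has moved sufficiently far past it. The `margin` of two phases below is an assumption, not a value from the patent:

```python
def classify_unarrived(sent_phase, acked, max_phase, margin=2):
    """Split unacknowledged packet ids into (lost, on_the_fly).

    `sent_phase` maps packet id -> phase id recorded when the packet was
    transmitted; `max_phase` is the newest phase id reported back by the
    receiver. Only packets classified as lost need retransmission.
    """
    lost, on_the_fly = [], []
    for pid, phase in sent_phase.items():
        if pid in acked:
            continue
        if max_phase - phase >= margin:
            lost.append(pid)        # old enough that it should have arrived
        else:
            on_the_fly.append(pid)  # may still be in transit: do not resend
    return sorted(lost), sorted(on_the_fly)
```

Suppressing retransmission of on-the-fly packets avoids duplicating data that is merely delayed rather than dropped.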

Publication date: 20-03-2014

System and Method for Reducing the Data Packet Loss Employing Adaptive Transmit Queue Length

Number: US20140078900A1
Assigned to: TATA CONSULTANCY SERVICES LIMITED

The present invention provides a system and method for reduction of data packet loss for multiple network interfaces. Particularly, the invention provides a cross-layer system for reduction of data packet loss based on dynamic analysis of network conditions. Further, the invention provides a system and method for estimating the network condition and adapting the transmit queue of the multiple interfaces according to the channel condition/available bandwidth of the associated network. 1. A method for reducing data packet loss in a communication network having multiple interfaces, the method comprising the steps of: creating a network driver module for registering a plurality of virtual physical interfaces; capturing a plurality of characteristic features associated with each of the virtual physical interfaces and corresponding user inputs; determining a previous bandwidth and a current bandwidth of a communication channel associated with each active interface of the plurality of virtual physical interfaces based on a predefined configurable time interval; estimating an effective network channel bandwidth ratio, wherein the effective network channel bandwidth ratio is a ratio of the bandwidth of a previous time interval and the current bandwidth associated with each of said active interfaces involved in the communication; assigning at least one adaptive transmission queue to each of said active interfaces; determining a current transmission queue length of each active interface and deriving an effective transmission queue length ratio, where the effective transmission queue length ratio is a ratio of the default and current transmission queue lengths of each of the active interfaces; dynamically configuring the current transmission queue length for each active interface, wherein the effective transmission queue length ratio is proportional to the effective network channel bandwidth ratio, wherein a proportionality constant is commensurate with a predefined threshold value; ...
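The proportionality in the claim (default/current queue length tracking previous/current bandwidth) solves directly for the new queue length. A sketch with an assumed proportionality constant `k` and clamping bounds:

```python
def adapt_tx_queue_len(default_len, prev_bw, curr_bw, k=1.0,
                       min_len=1, max_len=10000):
    """Resize the transmit queue so that
        default_len / new_len == k * (prev_bw / curr_bw)
    i.e. the queue shrinks as channel bandwidth drops, limiting the
    packets that can be stranded (and lost) on a degraded interface."""
    new_len = int(default_len * curr_bw / (k * prev_bw))
    return max(min_len, min(max_len, new_len))
```

For example, halving the measured bandwidth halves the queue, while a bandwidth increase lets the queue grow back toward its default.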

Publication date: 27-03-2014

METHOD AND SYSTEM FOR WEIGHTED FAIR QUEUING

Number: US20140086259A1
Author: VENABLES Bradley D.
Assigned to: ROCKSTAR CONSORTIUM US LP

A system for scheduling data for transmission in a communication network includes a credit distributor and a transmit selector. The communication network includes a plurality of children. The transmit selector is communicatively coupled to the credit distributor. The credit distributor operates to grant credits to at least one of eligible children and children having a negative credit count. Each credit is redeemable for data transmission. The credit distributor further operates to affect fairness between children with ratios of granted credits, maintain a credit balance representing a total amount of undistributed credits available, and deduct the granted credits from the credit balance. The transmit selector operates to select at least one eligible and enabled child for dequeuing, bias selection of the eligible and enabled child to an eligible and enabled child with positive credits, and add credits to the credit balance corresponding to an amount of data selected for dequeuing. 1. A network node for a packet-based communication network, the network node comprising: at least one output port coupled to the communication network and configured to transmit packets over the communication network; a plurality of data queues configured to queue packets to be transmitted by the network node over the communication network; classification logic configured to classify each data source as one of an eligible data source and an ineligible data source, the classification logic being configured to reclassify an eligible data source as an ineligible data source when a predetermined condition is met; a credit balance credit counter configured to maintain a credit balance of available credits; credit allocation logic configured to allocate credits from the credit balance to eligible data sources; data source credit counters configured to maintain a respective credit count for each eligible data source; queue maintenance logic configured to maintain a plurality of ...
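The credit distributor / transmit selector pair can be sketched in a few lines: credits are granted in ratio to the children's weights, selection is biased toward backlogged children with positive credits, and the dequeued amount is deducted. Names and the integer credit split are assumptions:

```python
class CreditScheduler:
    """Weighted fair dequeuing with per-child credit counters."""

    def __init__(self, weights):
        self.weights = dict(weights)
        self.credits = {c: 0 for c in weights}
        self.queues = {c: [] for c in weights}

    def enqueue(self, child, size):
        self.queues[child].append(size)

    def grant(self, total_credits):
        # Distribute credits in ratio to the children's weights.
        wsum = sum(self.weights.values())
        for c, w in self.weights.items():
            self.credits[c] += total_credits * w // wsum

    def select(self):
        # Bias selection toward a backlogged child with positive credits,
        # but never idle the link while any child has data queued.
        backlogged = [c for c in self.queues if self.queues[c]]
        if not backlogged:
            return None
        positive = [c for c in backlogged if self.credits[c] > 0]
        child = max(positive or backlogged, key=lambda c: self.credits[c])
        self.credits[child] -= self.queues[child].pop(0)
        return child
```

Allowing a child's counter to go negative (and biasing rather than forbidding selection) is what keeps the link busy while fairness is restored over time.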

Publication date: 03-04-2014

Maintaining load balancing after service application with a network device

Number: US20140092738A1
Assigned to: Juniper Networks Inc

In general, techniques are described for maintaining load balancing after service application. A network device comprising ingress and egress forwarding components and a service card may implement the techniques. An ingress forwarding component receives a packet and, in response to a determination that the service is to be applied to the packet, updates the packet to include an ingress identifier that identifies the ingress forwarding component, thereafter transmitting the updated packet to the service card. The service card applies the service to the updated packet to generate a serviced packet and transmits the serviced packet to the ingress forwarding component identified by the ingress identifier so as to maintain load balancing of packet flows across the plurality of forwarding components. The ingress forwarding component determines a next hop to which to forward the serviced packet and the egress forwarding component forwards the serviced packet to the determined next hop.

Publication date: 03-04-2014

Transmitting and receiving data based on multipath

Number: US20140092809A1
Assigned to: International Business Machines Corp

Methods, apparatuses and systems for transmitting and receiving data based on multipath for transmitting data based on multipath include: establishing WiMAX connection-based multiple paths between a first device and a second device; transmitting data frames in a data queue in the multiple paths; obtaining the quality condition of the multiple paths; and based on the quality condition, adjusting the transmission of the data frames in the data queue in the multiple paths. According to one aspect, there is provided a method for receiving data based on multipath, which includes: establishing WiMAX connection-based multiple paths between a first device and a second device; receiving a plurality of data frames in the multiple paths; processing the received plurality of data frames based on quality condition of the multiple paths. There are further provided corresponding apparatuses and systems.

Publication date: 01-01-2015

Enhanced Link Aggregation in a Communications System

Number: US20150003466A1
Author: Shavit Alon, Soffer Ran
Assigned to:

An enhanced link aggregation (ELAG) transmitter device is provided. The ELAG transmitter device includes a packet interface configured to receive a plurality of data packets from a communication network, and also includes a controller configured to segment and reschedule one or more data packets of the plurality of data packets and to add a sequence number to the one or more data packets. Additionally, the ELAG transmitter device includes a distributor configured to logically distribute the one or more segmented and rescheduled data packets across an aggregate link based on an actual available bandwidth of the aggregate link and in accordance with at least one of a packet based distribution scheme, a segmented packet distribution scheme, and a byte based distribution scheme. 1. An enhanced link aggregation (ELAG) transmitter device, comprising: a packet interface configured to receive a plurality of data packets; a controller configured to segment and reschedule one or more data packets of the plurality of data packets and to add a sequence number to the one or more data packets; and a distributor configured to logically distribute the one or more segmented and rescheduled data packets across an aggregate link based on an actual available bandwidth of the aggregate link and in accordance with at least one of a packet based distribution scheme, a segmented packet distribution scheme, and a byte based distribution scheme. 2. The ELAG transmitter device of claim 1, wherein the aggregate link comprises a plurality of Ethernet links that are connected to the ELAG transmitter device, and wherein the distributor is configured to detect the actual available bandwidth of the aggregate link by monitoring the plurality of Ethernet links. 3. The ELAG transmitter device of claim 1, further comprising: wherein the aggregate link comprises a plurality of non-Ethernet links that are connected to the ELAG transmitter device, and wherein the feedback information is ...

Publication date: 06-01-2022

DYNAMIC ROUTING OF QUEUED NETWORK-BASED COMMUNICATIONS USING REAL-TIME INFORMATION AND MACHINE LEARNING

Number: US20220006724A1
Assigned to:

Methods for dynamic routing of queued network-based communications using real-time information and machine learning are performed by systems and devices. Requests associated with fulfillments are received over a network from requestor systems, and the requests are queued in a data structure of a queue. Information that includes geolocation information from a user device of a user that is associated with the fulfillment, temporal information from the user device, or related request information associated with another request is then received over the network, and a fulfiller and a fulfillment time for the fulfillment are determined from the information. The request is provided from the queue to the fulfiller at the fulfillment time over the network. 1. A system for dynamic routing of queued network-based communications performed by a host system that includes: one or more memory devices that store executable program code; and one or more processors operable to access the one or more memory devices and to execute the executable program code, the executable program code being configured to: receive, over a network from a requestor system, a request associated with a fulfillment; queue the request in a queue comprising a data structure; receive information over the network, the information comprising at least one of geolocation information from a user device of a user that is associated with the fulfillment, temporal information from the user device, or related request information associated with another request; determine a fulfiller and a fulfillment time for the fulfillment based at least on the information; and provide, from the queue, the request to the fulfiller at the fulfillment time over the network. 2. The system of claim 1, wherein the request includes a pre-designated fulfiller to complete the fulfillment; and wherein said queuing is performed irrespective of the pre-designated fulfiller being included in the request. 3.
The system of claim 1 , ...
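The queue-then-route flow described above (queue each request, release it to its fulfiller once the computed fulfillment time arrives) can be sketched with a min-heap keyed by fulfillment time. This is an illustrative sketch only; the request tuples and function name are assumptions, not the patent's interfaces.

```python
import heapq

def route_requests(requests, now):
    """Queue requests by fulfillment time, then dispatch every request
    whose fulfillment time has arrived. Each request is a tuple of
    (request id, assigned fulfiller, fulfillment time)."""
    queue = []
    for req_id, fulfiller, fulfill_at in requests:
        heapq.heappush(queue, (fulfill_at, req_id, fulfiller))
    dispatched = []
    # Release, in time order, every request that is due at `now`.
    while queue and queue[0][0] <= now:
        fulfill_at, req_id, fulfiller = heapq.heappop(queue)
        dispatched.append((req_id, fulfiller))
    return dispatched, queue
```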

13-01-2022 publication date

METHOD AND DEVICE FOR TRANSMITTING DATA

Number: US20220014306A1
Assignee:

Methods and devices for transmitting data via a transmission medium. One example method includes ascertaining a probability of at least one transmission error during a future data transmission, and determining, based on the probability, whether the future data transmission should be at least temporarily suspended.

1. A method for transmitting data (D) via a transmission medium (M), the method comprising: ascertaining (100) a probability (W) of at least one transmission error during a future data transmission, and determining (110), based on the probability (W), whether the future data transmission should be at least temporarily suspended.
2. The method according to claim 1, further comprising: a) if the outcome of the determination (110) is that the future data transmission should be at least temporarily suspended, suspending (122) the future data transmission for a specifiable time period, and/or b) if the outcome of the determination (110) is that the future data transmission should not be suspended, executing (124) the future data transmission.
3. The method according to claim 1, wherein ascertaining (100) the probability (W) of at least one transmission error in a future data transmission comprises at least one of the following elements: a) evaluating (102) contextual information, wherein in particular the contextual information indicates, for example, a temporary degradation of the future data transmission; b) evaluating (104) current knowledge regarding existing communication characteristics associated with data transmission via the transmission medium (M).
4. The method according to claim 1, wherein the determination (110) of whether the future data transmission should be at least temporarily suspended is also carried out based on a maximum number of permissible, in particular consecutive, failures of data transmissions.
5. The method according to claim 1, ...
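The determination step, including the cap on consecutive failures, can be sketched as a small gate function. The probability threshold and failure limit are illustrative assumptions; the patent does not specify values.

```python
def should_suspend(error_prob, consecutive_failures,
                   prob_threshold=0.3, max_failures=3):
    """Suspend the next transmission when the ascertained error
    probability is too high, unless the maximum number of permissible
    consecutive failures has already been reached."""
    if consecutive_failures >= max_failures:
        return False  # must attempt transmission despite the risk
    return error_prob > prob_threshold
```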

05-01-2017 publication date

Method, apparatus, and system for processing interference in massive multiple-input multiple-output system

Number: US20170005762A1
Authors: An Liu, Kinnang LAU, Rongdao Yu
Assignee: Huawei Technologies Co Ltd

The present invention relates to the field of communications technologies, and discloses a method, an apparatus, and a system for suppressing interference in a massive multiple-input multiple-output system, which overcome a disadvantage of sensitivity to a backhaul delay during an inter-cell interference cancellation process in an existing massive multiple-input multiple-output system. A specific embodiment of the present invention includes: obtaining channel correlation matrixes of all links, and further calculating a combined outer precoder set according to the channel correlation matrixes, where each combined outer precoder includes at least one outer precoder, and the outer precoder is a semi-unitary matrix and is not sensitive to a backhaul delay. Technical solutions of the present invention are mainly applied to a process of processing interference in a massive multiple-input multiple-output system.

13-01-2022 publication date

Method For Wireless Event-Driven Everything-to-Everything (X2X) Payload Delivery

Number: US20220014884A1
Assignee: METROLLA INC.

A computer-based system to improve data transfer rates between a server and stationary or moving remote devices having sensors. A sensor data ingestion device is provided at the remote devices, having instructions therein which, when executed, receive and process data to place it in a form suitable for rapid download by a remote server. A memory buffer and a queue unit are provided, the queue unit including computational circuitry configured to generate a queue status associated with triggering event data which is stored in the memory buffer. Instructions in the sensor data ingestion device may provide for a queue modification procedure, or the server may present a queue modification menu including one or more instances of a priority level choice. A queue controller may be provided. Streaming messages are downloaded by the server using a persistent WebSocket connection. Files are configured into file chunks at the sensor data ingestion device, and the file chunks are downloaded by the server using a webhook. The file chunks are reconfigured at the server into time-layered files associated with triggering event data. Time-layered files may include a plurality of files having LiDAR point cloud data, video data readable topics, or GPS position data therein.

1. A non-transitory computer-readable medium having instructions that, when executed: receives one or more inputs corresponding to one or more outputs from one or more sensors at a first device; receives an event input, wherein the event input (a) is received with notice that the event input corresponds to occurrence of a triggering event; or (b) is processed, and it is determined upon processing that the event input is a triggering event; based on (1) at least one of the one or more inputs and (2) the event input, executes programmed instructions to determine a result comprising whether or not to select to establish communication with a server in real time, and in such case, selects a time bounded range anchored to the triggering event and stores ...
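The chunk-then-reassemble step (files split into chunks at the ingestion device, reassembled server-side into a single file) can be sketched as follows. The sequence-number tagging scheme is an assumption for illustration; the patent does not specify the chunk format.

```python
def make_chunks(data: bytes, chunk_size: int):
    """Split a file into fixed-size chunks, tagging each with its
    sequence number so the receiver can reassemble them in order."""
    return [(i, data[i * chunk_size:(i + 1) * chunk_size])
            for i in range((len(data) + chunk_size - 1) // chunk_size)]

def reassemble(chunks):
    """Reorder received chunks by sequence number and join them."""
    return b"".join(part for _, part in sorted(chunks))
```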

07-01-2016 publication date

PORT-BASED FAIRNESS PROTOCOL FOR A NETWORK ELEMENT

Number: US20160006664A1
Assignee:

Methods, apparatuses, and computer-readable medium for providing a fairness protocol in a network element are disclosed herein. An example method includes receiving one or more packets at each of a plurality of ingress ports of the network element, and scheduling the packets into a plurality of queues, wherein each of the queues is associated with packets that are sourced from one of the ingress ports. The method also includes monitoring a bandwidth of traffic sourced from each of the ingress ports, identifying a port among the ingress ports that sources a smallest bandwidth of traffic, and arbitrating among the queues when transmitting packets from an egress port of the network element by giving precedence to the identified port that sources the smallest bandwidth of traffic. Additionally, arbitrating among the queues distributes a bandwidth of the egress port equally among the ingress ports.

1. A method for providing a fairness protocol in a network element, comprising: receiving one or more packets at each of a plurality of ingress ports of the network element; scheduling the one or more packets into a plurality of queues, wherein each of the plurality of queues is associated with packets that are sourced from one of the plurality of ingress ports; monitoring a bandwidth of traffic sourced from each of the plurality of ingress ports; identifying a port among the plurality of ingress ports that sources a smallest bandwidth of traffic; and arbitrating among the plurality of queues when transmitting packets from an egress port of the network element by giving precedence to the identified port that sources the smallest bandwidth of traffic, wherein arbitrating among the plurality of queues distributes a bandwidth of the egress port equally among the plurality of ingress ports.
2. The method of claim 1, wherein monitoring a bandwidth of traffic sourced from each of the plurality of ingress ports further comprises maintaining a bandwidth table comprising a counter for ...
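The arbitration rule above (serve the backlogged ingress port with the smallest measured bandwidth first) can be sketched as a selection function. The per-port data shapes are illustrative assumptions.

```python
def next_queue(bandwidth_by_port, queues):
    """Among ports that currently have queued packets, pick the one
    sourcing the smallest measured bandwidth of traffic."""
    backlogged = [p for p in queues if queues[p]]
    if not backlogged:
        return None  # nothing to transmit
    return min(backlogged, key=lambda p: bandwidth_by_port[p])
```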

04-01-2018 publication date

MONITORING PACKET RESIDENCE TIME AND CORRELATING PACKET RESIDENCE TIME TO INPUT SOURCES

Number: US20180006920A1
Assignee:

An output circuit, included in a device, may determine counter information associated with a packet provided via an output queue managed by the output circuit. The output circuit may determine that a latency event, associated with the output queue, has occurred. The output circuit may provide the counter information and time of day information associated with the counter information. The output circuit may provide a latency event notification associated with the output queue. An input circuit, included in the device, may receive the latency event notification associated with the output queue. The input circuit may determine performance information associated with an input queue. The input queue may correspond to the output queue and may be managed by the input circuit. The input circuit may provide the performance information associated with the input queue and time of day information associated with the performance information.

1-20. (canceled)
21. A device, comprising: one or more processors to: compute, based on counter information associated with an output queue, an average residence time associated with the output queue, the average residence time corresponding to an amount of time that a group of packets are located in the output queue; determine, based on computing the average residence time, whether a latency event associated with the output queue has occurred; determine, based on determining that the latency event has occurred, input queue performance information associated with an input queue corresponding to the output queue; and provide input queue performance information associated with the input queue.
22. The device of claim 21, where the one or more processors are further to: receive an indication to determine whether a latency event has occurred; and determine the counter information based on receiving the indication.
23. The device of claim 21, where the device is configured to automatically determine whether a latency event ...
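The residence-time check reduces to simple counter arithmetic: divide the accumulated residence time by the packet count and compare against a threshold. Counter names, units (nanoseconds), and the threshold are illustrative assumptions.

```python
def latency_event(total_residence_ns, packet_count, threshold_ns):
    """Compute the average residence time from two counters and flag a
    latency event when it exceeds the threshold. Returns the flag and
    the computed average."""
    if packet_count == 0:
        return False, 0.0
    avg = total_residence_ns / packet_count
    return avg > threshold_ns, avg
```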

04-01-2018 publication date

Estimating multiple distinct-flow counts in parallel

Number: US20180006921A1
Assignee: Mellanox Technologies TLV Ltd

A network switch includes circuitry, multiple ports and multiple hardware-implemented distinct-flow counters. The multiple ports are configured to receive packets from a communication network. Each of the multiple hardware-implemented distinct-flow counters is configured to receive (i) a respective count definition specifying one or more packet-header fields and (ii) a respective subset of the received packets, and to estimate a respective number of distinct flows that are present in the subset, by evaluating, over the packets in the subset, a number of distinct values in the packet-header fields belonging to the count definition. The circuitry is configured to provide each of the distinct-flow counters with the respective subset of the received packets, including providing a given packet to a plurality of the distinct-flow counters, and to identify an event-of-interest based on numbers of distinct flows estimated by the distinct-flow counters.
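A distinct-flow counter of the kind described above can be approximated in software with a probabilistic cardinality estimator: hash the header fields named in the count definition into a small bitmap and estimate the number of distinct values from the bitmap occupancy (linear counting). This is a stand-in for whatever estimator the switch hardware actually implements; the class and field names are assumptions.

```python
import hashlib
import math

class DistinctFlowCounter:
    """Estimate the number of distinct flows among observed packets,
    where a flow is defined by a configurable tuple of packet-header
    fields (the count definition)."""

    def __init__(self, count_definition, bitmap_bits=1024):
        self.fields = count_definition      # e.g. ("src_ip", "dst_ip")
        self.m = bitmap_bits
        self.bitmap = [0] * bitmap_bits

    def observe(self, packet):
        # Hash only the fields named in this counter's definition.
        key = "|".join(str(packet[f]) for f in self.fields)
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16) % self.m
        self.bitmap[h] = 1

    def estimate(self):
        # Linear-counting estimate from the fraction of empty buckets.
        zeros = self.bitmap.count(0)
        if zeros == 0:
            return float("inf")
        return self.m * math.log(self.m / zeros)
```

Feeding the same packet to several counters with different count definitions, as the circuitry does, just means calling `observe` on each of them.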

07-01-2021 publication date

ALLOCATING BANDWIDTH BETWEEN BANDWIDTH ZONES ACCORDING TO USER LOAD

Number: US20210006501A1
Author: Ong David T.
Assignee:

A bandwidth management system includes a plurality of queues respectively corresponding to a plurality of zones. An enqueuing module receives network traffic from one or more incoming network interfaces, determines a belonging zone to which the network traffic belongs, and enqueues the network traffic on a queue corresponding to the belonging zone. A dequeuing module selectively dequeues data from the queues and passes the data to one or more outgoing network interfaces. When dequeuing data from the queues, the dequeuing module dequeues an amount of data from a selected queue, and the amount of data dequeued from the selected queue is determined according to user load of a zone to which the selected queue corresponds.

1. A bandwidth management system for allocating bandwidth between a plurality of bandwidth zones at an establishment serving a plurality of users, each of the bandwidth zones having a number of users competing for bandwidth allocated thereto, the bandwidth management system comprising: a computer server providing a first queue and a second queue, wherein the first queue queues first data associated with a first bandwidth zone of the plurality of bandwidth zones, and the second queue queues second data associated with a second bandwidth zone of the plurality of bandwidth zones; and a computer readable medium storing a plurality of software instructions for execution by the computer server; wherein, by the computer server executing the software instructions loaded from the computer readable medium, the computer server is operable to repeatedly dequeue a first amount of the first data from the first queue and a second amount of the second data from the second queue, and pass the first amount of the first data and the second amount of the second data to one or more outgoing network interfaces; and the computer server is further operable to automatically adjust the first amount and the second amount over time such that the first amount is larger than the ...
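The user-load rule (the amount dequeued per zone scales with how many users are active in that zone) can be sketched as a per-cycle budget split. The cycle size, data shapes, and class name are illustrative assumptions.

```python
from collections import deque

class ZoneDequeuer:
    """Dequeue data from per-zone queues, giving each zone a share of
    each dequeue cycle proportional to its current user load."""

    def __init__(self, cycle_bytes=1000):
        self.cycle_bytes = cycle_bytes
        self.queues = {}   # zone -> deque of (user, payload_len)
        self.users = {}    # zone -> set of users seen in the zone

    def enqueue(self, zone, user, payload_len):
        self.queues.setdefault(zone, deque()).append((user, payload_len))
        self.users.setdefault(zone, set()).add(user)

    def dequeue_cycle(self):
        total_users = sum(len(u) for u in self.users.values()) or 1
        sent = {}
        for zone, q in self.queues.items():
            # Budget for this zone scales with its share of users.
            budget = self.cycle_bytes * len(self.users[zone]) // total_users
            while q and budget >= q[0][1]:
                _user, n = q.popleft()
                budget -= n
                sent[zone] = sent.get(zone, 0) + n
        return sent
```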

07-01-2021 publication date

A bursty traffic allocation method, device and proxy server

Number: US20210006505A1
Author: Weicai CHEN
Assignee: Wangsu Science and Technology Co Ltd

A bursty traffic allocation method includes: receiving statistical data sent by a proxy server deployed in a service node, where the statistical data is used to characterize an operating state of the service node and/or one or more physical machines in the service node; determining whether there is a bursty condition in a target service, and if there is a bursty condition in the target service, generating a resource scheduling task matching the service node based on the statistical data; feeding back the resource scheduling task to the proxy server, to allow the proxy server to expand a physical machine in the service node according to a resource amount specified in the resource scheduling task; and receiving a resource expansion message fed back by the proxy server for the resource scheduling task, and pulling bursty traffic of the target service to a physical machine specified in the resource expansion message.
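The burst-detection and expansion decision can be sketched as a comparison of current traffic against a recent baseline. The burst factor, per-machine capacity, and sample shapes are all illustrative assumptions; the patent leaves the detection criterion open.

```python
def schedule_expansion(samples, capacity, machine_capacity,
                       burst_factor=2.0):
    """Given recent traffic samples (requests/s), current total
    capacity, and per-machine capacity, return how many physical
    machines to add when the newest sample is bursty."""
    baseline = sum(samples[:-1]) / (len(samples) - 1)
    current = samples[-1]
    if current > burst_factor * baseline and current > capacity:
        deficit = current - capacity
        return -(-deficit // machine_capacity)  # ceiling division
    return 0
```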

03-01-2019 publication date

Technologies for scalable network packet processing with lock-free rings

Number: US20190007330A1
Assignee: Intel Corp

Technologies for network packet processing include a computing device that receives incoming network packets. The computing device adds the incoming network packets to an input lockless shared ring, and then classifies the network packets. After classification, the computing device adds the network packets to multiple lockless shared traffic class rings, with each ring associated with a traffic class and output port. The computing device may allocate bandwidth between network packets active during a scheduling quantum in the traffic class rings associated with an output port, schedule the network packets in the traffic class rings for transmission, and then transmit the network packets in response to scheduling. The computing device may perform traffic class separation in parallel with bandwidth allocation and traffic scheduling. In some embodiments, the computing device may perform bandwidth allocation and/or traffic scheduling on each traffic class ring in parallel. Other embodiments are described and claimed.
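The shared rings above are typically fixed-size single-producer/single-consumer ring buffers where the writer advances a tail index and the reader advances a head index, with no lock. The sketch below shows the structure only; in real C/DPDK-style code the indices would be atomic with memory barriers, which plain Python cannot express.

```python
class SPSCRing:
    """Minimal single-producer/single-consumer ring buffer: one writer
    advances `tail`, one reader advances `head`, and neither touches
    the other's index."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # consumer index (monotonically increasing)
        self.tail = 0  # producer index (monotonically increasing)

    def enqueue(self, item):
        if self.tail - self.head == self.capacity:
            return False  # ring full
        self.buf[self.tail % self.capacity] = item
        self.tail += 1
        return True

    def dequeue(self):
        if self.head == self.tail:
            return None  # ring empty
        item = self.buf[self.head % self.capacity]
        self.head += 1
        return item
```

Classification then becomes: dequeue from the input ring, pick the traffic-class ring for the packet, and enqueue there.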

03-01-2019 publication date

Openflow Match and Action Pipeline Structure

Number: US20190007331A1
Assignee:

An embodiment of the invention includes a packet processing pipeline. The packet processing pipeline includes match and action stages. Each match and action stage incurs a match delay when match processing occurs and an action delay when action processing occurs. A transport delay occurs between successive match and action stages when data is transferred from a first match and action stage to a second match and action stage.

1. A packet processing pipeline comprising: a plurality of match and action stages, wherein each of the match and action stages in the plurality of match and action stages comprises: a plurality of match tables, wherein each of the match tables in the plurality of match tables is assigned a physical table ID identifying each of the match tables as a specific physical table; wherein each of the match tables in the plurality of match tables is assigned a logical table ID identifying each of the match tables as a specific logical table; and a physical table ID configuration register, the physical table ID configuration register identifying which of the physical tables is assigned to which logical table; wherein each of the logical tables in a match and action stage accepts physical signals from all of the physical tables with their associated action units in the match and action stage, wherein the physical signals are selected through a multiplexer controlled by the physical table ID configuration register; where the physical signals include a table successor ID from each of the physical tables with associated action units.
2. The packet processing pipeline of claim 1, wherein each of the match and action stages accepts a start-adr table ID from a prior match and action stage; wherein the start-adr table ID is input to a start-adr inverse thermometer decoder with enable; wherein logical table ID bits of the start-adr table ID drive a code input of the inverse thermometer ...
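The three delays named above compose additively along the pipeline: every stage pays a match delay and an action delay, and a transport delay is paid between each pair of successive stages. A toy latency model (an illustration, not the patent's timing analysis):

```python
def pipeline_latency(num_stages, match_delay, action_delay,
                     transport_delay):
    """Total pipeline latency: each stage contributes its match and
    action delays; transport delay is paid (num_stages - 1) times,
    once between each pair of successive stages."""
    return (num_stages * (match_delay + action_delay)
            + (num_stages - 1) * transport_delay)
```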

03-01-2019 publication date

PACKET SERVICING PRIORITY BASED ON COMMUNICATION INITIALIZATION

Number: US20190007333A1
Assignee:

Techniques directed to servicing communications based on when communication sessions are initialized for nodes are described. For example, a routing device may prioritize packets in a buffer according to when nodes have initiated communication sessions with a service provider or another node. The routing device may give priority to nodes that have first initiated communication sessions. This may avoid communication sessions ending prematurely due to time-out periods and/or avoid delays in completing communication sessions.

1. A method comprising: receiving, by a router, a first Protocol Data Unit (PDU) designated to be sent to a first node, the first PDU being associated with a first communication session; receiving, by the router, a second PDU designated to be sent to a second node, the second PDU being associated with a second communication session, the router acting as a parent to the first node and the second node; storing, by the router, the first PDU and the second PDU in a buffer; generating, by the router, a tracking list indicating that the second communication session associated with the second node was initiated before the first communication session associated with the first node; determining, by the router, that the second node has a higher priority than the first node based at least in part on the tracking list; and sending, by the router, the second PDU to the second node.
2. The method of claim 1, wherein: the receiving the second PDU comprises receiving the second PDU after receiving the first PDU; and the storing comprises storing the first PDU in the buffer before storing the second PDU in the buffer.
3. The method of claim 1, further comprising: based at least in part on determining that the second node has the higher priority than the first node, searching the buffer for the second PDU that is designated to be sent to the second node; identifying the second PDU in the buffer based at least in part on the searching; and prioritizing the second PDU higher in ...
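The tracking-list rule above (serve the node whose session started earliest) reduces to a minimum over session start times. The PDU and tracking-list shapes are illustrative assumptions.

```python
def next_pdu(buffer, session_started_at):
    """Among buffered PDUs, pick the one whose destination node
    initiated its communication session earliest (the tracking list
    is modeled as a node -> start-time mapping)."""
    return min(buffer, key=lambda pdu: session_started_at[pdu["node"]])
```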

03-01-2019 publication date

CONTROLLING FAIR BANDWIDTH ALLOCATION EFFICIENTLY

Number: US20190007338A1
Assignee:

Micro-schedulers control bandwidth allocation for clients, each client subscribing to a respective predefined portion of bandwidth of an outgoing communication link. A macro-scheduler controls the micro-schedulers, by allocating the respective subscribed portion of bandwidth associated with each respective client that is active, by a predefined first deadline, with residual bandwidth that is unused by the respective clients being shared proportionately among respective active clients by a predefined second deadline, while minimizing coordination among micro-schedulers by the macro-scheduler periodically adjusting respective bandwidth allocations to each micro-scheduler.

1. A computer system, comprising: at least one processor; a network interface card configured to provide a bandwidth of an outgoing communication link shared by multiple clients individually subscribing to a predefined portion of the bandwidth; and a memory operatively coupled to the at least one processor and the network interface card, the memory containing instructions that are executable by the at least one processor to cause the computer system to: allocate the respective predefined portion of the bandwidth subscribed to each client that is active on transmitting via the outgoing communication link, by a predefined first deadline; subsequent to an initial period, allocate residual bandwidth of the outgoing communication link that is unused by the respective clients in an amount proportionate among respective clients that are active by a predefined second deadline, wherein the predefined first deadline and the predefined second deadline differ by a time interval of the initial period; and periodically adjust respective allocation of the bandwidth to each of the clients by repeating the allocating the respectively predefined portion or the allocating the residual bandwidth operations over additional periods subsequent to the initial period.
2. The computer system of claim 1, wherein ...
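The two-phase split (subscribed share first, then residual shared proportionately among active clients) is plain arithmetic. Sharing the residual in proportion to subscription size is an assumption here; the patent only says "proportionately among respective active clients".

```python
def allocate(link_bw, subscriptions, usage):
    """First grant each active client its subscribed bandwidth, then
    split the leftover link bandwidth among active clients in
    proportion to their subscriptions."""
    active = {c: bw for c, bw in subscriptions.items()
              if usage.get(c, 0) > 0}
    grants = dict(active)                       # phase 1: subscribed share
    residual = link_bw - sum(active.values())
    total = sum(active.values())
    for c in active:                            # phase 2: residual share
        grants[c] += residual * active[c] // total
    return grants
```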

03-01-2019 publication date

MULTIPLEXING METHOD FOR SCHEDULED FRAMES IN AN ETHERNET SWITCH

Number: US20190007344A1
Author: Mangin Christophe
Assignee: Mitsubishi Electric Corporation

The method comprises the steps of: a) providing a plurality of memory buffers, associated to respective indexes of priority, each buffer comprising one queue of frames having a same index of priority, b) sorting the received frames in a chosen buffer according to their index of priority, c) in each buffer, sorting the frames according to their respective timestamps, for ordering the queue of frames in each buffer from the earliest received frame on top of the queue to the latest received frame at the bottom of the queue, and d) feeding the transmitting ports with each frame or block of frame to transmit, in an order determined according to the index of priority of the frame, as well as an order of the frame or of the block of frame in the queue associated to the index of priority of the frame.

1. A method for multiplexing data frames in a packet-switched type network, at least a part of said network comprising one or several switches having: a first plurality of receiving ports, receiving said data frames, and a second plurality of transmitting ports, for transmitting at least blocks of said data frames, each frame including a data field comprising information related to an index of priority for transmitting the frame, wherein a clock is provided to said switches so as to apply a timestamp of reception of each frame in each receiving port, and a memory medium is further provided so as to store transitorily each received frame along with its timestamp, and wherein the method comprises the steps of: a) providing a plurality of memory buffers, associated to respective indexes of priority, each buffer comprising one queue of frames having a same index of priority, b) sorting the received frames in a chosen buffer according to their index of priority, c) in each buffer, sorting the frames according to their respective timestamps, for ordering the queue of frames in each buffer from the earliest received frame on top of the queue to the ...
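Steps a) through d) above amount to a two-key sort: group frames by priority index and order each group by reception timestamp. The convention that a lower index means higher priority is an assumption for this sketch.

```python
def transmit_order(frames):
    """Return frame ids in transmission order: highest priority first
    (lower index assumed higher priority), earliest timestamp first
    within a priority queue."""
    return [f["id"] for f in sorted(frames,
                                    key=lambda f: (f["prio"], f["ts"]))]
```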

03-01-2019 publication date

TECHNOLOGIES FOR EXTRACTING EXTRINSIC ENTROPY FOR WORKLOAD DISTRIBUTION

Number: US20190007347A1
Assignee:

Technologies for distributing network packet workload are disclosed. A compute device may receive a network packet and determine network packet extrinsic entropy information that is based on information that is not part of the contents of the network packet, such as an arrival time of the network packet. The compute device may use the extrinsic entropy information to assign the network packet to one of several packet processing queues. Since the assignment of network packets to the packet processing queues depends at least in part on extrinsic entropy information, similar or even identical packets will not necessarily be assigned to the same packet processing queue.

1. A compute device for distributing network packet workload based on extrinsic entropy, the compute device comprising: a processor; a memory; and a network interface controller to: receive a network packet; determine network packet extrinsic entropy information, wherein the network packet extrinsic entropy information is not based on content of the network packet; select, based on the network packet extrinsic entropy information, a packet processing queue from a plurality of packet processing queues; and assign the network packet to the selected packet processing queue.
2. The compute device of claim 1, wherein the network interface controller is further to determine information associated with a temporal characteristic of the arrival of the network packet at the compute device, wherein to determine the network packet extrinsic entropy information comprises to determine the network packet extrinsic entropy information based on the temporal characteristic of the arrival of the network packet.
3. The compute device of claim 2, wherein the network interface controller is further to determine a timestamp of the arrival time of the network packet, wherein the temporal characteristic of the arrival of the network packet is the timestamp of the arrival time, wherein to select, ...
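The core idea (derive the queue choice from something outside the packet, such as its arrival timestamp) fits in a couple of lines. Hashing the timestamp with CRC32 is an illustrative choice, not the patent's mechanism.

```python
import zlib

def pick_queue(arrival_ns, num_queues):
    """Select a processing queue by hashing only the arrival timestamp,
    so two byte-identical packets arriving at different times can land
    on different queues."""
    return zlib.crc32(arrival_ns.to_bytes(8, "big")) % num_queues
```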

02-01-2020 publication date

COMPUTERIZED METHODS AND SYSTEMS FOR MANAGING CLOUD COMPUTER SERVICES

Number: US20200007456A1
Assignee:

Systems, methods, and other embodiments associated with managing instances of services are described. In one embodiment, a method includes constructing pre-provisioned instances of a service within a first pool and constructing pre-orchestrated instances of the service within a second pool. In response to receiving a request for the service, the method executes executable code of a first pre-orchestrated instance as an executing instance and removes the pre-orchestrated instance from the second pool. A pre-provisioned instance is selected from the first pool to create a second pre-orchestrated instance within the second pool, and the pre-provisioned instance is removed from the first pool.

1. A non-transitory computer-readable medium storing computer-executable instructions that when executed by a processor of a computing device cause the processor to: construct pre-provisioned instances of a service within a first pool of a zone of computing resources, wherein the service executes executable code using the computing resources, and wherein each pre-provisioned instance comprises a computing environment of computing resources configured for subsequent installation and execution of the executable code of the service; construct pre-orchestrated instances of the service within a second pool, wherein each pre-orchestrated instance comprises a computing environment within which the executable code of the service is installed in a non-executing state; and in response to receiving a request for the service: execute the executable code of a first pre-orchestrated instance as an executing instance of the service for remote access over a network by a remote device; remove the pre-orchestrated instance from the second pool; select a pre-provisioned instance from the first pool and create a second pre-orchestrated instance within the second pool by installing the executable code into the pre-provisioned instance, wherein the executable code is in the non-executing state; and remove the pre-provisioned instance from the first ...
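The two-pool flow above (serve from the warm pre-orchestrated pool, backfill it from the colder pre-provisioned pool) can be sketched with two lists standing in for the pools. Names and list-based pools are illustrative assumptions.

```python
def handle_request(pre_provisioned, pre_orchestrated, executing):
    """Serve a service request: promote a pre-orchestrated instance to
    executing, then restock the pre-orchestrated pool by moving one
    pre-provisioned instance over (code installed, still stopped)."""
    instance = pre_orchestrated.pop(0)     # start executing this one
    executing.append(instance)
    if pre_provisioned:                    # keep the warm pool stocked
        promoted = pre_provisioned.pop(0)
        pre_orchestrated.append(promoted)
    return instance
```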

02-01-2020 publication date

TECHNOLOGIES FOR ADAPTIVE NETWORK PACKET EGRESS SCHEDULING

Number: US20200007470A1
Assignee:

Technologies for adaptive network packet egress scheduling include a switch configured to configure an eligibility table for a plurality of ports of the switch, wherein the eligibility table includes a plurality of rounds. The switch is further configured to retrieve an eligible mask corresponding to a round of the plurality of rounds of the eligibility table presently being scheduled and determine a ready mask that indicates a ready status of each port. The switch is further configured to determine, for each port, whether the eligible status and the ready status indicate that port is both eligible and ready, and schedule, in response to a determination that at least one port has been determined to be both eligible and ready, each of the at least one port that has been determined to be both eligible and ready. Additional embodiments are described herein.

1. A switch for adaptive network packet egress scheduling, the switch comprising: a plurality of ports; adaptive schedule configuration management circuitry to configure an eligibility table for the plurality of ports, wherein the eligibility table includes a plurality of rounds; and circuitry to: retrieve an eligible mask corresponding to a round of the plurality of rounds of the eligibility table presently being scheduled, wherein the eligible mask indicates an eligible status of each of the plurality of ports, and wherein the eligible status indicates whether a respective port of the eligible mask is eligible to be serviced in the round; determine a ready mask that indicates a ready status of each of the plurality of ports, wherein the ready status indicates whether a respective port of the ready mask is available to be serviced; determine, for each port, whether the eligible status and the ready status indicate that port is both eligible and ready; and schedule, in response to a determination that at least one port has been determined to be both eligible and ready, each of the at least one port that has been determined ...
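The eligible-and-ready check above is a bitwise AND of the two masks: a port is serviced in a round only if its bit is set in both the round's eligible mask and the current ready mask.

```python
def ports_to_schedule(eligible_mask, ready_mask, num_ports):
    """Return the indices of ports whose bit is set in both the
    round's eligible mask and the current ready mask."""
    both = eligible_mask & ready_mask
    return [p for p in range(num_ports) if both >> p & 1]
```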

12-01-2017 publication date

Efficient Means of Combining Network Traffic for 64Bit and 31Bit Workloads

Number: US20170012889A1
Assignee:

A method, system and computer-usable medium are disclosed for performing a network traffic combination operation. With the network traffic combination operation, a plurality of input queues are defined by an operating system for an adapter based upon workload type (e.g., as determined by a transport layer). Additionally, the operating system defines each input queue to match a virtual memory architecture of the transport layer (e.g., one input queue is defined as 31 bit and other input queue is defined as 64 bit). When data is received off the wire as inbound data from a physical NIC, the network adapter associates the inbound data with the appropriate memory type. Thus, data copies are eliminated and memory consumption and associated storage management operations are reduced for the smaller bit architecture communications while allowing the operating system to continue executing in a larger bit architecture configuration, 16-. (canceled)7. A system comprising:a processor;a data bus coupled to the processor; and providing a memory type attribute, the memory type attribute comprising a first memory type attribute indication for a first memory size and a second memory type attribute indication for a second memory size, the first memory size being different from the second memory size;', 'providing a first input queue, the first input queue being configured according the first memory type attribute indication;', 'providing a second input queue, the second input queue being configured according the second memory type attribute indication;', 'separating network traffic into the first input queue and the second input queue based on memory size addressability., 'a computer-usable medium embodying computer program code, the computer-usable medium being coupled to the data bus, the computer program code used for separating network traffic into processing queues based on memory size addressability and comprising instructions executable by the processor and configured for8. 
...
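The separation of inbound traffic into per-addressability input queues described above can be sketched as follows. This is a minimal illustration: the 31-bit and 64-bit types come from the abstract's example, while the queue structure and function name are assumptions.

```python
from collections import deque

# Hypothetical sketch: the OS defines one input queue per memory
# addressability type, and the adapter enqueues inbound NIC data into
# the queue matching the data's memory-type attribute.
QUEUES = {31: deque(), 64: deque()}

def enqueue_inbound(data: bytes, memory_type_bits: int) -> None:
    """Place inbound data on the input queue matching its memory type."""
    if memory_type_bits not in QUEUES:
        raise ValueError(f"unsupported memory type: {memory_type_bits}-bit")
    QUEUES[memory_type_bits].append(data)

enqueue_inbound(b"legacy-app payload", 31)   # 31-bit addressable traffic
enqueue_inbound(b"modern-app payload", 64)   # 64-bit addressable traffic
```

Keeping the two queues separate is what avoids the copy: data destined for the smaller address space never has to be relocated out of a 64-bit buffer.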

Publication date: 14-01-2016

PACKET DETECTION AND BANDWIDTH CLASSIFICATION FOR VARIABLE-BANDWIDTH PACKETS

Number: US20160014005A1
Assignee:

A receiver receives packets without prior knowledge of their bandwidths. The receiver calculates a first auto-correlation function for a first channel, a second auto-correlation function for a second channel, and a dot product of the first auto-correlation function and the second auto-correlation function. A packet is detected and its bandwidth classified based at least in part on the dot product. 1. A method of packet detection and bandwidth classification, comprising: calculating a first auto-correlation function for a first channel; calculating a second auto-correlation function for a second channel; calculating a dot product of the first auto-correlation function and the second auto-correlation function; and detecting a packet, the detecting comprising classifying a bandwidth of the packet based at least in part on the dot product. 2. The method of claim 1, wherein: the first auto-correlation function is a first averaged auto-correlation function; the second auto-correlation function is a second averaged auto-correlation function; calculating the first averaged auto-correlation function comprises generating a first unaveraged auto-correlation function for the first channel in accordance with a predefined delay and taking a moving average of the first unaveraged auto-correlation function; and calculating the second averaged auto-correlation function comprises generating a second unaveraged auto-correlation function for the second channel in accordance with the predefined delay and taking a moving average of the second unaveraged auto-correlation function. 3. The method of claim 2, wherein the predefined delay corresponds to a training-field periodicity for the packet. 4. The method of claim 1, wherein classifying the bandwidth comprises determining that the bandwidth of the packet includes both the first and second channels, based at least in part on the dot product satisfying a first threshold. 5. The method of claim 4, wherein the first auto-correlation ...
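The detection scheme above, a delayed auto-correlation per channel, a moving average, then a dot product compared to a threshold, can be sketched as follows. The delay, window, threshold, and the normalization are illustrative assumptions, not the patent's exact parameters.

```python
import numpy as np

def delayed_autocorr(x, delay, window):
    """Unaveraged delayed auto-correlation, then a moving average over `window`."""
    c = x[delay:] * np.conj(x[:-delay])          # correlate with the delayed signal
    kernel = np.ones(window) / window
    return np.convolve(c, kernel, mode="valid")  # moving average

def classify_bandwidth(ch1, ch2, delay=16, window=32, threshold=0.5):
    """Label the packet 'wide' (occupying both channels) if the normalized
    dot product of the two channels' averaged auto-correlations clears the
    threshold, else 'narrow'. The delay would correspond to the packet's
    training-field periodicity (claim 3)."""
    a1 = delayed_autocorr(ch1, delay, window)
    a2 = delayed_autocorr(ch2, delay, window)
    dot = np.abs(np.vdot(a1, a2)) / (np.linalg.norm(a1) * np.linalg.norm(a2) + 1e-12)
    return "wide" if dot >= threshold else "narrow"
```

A training field that repeats with the chosen delay on both channels drives the normalized dot product toward 1, while an idle second channel keeps it near 0.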

Publication date: 14-01-2016

METHOD FOR ALLOCATING FRAME TRANSMISSION TIME SLOTS

Number: US20160014046A1
Assignee: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)

Distributed frame transmission method for a local client in a Local switched Network, the method comprising the steps of: determining a number of frame transmission time slots based on the number of local clients in the Network and a Time Distribution Window (TDW); establishing an Identity Number (ID) of a specific receiving client; and allocating a specific frame transmission time slot among said number of frame transmission time slots for transmitting frames to said specific receiving client from a buffer queue dedicated to said specific receiving client based on an ID of the local client, the established ID of said receiving client and the total number of local clients in the Local switched Network. 1. A distributed frame transmission method for a local client in a Local switched Network, the method comprising the steps of: determining a number of frame transmission time slots based on the number of local clients in the Local switched Network and a Time Distribution Window (TDW); establishing an Identity Number (ID) of a specific receiving client; allocating a specific frame transmission time slot among said number of frame transmission time slots for transmitting frames to said specific receiving client from a buffer queue dedicated to said specific receiving client based on an ID of the local client, the established ID of said receiving client and the total number of local clients in the Local switched Network. 2. The method according to claim 1, wherein the step of determining a number of frame transmission time slots comprises: obtaining a pre-determined Time Distribution Window; dividing said obtained Time Distribution Window equally among the local clients of the Local switched Network to thereby obtain a number of equally sized time slots corresponding to the number of transmitting local clients. 3.
The method according to claim 1, wherein the step of establishing an ID of a receiving client is performed by using a register comprising a List of Local Addresses ...
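A toy sketch of the allocation: the TDW is split equally among the N clients, and a slot index is derived from the sender's and receiver's IDs. The `(receiver - sender) mod N` rule is an assumed example of such a derivation, not the patent's exact formula.

```python
def allocate_slot(tdw_ms: float, my_id: int, receiver_id: int, n_clients: int):
    """Divide the Time Distribution Window equally into n_clients slots and
    pick a slot from the sender and receiver IDs. The modular rule below is
    a hypothetical choice that keeps senders to the same receiver on
    distinct slots; returns (slot index, start offset in ms)."""
    slot_len = tdw_ms / n_clients                  # equally sized slots
    slot_index = (receiver_id - my_id) % n_clients # assumed derivation rule
    return slot_index, slot_index * slot_len
```

Because each sender computes the slot locally from IDs it already knows, no central arbiter is needed, which matches the "distributed" framing of the method.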

Publication date: 10-01-2019

Network Flow Control Method And Network Device

Number: US20190014053A1
Author: HUANG QUN, Huang Yong
Assignee:

Embodiments of this application provide a network flow control method and a network device. The method includes: receiving a packet flow; determining, based on a service type of the packet flow, a service pipeline used for transmitting the packet flow, where service types of all packet flows in the service pipeline are the same; and based on a bandwidth weight allocated to the service type, transferring the packet flow in the service pipeline to a physical port. In the embodiments of this application, packet flows are allocated to different service pipelines based on a service type, and bandwidth weights are allocated, in a centralized manner, to service pipelines that carry a same service type. 1. A network flow control method, comprising: receiving a packet flow; determining, based on a service type of the packet flow, a service pipeline used for transmitting the packet flow, wherein service types of all packet flows in the service pipeline are the same; and based on a bandwidth weight allocated to the service type, transferring the packet flow using the service pipeline. 2. The method according to claim 1, wherein before the transferring of the packet flow using the service pipeline, the method further comprises: performing in-pipeline scheduling on the packet flows in the service pipeline based on the service type of the packet flow in the service pipeline. 3.
The method according to claim 2, wherein the performing in-pipeline scheduling on the packet flows in the service pipeline based on the service type of the packet flow in the service pipeline comprises one of the following: if the service type of the packet flow in the service pipeline is a throughput-sensitive service, performing scheduling on the packet flows in the service pipeline in a first-in-first-out queue scheduling manner; if the service type of the packet flow in the service pipeline is a deadline-sensitive service, determining a priority of each packet flow based on a deadline of each service ...
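Per-service-type pipelines with bandwidth weights, FIFO for throughput-sensitive traffic and deadline-first for deadline-sensitive traffic, could look roughly like this. The service-type names and the weighted random pipeline selection are assumptions standing in for the patent's weight-based transfer to the physical port.

```python
import random
from collections import deque

class ServicePipelines:
    """Sketch: one pipeline (queue) per service type; the egress picks a
    non-empty pipeline with probability proportional to its bandwidth
    weight. Deadline-sensitive pipelines are drained earliest-deadline-
    first, throughput-sensitive ones FIFO."""
    def __init__(self, weights):
        self.weights = weights                       # service_type -> weight
        self.pipes = {t: deque() for t in weights}

    def enqueue(self, service_type, packet, deadline=None):
        self.pipes[service_type].append((deadline, packet))

    def dequeue(self):
        ready = [t for t, q in self.pipes.items() if q]
        if not ready:
            return None
        t = random.choices(ready, [self.weights[x] for x in ready])[0]
        q = self.pipes[t]
        if t == "deadline":                          # earliest deadline first
            item = min(q, key=lambda dp: dp[0])
            q.remove(item)
        else:                                        # FIFO for throughput
            item = q.popleft()
        return item[1]
```

A real implementation would enforce the weights with a deficit counter rather than random draws; the random choice just keeps the sketch short.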

Publication date: 10-01-2019

Application and Network Aware Adaptive Compression for Better QoE of Latency Sensitive Applications

Number: US20190014055A1
Assignee: Citrix Systems Inc

This disclosure is directed to embodiments of systems and methods for performing compression of data in a queue. A device intermediary between a client and a server may determine that a length of time to move existing data maintained in a queue from the queue exceeds a predefined threshold. The device may identify, responsive to the determination, a first quantity of the existing data to undergo compression, and a second quantity of the existing data according to a compression ratio of the compression. The device may reserve, according to the second quantity, a first portion of the queue that maintained the first quantity of the existing data, to place compressed data obtained from applying the compression on the first quantity of the existing data. The device may place incoming data into the queue beyond the reserved first portion of the queue.

Publication date: 15-01-2015

QUEUE CREDIT MANAGEMENT

Number: US20150016254A1
Assignee:

To prevent buffer overflow, a receiving entity may use credits to control the total amount of packets any single transmitting entity can forward. Once the assigned credits are spent, the transmitting entity cannot send data portions to the receiving entity until additional credits are provided. However, the logic in the transmitting entity may be designed to manage a maximum number of credits that is less than the capacity of the buffer in the receiving entity. For example, the transmitting entity is designed to manage a maximum of eight credits but the buffer has room for twelve data portions. To use the buffer efficiently, the receiving entity may identify when extra buffer storage is available and provide additional credits. In addition, the receiving entity may control when the credits are provided such that the transmitting entity is not allocated more credits than it was designed to manage. 1. A method comprising: providing a number of credits to a first module not in excess of a maximum number of credits the first module is designed to manage, wherein the first module maintains a credit count that is decreased each time a data packet is transmitted from the first module to a second module; storing a plurality of received data packets from the first module in a memory buffer in the second module, the memory buffer having predefined memory locations dedicated to storing data packets received only from the first module, wherein the number of memory locations exceeds the maximum number of credits the first module is designed to manage; and upon determining that the credit count of the first module is less than the maximum number of credits and that there is available space in the memory locations, providing an extra credit to the first module, thereby increasing the credit count. 2.
The method of claim 1, further comprising: performing arbitration to determine if one of the received data packets stored in the memory buffer is to be forwarded on an output link of the ...
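The credit scheme above, a 12-slot receive buffer paired with a transmitter that can only track 8 credits, can be sketched as follows. The sizes come from the abstract's example; the exact grant condition is an assumption.

```python
class CreditedLink:
    """Sketch of credit management where the receiver's buffer (12 slots)
    exceeds the maximum credits the transmitter can track (8). An extra
    credit is granted only while the tracked count is below the maximum
    and uncommitted buffer space remains."""
    def __init__(self, max_credits=8, buffer_slots=12):
        self.max_credits = max_credits
        self.credits = max_credits        # transmitter's credit count
        self.buffer = []                  # receiver-side buffered packets
        self.buffer_slots = buffer_slots

    def transmit(self, pkt):
        if self.credits == 0:
            return False                  # must wait for a credit
        self.credits -= 1
        self.buffer.append(pkt)
        return True

    def maybe_grant_extra_credit(self):
        # Receiver returns one credit when the transmitter can track it
        # and the buffer has room beyond what is already committed.
        if self.credits < self.max_credits and len(self.buffer) + self.credits < self.buffer_slots:
            self.credits += 1
```

After the initial eight credits are spent, the transmitter stalls until the receiver, seeing four spare slots, tops the count back up one credit at a time.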

Publication date: 15-01-2015

Traffic Management with Ingress Control

Number: US20150016266A1
Assignee:

One embodiment provides a network device. The network device includes a processor including at least one processor core; a network interface configured to transmit and receive packets at a line rate; a memory configured to store a scheduler hierarchical data structure; and a scheduler module. The scheduler module is configured to prefetch a next active pipe structure, the next active pipe structure included in the hierarchical data structure, update credits for a current pipe and an associated subport, identify a next active traffic class within the current pipe based, at least in part, on a current pipe data structure, select a next queue associated with the identified next active traffic class, and schedule a next packet from the selected next queue for transmission by the network interface if available traffic shaping token bucket credits and available traffic class credits are greater than or equal to a next packet's credits. 1. A network device, comprising: a processor comprising at least one processor core; a network interface configured to transmit and receive packets at a line rate; a memory configured to store a scheduler hierarchical data structure; and a scheduler module configured to prefetch a next active pipe structure, the next active pipe structure included in the hierarchical data structure, update credits for a current pipe and an associated subport, identify a next active traffic class within the current pipe based, at least in part, on a current pipe data structure, select a next queue associated with the identified next active traffic class, and schedule a next packet from the selected next queue for transmission by the network interface if available traffic shaping (TS) token bucket credits and available traffic class credits are greater than or equal to a next packet's credits. 2.
The network device of claim 1, wherein the scheduler module is further configured to identify the next active pipe based, at least in part, on an active ...
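The scheduler's gating condition, send only when both the pipe's traffic-shaping token bucket and the traffic-class credits cover the packet's cost, can be sketched as below. Rates, bucket sizes, and credit units are illustrative.

```python
class TokenBucket:
    """Minimal traffic-shaping token bucket for the scheduling gate below."""
    def __init__(self, rate, size):
        self.rate, self.size = rate, size
        self.tokens, self.t = size, 0.0

    def update(self, now):
        # refill proportionally to elapsed time, capped at bucket size
        self.tokens = min(self.size, self.tokens + (now - self.t) * self.rate)
        self.t = now

def try_schedule(pkt_credits, tb, tc_credits, now):
    """Sketch of the gate: transmit only if both the pipe's token bucket
    and the traffic-class credits are >= the packet's credit cost.
    Returns (scheduled?, remaining traffic-class credits)."""
    tb.update(now)
    if tb.tokens >= pkt_credits and tc_credits >= pkt_credits:
        tb.tokens -= pkt_credits
        return True, tc_credits - pkt_credits
    return False, tc_credits
```

Charging both budgets at once is what lets the hierarchy enforce a pipe rate and a per-traffic-class share simultaneously.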

Publication date: 10-01-2019

A Mobile Terminal, a Buffering Module, and Methods Therein for Uploading a File in a Communications Network

Number: US20190014502A1
Assignee: Telefonaktiebolaget LM Ericsson AB

According to a first aspect of embodiments herein, the object is achieved by a method performed by a mobile terminal for uploading a file in a communications network. After the mobile terminal (201) has selected a buffering module to be used for uploading the file, it sends (202) to the selected buffering module, the file to be uploaded in the communications network. The mobile terminal receives (204) a notification from the selected buffering module. The notification is about an estimated time to any one or more out of: upload start and upload end. The mobile terminal then refrains (205) from re-sending to the selected buffering module the file to be uploaded in the communications network until the estimated time to any one or more out of: upload start and upload end has expired.

Publication date: 14-01-2021

TECHNIQUES FOR SCHEDULING MULTIPATH DATA TRAFFIC

Number: US20210014171A1
Assignee:

A multipath scheduler device for scheduling data traffic includes: at least one first type data path; at least one second type data path; and a scheduler configured to schedule a first portion of the data traffic for transmission via the at least one first type data path and to schedule a second portion of the data traffic for delayed transmission via the at least one second type data path. 1. A multipath scheduler device for scheduling data traffic, the multipath scheduler device comprising: at least one first type data path; at least one second type data path; and a scheduler configured to schedule a first portion of the data traffic for transmission via the at least one first type data path and to schedule a second portion of the data traffic for delayed transmission via the at least one second type data path. 2. The multipath scheduler device of claim 1, wherein the scheduler is configured to schedule the second portion of the data traffic for delayed transmission via the at least one second type data path based on a capacity threshold of the at least one first type data path and/or based on a traffic distribution function for distributing the data traffic between the at least one first type data path and the at least one second type data path. 3. The multipath scheduler device of claim 1, wherein data traffic transmission via the at least one second type data path is more expensive than data traffic transmission via the at least one first type data path with respect to latency, reliability, capacity, complexity and/or cost. 4. The multipath scheduler device of claim 1, wherein the scheduler is configured to schedule the second portion of the data traffic for delayed transmission via the second type data path based on a bursty traffic pattern of the data traffic being detected, wherein the scheduler is configured to detect a bursty traffic pattern based on a peak behavior of data traffic scheduled for transmission via the at ...
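A minimal sketch of the split: the first-type (cheaper) path is filled up to its capacity threshold and the overflow, e.g. the tail of a burst, is deferred to the second-type path. The per-interval packet budget is an assumed model of the capacity threshold.

```python
def schedule_multipath(packets, fast_capacity):
    """Sketch: schedule up to `fast_capacity` packets on the first-type
    path this interval; the remainder is held for delayed transmission
    on the second-type path. Capacity is in packets per interval and is
    illustrative."""
    first = packets[:fast_capacity]
    delayed = packets[fast_capacity:]   # sent later via the second path
    return first, delayed
```

Under a bursty pattern, `delayed` is non-empty exactly when the burst peak exceeds the first path's threshold, which matches the claim-4 trigger.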

Publication date: 09-01-2020

Self-adjusting control loop

Number: US20200014581A1
Author: James Fan, Jeffrey Aaron
Assignee: AT&T INTELLECTUAL PROPERTY I LP

In one embodiment, a method includes monitoring, by a control loop including a processor and a memory, a first environment. The control loop includes one or more predetermined control loop parameters. The method also includes receiving, by the control loop and in response to monitoring the first environment, first data from the first environment and receiving, by the control loop, information from an adaptation control loop. The method also includes determining, by the control loop, to automatically adjust at least one of the one or more predetermined control loop parameters based at least in part on the information received from the adaptation control loop and automatically adjusting, by the control loop, the one or more predetermined control loop parameters. The method further includes determining, by the control loop, to initiate an action based on the first data collected from the first environment and the one or more adjusted control loop parameters.

Publication date: 09-01-2020

METHOD, ENTITY AND PROGRAM FOR TRANSMITTING COMMUNICATION SIGNAL FRAMES

Number: US20200014778A1
Author: Mangin Christophe
Assignee: Mitsubishi Electric Corporation

The invention relates to a method implemented by a communicating entity in a packet-switched network, comprising at least one port for transmitting communication signal frames comprising a first type of frames, intended to be transmitted in a plurality of streams for which a traffic shaping is defined, and a second type of frames, for which no traffic shaping is defined, each frame being able to be fragmented so as to transmit a fragment only of a frame of said second type. The communicating entity stores a plurality of first queues of frames of the first type, the first queues being associated respectively to said plurality of streams, and at least one second queue for frames of the second type. The entity further schedules transmissions of first type frames, and between at least two first type frames, transmission of at least a fragment of at least one second type frame. 2. The method of claim 1, wherein each of said first and second queues is stored in a first-in-first-out type buffer (BUFF), and wherein a queue to which a first type frame is assigned is determined according to the stream to which said first type frame belongs, depending on features of said first type frame. 3. The method of claim 2, wherein said features of said first type frame include a frame length and a current rate of the stream to which said first type frame belongs, and wherein a time taken for the transmission of each first type frame in its queue is calculated and respective transmission times are scheduled for said first type frames according to said respective times taken for the transmissions of the first type frames. 4. The method of claim 2, wherein a frame belonging to a shaped stream is selected for transmission by comparing scheduled transmission times (TTT_i) of frames placed at the head of each first queue, the frame associated with the smallest scheduled transmission time (min_i TTT_i) being selected as the next candidate for transmission. 5.
The ...
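Claim 4's selection rule, comparing the scheduled transmission times of the head frames of the shaped-stream queues and picking the smallest, can be sketched as follows. Deriving TTT from frame length and the stream's current rate follows claim 3; the exact timing model is an assumption.

```python
from collections import deque

def select_next_frame(queues, rates):
    """Sketch: each shaped stream i holds a FIFO of (length_bits,
    ready_time) head entries and a rate in bits/s; the head frame with
    the smallest scheduled transmission time TTT_i is the next
    candidate. Returns the winning stream id, or None if all empty."""
    best, best_ttt = None, float("inf")
    for i, q in queues.items():
        if not q:
            continue
        length, ready_time = q[0]
        ttt = ready_time + length / rates[i]  # time taken at the stream's rate
        if ttt < best_ttt:
            best, best_ttt = i, ttt
    return best
```

Fragments of unshaped second-type frames would then be slotted into the gaps between the winners chosen by this rule.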

Publication date: 21-01-2016

BANDWIDTH ZONES IN SYSTEM HAVING NETWORK INTERFACE COUPLED TO NETWORK WITH WHICH A FIXED TOTAL AMOUNT OF BANDWIDTH PER UNIT TIME CAN BE TRANSFERRED

Number: US20160021027A1
Author: Ong David T.
Assignee:

A bandwidth management system includes a plurality of queues respectively corresponding to a plurality of zones. An enqueuing module receives network traffic from one or more incoming network interfaces, determines a belonging zone to which the network traffic belongs, and enqueues the network traffic on a queue corresponding to the belonging zone. A dequeuing module selectively dequeues data from the queues and passes the data to one or more outgoing network interfaces. When dequeuing data from the queues the dequeuing module dequeues an amount of data from a selected queue, and the amount of data dequeued from the selected queue is determined according to user load of a zone to which the selected queue corresponds. 1. A system comprising: one or more first network interfaces coupled to a first network with which a fixed total amount of bandwidth per unit time can be transferred; one or more second network interfaces coupled to a second network; a plurality of queues, each of the queues corresponding to a respective one of a plurality of bandwidth zones, the bandwidth zones including a plurality of first level guaranteed bandwidth zones and only one first level remaining bandwidth zone not entitled to any guaranteed bandwidth; and one or more processors operable to: determine a belonging zone to which network traffic received from either of the first or second network interfaces belongs; enqueue the network traffic on a queue corresponding to the belonging zone; and cycle through the queues, dequeue data, and thereafter pass the dequeued data to one of the first or second network interfaces for transmission to a destination network address; wherein, when dequeuing data from a particular queue, the one or more processors are operable to automatically determine an amount of data to dequeue from the particular queue according to a bandwidth limit for the particular queue; the bandwidth limit for each of the queues corresponding to the first level guaranteed bandwidth zones ...
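The dequeuing pass, cycling through the per-zone queues and taking up to each zone's bandwidth limit, can be sketched as below. Per-cycle byte budgets are an assumed stand-in for the per-zone bandwidth limits.

```python
from collections import deque

def dequeue_cycle(queues, limits):
    """Sketch of one dequeuing cycle: visit every zone's queue in turn and
    dequeue whole packets while they fit in the zone's byte budget for
    this cycle. Guaranteed zones get a positive budget; the single
    remaining-bandwidth zone's budget would be whatever is left over."""
    out = []
    for zone, q in queues.items():
        budget = limits[zone]
        while q and len(q[0]) <= budget:
            pkt = q.popleft()
            budget -= len(pkt)
            out.append((zone, pkt))
    return out
```

Scaling a zone's budget by its user load reproduces the abstract's behavior of dequeuing more data from more heavily loaded zones.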

Publication date: 19-01-2017

Method And Apparatus For Managing Network Congestion

Number: US20170019343A1
Assignee:

A manner of managing congestion in a data-traffic network. In one embodiment a network node such as a bridge, switch, or router includes an AQM having a PI controller configured to calculate p′ using the difference between Q and a Target Q, wherein p′ is p^0.5 and p is the probability that a received packet will be dropped or marked, and some drop decision functions are configured to indicate that the node should drop a received packet by comparing p′ to two random values. A marking decision function may also be present and configured to indicate that the node should mark a received packet by comparing p′ to one random value. A congestion control classifier, which is in some embodiments an ECN classifier, is also present to classify a received packet and facilitate making the proper dropping or marking decision. 1. A method of data traffic congestion management, comprising: receiving data packets; placing at least a portion of the received packets in a queue buffer; measuring the load of the queue buffer to extract at least one queue parameter Q; providing the at least one queue parameter to an AQM; calculating p′ as a proportional and integral control function on the Q parameter and a target Q parameter, wherein p′ is p^0.5 and p is the probability that a received packet will be dropped or marked; determining whether to apply a drop decision or mark decision to a received packet; and determining, if applying a drop decision, whether to drop a received packet by a drop probability that is proportional with p′^2. 2. The congestion-management method of claim 1, wherein a drop determination is made when p′ is larger than two random values. 3. The congestion-management method of claim 2, wherein one of the random values is generated for each received packet and the other random value is a random value applied to the previous received packet. 4. The congestion-management method of claim 1, further comprising dropping a received packet when a drop ...
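The described AQM resembles a PI²-style controller: a PI update steers p′ (which is √p) from the gap between the queue load Q and its target, and comparing p′ against two random values yields an effective drop probability proportional to p′² = p. A sketch with illustrative gains, using a fresh random draw per comparison rather than the claim-3 variant that reuses the previous packet's draw:

```python
import random

class PiSquareAqm:
    """Sketch of the described AQM. `p_prime` is sqrt(p); the PI update
    combines a proportional term on (Q - target) with a term on the
    change in Q. Gains and target are illustrative, not from the patent."""
    def __init__(self, target_q, alpha=0.01, beta=0.1):
        self.target_q, self.alpha, self.beta = target_q, alpha, beta
        self.p_prime, self.prev_q = 0.0, 0.0

    def update(self, q):
        # proportional + integral-style action on the queue error
        self.p_prime += self.alpha * (q - self.target_q) + self.beta * (q - self.prev_q)
        self.p_prime = min(1.0, max(0.0, self.p_prime))
        self.prev_q = q

    def should_drop(self):
        # drop only if p' beats BOTH draws: effective probability p'^2 = p
        return self.p_prime > random.random() and self.p_prime > random.random()

    def should_mark(self):
        # ECN marking compares p' to a single draw: probability p'
        return self.p_prime > random.random()
```

The squaring trick is what lets one controller serve both classic (drop, probability p′²) and ECN (mark, probability p′) traffic without separate state.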

Publication date: 03-02-2022

HYBRID PACKET MEMORY FOR BUFFERING PACKETS IN NETWORK DEVICES

Number: US20220038384A1
Assignee:

A network device processes received packets to determine a port or ports of the network device via which to transmit the packets. The network device classifies the packets into packet flows and selects, based at least in part on one or more characteristics of data being transmitted in the respective packet flows, a first packet memory having a first memory access bandwidth or a second packet memory having a second memory access bandwidth, and buffers the packets in the selected first or second packet memory while the packets are being processed by the network device. After processing the packets, the network device retrieves the packets from the first packet memory or the second packet memory in which the packets are buffered, and forwards the packets to the determined one or more ports for transmission of the packets. 1. A method for processing packets in a network device, the method comprising: receiving, at a packet processor of the network device, packets ingressing via a network port among a plurality of network ports of the network device; processing, with the packet processor, the packets at least to determine one or more network ports, of the plurality of network ports, via which the packets are to be transmitted from the network device; classifying, with the packet processor according at least in part to source address information and destination address information obtained from headers of the packets, the packets into packet flows; selecting, with the packet processor based at least in part on one or more characteristics of data being transmitted in the respective packet flows, one of i) a first packet memory having a first memory access bandwidth and ii) a second packet memory having a second memory access bandwidth different from the first memory access bandwidth of the first packet memory, for buffering packets that belong to the respective packet flows while the packets are being processed by the network device, the one or more data characteristics being
...

Publication date: 03-02-2022

Tag-based data packet prioritization in dual connectivity systems

Number: US20220038556A1
Assignee: T Mobile USA Inc

A component of a cellular communication system is configured to prioritize data packets based on packet tags that have been associated with the data packets. The packet tags may comprise an application identifier and a customer identifier, as examples. A Packet Data Convergence Protocol (PDCP) layer of a radio protocol stack receives a data packet and associated packet tags and assigns the data packet to a preferred transmission queue or a non-preferred transmission queue, based on the packet tags associated with the data packet. In order to manage queue overflows, data packets of the non-preferred transmission queue may be discarded when they have been queued for more than a predetermined length of time. Data packets of the preferred transmission queue, however, are retained regardless of how long they have been queued.
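The tag-based queueing, preferred packets retained indefinitely, non-preferred packets discarded once queued longer than a limit, can be sketched as follows. The tag sets and the max-age value are illustrative.

```python
import collections

class PdcpQueues:
    """Sketch of tag-based prioritization at the PDCP layer: packets whose
    tags (e.g. an application-ID / customer-ID pair) intersect a preferred
    set go to the preferred queue; non-preferred packets are discarded
    once queued longer than max_age seconds."""
    def __init__(self, preferred_tags, max_age=0.5):
        self.preferred_tags = set(preferred_tags)
        self.max_age = max_age
        self.preferred = collections.deque()
        self.other = collections.deque()   # entries are (enqueue_time, packet)

    def enqueue(self, packet, tags, now):
        if set(tags) & self.preferred_tags:
            self.preferred.append(packet)  # retained regardless of age
        else:
            self.other.append((now, packet))

    def expire(self, now):
        # overflow management: drop stale non-preferred packets only
        while self.other and now - self.other[0][0] > self.max_age:
            self.other.popleft()
```

Running `expire` before each transmission opportunity bounds the non-preferred queue without ever touching preferred traffic.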

Publication date: 18-01-2018

Credit Loop Deadlock Detection and Recovery in Arbitrary Topology Networks

Number: US20180019947A1
Assignee:

A credit loop that produces a deadlock is identified in a network of switches that are interconnected for packet traffic flows therethrough. The identification is carried out by periodically transmitting respective credit loop control messages from the loop-participating switches via their deadlock-suspected egress ports to respective next-hop switches. The CLCMs have switch port-unique identifiers (SPUIDs). The loop is identified when in one of the next-hop switches the SPUID of a received CLCM is equal to the SPUID of a transmitted CLCM thereof. A master switch is selected for resolving the deadlock. 1. A method of communication, comprising the steps of: in a network of switches that are interconnected for packet traffic flows therethrough, identifying a credit loop that produces a deadlock, the credit loop comprising loop-participating switches having respective deadlock-suspected egress ports, by: periodically transmitting respective credit loop control messages (CLCMs) from the loop-participating switches via the deadlock-suspected egress ports to respective next-hop switches, the CLCMs having switch port-unique identifiers (SPUIDs); in one of the next-hop switches determining that the SPUID of a currently received CLCM is equal to the SPUID of a transmitted CLCM thereof; and thereafter selecting a master switch for resolving the deadlock. 2. The method according to claim 1, further comprising the steps of: in the one next-hop switch making a determination that the SPUID of a currently received CLCM exceeds the SPUID of a previously transmitted CLCM from the one next-hop switch; and responsively to the determination continuing to periodically transmit CLCMs to a subsequent next-hop switch. 3.
The method according to claim 1, further comprising the steps of: in the one next-hop switch making a determination that the SPUID of a currently received CLCM is less than the SPUID of a previously transmitted CLCM from the one next-hop switch; and responsively to the determination ...
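The CLCM circulation maps naturally onto a ring-election-style simulation: each port first transmits its own SPUID, forwards only incoming SPUIDs larger than its last transmitted one (the claim-2 rule), discards smaller ones, and declares the loop when a received SPUID equals its last transmitted SPUID. A sketch under those assumptions; the synchronous round model is an illustration, not the protocol's actual timing.

```python
def detect_credit_loop(spuids):
    """Simulate CLCMs circulating around a suspected credit loop whose
    ports have the given unique SPUIDs (in ring order). Returns the SPUID
    of the port that confirms the loop (the elected master candidate)."""
    n = len(spuids)
    transmitted = list(spuids)            # each port first sends its own SPUID
    msgs = list(spuids)
    while True:
        nxt = [None] * n
        for i in range(n):
            m = msgs[i - 1]               # CLCM arriving from the previous hop
            if m is None:
                continue
            if m == transmitted[i]:
                return spuids[i]          # own/last CLCM came back: loop confirmed
            if m > transmitted[i]:        # claim-2 rule: forward larger SPUIDs
                transmitted[i] = m
                nxt[i] = m                # smaller SPUIDs are discarded
        msgs = nxt
```

Only the largest SPUID survives a full lap, so the port holding it both detects the loop and is a natural master choice.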

Publication date: 18-01-2018

Flow Controller Automatically Throttling Rate of Service Provided by Web API

Number: US20180019950A1
Assignee:

A mechanism is provided in a data processing system for automatically throttling the rate of service provided by a Web application programming interface (API) for a software service. A flow controller executing on the data processing system assigns a queue to each consumer of the software service. Responsive to receiving a current request for the software service from a given consumer of the software service, a flow controller executing on the data processing system adds the current request to a given queue assigned to the given consumer. The flow controller sends a next request from the given queue to the Web API based on a licensed rate of service of the given consumer. 1. A method, in a data processing system, for automatically throttling the rate of service provided by a Web application programming interface (API) for a software service, the method comprising: assigning, by a flow controller executing on the data processing system, a queue to each consumer of the software service; responsive to receiving a current request for the software service from a given consumer of the software service, adding, by the flow controller, the current request to a given queue assigned to the given consumer; and sending, by the flow controller, a next request from the given queue to the Web API based on a licensed rate of service of the given consumer. 2. The method of claim 1, wherein the given queue is a first-in-first-out queue, wherein the flow controller adds the current request at the back of the given queue, and wherein the flow controller sends the next request from the front of the given queue. 3. The method of claim 1, wherein sending the next request based on the licensed rate of service comprises: setting, by the flow controller, a timer associated with the given queue based on the licensed rate of service of the given consumer; and responsive to expiration of the timer, notifying the Web API that a request is ready in the given ...
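The per-consumer FIFO queues released at each consumer's licensed rate can be sketched as follows. The simulated clock and the earliest-release bookkeeping are assumptions standing in for the patent's timer-and-notify mechanism.

```python
from collections import deque

class FlowController:
    """Sketch: each consumer gets a FIFO queue and a licensed rate
    (requests/second); requests are handed to the Web API no faster
    than that rate. Time is an explicit parameter for testability."""
    def __init__(self):
        self.queues = {}    # consumer -> deque of pending requests
        self.rate = {}      # consumer -> licensed requests per second
        self.next_ok = {}   # consumer -> earliest next release time

    def register(self, consumer, licensed_rate):
        self.queues[consumer] = deque()
        self.rate[consumer] = licensed_rate
        self.next_ok[consumer] = 0.0

    def submit(self, consumer, request):
        self.queues[consumer].append(request)   # back of the FIFO

    def release(self, consumer, now):
        """Return the next request for the Web API, or None if throttled."""
        q = self.queues[consumer]
        if not q or now < self.next_ok[consumer]:
            return None
        self.next_ok[consumer] = now + 1.0 / self.rate[consumer]
        return q.popleft()                      # front of the FIFO
```

Because throttling state is per-consumer, one consumer exhausting its licensed rate never delays another consumer's queue.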

Publication date: 18-01-2018

DPCM DATA COMPRESSION USING COMPRESSED DATA TAGS

Number: US20180020033A1
Author: Goh Beng Heng
Assignee: STMICROELECTRONICS ASIA PACIFIC PTE LTD

Disclosed herein is a method including receiving a stream of packets into a buffer, each packet having a processed video data portion and a page count portion, the processed video data portion being a result of a modulo operation performed on a word of video data, and the page count portion being a data page number on which the word of video data is to be placed. Each packet is read from the buffer, and an output packet including the video data portion and a data tag portion is generated therefrom. The data tag portion is associated with, but does not directly represent, the data page number where the word of video data of the processed video data portion or of video data of a processed video data portion of a next packet, is to be placed. Each data tag portion contains fewer bits than each corresponding page count portion. 1. An electronic device, comprising: processing circuitry comprising: a buffer configured to receive a stream of packets, each packet having a processed video data portion and a page count portion associated with the processed video data portion, wherein the processed video data portion is a result of a modulo operation performed on a word of video data, wherein the page count portion is a data page number on which the word of video data is to be placed; and a data packer configured to read each packet from the buffer, and to generate from each an output packet comprising the processed video data portion and a data tag portion; wherein the data tag portion is associated with, but does not directly represent, the data page number on which the word of video data associated with the processed video data portion, or the word of video data associated with a processed video data portion of a next packet, is to be placed; wherein each data tag portion contains fewer bits than each corresponding page count portion; and a transmission chain coupled to the processing circuitry and configured to transmit each output packet. 2.
The electronic device of ...
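The data-tag idea in this abstract can be illustrated with a small sketch (not the patented implementation; the page size, tag width, and tag-update rule are assumed values for illustration): each video word is split into a modulo payload and a page number, and the full page number is replaced by a short tag that only changes when the page changes.

```python
# Illustrative sketch, not the patented implementation: pack words of video
# data into (payload, tag) pairs. The payload is the word reduced modulo the
# assumed page size; the tag is a short counter that changes whenever the
# data page number changes, instead of carrying the full page number.

PAGE_SIZE = 256      # assumed page size
TAG_BITS = 2         # the tag is deliberately shorter than a full page count

def pack(words):
    packets = []
    tag = 0
    last_page = None
    for word in words:
        payload = word % PAGE_SIZE        # "processed video data portion"
        page = word // PAGE_SIZE          # full page count (not transmitted)
        if last_page is not None and page != last_page:
            tag = (tag + 1) % (1 << TAG_BITS)  # short tag signals a page change
        last_page = page
        packets.append((payload, tag))
    return packets

print(pack([5, 260, 270, 515]))  # → [(5, 0), (4, 1), (14, 1), (3, 2)]
```

The receiver can reconstruct page boundaries from tag transitions alone, which is why the tag can use far fewer bits than the page count it replaces.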

Подробнее
Publication date: 17-04-2014

Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forwarding Device

Number: US20140105012A1
Author: Stephen Lau, Tal Lavian
Assignee: ROCKSTAR CONSORTIUM US LP

Responsive to detecting that bandwidth consumption of a packet flow has exceeded a threshold, packet forwarding treatment is changed in accordance with at least one class of packet flow from a first packet forwarding treatment to a second packet forwarding treatment.
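A minimal sketch of the described behavior (the threshold value and the treatment names are assumptions, not from the patent): once a flow's measured bandwidth exceeds the threshold, its packets receive a different forwarding treatment.

```python
# Hypothetical sketch: switch a flow's forwarding treatment once its measured
# bandwidth crosses a configured threshold. Names and values are illustrative.

THRESHOLD_BPS = 1_000_000

def forwarding_treatment(flow_bps, first="priority", second="best-effort"):
    # first/second treatment labels are illustrative placeholders
    return second if flow_bps > THRESHOLD_BPS else first

print(forwarding_treatment(500_000))    # under threshold: first treatment
print(forwarding_treatment(2_000_000))  # over threshold: second treatment
```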

Publication date: 17-04-2014

Pre-fill Retransmission Queue

Number: US20140105219A1
Assignee: Futurewei Technologies, Inc.

A method of discontinuous transmission data communication in a digital subscriber line (DSL) transceiver unit, the method comprising determining that a number of a plurality of bits available to transmit is enough to fill a data transfer unit (DTU), forming a DTU, by a DTU framer, comprising the plurality of bits, transferring the DTU to a retransmission queue, and determining the DTUs from the retransmission queue to be transmitted over a next time period used for transmitting over the DSL subscriber line by the DSL transceiver unit. 1. A method of discontinuous transmission data communication in a digital subscriber line (DSL) transceiver unit , the method comprising:determining that a number of a plurality of bits available to transmit is enough to fill a data transfer unit (DTU);forming a DTU, by a DTU framer, comprising the plurality of bits;transferring the DTU to a retransmission queue; anddetermining the DTUs from the retransmission queue to be transmitted over a next time period used for transmitting over the DSL subscriber line by the DSL transceiver unit.2. The method of claim 1 , wherein the determining that the number of a plurality of bits available to transmit is enough to fill a DTU is performed by a transport protocol specific-transmission convergence (TPS-TC) sub-layer.3. The method of claim 2 , wherein a physical media specific-transmission convergence (PMS-TC) sub-layer comprises the DTU framer and the retransmission queue claim 2 , and wherein the DTU framer is directly connected to the retransmission queue.4. The method of claim 2 , wherein the TPS-TC sub-layer comprises the DTU framer claim 2 , wherein a physical media specific-transmission convergence (PMS-TC) sub-layer comprises the retransmission queue claim 2 , and wherein the DTU framer is directly connected to the retransmission queue.5. The method of claim 1 , wherein no idle DTUs are stored in the retransmission queue.6. A discontinuous transmission data communication digital ...
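The framer-to-retransmission-queue flow can be sketched as follows (the DTU size and the byte accounting are assumptions for illustration): a DTU is formed only when enough bits have accumulated, and goes straight into the retransmission queue, so no idle DTUs are ever stored there.

```python
from collections import deque

DTU_SIZE_BITS = 512  # assumed DTU payload size

class DtuFramer:
    """Sketch of a framer that forms a DTU only when enough bits are buffered,
    then hands it straight to the retransmission queue (no idle DTUs queued)."""
    def __init__(self):
        self.bits = 0
        self.retransmission_queue = deque()

    def offer_bits(self, n):
        self.bits += n
        while self.bits >= DTU_SIZE_BITS:   # enough data to fill a DTU
            self.bits -= DTU_SIZE_BITS
            self.retransmission_queue.append(DTU_SIZE_BITS)

framer = DtuFramer()
framer.offer_bits(300)   # not enough for a DTU yet: nothing queued
framer.offer_bits(800)   # 1100 bits total -> two full DTUs formed
print(len(framer.retransmission_queue))  # → 2
```

The transmitter then drains DTUs from this queue over the next transmission period, which is the selection step the claims describe.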

Publication date: 18-01-2018

FORWARDING NODE SELECTION AND ROUTING FOR DELAY-TOLERANT MESSAGES

Number: US20180020390A1
Assignee:

A wireless relay device may receive a request indicating that a wireless device has one or more delay-tolerant messages to be forwarded to a network. The wireless relay device may send a response message to the wireless device indicative of an estimated time to network contact. The wireless relay device then receives and caches the delay-tolerant messages to be forwarded. The wireless relay device may forward at least one of the one or more delay-tolerant messages. 1. A method for wireless communication , comprising:receiving, at a first wireless relay device, a request indicating that a wireless device has one or more delay-tolerant messages to be forwarded to a network;sending a response message to the wireless device indicative of an estimated time to network contact;receiving and caching the one or more delay-tolerant messages to be forwarded; andforwarding at least one of the one or more delay-tolerant messages.2. The method of claim 1 , further comprising:receiving a message from a second wireless relay device indicating at least an estimated time to network contact for the second wireless relay device; andforwarding the at least one of the one or more delay-tolerant messages to the second wireless relay device.3. The method of claim 2 , further comprising:receiving a cost metric from the second wireless relay device, wherein forwarding the at least one of the one or more delay tolerant messages is based at least in part on the received cost metric.4. The method of claim 1 , whereinsending the response message to the wireless device comprises: including the estimated time to network contact in the response message.5. The method of claim 1 , whereinthe request comprises a delay tolerance or a deadline for when the one or more delay-tolerant messages are to be sent to the network.6. 
The method of claim 5 , whereinsending the response message to the wireless device comprises: comparing the estimated time to network contact with the delay tolerance or deadline; ...
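The relay-selection logic suggested by these claims can be sketched as follows (the tuple layout and the cost-then-time preference are assumptions; the patent only says forwarding may consider the estimated time to network contact and a cost metric):

```python
def choose_relay(relays, deadline):
    """Sketch: keep only relays whose estimated time to network contact meets
    the message's delay tolerance/deadline, then prefer the lowest cost metric
    (tie-break on contact time). Policy details are illustrative assumptions."""
    # relays: list of (name, estimated_time_to_contact, cost_metric) tuples
    feasible = [r for r in relays if r[1] <= deadline]
    if not feasible:
        return None  # no relay can deliver in time; keep caching the message
    return min(feasible, key=lambda r: (r[2], r[1]))[0]

print(choose_relay([("A", 30, 5), ("B", 10, 2), ("C", 90, 1)], deadline=60))  # → B
```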

Publication date: 22-01-2015

APPARATUS AND METHOD FOR SYNCHRONOUS HARDWARE TIME STAMPING

Number: US20150023365A1
Assignee: GARRETTCOM, INC.

Methods and apparatus that may be used to provide timestamps to physical layer devices are provided. One method includes obtaining a time value from a clock associated with a physical layer device that is communicatively coupled to a primary data packet switch. The method further includes adding a processing time to the time value to generate a timestamp and transmitting the timestamp to a multiplexer circuit. The method further includes writing the timestamp in parallel from the multiplexer circuit to a plurality of external physical layer devices that are communicatively coupled to a secondary data packet switch and are located external to a housing of the secondary data packet switch. 1.-21. (canceled) 22. A method of timestamping for a packet switching apparatus, the method comprising: obtaining a time value from a clock associated with a physical layer device that is communicatively coupled to a primary data packet switch; adding a processing time to the time value to generate a timestamp; transmitting the timestamp to a multiplexer circuit; and writing the timestamp in parallel from the multiplexer circuit to a plurality of physical layer devices that are communicatively coupled to a secondary data packet switch and are located external to a housing of the secondary data packet switch. 23. The method of claim 22, wherein the plurality of physical layer devices comprises dedicated physical layer devices that are associated with the secondary data packet switch. 24. The method of claim 23, wherein the plurality of external physical layer devices further comprises physical layer devices that are connected to the primary data packet switch. 25. The method of claim 23, wherein the primary data packet switch is a gigabit Ethernet switch, and the secondary data packet switch is a fast Ethernet switch. 26.
The method of claim 23 , wherein the secondary data packet switch comprises a plurality of internal physical layer devices that are located within the housing ...
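The two steps the method claims describe, compensating the sampled clock by a known processing delay and fanning the result out in parallel, can be sketched simply (the nanosecond units and dict-based PHY model are assumptions for illustration):

```python
def make_timestamp(clock_value_ns, processing_time_ns):
    """Sketch: add the known processing delay to the sampled clock value so the
    timestamp reflects the moment it actually lands in the PHY registers."""
    return clock_value_ns + processing_time_ns

def write_parallel(phys, timestamp):
    # Model the multiplexer fan-out: every external PHY gets the same stamp.
    for phy in phys:
        phy["ts"] = timestamp

phys = [{}, {}, {}]
write_parallel(phys, make_timestamp(1_000_000, 250))
print([p["ts"] for p in phys])  # → [1000250, 1000250, 1000250]
```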

Publication date: 16-01-2020

Methods and systems for increasing wireless communication throughput of a bonded vpn tunnel

Number: US20200021459A1
Assignee: PISMO LABS TECHNOLOGY LTD

The present disclosure provides for devices, systems, and methods which optimize throughput of bonded connections over multiple variable bandwidth logical paths by adjusting a tunnel bandwidth weighting schema during a data transfer session in response to a change in bandwidth capabilities of one or more tunnels. By making such adjustments, embodiments of the present invention are able to optimize the bandwidth potential of multiple connections being used in a session, while minimizing the adverse consequences of reduced bandwidth issues which may occur during the data transfer session.
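One simple reading of "adjusting a tunnel bandwidth weighting schema" is proportional weighting (this is an illustrative assumption, not the patent's exact schema): recompute each tunnel's share of traffic from its currently measured bandwidth whenever capacity changes.

```python
def tunnel_weights(bandwidths):
    """Sketch: weight each tunnel of the bonded connection in proportion to its
    currently measured bandwidth, so the schema tracks capacity changes made
    during the data transfer session. Proportional split is an assumption."""
    total = sum(bandwidths.values())
    return {tunnel: bw / total for tunnel, bw in bandwidths.items()}

print(tunnel_weights({"t1": 30, "t2": 10}))  # → {'t1': 0.75, 't2': 0.25}
```

Re-running this whenever a tunnel's measured bandwidth changes keeps the bonded aggregate near its potential while shrinking the share of a degraded tunnel.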

Publication date: 21-01-2021

RELEASE-TIME BASED PRIORITIZATION OF ON-BOARD CONTENT

Number: US20210021528A1
Assignee: VIASAT, INC.

Approaches are described for release-time-driven (RTD) prioritization of on-board content scheduling and delivery to in-transit transport craft via communications systems. In context of a constrained network, content is scheduled to be delivered to those in-transit on-board media servers in a manner driven by respective release times and other prioritization factors associated with the updated content. Each content is associated with a RTD priority profile that can define a release time, a release priority, and a profile plot for the content. The RTD priority profiles can be used to compute priority surfaces that define priority scores over a multidimensional space for a particular time. A subset of the content can be selected for delivery based on the priority surfaces, and can be scheduled for delivery according to network capacity determinations. 1. A method for release-time driven prioritized delivery of on-board content to on-board media servers disposed on a plurality of transport crafts via a communications network , the method comprising:identifying a plurality of release-time-driven (RTD) priority profiles, each RTD priority profile associated with a respective content file set of a plurality of content file sets, each of the plurality of content file sets having an assigned release time and an assigned release priority score,wherein each RTD priority profile defines priority scores over a range of times for delivery of the respective content file set via the communications network, the range of times comprising a first time window during which the priority scores are less than the assigned release priority score of the respective content file set, and a second time window succeeding the first time window and comprising the assigned release time of the respective content file set, the priority scores equaling the release priority score during at least a portion of the second time window;computing a priority surface as a function of the plurality of RTD ...
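An RTD priority profile as defined above can be sketched as a function of time (the linear ramp shape and the numeric values are assumptions; the claims only require scores below the release priority before the release time and equal to it afterward):

```python
def rtd_priority(t, release_time, release_priority, ramp=10.0):
    """Sketch of a release-time-driven profile: the score stays below the
    assigned release priority before the release time (here via a linear
    ramp, an illustrative assumption) and equals it from release onward."""
    if t >= release_time:
        return release_priority
    return max(0.0, release_priority - ramp * (release_time - t))

print(rtd_priority(95, 100, 80))   # → 30.0  (first window: below release priority)
print(rtd_priority(120, 100, 80))  # → 80    (second window: at release priority)
```

Evaluating every content file set's profile at the current time yields the priority surface from which the delivery subset is selected, subject to network capacity.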

Publication date: 21-01-2021

Transmitting data using a relay user equipment

Number: US20210021536A1
Assignee: Lenovo Singapore Pte Ltd

Apparatuses, methods, and systems are disclosed for transmitting data corresponding to a relay UE. One method includes transmitting data and first information indicating relay information corresponding to retransmission of the data by a relay UE to at least one UE. The first information comprises a relay identifier, an indication that retransmission of the data is based on feedback received by the relay UE, an indication that retransmission of the data is based on a multi-hop count, an indication for the relay UE to transfer the data from a receiver buffer to a transmit buffer and to retransmit the data from the transmit buffer, an indication for the relay UE to retransmit the data to an indicated destination node, or some combination thereof. The method includes transmitting second information indicating a remaining packet delay budget to the at least one UE.

Publication date: 10-02-2022

Flow Table Aging Optimized For Dram Access

Number: US20220043756A1
Assignee:

A flow table management system can include a hardware memory module communicatively coupled to a network interface card. The hardware memory module is configured to store a flow table including a plurality of network flow entries. The network interface card further includes a flow table age cache configured to store a set of recently active network flows and a flow table management module configured to manage a duration for which respective network flow entries in the flow table stored in the hardware memory module remain in the flow table using the flow table age cache. In some implementations, age information about each respective flow in the flow table is stored in the hardware memory module in an age state table that is separate from the flow table. 1. A method of managing a flow table, comprising: providing a hardware memory module coupled to a network component, the hardware memory module storing a flow table including a plurality of entries, each entry corresponding to a network flow; providing on the network component a flow table age cache configured to store a set of recently active network flows; upon the network component processing a data packet associated with a network flow, updating the flow table age cache with information indicating activity associated with the network flow; and periodically conducting a scan of the entries in the flow table, the scan: looking up the network flow associated with the entry in the flow table age cache; in response to the network flow associated with the entry being found in the flow table age cache, updating timer information for the network flow stored in the memory module; and in response to the network flow associated with the entry not being found in the flow table age cache, evaluating an age of the entry based on information stored in the hardware memory module and removing the entry from the flow table in response to the age of the entry exceeding a threshold time value associated with the network flow. ...
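The DRAM-friendly trick here is that per-packet activity only touches a small on-NIC age cache; the (slow) flow-table timers are refreshed lazily during the periodic scan. A minimal sketch (the threshold value and in-memory data structures are illustrative assumptions):

```python
AGE_THRESHOLD = 30.0  # seconds; assumed per-flow timeout

class FlowTable:
    """Sketch of age-cache-based flow aging: packet arrivals update only the
    age cache; timers in the flow-table memory (modeling DRAM) are refreshed
    during the periodic scan, and stale entries are evicted there too."""
    def __init__(self):
        self.flows = {}        # flow -> last-refresh timestamp (models DRAM table)
        self.age_cache = set() # recently active flows (models on-NIC cache)

    def on_packet(self, flow, now):
        self.flows.setdefault(flow, now)
        self.age_cache.add(flow)          # cheap: no DRAM timer write here

    def scan(self, now):
        for flow in list(self.flows):
            if flow in self.age_cache:    # recently active: refresh its timer
                self.flows[flow] = now
            elif now - self.flows[flow] > AGE_THRESHOLD:
                del self.flows[flow]      # aged out: remove from the table
        self.age_cache.clear()

table = FlowTable()
table.on_packet("a", now=0.0)
table.on_packet("b", now=0.0)
table.scan(now=0.0)
table.on_packet("a", now=20.0)  # only "a" stays active after the first scan
table.scan(now=40.0)            # "b" exceeds the threshold and is evicted
print(sorted(table.flows))      # → ['a']
```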

Publication date: 26-01-2017

TRANSMITTING AND RECEIVING DATA BASED ON MULTIPATH

Number: US20170026277A1
Assignee:

Methods, apparatuses and systems for transmitting and receiving data based on multipath are provided. A method for transmitting data based on multipath includes: establishing WiMAX connection-based multiple paths between a first device and a second device; transmitting data frames in a data queue in the multiple paths; obtaining the quality condition of the multiple paths; and based on the quality condition, adjusting the transmission of the data frames in the data queue in the multiple paths. According to one aspect, there is provided a method for receiving data based on multipath, which includes: establishing WiMAX connection-based multiple paths between a first device and a second device; receiving a plurality of data frames in the multiple paths; and processing the received plurality of data frames based on the quality condition of the multiple paths. There are further provided corresponding apparatuses and systems. 1. An apparatus for transmitting data based on multipath, comprising: an establishing module configured to establish Worldwide Interoperability for Microwave Access (WiMAX) connection-based multiple paths between a first device and a second device; a transmitting module configured to transmit data frames in a data queue in the multiple paths; an obtaining module configured to obtain a quality condition of the multiple paths; and an adjusting module configured to adjust the transmission of the data frames in the data queue in the multiple paths based on the quality condition. 2. The apparatus according to claim 1, wherein the apparatus is implemented on a Medium Access Control (MAC) layer. 3. The apparatus according to claim 1, wherein the adjusting module comprises: a data transmitting module configured to, in response to the quality condition of each of the multiple paths satisfying a first threshold range, transmit same data frames in the data queue in each of the multiple paths; wherein the first threshold range is a range of values associated with the quality condition. 4. The ...
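The quality-based adjustment in claim 3 (duplicate the same frames on every path when all paths fall in the first threshold range) can be sketched with an assumed loss-rate metric; the fallback striping policy below is an illustrative assumption, not from the claims:

```python
def schedule_frames(frames, path_quality, good_range=(0.0, 0.01)):
    """Sketch: if every path's quality metric (here, an assumed loss rate)
    falls in the first threshold range, transmit the same frames on all
    paths; otherwise stripe frames across paths (fallback is an assumption)."""
    paths = list(path_quality)
    lo, hi = good_range
    if all(lo <= path_quality[p] <= hi for p in paths):
        return {p: list(frames) for p in paths}  # duplicate on each path
    return {p: frames[i::len(paths)] for i, p in enumerate(paths)}  # stripe

print(schedule_frames([1, 2, 3, 4], {"p1": 0.005, "p2": 0.2}))
# → {'p1': [1, 3], 'p2': [2, 4]}
```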

Publication date: 28-01-2016

DYNAMIC PATH SWITCHOVER DECISION OVERRIDE BASED ON FLOW CHARACTERISTICS

Number: US20160028616A1
Assignee:

In one embodiment, a device in a network receives a switchover policy for a particular type of traffic in the network. The device determines a predicted effect of directing a traffic flow of the particular type of traffic from a first path in the network to a second path in the network. The device determines whether the predicted effect of directing the traffic flow to the second path would violate the switchover policy. The device causes the traffic flow to be routed via the second path in the network, based on a determination that the predicted effect of directing the traffic flow to the second path would not violate the switchover policy for the particular type of traffic. 1. A method comprising:receiving, at a device in a network, a switchover policy for a particular type of traffic in the network;determining, by the device, a predicted effect of directing a traffic flow of the particular type of traffic from a first path in the network to a second path in the network;determining, by the device, whether the predicted effect of directing the traffic flow to the second path would violate the switchover policy; andcausing, by the device, the traffic flow to be routed via the second path in the network, based on a determination that the predicted effect of directing the traffic flow to the second path would not violate the switchover policy for the particular type of traffic.2. The method as in claim 1 , wherein the predicted effect comprises a predicted amount of packet reordering that would occur were the traffic flow directed to the second path claim 1 , and wherein the switchover policy indicates a threshold amount of acceptable packet reordering for the particular type of traffic.3. 
The method as in claim 1 , wherein the predicted effect comprises a predicted decrease in bandwidth for the traffic flow were the traffic flow routed via the second path claim 1 , and wherein the switchover policy indicates a threshold amount of acceptable decrease in bandwidth for ...
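Using the reordering example from claim 2, the policy check reduces to comparing a predicted effect against the policy's threshold before allowing the switchover (the numeric values are illustrative):

```python
def allow_switchover(predicted_reordering, policy_max_reordering):
    """Sketch: route the flow via the second path only when the predicted
    packet reordering would not violate the switchover policy for this
    traffic type. Metric choice and values are illustrative."""
    return predicted_reordering <= policy_max_reordering

print(allow_switchover(3, policy_max_reordering=5))  # → True  (switch allowed)
print(allow_switchover(9, policy_max_reordering=5))  # → False (keep first path)
```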

Publication date: 28-01-2016

METHODS AND SYSTEMS FOR SCHEDULING OFDM FRAMES

Number: US20160028642A1
Author: MUKHERJEE Biswaroop
Assignee:

System and methods for scheduling OFDM frames are provided. Each packet is assigned to a frame bucket, this amounting to a temporary decision of when to transmit the packet. Each packet is marked with one or more metrics. The metrics are used to sort packets and make scheduling decisions. Packets are analyzed to determine their suitability for MIMO transmission. 1. A method for transmitting orthogonal frequency division multiplexing (OFDM) frames in a multi-access OFDM communication system , the method comprising:by a base station of the multi-access OFDM communication system:scheduling a set of packets in accordance with a plurality of scheduling metrics,generating at least one ordering of packets to transmit in an OFDM frame; andtransmitting the OFDM frame using a plurality of transmit antennas,wherein each OFDM frame is constructed from packets assigned to the OFDM frame, each packet being placed in a respective rectangular time-frequency burst within the OFDM frame, and wherein time-frequency bursts transmitted to a first set of mobile stations are grouped for simultaneous transmission using at least two of the plurality of transmit antennas,wherein channel conditions for the first set of mobile stations have a higher orthogonality compared to channel conditions for a second set of mobile stations.2. The method of claim 1 , wherein the plurality of scheduling metrics includes at least one compulsory scheduling metric that remains static for multiple OFDM frames and at least one optional scheduling metric that dynamically changes for each OFDM frame.3. The method of claim 1 , wherein the set of packets are scheduled by temporarily assigning each packet to a particular OFDM frame using a deadline for departure for the corresponding packet.4. 
The method of claim 1 , wherein the plurality of scheduling metrics includes at least one scheduling metric selected from a group consisting of:earliest time of departure, deadline for departure, user/operator priority, link ...
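The frame-bucket step (claim 3: temporarily assign each packet to an OFDM frame using its departure deadline, then sort by metrics) can be sketched as follows; the frame duration, the one-frame slack, and the tuple layout are assumptions for illustration:

```python
FRAME_DURATION = 5  # ms per OFDM frame; assumed

def assign_buckets(packets):
    """Sketch: temporarily assign each packet to the latest frame bucket that
    still meets its departure deadline (with one frame of slack, an assumed
    policy), then sort each bucket by a priority metric."""
    buckets = {}
    for deadline_ms, priority, name in packets:
        bucket = max(0, deadline_ms // FRAME_DURATION - 1)  # leave one frame slack
        buckets.setdefault(bucket, []).append((priority, name))
    for bucket in buckets.values():
        bucket.sort(reverse=True)  # highest-priority packets first
    return buckets

print(assign_buckets([(12, 1, "voice"), (12, 5, "control"), (31, 2, "data")]))
# → {1: [(5, 'control'), (1, 'voice')], 5: [(2, 'data')]}
```

From each bucket the scheduler can then form the rectangular time-frequency bursts and group orthogonal users for MIMO transmission.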

Publication date: 28-01-2016

ARBITRATION OF MULTIPLE-THOUSANDS OF FLOWS FOR CONVERGENCE ENHANCED ETHERNET

Number: US20160028643A1
Assignee:

In one embodiment, a method includes selecting a flow from a head of a first control queue or a second control queue. The method also includes providing service to the selected flow. Moreover, the method includes decreasing a service credit of the selected flow by an amount corresponding to an amount of service provided to the selected flow. In another embodiment, a computer program product includes a computer readable storage medium having program code embodied therewith. The embodied program code is readable/executable by a device to select, by the device, a flow from a head of a first control queue or a second control queue. The embodied program code is also readable/executable to provide, by the device, service to the selected flow, and decrease, by the device, a service credit of the selected flow by an amount corresponding to an amount of service provided to the selected flow. 1. A method comprising:selecting a flow from a head of a first control queue or a second control queue;providing service to the selected flow; anddecreasing a service credit of the selected flow by an amount corresponding to an amount of service provided to the selected flow.2. The method as recited in claim 1 , wherein the flow is selected from the second control queue in response to a determination that the first control queue is empty.3. The method as recited in claim 1 , wherein the flow is selected from the second control queue in response to an indication that the first control queue should be avoided.4. The method as recited in claim 1 , further comprising:receiving a plurality of flows, each flow comprising packets of data;assigning a service credit to each of the plurality of flows; andassigning a weight parameter to each of the plurality of flows.5. The method as recited in claim 1 , further comprising:enqueuing a second flow which receives new unserviced packets and is not present in a control queue at an end of the first control queue in response to a determination that a ...
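The two-control-queue, service-credit scheme resembles deficit-style arbitration. A minimal sketch (the queue roles, credit values, and requeue policy are assumptions, not the patent's exact algorithm): serve the flow at the head of the first queue, fall back to the second when the first is empty, and charge the service against the flow's credit.

```python
from collections import deque

def serve_once(active, penalized, credits, backlog):
    """Sketch: pick the flow at the head of the first (active) control queue,
    falling back to the second (penalized) queue when the first is empty or
    should be avoided; serve it and decrease its service credit by the amount
    of service provided. Queue roles and units are illustrative."""
    queue = active if active else penalized
    if not queue:
        return None
    flow = queue.popleft()
    served = min(backlog[flow], max(credits[flow], 0))
    backlog[flow] -= served
    credits[flow] -= served              # credit decreases by service given
    if backlog[flow] > 0:                # still backlogged: requeue the flow
        (active if credits[flow] > 0 else penalized).append(flow)
    return flow, served

active, penalized = deque(["f1"]), deque()
credits, backlog = {"f1": 60}, {"f1": 90}
print(serve_once(active, penalized, credits, backlog))  # → ('f1', 60)
print(list(penalized))  # f1 exhausted its credit and moved to the second queue
```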

Publication date: 28-01-2016

WIRELESS TERMINAL STATION AND BASE STATION

Number: US20160029377A1
Assignee: SHARP KABUSHIKI SHA

Demodulation is effectively performed using an MMSE adaptive array that uses the guard section of a signal that uses a cyclic prefix. According to the present invention, there is provided a wireless terminal station that is applied to a wireless communication system which is made up of multiple wireless terminal stations and a base station, the wireless terminal station including: a delay time setting module that sets a delay time based on a transmission timing identification number; and a transmission module that, in a case where any other wireless terminal station starts transmission within a predetermined time after all communication within the wireless communication system is ended, starts transmission after the delay time has elapsed from a point in time at which the transmission has started. 1. A first wireless terminal station that is applied to a wireless communication system which is made up of multiple wireless terminal stations and a base station, comprising: a delay time setting module that sets a first delay time based on a first transmission timing identification number; a carrier sensing module that detects that a second wireless terminal station included in the wireless communication system and different from the first wireless terminal station has started transmission; and a transmission module that starts transmission after the first delay time has elapsed from a point in time at which the second wireless terminal station has started the transmission after all frame transmission completion within the wireless communication system has ended. 2.
A wireless terminal station that is applied to a wireless communication system which is made up of multiple wireless terminal stations and a base station , comprising:a delay time setting module that sets a delay time based on a transmission timing identification number;a carrier sensing module that detects that; anda transmission module that starts transmission after the delay time has elapsed from a point in ...
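Deriving the delay from the transmission timing identification number staggers the stations' start times. A trivial sketch (the per-identifier slot duration is an assumed value):

```python
SLOT_US = 9  # per-identifier delay step in microseconds; assumed value

def backoff_delay(timing_id):
    """Sketch: each terminal derives its deferral from its transmission timing
    identification number, so stations start transmitting at staggered times
    rather than colliding after the channel goes idle."""
    return timing_id * SLOT_US

print([backoff_delay(i) for i in range(3)])  # → [0, 9, 18]
```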

Publication date: 28-01-2016

WIRELESS COMMUNICATION DEVICE AND WIRELESS COMMUNICATION SYSTEM

Number: US20160029386A1
Author: SEKIYA Masahiro
Assignee:

A wireless communication device includes a wireless transmitting/receiving part, a response channel selecting part and an oscillating part. The wireless transmitting/receiving part receives a first frame in which data destined for an address of the wireless communication device and data destined for an address of another wireless communication device are included. The oscillating part outputs a carrier signal having a frequency that corresponds to the response channel selected by the response channel selecting part to the wireless transmitting/receiving part. The response channel selecting part decides an ordinal rank for response channel selection based on the information included in the first frame and selects a response channel that corresponds to the decided ordinal rank. The wireless transmitting/receiving part uses the carrier signal to transmit the response frame in the response channel. 1. A wireless communication device that performs wireless communication, comprising: a wireless transmitting/receiving part that receives a first frame transmitted from a transmission source in which data destined for an address of the wireless communication device and data destined for an address of another wireless communication device are included; a response channel selecting part that selects a response channel used for transmission of a response frame to the transmission source based on information included in the first frame received by the wireless transmitting/receiving part; and an oscillating part that outputs a carrier signal having a frequency that corresponds to the response channel selected by the response channel selecting part to the wireless transmitting/receiving part, wherein the response channel selecting part decides an ordinal rank for response channel selection based on the information included in the first frame and selects a response channel that corresponds to the decided ordinal rank, and the wireless transmitting/receiving part uses the carrier signal
...

Publication date: 25-01-2018

Method Of Transmitting Data Between A Source Node And Destination Node

Number: US20180026900A1
Assignee:

A method is disclosed for transmitting data between a source node and destination node connected via multiple paths of a heterogeneous network, at least one of the paths delivering packets with a non-deterministic delivery time. Data is divided into frames, each frame comprising a number of packets, where processing by the destination node of an information packet p is conditional on receipt of the data for any information packet i where i < p.
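The ordering constraint, that packet p is processed only after every packet i with a lower index has arrived, is the behavior of an in-order release buffer. A minimal sketch (data structures are illustrative):

```python
def release_in_order(received, next_needed=0):
    """Sketch: packet p can be processed only after every packet i with i < p
    has arrived; buffered packets are released in index order as gaps fill.
    Useful when some paths deliver with non-deterministic delay."""
    buffered = set(received)
    ready = []
    while next_needed in buffered:
        ready.append(next_needed)
        next_needed += 1
    return ready

print(release_in_order([0, 1, 3]))  # → [0, 1]  (packet 3 waits for packet 2)
```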

Publication date: 25-01-2018

Packet buffering

Number: US20180026902A1
Author: Xiaohu Tang, Zhuxun Wang

A first device as a buffer server in an Ethernet transmits a first buffer client querying packet from a port of enabling a distributed buffer function of the first device, receives a first buffer client registering packet from a second device through the port, and adds the second device into a distributed buffer group of the port. When the first device detects that a sum of sizes of packets entering the port and not transmitted reaches a preset first flow-splitting threshold in a first preset time period, the first device forwards a packet entering the port and not transmitted to a buffer client selected from the distributed buffer group of the port.
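The flow-splitting decision can be sketched as follows (the threshold value and the client-selection rule are assumptions; the patent only requires selecting a client from the port's distributed buffer group once the threshold is reached):

```python
FLOW_SPLIT_THRESHOLD = 10_000  # bytes pending on the port; assumed value

def handle_packet(pending_bytes, packet, buffer_group):
    """Sketch: once the untransmitted bytes on a port reach the flow-splitting
    threshold, overflow packets are forwarded to a buffer client chosen from
    the port's distributed buffer group instead of queueing locally."""
    if pending_bytes >= FLOW_SPLIT_THRESHOLD and buffer_group:
        client = buffer_group[len(packet) % len(buffer_group)]  # naive pick
        return ("offload", client)
    return ("local", None)

print(handle_packet(12_000, "pkt-1", ["client-a"]))  # → ('offload', 'client-a')
print(handle_packet(2_000, "pkt-2", ["client-a"]))   # → ('local', None)
```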

Publication date: 25-01-2018

TECHNIQUES AND APPARATUSES FOR CONNECTION TERMINATION IN A MULTI-SUBSCRIBER IDENTITY MODULE (MULTI-SIM) DEVICE

Number: US20180026903A1
Assignee:

Certain aspects of the present disclosure generally relate to wireless communications. In some aspects, a wireless communication device may identify a completion of Internet Protocol multimedia subsystem (IMS) activity associated with an IMS only subscription of the wireless communication device. In some aspects, the wireless communication device may trigger a transfer from a connected mode to an idle mode based on identifying the completion of the IMS activity. Numerous other aspects are provided. 1. A method for wireless communication , comprising:identifying, by a wireless communication device, a completion of Internet Protocol multimedia subsystem (IMS) activity associated with an IMS only subscription of the wireless communication device; andtriggering, by the wireless communication device, a transfer from a connected mode to an idle mode based on identifying the completion of the IMS activity.2. The method of claim 1 , wherein triggering the transfer from the connected mode to the idle mode based on identifying the completion of the IMS activity includes:triggering a tracking area update (TAU) procedure to trigger the transfer from the connected mode to the idle mode.3. The method of claim 2 , further comprising: 'the TAU request message including data indicating the completion of the IMS activity.', 'transmitting a TAU request message to trigger the TAU procedure,'}4. The method of claim 1 , wherein triggering the transfer from the connected mode to the idle mode based on identifying the completion of the IMS activity includes:releasing a radio connection to trigger the transfer from the connected mode to the idle mode.5. The method of claim 1 , further comprising:transferring from the connected mode to the idle mode based on triggering the transfer from the connected mode to the idle mode.6. 
The method of claim 1 , where the IMS activity is associated with a first subscription; and 'maintaining a second subscription for other network activity when triggering ...

Publication date: 29-01-2015

Method and Apparatus for Processing Inbound and Outbound Quanta of Data

Number: US20150029860A1
Assignee:

A method for processing inbound and/or outbound data wherein a processing policy is determined for a quantum of data. A quantum of inbound data is received and a data notification for the received data is prepared. The notification for the quantum of received inbound data is delivered to a processor according to the processing policy. When selecting a quantum of outbound data, an outbound data work request for the outbound data is prepared and delivered to an output unit according to the processing policy. 1. A method for processing outbound data comprising: determining a processing policy for a quantum of outbound data; selecting a quantum of outbound data; preparing an outbound data work request for the outbound data; and delivering the work request to an output unit according to the processing policy. 2. The method of wherein determining a processing policy comprises: receiving a node attribute; and determining at least one of a work queue and a quality-of-service attribute according to the node attribute. 3. The method of wherein receiving a node attribute comprises receiving at least one of a processor task assignment, a process priority indicator, output device location indicator, quantity of processors indicator, quantity of processors assigned to output device indicator, quantity of processors allowed for interrupt indicator, multi-priority interrupts allowed indicator, and memory accessed by task indicator. 4. The method of wherein determining a processing policy comprises: receiving an output device attribute; and determining at least one of a work queue and a quality-of-service attribute according to the output device attribute. 5.
The method of wherein receiving an output device attribute comprises receiving at least one of a queue quantity indicator claim 4 , a queue scheduling scheme indicator claim 4 , quantity of processors available for interrupt indicator claim 4 , device bandwidth indicator claim ...

Publication date: 10-02-2022

NON-DISRUPTIVE IMPLEMENTATION OF POLICY CONFIGURATION CHANGES

Number: US20220045907A1
Assignee:

Techniques for non-disruptive configuration changes are provided. A packet is received at a network device, and the packet is buffered in a common pool shared by a first processing pipeline and a second processing pipeline, where the first processing pipeline corresponds to a first policy and the second processing pipeline corresponds to a second policy. A first copy of a packet descriptor for the packet is queued in a first scheduler based on processing the first copy of the packet descriptor with the first processing pipeline. A second copy of the packet descriptor is queued in a second scheduler based on processing the second copy of the packet descriptor with the second processing pipeline. Upon determining that the first policy is currently active on the network device, the first copy of the packet descriptor is dequeued from the first scheduler.

1. A method, comprising:
receiving, at a network device, a packet;
buffering the packet in a common pool shared by a first processing pipeline and a second processing pipeline, wherein the first processing pipeline corresponds to a first policy and the second processing pipeline corresponds to a second policy;
queueing a first copy of a packet descriptor for the packet in a first scheduler based on processing the first copy of the packet descriptor with the first processing pipeline;
queueing a second copy of the packet descriptor in a second scheduler based on processing the second copy of the packet descriptor with the second processing pipeline; and
upon determining that the first policy is currently active on the network device, dequeueing the first copy of the packet descriptor from the first scheduler.

2. The method of claim 1, further comprising:
retrieving the packet from the common pool, based on the first copy of the packet descriptor; and
processing the packet based on the first policy.

3. The method of claim 1, further comprising:
receiving an instruction to activate the second policy on the ...
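The buffering-and-dual-queueing scheme described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the class and policy names are invented, and a real device would manage descriptors in hardware.

```python
from collections import deque

class DualPolicyForwarder:
    """Sketch: one shared buffer pool, one descriptor queue per policy.

    A packet is buffered once in the common pool; a descriptor copy is
    queued for each policy's pipeline, and only the queue of the
    currently active policy is drained.
    """

    def __init__(self):
        self.pool = {}                                  # common pool: descriptor id -> packet
        self.queues = {"policy_a": deque(), "policy_b": deque()}
        self.active = "policy_a"
        self._next_id = 0

    def receive(self, packet):
        desc = self._next_id
        self._next_id += 1
        self.pool[desc] = packet                        # buffer once in the common pool
        for q in self.queues.values():                  # queue a descriptor copy per pipeline
            q.append(desc)
        return desc

    def activate(self, policy):
        # Non-disruptive switch: packets buffered before the switch already
        # have descriptors queued in the newly active scheduler.
        self.active = policy

    def dequeue(self):
        q = self.queues[self.active]
        while q:
            desc = q.popleft()
            if desc in self.pool:                       # skip descriptors already serviced
                return self.pool.pop(desc)              # retrieve packet from the common pool
        return None
```

Because both queues reference the same pooled packet, activating the second policy mid-stream loses nothing: packets received under the old policy are simply dequeued through the new scheduler.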

More
10-02-2022 publication date

HYPERSCALAR PACKET PROCESSING

Number: US20220045942A1
Assigned to:

The disclosed systems and methods provide hyperscalar packet processing. A method includes receiving a plurality of network packets from a plurality of data paths. The method also includes arbitrating, based at least in part on an arbitration policy, the plurality of network packets to a plurality of packet processing blocks comprising one or more full processing blocks and one or more limited processing blocks. The method also includes processing, in parallel, the plurality of network packets via the plurality of packet processing blocks, wherein each of the one or more full processing blocks processes a first quantity of network packets during a clock cycle, and wherein each of the one or more limited processing blocks processes a second quantity of network packets during the clock cycle that is greater than the first quantity of network packets. The method also includes sending the processed network packets through data buses.

1. A method comprising:
receiving a plurality of network packets from a plurality of data paths;
arbitrating, based at least in part on an arbitration policy, the plurality of network packets to a plurality of packet processing blocks comprising one or more full processing blocks and one or more limited processing blocks;
processing, during a clock cycle, a first quantity of network packets using one of the one or more full processing blocks; and
processing, during the clock cycle, a second quantity of network packets using one of the one or more limited processing blocks, wherein processing the second quantity of network packets using one of the one or more limited processing blocks comprises:
placing a data payload from a network packet of the second quantity of network packets in a first buffer;
placing start and end of packet data from the network packet in a second buffer;
processing the start and end of packet data using the limited processing block; and
reassembling the packet data with the processed start and end of packet data ...
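The per-cycle arbitration described above can be illustrated with a toy scheduler. All names and rates here are assumptions for illustration: two full blocks taking one packet per cycle and two limited blocks taking two, matching only the claim's constraint that limited blocks handle a greater quantity per clock cycle.

```python
def arbitrate(packets, n_full=2, n_limited=2, full_rate=1, limited_rate=2):
    """Toy cycle-based arbiter (illustrative rates, not from the patent).

    Each simulated clock cycle, every full block is assigned `full_rate`
    packets and every limited block `limited_rate` packets
    (limited_rate > full_rate). Returns a list of cycles, where each
    cycle maps a block name to the packets assigned to it.
    """
    it = iter(packets)
    blocks = [(f"full{i}", full_rate) for i in range(n_full)] + \
             [(f"lim{i}", limited_rate) for i in range(n_limited)]
    schedule = []
    done = False
    while not done:
        cycle = {}
        for name, rate in blocks:
            batch = []
            for _ in range(rate):
                try:
                    batch.append(next(it))
                except StopIteration:
                    done = True                 # input exhausted
                    break
            if batch:
                cycle[name] = batch             # assign this cycle's share
        if cycle:
            schedule.append(cycle)
    return schedule
```

With one full and one limited block, six packets drain in two cycles: the full block takes one packet per cycle while the limited block takes two, so total throughput per cycle is three packets.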

More
10-02-2022 publication date

HIGHLY DETERMINISTIC LATENCY IN A DISTRIBUTED SYSTEM

Number: US20220045964A1
Assigned to:

A distributed computing system, such as may be used to implement an electronic trading system, supports a notion of fairness in latency. The system does not favor any particular client. Thus, being connected to a particular access point into the system (such as via a gateway) does not give any particular device an unfair advantage or disadvantage over another. That end is accomplished by precisely controlling latency, that is, the time between when request messages arrive at the system and a time at which corresponding response messages are permitted to leave. The precisely controlled, deterministic latency can be fixed over time, or it can vary according to some predetermined pattern, or vary randomly within a pre-determined range of values.

1. A system comprising:
a plurality of gateways, connected to receive inbound messages from two or more participant devices;
one or more of the gateways each further configured to:
determine a time based value (TBV) for a selected one of the inbound messages;
forward the selected inbound message with its respective TBV to one or more compute nodes;
receive a response message from the one or more compute nodes, the response message having information derivable from the TBV; and
send a response message to at least one of the participant devices as an outbound message, the outbound message sent at a deterministic egress time that depends on both the information derivable from the TBV and a deterministic latency.

2. The system of wherein the TBV depends on a timestamp that relates to a receive time for the selected inbound message.

3. The system of wherein the TBV depends on a desired egress time for the response message.

4. The system of additionally comprising:
a packet scheduler, configured to receive the response message, the packet scheduler comprising a set of indexed locations, each associated with a desired egress time; and
wherein the TBV is a value that depends on a value of an indexed location associated with the ...
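The fixed-latency variant of the scheme above can be sketched simply: the TBV is taken here to be the desired egress time (ingress timestamp plus a fixed latency), and a scheduler holds each response until that time arrives. The 500 µs latency and all names are illustrative assumptions, not values from the patent.

```python
import heapq

FIXED_LATENCY = 0.0005  # assumed fixed deterministic latency of 500 us (illustrative)

def tbv_for(ingress_time):
    """Time based value (TBV): here, the desired egress time for the response."""
    return ingress_time + FIXED_LATENCY

class EgressScheduler:
    """Sketch of a scheduler that releases responses only at their TBV.

    However quickly a compute node answers, the response leaves at
    ingress + FIXED_LATENCY, so no participant device connected via a
    faster gateway path observes lower end-to-end latency.
    """

    def __init__(self):
        self._heap = []                         # min-heap ordered by egress time

    def submit(self, response, tbv):
        heapq.heappush(self._heap, (tbv, response))

    def release_due(self, now):
        out = []
        while self._heap and self._heap[0][0] <= now:
            out.append(heapq.heappop(self._heap)[1])
        return out
```

Replacing `FIXED_LATENCY` with a value drawn per-message from a predetermined pattern or a bounded random range gives the other latency profiles the abstract mentions, while keeping egress times deterministic from the system's point of view.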

More