Total found: 44. Displayed: 44.

Publication date: 20-01-2004

Effective channel priority processing for transfer controller with hub and ports

Number: US0006681270B1

A data transfer controller with hub and ports uses an effective channel priority processing technique and algorithm. Data transfer requests are queued in a first-in-first-out fashion at the data source ports. Each data transfer request has a priority level for execution. In effective channel priority processing the priority level assigned to a source port is the greatest priority level of any data transfer request in the corresponding first-in-first-out queue. This technique prevents a low priority data transfer request at the output of a source port queue from blocking a higher priority data transfer request further back in the queue. Raising the priority of all data transfer requests within a source port queue enables the low priority data transfer request to complete, allowing the high priority data transfer request to be reached. Thus both the low priority data transfer request and the high priority data transfer request in the queue of a single port are serviced before intermediate ...
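
The queueing rule described above can be illustrated with a short Python model. This is a hedged sketch rather than the patented hardware: the SourcePort class, the priority encoding (larger number = higher priority) and the one-request-per-cycle arbiter are illustrative assumptions.

```python
from collections import deque

class SourcePort:
    """FIFO of (priority, tag) transfer requests; a larger number means higher priority."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def effective_priority(self):
        # Effective channel priority: the port advertises the greatest priority of
        # ANY queued request, not just the priority of the request at its output.
        return max((prio for prio, _ in self.queue), default=-1)

def service_one(ports):
    """Service the head request of the port with the highest effective priority."""
    busy = [p for p in ports if p.queue]
    if not busy:
        return None
    port = max(busy, key=lambda p: p.effective_priority())
    return port.name, port.queue.popleft()

# A high priority request stuck behind a low priority head on port A is reached
# before port B's medium priority request, as the abstract describes.
a, b = SourcePort("A"), SourcePort("B")
a.queue.extend([(0, "low"), (7, "high")])
b.queue.append((3, "medium"))
while (served := service_one([a, b])) is not None:
    print(served)   # ('A', (0, 'low')), ('A', (7, 'high')), ('B', (3, 'medium'))
```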

Publication date: 12-08-2003

Unified memory system architecture including cache and directly addressable static random access memory

Number: US0006606686B1

A data processing apparatus includes a central processing unit and a memory configurable as cache memory and directly addressable memory. The memory is selectively configurable as cache memory and directly addressable memory by configuring a selected number of ways as directly addressable memory and configuring remaining ways as cache memory. Control logic inhibits indication that tag bits match address bits and that a cache entry is the least recently used for cache eviction if the corresponding way is configured as directly addressable memory. In an alternative embodiment, the memory is selectively configurable as cache memory and directly addressable memory by configuring a selected number of sets equal to 2^M, where M is an integer, as cache memory and configuring remaining sets as directly addressable memory.
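
As a rough illustration of the way-based partitioning, the following Python sketch masks the RAM-configured ways out of both the hit comparison and the LRU victim selection. The names (ram_ways, lookup, pick_victim) and the four-way geometry are assumptions for the example, not details from the patent.

```python
# Illustrative 4-way geometry; ways 2 and 3 are carved out as directly addressable RAM.
NUM_WAYS = 4
ram_ways = {2, 3}
cache_ways = [w for w in range(NUM_WAYS) if w not in ram_ways]

def lookup(tags, set_index, addr_tag):
    """Tag comparison is inhibited for RAM-configured ways: they can never hit."""
    for way in cache_ways:
        if tags[set_index][way] == addr_tag:
            return ("hit", way)
    return ("miss", None)

def pick_victim(lru_order, set_index):
    """LRU victim selection likewise skips ways configured as directly addressable memory."""
    for way in lru_order[set_index]:   # lru_order lists ways from least to most recently used
        if way in cache_ways:
            return way
    raise RuntimeError("no ways configured as cache")
```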

Publication date: 17-02-2004

Configuration bus reconfigurable/reprogrammable interface for expanded direct memory access processor

Number: US0006694385B1

The configuration bus interconnection protocol provides the configuration interfaces to the memory-mapped registers throughout the digital signal processor chip. The configuration bus is a parallel set of communications protocols, but for control of peripherals rather than for data transfer. While the expanded direct memory access processor is heavily optimized for maximizing data transfers, the configuration bus protocol is made to be as simple as possible for ease of implementation and portability.

Publication date: 15-03-2005

Request queue manager in transfer controller with hub and ports

Number: US0006868087B1

A transfer controller with hub and ports is viewed as a communication hub between the various locations of a global memory map. A request queue manager serves as a crucial part of the transfer controller. The request queue manager receives data transfer request packets from plural transfer request nodes. The request queue manager sorts transfer request packets by their priority level and stores them in the queue manager memory. The request queue manager dispatches transfer request packets to a free data channel based upon priority level and first-in-first-out order within each priority level.
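
A minimal Python sketch of the sorting and dispatch policy follows: one FIFO per priority level, first-in-first-out within a level, dispatch to a free channel. The class and method names are illustrative, and the four-level depth is an assumption.

```python
from collections import deque

class RequestQueueManager:
    """Queue-manager memory modeled as one FIFO per priority level (level 0 = highest)."""
    def __init__(self, num_levels=4):
        self.levels = [deque() for _ in range(num_levels)]

    def enqueue(self, packet, priority):
        self.levels[priority].append(packet)          # FIFO order kept within a level

    def dispatch(self, free_channels):
        """Assign the oldest packet of the highest non-empty level to a free data channel."""
        if not free_channels:
            return None
        for level in self.levels:
            if level:
                return free_channels.pop(0), level.popleft()
        return None
```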

Publication date: 22-08-2006

Electrical fuse control of memory slowdown

Number: US0007095671B2

Electrical fuses (eFuses) are applied to the task of memory performance adjustment to improve upon earlier fuse techniques by not requiring an additional processing step and expensive equipment. Standard electrical fuse (eFuse) hardware chains provide a soft test feature wherein the effect of memory slow-down can be tested prior to actually programming the fuses. Electrical fuses thus provide a very efficient non-volatile method to match the logic-memory interface through memory trimming, drastically cutting costs and cycle times involved.

Publication date: 09-08-2005

Electrical fuse control of memory slowdown

Number: US0006928011B2

Electrical fuses (eFuses) are applied to the task of memory performance adjustment to improve upon earlier fuse techniques by not requiring an additional processing step and expensive equipment. Standard electrical fuse (eFuse) hardware chains provide a soft test feature wherein the effect of memory slow-down can be tested prior to actually programming the fuses. Electrical fuses thus provide a very efficient non-volatile method to match the logic-memory interface through memory trimming, drastically cutting costs and cycle times involved.

Publication date: 16-11-2006

Independent source read and destination write enhanced DMA

Number: US20060256796A1

The present invention provides for independent source-read and destination-write functionality for Enhanced Direct Memory Access (EDMA). Allowing source read and destination write pipelines to operate independently makes it possible for the source pipeline to issue multiple read requests and stay ahead of the destination write for fully pipelined operation. The result is that fully pipelined capability may be achieved and utilization of the full DMA bandwidth and maximum throughput performance are provided.

Publication date: 16-11-2006

Concurrent read response acknowledge enhanced direct memory access unit

Number: US20060259648A1

An extended direct memory access (EDMA) operation issues a read command to the source port to request data. The port returns the data along with response information, which contains the channel and valid byte count. The EDMA stores the read data into a write buffer and acknowledges to the source port that the EDMA can accept more data. The read response and data can come from more than one port and belong to different channels. Removing channel prioritizing according to this invention allows the EDMA to store read data in the write buffer and the EDMA then can acknowledge the port read response concurrently across all channels. This improves the EDMA inbound and outbound data flow dramatically.

Publication date: 18-06-2002

Superscalar memory transfer controller in multilevel memory organization

Number: US0006408345B1

This invention is a data processing system including a central processing unit executing program instructions to manipulate data, at least one level one cache, a level two unified cache, a directly addressable memory and a direct memory access unit adapted for connection to an external memory. A superscalar memory transfer controller schedules plural non-interfering memory movements to and from the level two unified cache and the directly addressable memory each memory cycle in accordance with a predetermined priority of operation. The level one cache preferably includes a level one instruction cache and a level one data cache. The superscalar memory transfer controller is capable of scheduling plural cache tag memory read accesses and one cache tag memory write access in a single memory cycle. The superscalar memory transfer controller is capable of scheduling plural cache access state machines in a single memory cycle. The superscalar memory transfer controller is capable of scheduling ...

Publication date: 02-03-2010

Concurrent read response acknowledge enhanced direct memory access unit

Number: US0007673076B2

An enhanced direct memory access (EDMA) operation issues a read command to the source port to request data. The port returns the data along with response information, which contains the channel and valid byte count. The EDMA stores the read data into a write buffer and acknowledges to the source port that the EDMA can accept more data. The read response and data can come from more than one port and belong to different channels. Removing channel prioritizing according to this invention allows the EDMA to store read data in the write buffer and the EDMA then can acknowledge the port read response concurrently across all channels. This improves the EDMA inbound and outbound data flow dramatically.

Publication date: 15-07-2003

Hub interface unit and application unit interfaces for expanded direct memory access processor

Number: US0006594713B1

An expanded direct memory access processor has ports which may be divided into two sections. The first is an application specific design referred to as the application unit. Between the application unit and the expanded direct memory access processor hub is a second module, known as the hub interface unit, which serves several functions. It provides buffering for read and write data; it prioritizes read and write commands from the source and destination pipelines so that the port sees a single interface with both access types consolidated; and finally, it decouples the port interface clock domain from the core processor clock domain through synchronization.

Publication date: 16-11-2006

Configurable multiple write-enhanced direct memory access unit

Number: US20060259665A1
Assignee: Texas Instruments Inc

The configurable multiple write-enhanced EDMA of this invention processes multiple priority channels and utilizes as much of the write data bus as practical. A write queue stores write requests with their corresponding data width and priority. A dispatch circuit dispatches the highest priority maximum data width write request if that is the highest priority stored write request or if the prior dispatch was not a maximum data width write request. The dispatch circuit dispatches two write requests if their total data width is less than or equal to the maximum data width and they both have a priority higher than the highest priority maximum data width write request.
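
The dispatch rule can be restated as a small Python function. This is a hedged reading of the abstract: the 16-byte bus width, the tuple encoding of requests and the fallback branch for when neither stated condition applies are assumptions of the sketch.

```python
MAX_WIDTH = 16   # illustrative write bus width in bytes, not a figure from the patent

def dispatch(queue, prev_was_max_width):
    """queue: list of (priority, width) write requests; larger priority = more urgent.
    Returns the request(s) selected for this cycle."""
    full = [r for r in queue if r[1] == MAX_WIDTH]
    narrow = sorted((r for r in queue if r[1] < MAX_WIDTH), reverse=True)
    best_full = max(full, default=None)
    # Two narrower writes go out together when they fit the bus and both outrank
    # the highest priority full-width request.
    if (len(narrow) >= 2
            and narrow[0][1] + narrow[1][1] <= MAX_WIDTH
            and (best_full is None or narrow[1][0] > best_full[0])):
        return narrow[:2]
    # The best full-width request goes when it is the highest priority stored request
    # or when the previous dispatch was not a full-width request.
    if best_full and (not narrow or best_full[0] >= narrow[0][0] or not prev_was_max_width):
        return [best_full]
    return narrow[:1]   # fallback assumed by this sketch: issue the best narrow request
```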

Publication date: 16-09-2003

Timing window elimination in self-modifying direct memory access processors

Number: US0006622181B1

A direct memory access function for servicing real-time events ensures that any parameter reloads occur during times when the direct memory access channel is idle and guarantees completion before the channel begins active operation again. The direct memory access channel whose parameters are to be updated is disabled during the update cycle. This ensures that no requests are processed until the new parameters have been written to the direct memory access channel parameters. A second direct memory access channel may be used to reload the data transfer parameters permitting a self-modifying direct memory access function.
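
The reload sequence reduces to a simple ordering constraint, sketched below in Python with an illustrative DmaChannel class; real hardware would gate event processing rather than toggle a flag.

```python
class DmaChannel:
    def __init__(self, params):
        self.params = dict(params)
        self.enabled = True     # an enabled channel may process transfer requests

def reload_parameters(channel, new_params):
    """The channel stays disabled for the entire update, so no request can be
    processed against a half-written parameter set."""
    channel.enabled = False
    channel.params = dict(new_params)
    channel.enabled = True
```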

Publication date: 16-11-2006

Command re-ordering in hub interface unit based on priority

Number: US20060259568A1

Command reordering in the hub interface unit (HIU) of Enhanced Direct Memory Access (EDMA) functions is described. Without command reordering in the EDMA, commands are issued by the HIU to the peripheral in order of issue. If the higher priority transfers are issued later by the EDMA, the previously issued lower priority transfers would block the higher priority transfers. Command reordering in the HIU causes transfers to be reordered and issued to the peripheral based on their priority. Reordering allows the EDMA and HIU to give due service to high priority transfer requests with decreased weight placed on the order in which the requests were issued.

Publication date: 04-08-2005

Programmable built in self test of memory

Number: US20050172180A1

The pBIST approach to memory testing is a balanced hardware-software solution. pBIST hardware provides access to all memories and other such logic (e.g. register files) in pipelined logic allowing back-to-back accesses. The approach then gives the user access to this logic through CPU-like logic in which the programmer can code any algorithm to target any memory testing technique required. Because hardware inside the chip is used at-speed, the full device speed capabilities are available. The CPU-like hardware can be programmed, and algorithms can be developed and executed after tape-out, while testing on devices in chip form is in progress.

Publication date: 18-03-2003

Multilevel cache system coherence with memory selectively configured as cache or direct access memory and direct memory access

Number: US0006535958B1

A data processing system having a central processing unit, at least one level one cache, a level two unified cache, a directly addressable memory and a direct memory access unit includes a snoop unit generating snoop accesses to the at least one level one cache upon a direct memory access to the directly addressable memory. The snoop unit generates a write snoop access to both level one caches upon a direct memory access write to or a direct memory access read from the directly addressable memory. The level one cache also invalidates a cache entry upon a snoop hit and also writes back a dirty cache entry to the directly addressable memory. A level two memory is selectively configurable as part level two unified cache and part directly addressable memory.
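
The snoop behaviour can be modeled compactly in Python. The dictionary-backed L1Cache and the function names below are illustrative assumptions; the point is the ordering: write back dirty data and invalidate the L1 copy before the DMA access completes.

```python
class L1Cache:
    """Dictionary-backed stand-in for a level one cache (illustrative only)."""
    def __init__(self):
        self.lines = {}                        # addr -> {"data": ..., "dirty": bool}

    def snoop(self, addr, l2_sram):
        line = self.lines.pop(addr, None)      # snoop hit: invalidate the L1 copy
        if line and line["dirty"]:
            l2_sram[addr] = line["data"]       # write back dirty data before the DMA sees it

def dma_access(addr, l1_caches, l2_sram, write_data=None):
    """Any DMA read or write of the directly addressable memory snoops every L1 cache first."""
    for cache in l1_caches:
        cache.snoop(addr, l2_sram)
    if write_data is not None:
        l2_sram[addr] = write_data
    return l2_sram.get(addr)
```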

Publication date: 02-12-2014

Cache with multiple access pipelines

Number: US0008904115B2

Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags.

Publication date: 20-06-2002

Active ports in a transfer controller with hub and ports

Number: US20020078269A1

In a transfer controller with hub and ports architecture one of the data ports is an active data port. This active data port can supply its own source information, destination information and data quantity in a data transfer request. This data transfer request is serviced in a manner similar to other data transfer requests. The active data port may specify itself as the data destination in an active read. Alternatively, the active data port may specify itself as the data source in an active data write.

Publication date: 10-01-2006

Active ports in a transfer controller with hub and ports

Number: US0006985982B2

In a transfer controller with hub and ports architecture one of the data ports is an active data port. This active data port can supply its own source information, destination information and data quantity in a data transfer request. This data transfer request is serviced in a manner similar to other data transfer requests. The active data port may specify itself as the data destination in an active read. Alternatively, the active data port may specify itself as the data source in an active data write.

Publication date: 02-12-2003

Parallel transfer size calculation and annulment determination in transfer controller with hub and ports

Number: US0006658503B1

The transfer controller with hub and ports, originally developed as a communication hub between the various locations of a global memory map within the DSP, is described. Using the technique of this invention, parallel size calculation and write annulment decision capability is employed. This technique facilitates the process of setting up complex transfers without expending inefficient brute-force processor cycles. Annulment determination allows detection of cases when a set of data cannot be output immediately, so the destination pipeline postpones execution of the write command.

Publication date: 11-05-2010

Command re-ordering in hub interface unit based on priority

Number: US0007716388B2

Command reordering in the hub interface unit (HIU) of Enhanced Direct Memory Access (EDMA) functions is described. Without command reordering in the EDMA, commands are issued by the HIU to the peripheral in order of issue. If the higher priority transfers are issued later by the EDMA, the previously issued lower priority transfers would block the higher priority transfers. Command reordering in the HIU causes transfers to be reordered and issued to the peripheral based on their priority. Reordering allows the EDMA and HIU to give due service to high priority transfer requests with decreased weight placed on the order in which the requests were issued.

Publication date: 15-07-2003

Method and apparatus for operating one or more caches in conjunction with direct memory access controller

Number: US0006594711B1

A data processing apparatus includes a data processor core having integral cache memory and local memory, an external memory interface and a direct memory access unit. The direct memory access unit is connected to a single data interchange port of the data processor core and to an internal data interchange port of the external memory interface. The direct memory access unit transports data according to commands received from the data processor core to or from devices external to the data processing unit via the external memory interface. As an extension of this invention, a single direct memory access unit may serve a multiprocessing environment including plural data processor cores. The data processor core, external memory interface and direct memory access unit are preferably embodied in a single integrated circuit. The data processor core preferably includes an instruction cache for temporarily storing program instructions and a data cache for temporarily storing data. The data processor ...

Publication date: 16-11-2006

Hardware configurable hub interface unit

Number: US20060259569A1

A data transfer apparatus with hub and ports includes design configurable hub interface units (HIU) between the ports and corresponding external application units. The configurable HIU provides a single generic superset HIU that can be configured for specific, more specialized applications during implementation as part of design synthesis. Configuration allows the superset configurable HIU to be crafted into any one of several possible special purpose HIUs. This configuration is performed during the design phase and is not applied in field applications. Optimization aimed at eliminating functional blocks not needed in a specific design and simplifying and modifying other functional blocks allows for the efficient configuring of these other types of HIUs. Configuration of HIUs for specific needs can result in significant savings in silicon area and in power consumption.

Publication date: 03-09-2002

Automated method for testing cache

Number: US0006446241B1

A method generates a list of allowed states in a cache design by applying each input transaction sequentially to all found legal cache states. If application of an input transaction to a current search cache results in a new cache state, then this new cache state is added to the list of legal cache states and to a list of search cache states. This is repeated for all input transactions and all such found legal cache states. At the same time a sequence of input transactions reaching each new cache state is formed. This new sequence is the sequence of input transactions for the prior cache state and the current input transaction. The method generates a series of test sequences from the list of allowed states and their corresponding sequence of input transactions which are applied to the control logic cache design and to a reference memory. If the response of the control logic cache design fails to match the response of the reference memory, then a design fault is detected.
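
The state-enumeration and test-generation steps map naturally onto a breadth-first search. The sketch below is a hedged Python model in which apply_txn, design_response and reference_response stand in for the cache design, its simulator and the reference memory; all of these names are assumptions for illustration.

```python
from collections import deque

def enumerate_legal_states(initial_state, transactions, apply_txn):
    """Breadth-first search over reachable (hashable) cache states, recording for each
    state the input-transaction sequence that first reached it."""
    legal = {initial_state: []}
    frontier = deque([initial_state])
    while frontier:
        state = frontier.popleft()
        for txn in transactions:
            nxt = apply_txn(state, txn)
            if nxt not in legal:                  # a new legal cache state
                legal[nxt] = legal[state] + [txn]
                frontier.append(nxt)
    return legal

def run_tests(legal, design_response, reference_response):
    """Replay every recorded sequence against the cache design and the reference memory;
    any mismatch indicates a design fault."""
    return [seq for seq in legal.values()
            if design_response(seq) != reference_response(seq)]
```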

Publication date: 16-05-2006

Transfer request bus node for transfer controller with hub and ports

Number: US0007047284B1

A transfer request bus and transfer request bus node is described which is suitable for use in a data transfer controller processing multiple concurrent transfer requests despite the attendant collisions which result when conflicting transfer requests occur. Transfer requests are passed from an upstream transfer request node to a downstream transfer request node and then to a transfer request controller with queue. At each node a local transfer request can also be inserted to be passed on to the transfer controller queue. Collisions at each transfer request node are resolved using a token passing scheme wherein a transfer request node possessing the token allows a local request to be inserted in preference to the upstream request.
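
A per-node collision rule under token passing might look like the following Python sketch; the single-slot node model and the convention that the losing request is simply held for a later cycle are simplifying assumptions of the example.

```python
def node_step(upstream_request, local_request, has_token):
    """One transfer-request node per cycle: returns (request forwarded downstream,
    request held for a later cycle). The token holder wins a collision."""
    if upstream_request is None:
        return local_request, None
    if local_request is None:
        return upstream_request, None
    if has_token:
        return local_request, upstream_request    # insert the local request first
    return upstream_request, local_request        # otherwise let upstream traffic pass
```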

Publication date: 19-11-2002

Unified multilevel memory system architecture which supports both cache and addressable SRAM

Number: US0006484237B1

A data processing apparatus is embodied in a single integrated circuit. The data processing apparatus includes a central processing unit, at least one level one cache, a level two unified cache and a directly addressable memory. The at least one level one cache preferably includes a level one instruction cache temporarily storing program instructions for execution by the central processing unit and a level one data cache temporarily storing data for manipulation by said central processing unit. The level two unified cache and the directly addressable memory are preferably embodied in a single memory selectively configurable as a part level two unified cache and a part directly addressable memory. The single integrated circuit data processing apparatus further includes a direct memory access unit connected to the directly addressable memory and adapted for connection to an external memory. The direct memory access unit controls data transfer between the directly addressable memory and the external memory.

Publication date: 29-03-2012

Cache with Multiple Access Pipelines

Number: US20120079204A1
Assignee: Texas Instruments Inc

Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags.

Publication date: 29-09-2005

Electrical fuse control of memory slowdown

Number: US20050213411A1

Electrical fuses (eFuses) are applied to the task of memory performance adjustment to improve upon earlier fuse techniques by not requiring an additional processing step and expensive equipment. Standard electrical fuse (eFuse) hardware chains provide a soft test feature wherein the effect of memory slow-down can be tested prior to actually programming the fuses. Electrical fuses thus provide a very efficient non-volatile method to match the logic-memory interface through memory trimming, drastically cutting costs and cycle times involved.

Publication date: 13-10-2009

Hardware configurable hub interface unit

Number: US0007603487B2

A data transfer apparatus with hub and ports includes design configurable hub interface units (HIU) between the ports and corresponding external application units. The configurable HIU provides a single generic superset HIU that can be configured for specific, more specialized applications during implementation as part of design synthesis. Configuration allows the superset configurable HIU to be crafted into any one of several possible special purpose HIUs. This configuration is performed during the design phase and is not applied in field applications. Optimization aimed at eliminating functional blocks not needed in a specific design and simplifying and modifying other functional blocks allows for the efficient configuring of these other types of HIUs. Configuration of HIUs for specific needs can result in significant savings in silicon area and in power consumption.

Publication date: 03-02-2005

Electrical fuse control of memory slowdown

Number: US20050024960A1

Electrical fuses (eFuses) are applied to the task of memory performance adjustment to improve upon earlier fuse techniques by not requiring an additional processing step and expensive equipment. Standard electrical fuse (eFuse) hardware chains provide a soft test feature wherein the effect of memory slow-down can be tested prior to actually programming the fuses. Electrical fuses thus provide a very efficient non-volatile method to match the logic-memory interface through memory trimming, drastically cutting costs and cycle times involved.

Publication date: 25-11-2003

External direct memory access processor interface to centralized transaction processor

Number: US0006654819B1

An external direct memory access unit includes an event recognizer recognizing plural event types, a priority encoder selecting for service one recognized external event, a parameter memory storing service request parameters corresponding to each event type and an external direct memory access controller recalling service request parameters from the parameter memory corresponding to recognized events and submitting them to a centralized transaction processor. The service request parameters include a priority for the centralized transaction processor that is independent of the event recognition priority. The service request parameters may be stored in the form of a linked list. The service requests are preferably direct memory accesses which may include writes to the parameter memory for self-modification. The centralized transaction processor may signal an event to the event recognizer upon completion of a requested data transfer.

Publication date: 16-12-2003

Programmer initiated cache block operations

Number: US0006665767B1

This invention enables a program controlled cache state operation on a program designated address range. The program controlled cache state operation could be writeback of data cached from the program designated address range to a higher level memory or such writeback and invalidation of data cached from the program designated address range. A cache operation unit includes a base address register and a word count register loadable by the central processing unit. The program designated address range is from a base address for a number of words of the word count register. In the preferred embodiment the program controlled cache state operation begins upon loading the word count register. The cache operation unit may operate on fractional cache entries by handling misaligned first and last cycles. Alternatively, the cache operation unit may operate only on whole cache entries. The base address register increments and the word count register decrements until the word count reaches zero ...
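
The address-range walk with misaligned first and last cache lines can be sketched as a Python generator; the 64-byte line size and 4-byte word size are illustrative values, not parameters from the patent.

```python
LINE_BYTES = 64    # illustrative cache line size
WORD_BYTES = 4     # illustrative word size

def block_operation_range(base_address, word_count):
    """Yield (line_address, first_byte, last_byte) for the program-designated range,
    so misaligned first and last lines become partial-line operations."""
    end = base_address + word_count * WORD_BYTES    # exclusive end of the range
    addr = base_address
    while addr < end:                               # word count effectively decrements to zero
        line = addr - (addr % LINE_BYTES)
        first = addr % LINE_BYTES                   # non-zero only for a misaligned first line
        last = min(end - line, LINE_BYTES)          # short only for a misaligned last line
        yield line, first, last
        addr = line + LINE_BYTES
```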

Publication date: 18-08-2009

Independent source read and destination write enhanced DMA

Number: US0007577774B2

The present invention provides for independent source-read and destination-write functionality for Enhanced Direct Memory Access (EDMA). Allowing source read and destination write pipelines to operate independently makes it possible for the source pipeline to issue multiple read requests and stay ahead of the destination write for fully pipelined operation. The result is that fully pipelined capability may be achieved and utilization of the full DMA bandwidth and maximum throughput performance are provided.

Publication date: 29-01-2008

Programmable built in self test of memory

Number: US0007325178B2

The pBIST approach to memory testing is a balanced hardware-software solution. pBIST hardware provides access to all memories and other such logic (e.g. register files) in pipelined logic allowing back-to-back accesses. The approach then gives the user access to this logic through CPU-like logic in which the programmer can code any algorithm to target any memory testing technique required. Because hardware inside the chip is used at-speed, the full device speed capabilities are available. The CPU-like hardware can be programmed, and algorithms can be developed and executed after tape-out, while testing on devices in chip form is in progress.

Publication date: 11-10-2005

Write allocation counter for transfer controller with hub and ports

Number: US0006954468B1

The transfer controller with hub and ports uses a write allocation counter and algorithm to control data reads from a source port. The write allocation count is the amount of data that can be consumed immediately by the write reservation station of a slow destination port and the channel data router buffers. This is used to throttle fast source port read operations to whole read bursts until space to absorb the read data is available. This ensures that the source port response queue is not blocked with data that cannot be consumed by the channel data router and the slow destination port. This condition would otherwise block a fast source port from providing data to the other destination ports.

Publication date: 05-04-2012

DIE EXPANSION BUS

Number: US20120084483A1
Author: Sanjive Agarwala

A die expansion bus efficiently couples a supplemental portion of a processing system to an original portion of the processing system on a die. The die expansion bus couples bus subsystems of the supplemental portion of the processing system to the bus subsystems of the original portion of the processing system. The original portion of the processing system is arranged to control the data resources of the supplemental portion of the processing system by accessing the memory endpoints associated with the bus subsystems of the supplemental portion of the processing system.

1. A method for updating processing system designs on integrated circuits, comprising:
arranging an original portion of a processing system in a substrate in accordance with a substantially fixed layout of the original portion of the processing system, wherein the original portion of the processing system includes data resources for processing data and includes bus subsystems having memory endpoints for controlling the data resources of the original portion of the processing system;
arranging a supplemental portion of the processing system in the substrate in accordance with a layout of the supplemental portion of the processing system, wherein the supplemental portion of the processing system includes data resources for processing data and includes bus subsystems having memory endpoints for controlling the data resources of the supplemental portion of the processing system, and wherein the layout of the supplemental portion of the processing system is generated after the layout of the original portion of the processing system has been substantially fixed; and
arranging a die expansion bus in the substrate wherein the die expansion bus is arranged to couple the bus subsystems of the supplemental portion of the processing system to the bus subsystems of the original portion of the processing system, and wherein the original portion of the processing system is arranged to control the data resources of ...

Publication date: 18-02-2021

Systems And Methods For Facilitating Expert Communications

Number: US20210050118A1

A communication system and method including displaying a request screen, the request screen including an area for selecting one or more diagnoses or entering a name of an illness and a submit indicator; generating a request in response to a requestor selecting the submit indicator on the request screen; identifying one or more other users as eligible users eligible to provide content in response to the request, the identifying including matching respective records of the one or more other users to the request; sending out invitations to one or more of the eligible users; and displaying a session screen, the session screen corresponding to a communication session and including an indicator for each of one or more participants in the communication session, wherein the one or more participants includes the requestor and one or more eligible users who have accepted an invitation.

1. A communication method comprising:
displaying a request screen, the request screen comprising an area for selecting one or more diagnoses or entering a name of an illness and a submit indicator;
generating a request in response to a requestor selecting the submit indicator on the request screen;
identifying one or more other users as eligible users eligible to provide content in response to the request, the identifying comprising matching respective records of the one or more other users to the request;
sending out invitations to one or more of the eligible users; and
displaying a session screen, the session screen corresponding to a communication session and including an indicator for each of one or more participants in the communication session, wherein the one or more participants comprise the requestor and one or more eligible users who have accepted an invitation.

2. The communication method according to claim 1, wherein the communication session is at least partially anonymous such that the identity of at least one of the participants is not displayed or otherwise accessible to the other ...

Publication date: 09-03-1994

Method of detecting zero condition of arithmetic or logical computation result, and circuit for same

Number: EP0585619A2
Assignee: Texas Instruments Inc

An arithmetic or logical computation result detection circuit (60) is described. The circuit has a set of one-bit-zero cells (62) which receive a first operand, A, a second operand, B, and a carry input Cin, and generate a set of one-bit-zero signals, Z. A combinatorial circuit (64) receives the set of one-bit-zero signals and provides a selected output which is a known function of the one-bit-zero signals. In a preferred embodiment, the combinatorial circuit (64) is a logical AND function which detects a condition when all the one-bit-zero signals are positively asserted. In various embodiments of the preferred invention the one-bit-zero signals may be operable to detect an arithmetic zero condition for operations of addition, subtraction, or a logic operation. Other devices, systems and methods are also disclosed.
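
One classic one-bit-zero formulation consistent with this description detects whether A + B + Cin is zero without waiting for the carry chain. The Python sketch below is an assumption about the cell function (the patent's cells may be defined differently) and checks it exhaustively for an 8-bit width.

```python
N = 8   # illustrative operand width

def one_bit_zero(a, b, cin):
    """Per-bit Z signals: Z[i] compares bit i of A XOR B against the carry that
    bit i would receive if the total sum were zero, using only neighbouring bits."""
    bit = lambda x, i: (x >> i) & 1
    z = []
    for i in range(N):
        below = cin if i == 0 else (bit(a, i - 1) | bit(b, i - 1))
        z.append((bit(a, i) ^ bit(b, i)) == below)
    return z

def is_zero(a, b, cin):
    return all(one_bit_zero(a, b, cin))    # the combinatorial AND of all Z signals

# Exhaustive check against an ordinary adder for the chosen width.
assert all(is_zero(a, b, c) == (((a + b + c) & ((1 << N) - 1)) == 0)
           for a in range(1 << N) for b in range(1 << N) for c in (0, 1))
```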

Publication date: 07-05-2009

Message controller with dynamically loadable message descriptor

Number: US20090119326A1
Assignee: Honeywell International Inc

A dynamic message controller is provided. The message controller includes a plurality of message descriptors and a message parser. Each message descriptor includes a unique pattern for a message that corresponds to a specific type of message received by the message controller. The message parser module is configured to match incoming messages with a select one of the message descriptors based on matched message patterns. The message parser module is further configured to load the matched message descriptor and parse an associated incoming message based on the matched message descriptor.

Publication date: 04-01-2006

Write allocation counter for transfer controller with hub and ports

Number: EP1132823A3
Assignee: Texas Instruments Inc

The transfer controller with hub and ports uses a write allocation counter (510) and algorithm to control data reads from a source port. The write allocation count is the amount of data that can be consumed immediately by the write reservation station of a slow destination port and the channel data router buffers. The method tests to determine if a reservation station at the slow destination port is fully allocated (503). If not fully allocated, the method does a pre-write to allocate reservation station space for the data transfer (511) and increments the write allocation counter (508). A read of the fast port takes place when: the write allocation counter is greater than the read burst size (502) and there was a pre-write in the previous cycle (504); the write allocation counter is greater than the capacity of the data router (501); or the write allocation counter is less than the read burst size and the reservation station is full (503). This is used to throttle fast source port read operations to whole read bursts until space to absorb the read data is available. This ensures that the source port response queue is not blocked with data that cannot be consumed by the channel data router and the slow destination port. This condition would otherwise block a fast source port from providing data to the other destination ports.
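
The read-gating conditions enumerated above translate directly into a small predicate. The Python sketch below restates those conditions with illustrative parameter names, together with the pre-write step that bumps the counter; it is a reading of the description, not the patented logic itself.

```python
def allow_fast_port_read(write_alloc_count, read_burst_size, router_capacity,
                         prewrite_last_cycle, reservation_station_full):
    """Gate a fast source port read on the conditions listed in the description above."""
    return ((write_alloc_count > read_burst_size and prewrite_last_cycle)
            or write_alloc_count > router_capacity
            or (write_alloc_count < read_burst_size and reservation_station_full))

def maybe_prewrite(free_slots, write_alloc_count):
    """If the slow port's reservation station is not fully allocated, pre-write one slot
    and increment the write allocation counter."""
    if free_slots > 0:
        return free_slots - 1, write_alloc_count + 1, True
    return free_slots, write_alloc_count, False
```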

Publication date: 11-01-2006

Active ports in a transfer controller with hub and ports

Number: EP1202183A3
Assignee: Texas Instruments Inc

In a transfer controller with hub (100) and ports (111, 112, 113, 114, 115) architecture one of the data ports (411) is an active data port. This active data port (411) can supply its own source information, destination information and data quantity in a data transfer request (501, 601, 701) with another data port (509, 609, 709). This data transfer request is serviced in a manner similar to other data transfer requests by a source pipeline (130), a destination pipeline (140) and a data router (150). The active data port (411) may specify itself as the data destination in an active read (Figure 5). Alternatively, the active data port (411) may specify itself as the data source in an active data write (Figure 6).
