Total found: 38. Displayed: 38.
Publication date: 25-06-2004

ADAPTIVE POLLING BY RECOGNIZING POWER CONSUMPTION FOR COMPUTER SYSTEM

Number: KR20040054491A

PURPOSE: An adaptive polling method that recognizes power consumption in a computer system is provided to reduce power consumption by adaptively polling a device without violating a system response restriction. CONSTITUTION: For a pending service request, the device of a data processing system is polled (54). The data processing system polls the device in order to observe whether the pending service request is present (58). If the pending service request is present, the data processing system processes the request by moving to an interrupt service routine of the device (64). It is judged whether the device is idle based on accumulated data (72). In response to the judgment result, a processor is kept in a low power state (74). © KIPO 2005 ...

Publication date: 09-08-2012

Architecture Support for Debugging Multithreaded Code

Number: US20120203979A1

Mechanisms are provided for debugging application code using a content addressable memory. The mechanisms receive an instruction in a hardware unit of a processor of the data processing system, the instruction having a target memory address that the instruction is attempting to access. A content addressable memory (CAM) associated with the hardware unit is searched for an entry in the CAM corresponding to the target memory address. In response to an entry in the CAM corresponding to the target memory address being found, a determination is made as to whether information in the entry identifies the instruction as an instruction of interest. In response to the entry identifying the instruction as an instruction of interest, an exception is generated and sent to one of an exception handler or a debugger application. In this way, debugging of multithreaded applications may be performed in an efficient manner.

Claim 1. A method, in a processor of a data processing system, for debugging application code, comprising: receiving an instruction in a hardware unit of the processor, the instruction having a target memory address that the instruction is attempting to access in a memory of the data processing system; searching a content addressable memory (CAM) associated with the hardware unit for an entry in the CAM corresponding to the target memory address; in response to an entry in the CAM corresponding to the target memory address being found, determining whether information in the entry identifies the received instruction as an instruction of interest; and in response to the entry identifying the received instruction as an instruction of interest, generating an exception and sending the exception to one of an exception handler or a debugger application.

Claim 2. The method of claim 1, wherein searching the CAM comprises searching entries in the CAM for an entry having a starting address and length corresponding to a range of memory addresses within which the target memory address is ...
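The CAM lookup described above can be modeled in software. The sketch below is purely illustrative (the patent describes a hardware unit); the class name, entry layout, and range semantics (a start address plus a length, as in claim 2) are assumptions for the example.

```python
# Software model of a debug CAM: each entry covers a byte range
# [start, start + length) and flags whether a hit should be treated
# as an "instruction of interest" (i.e., raise an exception/trap).
class DebugCAM:
    def __init__(self):
        self.entries = []  # list of (start, length, of_interest)

    def add_entry(self, start, length, of_interest=True):
        self.entries.append((start, length, of_interest))

    def check(self, target_addr):
        """True if target_addr hits an entry marked as of interest."""
        for start, length, of_interest in self.entries:
            if start <= target_addr < start + length and of_interest:
                return True
        return False

cam = DebugCAM()
cam.add_entry(0x1000, 0x100)   # watch a 256-byte variable region
print(cam.check(0x1040))       # inside the watched range -> True
print(cam.check(0x2000))       # outside any entry -> False
```

In the hardware described by the abstract, this comparison happens in parallel across all CAM entries rather than in a loop; the sequential scan here only mimics the observable behavior.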

Publication date: 08-04-2010

Total cost based checkpoint selection

Number: US20100088494A1

A method, system, and computer usable program product for total cost based checkpoint selection are provided in the illustrative embodiments. A cost associated with taking a checkpoint is determined. The cost includes an energy cost. An interval between checkpoints is computed so as to minimize the cost. An instruction is sent to schedule the checkpoints at the computed interval. The energy cost may further include a cost of energy consumed in collecting and saving data at a checkpoint, a cost of energy consumed in re-computing a computation lost due to a failure after taking the checkpoint, or a combination thereof. The cost may further include, converted to a cost equivalent, administration time consumed in recovering from a checkpoint, computing resources expended in taking a checkpoint, computing resources expended after a failure in restoring information from a checkpoint, performance degradation of an application while taking a checkpoint, or a combination thereof.
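To make the trade-off concrete: one classical way to compute a cost-minimizing checkpoint interval is Young's approximation, which balances the cost of taking checkpoints against the expected cost of recomputing work lost to failures. This formula is not claimed by the abstract above; it is only a well-known stand-in for the "interval computed so as to minimize the cost" step.

```python
import math

def checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation: interval (s) that roughly minimizes
    checkpoint overhead plus expected recomputation after a failure."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Example: a 30 s checkpoint on a system failing about once per day.
interval = checkpoint_interval(30.0, 24 * 3600)
print(round(interval))  # about 2277 s, i.e. roughly every 38 minutes
```

An energy-based cost, as in the abstract, would replace the time costs with joule-denominated equivalents, but the structure of the minimization is the same.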

Publication date: 15-05-2012

Scalable space-optimized and energy-efficient computing system

Number: US0008179674B2

A scalable space-optimized and energy-efficient computing system is provided. The computing system comprises a plurality of modular compartments in at least one level of a frame configured in a hexahedron configuration. The computing system also comprises an air inlet, an air mixing plenum, and at least one fan. In the computing system the plurality of modular compartments are affixed above the air inlet, the air mixing plenum is affixed above the plurality of modular compartments, and the at least one fan is affixed above the air mixing plenum. When at least one module is inserted into one of the plurality of modular compartments, the module couples to a backplane within the frame.

Publication date: 15-08-2006

Power aware adaptive polling

Number: US0007093141B2

A method for adapting the periodicity of polling for pending service requests, by polling system devices for pending service requests, recording whether or not there was a pending service request and, based on accumulated data, determining whether or not the system devices are idle. Based on this determination, the system may elect to enter a power conservation mode until device activity is signaled, or an adjustable period of time elapses. The adaptation mechanism may alter the periodicity of the timer interrupt, disable or enable device interrupts, and modify variables used to determine system idleness (including minimum latency and minimum idleness thresholds). In this manner, the system can conserve power while maintaining system performance and responsiveness.
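The poll-record-decide loop above can be sketched in a few lines. This is a minimal model, not the patented mechanism: the window size, idleness threshold, and class name are all invented for illustration.

```python
# Adaptive-polling sketch: record each poll's outcome, and declare the
# device idle once a full window of recent polls shows almost no
# pending work -- the point at which the system could enter a
# low-power state until activity is signaled or a timer elapses.
class AdaptivePoller:
    def __init__(self, window=8, idle_threshold=0.125):
        self.history = []          # 1 = request was pending, 0 = idle poll
        self.window = window
        self.idle_threshold = idle_threshold

    def record_poll(self, pending):
        self.history.append(1 if pending else 0)
        self.history = self.history[-self.window:]   # keep recent window

    def is_idle(self):
        """Idle when the fraction of busy polls drops to the threshold."""
        if len(self.history) < self.window:
            return False            # not enough accumulated data yet
        return sum(self.history) / self.window <= self.idle_threshold

poller = AdaptivePoller()
for _ in range(8):
    poller.record_poll(pending=False)
print(poller.is_idle())  # True: safe to enter a power-conservation mode
```

The abstract's adaptation mechanism goes further (altering timer periodicity and interrupt enables); the sketch covers only the idleness decision.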

Publication date: 17-06-2014

Method and system for performance isolation in virtualized environments

Number: US0008756608B2

A method, a system, an apparatus, and a computer program product for allocating resources of one or more shared devices to one or more partitions of a virtualization environment within a data processing system. At least one user defined resource assignment is received for one or more devices associated with the data processing system. One or more registers, associated with the one or more partitions are dynamically set to execute the at least one resource assignment, whereby the at least one resource assignment enables a user defined quantitative measure (number and/or percentage) of devices to operate when the one or more transactions are executed via the partition. The system enables the one or more devices to execute one or more transactions at a bandwidth/capacity that is less than or equal to the user defined resource assignment and minimizes performance interference among partitions.

Publication date: 23-06-2011

METHOD OF EXPLOITING SPARE PROCESSORS TO REDUCE ENERGY CONSUMPTION

Number: US20110154348A1

A method, system, and computer program product for reducing power and energy consumption in a server system with multiple processor cores is disclosed. The system may include an operating system for scheduling user workloads among a processor pool. The processor pool may include active licensed processor cores and inactive unlicensed processor cores. The method and computer program product may reduce power and energy consumption by including steps and sets of instructions activating spare cores and adjusting the operating frequency of processor cores, including the newly activated spare cores to provide equivalent computing resources as the original licensed cores operating at a specified clock frequency.

Publication date: 01-10-2013

Heatsink allowing in-situ maintenance in a stackable module

Number: US0008547692B2

A modular processing module allowing in-situ maintenance is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors, and a plurality of processing nodes. Each processing module side couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system and at least one heatsink that couples to at least a portion of the plurality of processing nodes on one of the processing module sides and is designed such that, when a set of heatsinks in the modular processing module are installed, an empty space is left in a center of the modular processing module.

Publication date: 01-12-2011

Mechanisms for Reducing DRAM Power Consumption

Number: US20110296097A1

Mechanisms are provided for inhibiting precharging of memory cells of a dynamic random access memory (DRAM) structure. The mechanisms receive a command for accessing memory cells of the DRAM structure. The mechanisms further determine, based on the command, if precharging the memory cells following accessing the memory cells is to be inhibited. Moreover, the mechanisms send, in response to the determination indicating that precharging the memory cells is to be inhibited, a command to blocking logic of the DRAM structure to block precharging of the memory cells following accessing the memory cells.

Publication date: 01-12-2011

Heatsink Allowing In-Situ Maintenance in a Stackable Module

Number: US20110292596A1

A modular processing module allowing in-situ maintenance is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors, and a plurality of processing nodes. Each processing module side couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system and at least one heatsink that couples to at least a portion of the plurality of processing nodes on one of the processing module sides and is designed such that, when a set of heatsinks in the modular processing module are installed, an empty space is left in a center of the modular processing module.

Publication date: 10-01-2006

Energy-induced process migration

Number: US0006985952B2

Cluster systems having central processor units (CPUs) with multiple processors (MPs) are configured as high-density servers. Power density is managed within the cluster systems by assigning a utilization to persistent states and connections within the cluster systems. If a request to reduce overall power consumption within the cluster system is received, persistent states and connections are moved (migrated) among the multiple processors based on their utilization to balance power dissipation within the cluster systems. If persistent connections and states that must be maintained have a low rate of reference, they may be maintained in processors that are set to a standby mode where memory states are maintained. In this way, the requirement to maintain persistent connections and states does not interfere with an overall strategy of managing power within the cluster systems.

Publication date: 05-07-2012

Optimizing Energy Consumption and Application Performance in a Multi-Core Multi-Threaded Processor System

Number: US20120173906A1

A mechanism is provided for scheduling application tasks. A scheduler receives a task that identifies a desired frequency and a desired maximum number of competing hardware threads. The scheduler determines whether a user preference designates either maximization of performance or minimization of energy consumption. Responsive to the user preference designating the performance, the scheduler determines whether there is an idle processor core in a plurality of processor cores available. Responsive to no idle processor being available, the scheduler identifies a subset of processor cores having a smallest load coefficient. From the subset of processor cores, the scheduler determines whether there is at least one processor core that matches desired parameters of the task. Responsive to at least one processor core matching the desired parameters of the task, the scheduler assigns the task to one of the at least one processor core that matches the desired parameters.
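The performance-maximizing branch of the scheduling decision above can be sketched as follows. The field names (`load`, `freq`) and the "core matches the desired parameters" test are assumptions for the example, not details from the patent.

```python
# Scheduling sketch: prefer an idle core; otherwise restrict attention
# to the cores with the smallest load coefficient and pick one whose
# frequency can satisfy the task's desired frequency.
def assign_core(cores, desired_freq):
    """cores: list of dicts with 'id', 'load', and 'freq' (GHz)."""
    idle = [c for c in cores if c["load"] == 0.0]
    if idle:
        return idle[0]["id"]                 # idle core available
    min_load = min(c["load"] for c in cores)
    candidates = [c for c in cores if c["load"] == min_load]
    for c in candidates:
        if c["freq"] >= desired_freq:        # core matches the task
            return c["id"]
    return candidates[0]["id"]               # fall back: least-loaded core

cores = [
    {"id": 0, "load": 0.6, "freq": 2.4},
    {"id": 1, "load": 0.2, "freq": 1.8},
    {"id": 2, "load": 0.2, "freq": 3.0},
]
print(assign_core(cores, desired_freq=2.5))  # -> 2
```

The energy-minimization branch mentioned in the abstract would invert the preference, consolidating tasks instead of spreading them; only the performance path is shown here.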

Publication date: 22-03-2016

Method of exploiting spare processors to reduce energy consumption

Number: US0009292662B2

A method, system, and computer program product for reducing power and energy consumption in a server system with multiple processor cores is disclosed. The system may include an operating system for scheduling user workloads among a processor pool. The processor pool may include active licensed processor cores and inactive unlicensed processor cores. The method and computer program product may reduce power and energy consumption by including steps and sets of instructions activating spare cores and adjusting the operating frequency of processor cores, including the newly activated spare cores to provide equivalent computing resources as the original licensed cores operating at a specified clock frequency.

Publication date: 19-11-2013

Architectural support for automated assertion checking

Number: US0008589895B2

A mechanism is provided for automatic detection of assertion violations. An application may write assertion tuples to the assertion checking mechanism. An assertion tuple forms a Boolean expression (predicate or invariant) that the developer of the application wishes to check. If the assertion defined by the tuple remains true, then the application does not violate the assertion. For any instruction that stores a value to a memory location or register at a target address, the assertion checking mechanism compares the target address to the addresses specified in the assertion tuples. If the target address matches one of the tuple addresses, then the assertion checking mechanism reads a value from the other address in the tuple. The assertion checking mechanism then recomputes the assertion using the retrieved value along with the value to be stored. If the assertion checking mechanism detects an assertion violation, the assertion checking mechanism raises an exception.

Publication date: 20-10-2015

Technology for web site crawling

Number: US0009165077B2

A web site page has a reference for providing an address for a next page. The web site is crawled by a crawler program, which parses the reference from one of the web pages and sends the reference to an applet running in a browser. The address for the next page is determined by the browser responsive to the reference and is sent to the crawler. The crawler selects non-hypertext-link parameters from the web page of the web site server by performing a programmed action sequence, including selecting items from lists of the web page in a particular sequence. The crawler sends the applet running in the browser, for the query to the web server for the next page referenced by the one web page, the selected parameters and a context arising from the particular sequence.

Publication date: 09-09-2014

Optimizing energy consumption and application performance in a multi-core multi-threaded processor system

Number: US0008832479B2

A mechanism is provided for scheduling application tasks. A scheduler receives a task that identifies a desired frequency and a desired maximum number of competing hardware threads. The scheduler determines whether a user preference designates either maximization of performance or minimization of energy consumption. Responsive to the user preference designating the performance, the scheduler determines whether there is an idle processor core in a plurality of processor cores available. Responsive to no idle processor being available, the scheduler identifies a subset of processor cores having a smallest load coefficient. From the subset of processor cores, the scheduler determines whether there is at least one processor core that matches desired parameters of the task. Responsive to at least one processor core matching the desired parameters of the task, the scheduler assigns the task to one of the at least one processor core that matches the desired parameters.

Publication date: 02-10-2012

Heatsink allowing in-situ maintenance in a stackable module

Number: US0008279597B2

A modular processing module allowing in-situ maintenance is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors, and a plurality of processing nodes. Each processing module side couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system and at least one heatsink that couples to at least a portion of the plurality of processing nodes on one of the processing module sides and is designed such that, when a set of heatsinks in the modular processing module are installed, an empty space is left in a center of the modular processing module.

Publication date: 07-04-2010

Method of assigning virtual memory to physical memory, storage controller and computer system

Number: CN0001617113B

A method of assigning virtual memory to physical memory in a data processing system allocates a set of contiguous physical memory pages for a new page mapping, instructs the memory controller to move the virtual memory pages according to the new page mapping, and then allows access to the virtual memory pages using the new page mapping while the memory controller is still copying the virtual memory pages to the set of physical memory pages. The memory controller can use a mapping table which temporarily stores entries of the old and new page addresses, and releases the entries as copying for each entry is completed. The translation lookaside buffer (TLB) entries in the processor cores are updated for the new page addresses prior to completion of copying of the memory pages by the memory controller. The invention can be extended to non-uniform memory array (NUMA) systems. For systems with cache memory, any cache entry which is affected by the page move can be updated by modifying its address tag ...

Publication date: 22-03-2007

HARDWARE SUPPORT FOR SUPERPAGE COALESCING

Number: US2007067604A1

A method of assigning virtual memory to physical memory in a data processing system allocates a set of contiguous physical memory pages for a new page mapping, instructs the memory controller to move the virtual memory pages according to the new page mapping, and then allows access to the virtual memory pages using the new page mapping while the memory controller is still copying the virtual memory pages to the set of physical memory pages. The memory controller can use a mapping table which temporarily stores entries of the old and new page addresses, and releases the entries as copying for each entry is completed. The translation look aside buffer (TLB) entries in the processor cores are updated for the new page addresses prior to completion of copying of the memory pages by the memory controller. The invention can be extended to non-uniform memory array (NUMA) systems. For systems with cache memory, any cache entry which is affected by the page move can be updated by modifying its address ...

Publication date: 11-06-2013

Minimizing aggregate cooling and leakage power

Number: US0008463456B2

A mechanism is provided for minimizing system power in a data processing system. A management control unit determines whether a convergence has been reached in the data processing system. If convergence fails to be reached, the management control unit determines whether a maximum fan flag is set to indicate that a fan is operating at a maximum speed. Responsive to the maximum fan flag failing to be set, a thermal threshold of the data processing system is either increased or decreased and thereby a fan speed of the data processing system is either increased or decreased based on whether the system power of the data processing system has either increased or decreased and based on whether a temperature of the data processing system has either increased or decreased. Thus, a new thermal threshold and a new fan speed are formed. The process is then repeated until convergence has been met.
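The convergence loop above amounts to a one-dimensional search over fan speed: faster fans raise fan power but cool the chip and cut leakage power. The power model below is invented purely for illustration (nothing about it comes from the patent); it only demonstrates the stop-when-no-improvement structure.

```python
# Hypothetical system power model: leakage falls as the fan speeds up
# (cooler silicon leaks less), while fan power rises with speed.
def total_power(fan_speed):
    leakage = 80.0 / (1.0 + fan_speed)
    fan = 2.0 * fan_speed
    return leakage + fan

def find_min_power(fan_speed=1.0, step=0.25, max_fan=10.0):
    """Nudge fan speed upward until total power stops improving
    (the 'convergence' condition) or the fan hits its maximum."""
    best = total_power(fan_speed)
    while fan_speed + step <= max_fan:
        candidate = total_power(fan_speed + step)
        if candidate >= best:    # no improvement: converged
            break
        fan_speed += step
        best = candidate
    return fan_speed, best

speed, power = find_min_power()
print(speed)  # 5.25 -- near the analytic optimum sqrt(40) - 1
```

The patented mechanism adjusts a thermal threshold (which indirectly drives fan speed) and also reacts to measured temperature; the sketch collapses all of that into a direct search on fan speed.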

Publication date: 01-12-2011

Scalable Space-Optimized and Energy-Efficient Computing System

Number: US20110292594A1

A scalable space-optimized and energy-efficient computing system is provided. The computing system comprises a plurality of modular compartments in at least one level of a frame configured in a hexahedron configuration. The computing system also comprises an air inlet, an air mixing plenum, and at least one fan. In the computing system the plurality of modular compartments are affixed above the air inlet, the air mixing plenum is affixed above the plurality of modular compartments, and the at least one fan is affixed above the air mixing plenum. When at least one module is inserted into one of the plurality of modular compartments, the module couples to a backplane within the frame.

Publication date: 01-12-2011

Stackable Module for Energy-Efficient Computing Systems

Number: US20110292597A1

A modular processing module is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors coupled to the circuit board, and a plurality of processing nodes coupled to the circuit board. Each processing module side in the set of processing module sides couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system.

Publication date: 19-05-2005

Hardware support for superpage coalescing

Number: US2005108496A1

A method of assigning virtual memory to physical memory in a data processing system allocates a set of contiguous physical memory pages for a new page mapping, instructs the memory controller to move the virtual memory pages according to the new page mapping, and then allows access to the virtual memory pages using the new page mapping while the memory controller is still copying the virtual memory pages to the set of physical memory pages. The memory controller can use a mapping table which temporarily stores entries of the old and new page addresses, and releases the entries as copying for each entry is completed. The translation lookaside buffer (TLB) entries in the processor cores are updated for the new page addresses prior to completion of copying of the memory pages by the memory controller. The invention can be extended to non-uniform memory array (NUMA) systems. For systems with cache memory, any cache entry which is affected by the page move can be updated by modifying its address ...

Publication date: 29-05-2014

Technology for Web Site Crawling

Number: US20140149382A1

A web site page has a reference for providing an address for a next page. The web site is crawled by a crawler program, which parses the reference from one of the web pages and sends the reference to an applet running in a browser. The address for the next page is determined by the browser responsive to the reference and is sent to the crawler. The crawler selects non-hypertext-link parameters from the web page of the web site server by performing a programmed action sequence, including selecting items from lists of the web page in a particular sequence. The crawler sends the applet running in the browser, for the query to the web server for the next page referenced by the one web page, the selected parameters and a context arising from the particular sequence.

Claim 1. A method for crawling a web site, the method comprising: querying a web site server by a crawler program, wherein at least one page of the web site has a reference, wherein the reference is specified by a script to produce an address for a next page; parsing such a reference from one of the web pages by the crawler program and sending the reference to an applet running in a browser; and determining the address for the next page by the browser executing the reference and sending the address to the crawler, wherein the crawler automatically selects non-hypertext-link parameters from the one web page of the web site server by performing a programmed action sequence, including selecting items from lists of the one web page in a particular sequence, and the crawler sends the applet running in the browser, for the query to the web server for the next page referenced by the one web page, the selected parameters and a context arising from the particular sequence.

Claim 2. The method of claim 1, the browser being configured to use a certain proxy and refer to a resolver file for hostname-to-IP-address resolution, wherein the web site server has an IP address and the proxy for the browser has a certain IP address ...

Publication date: 21-08-2012

Mechanisms for reducing DRAM power consumption

Number: US0008250298B2

Mechanisms are provided for inhibiting precharging of memory cells of a dynamic random access memory (DRAM) structure. The mechanisms receive a command for accessing memory cells of the DRAM structure. The mechanisms further determine, based on the command, if precharging the memory cells following accessing the memory cells is to be inhibited. Moreover, the mechanisms send, in response to the determination indicating that precharging the memory cells is to be inhibited, a command to blocking logic of the DRAM structure to block precharging of the memory cells following accessing the memory cells.

Publication date: 02-08-2005

Data storage on a multi-tiered disk system

Number: US0006925529B2

A method and a computer usable medium including a program for operating disks having units, comprising: providing a first tier of at least one disk, the first tier storing at least one popular unit, providing a second tier of at least one disk, the second tier storing at least one unpopular unit, powering on at least one first tier disk, powering down the second tier, determining whether a request for a unit requires processing on the first tier or second tier, accessing the requested unit if the requested unit requires processing on the first tier, and powering on a second tier disk to copy the requested unit from the second tier disk to a first tier disk, if the requested unit is stored on the second tier.
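The two-tier access path described above is easy to model: popular units sit on always-on tier-1 disks, unpopular units on powered-down tier-2 disks, and a tier-2 read costs a disk power-on plus a copy to tier 1. The class and field names below are illustrative assumptions, not the patent's terminology.

```python
# Tiered-storage sketch: serve tier-1 hits directly; on a tier-2 hit,
# power on a tier-2 disk, copy the unit up to tier 1, then serve it.
class TieredStore:
    def __init__(self, tier1, tier2):
        self.tier1 = dict(tier1)     # unit -> data, disks powered on
        self.tier2 = dict(tier2)     # unit -> data, disks powered down
        self.tier2_power_ons = 0     # count of expensive power-on events

    def read(self, unit):
        if unit in self.tier1:
            return self.tier1[unit]
        self.tier2_power_ons += 1            # spin up a tier-2 disk
        self.tier1[unit] = self.tier2[unit]  # promote the unit to tier 1
        return self.tier1[unit]

store = TieredStore(tier1={"hot": b"h"}, tier2={"cold": b"c"})
store.read("hot")
store.read("cold")            # triggers one tier-2 power-on + promotion
store.read("cold")            # now a tier-1 hit, no extra power-on
print(store.tier2_power_ons)  # -> 1
```

The design point the abstract is making: as long as the popularity split is accurate, tier-2 disks stay powered down almost all the time, and the occasional promotion amortizes the spin-up cost.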

Publication date: 19-02-2013

Optimizing energy consumption and application performance in a multi-core multi-threaded processor system

Number: US0008381004B2

A mechanism is provided for scheduling application tasks. A scheduler receives a task that identifies a desired frequency and a desired maximum number of competing hardware threads. The scheduler determines whether a user preference designates either maximization of performance or minimization of energy consumption. Responsive to the user preference designating the performance, the scheduler determines whether there is an idle processor core in a plurality of processor cores available. Responsive to no idle processor being available, the scheduler identifies a subset of processor cores having a smallest load coefficient. From the subset of processor cores, the scheduler determines whether there is at least one processor core that matches desired parameters of the task. Responsive to at least one processor core matching the desired parameters of the task, the scheduler assigns the task to one of the at least one processor core that matches the desired parameters.

Publication date: 16-09-2014

Debugging multithreaded code by generating exception upon target address CAM search for variable and checking race condition

Number: US0008838939B2

Mechanisms are provided for debugging application code using a content addressable memory. The mechanisms receive an instruction in a hardware unit of a processor of the data processing system, the instruction having a target memory address that the instruction is attempting to access. A content addressable memory (CAM) associated with the hardware unit is searched for an entry in the CAM corresponding to the target memory address. In response to an entry in the CAM corresponding to the target memory address being found, a determination is made as to whether information in the entry identifies the instruction as an instruction of interest. In response to the entry identifying the instruction as an instruction of interest, an exception is generated and sent to one of an exception handler or a debugger application. In this way, debugging of multithreaded applications may be performed in an efficient manner.

Publication date: 27-10-2011

Single Thread Performance in an In-Order Multi-Threaded Processor

Number: US20110265068A1

A mechanism is provided for improving single-thread performance for a multi-threaded, in-order processor core. In a first phase, a compiler analyzes application code to identify instructions that can be executed in parallel with focus on instruction-level parallelism and removing any register interference between the threads. The compiler inserts as appropriate synchronization instructions supported by the apparatus to ensure that the resulting execution of the threads is equivalent to the execution of the application code in a single thread. In a second phase, an operating system schedules the threads produced in the first phase on the hardware threads of a single processor core such that they execute simultaneously. In a third phase, the microprocessor core executes the threads specified by the second phase such that there is one hardware thread executing an application thread.

Publication date: 22-09-2011

Minimizing Aggregate Cooling and Leakage Power

Number: US20110231030A1

A mechanism is provided for minimizing system power in a data processing system. A management control unit determines whether a convergence has been reached in the data processing system. If convergence fails to be reached, the management control unit determines whether a maximum fan flag is set to indicate that a fan is operating at a maximum speed. Responsive to the maximum fan flag failing to be set, a thermal threshold of the data processing system is either increased or decreased and thereby a fan speed of the data processing system is either increased or decreased based on whether the system power of the data processing system has either increased or decreased and based on whether a temperature of the data processing system has either increased or decreased. Thus, a new thermal threshold and a new fan speed are formed. The process is then repeated until convergence has been met.

Publication date: 20-10-2011

Architecture Support for Debugging Multithreaded Code

Number: US20110258421A1

Mechanisms are provided for debugging application code using a content addressable memory. The mechanisms receive an instruction in a hardware unit of a processor of the data processing system, the instruction having a target memory address that the instruction is attempting to access. A content addressable memory (CAM) associated with the hardware unit is searched for an entry in the CAM corresponding to the target memory address. In response to an entry in the CAM corresponding to the target memory address being found, a determination is made as to whether information in the entry identifies the instruction as an instruction of interest. In response to the entry identifying the instruction as an instruction of interest, an exception is generated and sent to one of an exception handler or a debugger application. In this way, debugging of multithreaded applications may be performed in an efficient manner.
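A dictionary can stand in for the content addressable memory to sketch the check described above: each entry maps a target address to a flag marking it as an instruction of interest, and a matching flagged entry routes control to an exception handler. The class and function names are illustrative, not the patent's interface.

```python
class WatchpointCAM:
    """Dict-based stand-in for the CAM described in the abstract:
    maps target memory addresses to 'instruction of interest' flags."""

    def __init__(self):
        self.entries = {}

    def set_watch(self, addr, of_interest=True):
        self.entries[addr] = of_interest

    def check(self, addr):
        """True if an access to addr should raise a debug exception."""
        return self.entries.get(addr, False)

def execute_store(cam, addr, handler):
    """Model of the hardware check on a store: if the CAM holds a
    matching entry flagged as of interest, send an exception to the
    handler (or debugger) before the access proceeds."""
    if cam.check(addr):
        handler(addr)
```

In the hardware described, this lookup happens in parallel with the access rather than as an extra software step, which is what keeps multithreaded debugging efficient.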

Publication date: 14-07-2011

Architectural Support for Automated Assertion Checking

Number: US20110173592A1

A mechanism is provided for automatic detection of assertion violations. An application may write assertion tuples to the assertion checking mechanism. An assertion tuple forms a Boolean expression (predicate or invariant) that the developer of the application wishes to check. If the assertion defined by the tuple remains true, then the application does not violate the assertion. For any instruction that stores a value to a memory location or register at a target address, the assertion checking mechanism compares the target address to the addresses specified in the assertion tuples. If the target address matches one of the tuple addresses, then the assertion checking mechanism reads a value from the other address in the tuple. The assertion checking mechanism then recomputes the assertion using the retrieved value along with the value to be stored. If the assertion checking mechanism detects an assertion violation, the assertion checking mechanism raises an exception.
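The assertion-tuple flow above can be sketched in software as follows: a tuple `(addr_a, addr_b, predicate)` asserts `predicate(mem[addr_a], mem[addr_b])`, and every store whose target address appears in a tuple recomputes the assertion using the value about to be stored. The memory model and names are illustrative assumptions.

```python
class AssertionChecker:
    """Software sketch of the assertion-tuple mechanism described above."""

    def __init__(self, memory):
        self.memory = memory      # address -> value
        self.tuples = []          # (addr_a, addr_b, predicate)

    def add_tuple(self, addr_a, addr_b, predicate):
        self.tuples.append((addr_a, addr_b, predicate))

    def store(self, addr, value):
        # Before committing the store, recompute any assertion whose
        # tuple mentions the target address, reading the other address
        # and using the value to be stored.
        for a, b, pred in self.tuples:
            if addr == a:
                if not pred(value, self.memory[b]):
                    raise RuntimeError(f"assertion violated at {addr:#x}")
            elif addr == b:
                if not pred(self.memory[a], value):
                    raise RuntimeError(f"assertion violated at {addr:#x}")
        self.memory[addr] = value
```

In the patented mechanism this comparison is done by hardware alongside the store, so the application pays no per-store software cost for the invariant check.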

Publication date: 22-01-2013

Stackable module for energy-efficient computing systems

Number: US0008358503B2

A modular processing module is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors coupled to the circuit board, and a plurality of processing nodes coupled to the circuit board. Each processing module side in the set of processing module sides couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system.

Publication date: 11-02-2014

Single thread performance in an in-order multi-threaded processor

Number: US0008650554B2

A mechanism is provided for improving single-thread performance for a multi-threaded, in-order processor core. In a first phase, a compiler analyzes application code to identify instructions that can be executed in parallel, with a focus on instruction-level parallelism and on removing any register interference between the threads. The compiler inserts, as appropriate, synchronization instructions supported by the apparatus to ensure that the resulting execution of the threads is equivalent to the execution of the application code in a single thread. In a second phase, an operating system schedules the threads produced in the first phase on the hardware threads of a single processor core such that they execute simultaneously. In a third phase, the microprocessor core executes the threads specified by the second phase such that each hardware thread executes one application thread.

Publication date: 01-12-2011

Optimizing Energy Consumption and Application Performance in a Multi-Core Multi-Threaded Processor System

Number: US20110296212A1

A mechanism is provided for scheduling application tasks. A scheduler receives a task that identifies a desired frequency and a desired maximum number of competing hardware threads. The scheduler determines whether a user preference designates either maximization of performance or minimization of energy consumption. Responsive to the user preference designating the performance, the scheduler determines whether there is an idle processor core in a plurality of processor cores available. Responsive to no idle processor being available, the scheduler identifies a subset of processor cores having a smallest load coefficient. From the subset of processor cores, the scheduler determines whether there is at least one processor core that matches desired parameters of the task. Responsive to at least one processor core matching the desired parameters of the task, the scheduler assigns the task to one of the at least one processor core that matches the desired parameters.
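The selection logic described above can be sketched as a short function. The field names (`load`, `frequency`, `threads`, `max_threads`) and the fallback when no core matches are assumptions for illustration; the patent specifies only the ordering of the decisions, not a data layout.

```python
def pick_core(cores, task, prefer_performance):
    """Illustrative version of the core-selection steps in the abstract.

    cores: list of dicts with 'load', 'frequency', and 'threads' fields.
    task:  dict with the desired 'frequency' and 'max_threads'
           (maximum number of competing hardware threads).
    """
    if prefer_performance:
        # User preference designates performance: take an idle core if any.
        idle = [c for c in cores if c["load"] == 0]
        if idle:
            return idle[0]
    # Otherwise restrict to the subset with the smallest load coefficient.
    min_load = min(c["load"] for c in cores)
    candidates = [c for c in cores if c["load"] == min_load]
    for c in candidates:
        if (c["frequency"] >= task["frequency"]
                and c["threads"] <= task["max_threads"]):
            return c              # core matching the task's desired parameters
    return candidates[0]          # fallback: least-loaded core
```

The energy-minimizing path is the non-idle branch: by packing tasks onto the least-loaded cores that still meet the task's parameters, other cores can stay in low-power states.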

Publication date: 20-12-2012

Heatsink Allowing In-Situ Maintenance in a Stackable Module

Number: US20120320524A1

A modular processing module allowing in-situ maintenance is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors, and a plurality of processing nodes. Each processing module side couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system and at least one heatsink that couples to at least a portion of the plurality of processing nodes on one of the processing module sides and is designed such that, when a set of heatsinks in the modular processing module are installed, an empty space is left in a center of the modular processing module.

1. A modular processing module allowing in-situ maintenance comprising:
    a set of processing module sides, wherein each processing module side comprises:
        a circuit board;
        a plurality of connectors coupled to the circuit board; and
    an exterior connection to a power source and a communication system; and
    at least one heatsink that couples to at least a portion of the plurality of processing nodes on one of the processing module sides, wherein the at least one heatsink is designed such that, when a set of heatsinks in the modular processing module are installed, an empty space is left in a center of the modular processing module, wherein the empty space in the center of the processing module is filled with a core that fills an air gap left by the set of heatsinks in the modular processing module, wherein the core is comprised of a plurality of core sections, wherein the plurality of core sections are coupled together using retention mechanisms, and wherein the plurality of core sections are coupled together using expansion mechanisms, and wherein upon release of ...
