Total found: 35208. Displayed: 100.

Publication date: 05-01-2012

Method and apparatus for processing distributed data

Number: US20120005253A1
Author: Geert Denys, Kim Marivoet
Assignee: EMC Corp

Some embodiments are directed to processing content units stored on a distributed computer system that comprises a plurality of independent nodes. The content units may be processed by determining which content units are stored on each node and identifying which content units warrant processing. Nodes may be selected to process the content units that warrant processing and instructions may be sent to these nodes to instruct them to process these content units.

Publication date: 02-02-2012

High performance locks

Number: US20120030681A1
Author: Kirk J. Krauss
Assignee: International Business Machines Corp

Systems and methods of enhancing computing performance may provide for detecting a request to acquire a lock associated with a shared resource in a multi-threaded execution environment. A determination may be made as to whether to grant the request based on a context-based lock condition. In one example, the context-based lock condition includes a lock redundancy component and an execution context component.
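The abstract above can be illustrated with a small sketch. This is not IBM's implementation; the class name, the `active_threads` hint, and the single-thread policy are all illustrative assumptions. It shows the two components the abstract names: a lock-redundancy check (re-acquisition by the current holder is redundant) and an execution-context check (with one runnable thread, locking is pure overhead).

```python
import threading

class ContextAwareLock:
    """Sketch of a context-based lock condition (hypothetical, not the
    patented design): grant a real acquisition only when it is neither
    redundant nor unnecessary in the current execution context."""

    def __init__(self):
        self._lock = threading.RLock()
        self._holder = None

    def acquire(self, active_threads=None):
        me = threading.get_ident()
        # Lock-redundancy component: the holder re-acquiring is redundant.
        if self._holder == me:
            return False
        # Execution-context component (assumed policy): a single runnable
        # thread cannot race, so skip the acquisition entirely.
        if active_threads is not None and active_threads <= 1:
            return False
        self._lock.acquire()
        self._holder = me
        return True

    def release(self):
        if self._holder == threading.get_ident():
            self._holder = None
            self._lock.release()
```

The return value lets the caller pair each `True` acquisition with a `release()`, while skipped acquisitions cost only two comparisons.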

Publication date: 09-02-2012

Method for optimizing the operation of a multiprocessor integrated circuit, and corresponding integrated circuit

Number: US20120036375A1

A method for optimizing operation which is applicable to a multiprocessor integrated circuit chip. Each processor runs with a variable parameter, for example its clock frequency, and the optimization includes determination, in real time, of a characteristic data value associated with the processor (temperature, consumption, latency), transfer of the characteristic data to the other processors, calculation by each processor of various values of an optimization function depending on the characteristic data value of the block, on the characteristic data values of the other blocks, and on the variable parameter, the function being calculated for the current value of this parameter and for other possible values, selection, from among the various parameter values, of that which yields the best value for the optimization function, and application of this variable parameter to the processor for the remainder of the execution of the task.
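The selection step described above (evaluate an optimization function for the current parameter value and for other candidate values, then keep the best) can be sketched as follows. The function name and the shape of `cost` are assumptions; the patent does not specify the optimization function.

```python
def choose_clock(current_freq, candidate_freqs, my_temp, other_temps, cost):
    """Sketch of per-processor parameter selection: score the current
    clock frequency and each candidate with a caller-supplied
    optimization function `cost(freq, my_temp, other_temps)` and return
    the frequency with the lowest cost. `cost` is hypothetical; in the
    abstract it also depends on the other processors' reported data."""
    return min(candidate_freqs + [current_freq],
               key=lambda f: cost(f, my_temp, other_temps))
```

Each processor would run this periodically with characteristic data (temperature, consumption, latency) gathered from its neighbours.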

Publication date: 16-02-2012

Techniques for providing services and establishing processing environments

Number: US20120042079A1
Assignee: Individual

Techniques are provided for the delivery of client services and for the establishment of client processing environments. A client receives services within a processing environment which is defined by a processing container. The processing container includes one or more processing groups, and each processing group has a particular context that supports one or more applications or services which are processing within that context. The processing groups communicate with one another via connector interfaces included within the processing container. Services and processing containers can be dynamically added or removed from the processing container.

Publication date: 05-04-2012

Resource reservation

Number: US20120084785A1
Author: James Stephens, JR.
Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC

Technologies are generally described for systems and methods for requesting a reservation between a first and a second processor. In some examples, the method includes receiving a reservation request at the second processor from the first processor. The reservation request may include an identification of a resource in communication with the second processor, a time range, first key information relating to the first processor, and a first signature of the first processor based on the first key information. In some examples, the method includes verifying, by the second processor, the reservation request based on the first key information and the first signature. In some examples, the method includes determining, by the second processor, whether to accept the reservation request.
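A minimal sketch of the request/verify exchange described above. The patent leaves the key and signature scheme unspecified; HMAC-SHA256 and the JSON request body are assumptions standing in for "first key information" and "first signature".

```python
import hashlib
import hmac
import json

def make_reservation(resource_id, time_range, key):
    """First processor: build and sign a reservation request naming a
    resource and a time range (signature scheme is an assumption)."""
    body = json.dumps({"resource": resource_id, "range": time_range},
                      sort_keys=True).encode()
    signature = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body, signature

def verify_reservation(body, signature, key):
    """Second processor: verify the request before deciding whether to
    accept it; constant-time comparison avoids timing leaks."""
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Acceptance logic (checking the time range against existing reservations) would follow only after verification succeeds.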

Publication date: 31-05-2012

Smartphone-Based Methods and Systems

Number: US20120134548A1
Assignee: Digimarc Corp

Methods and arrangements involving portable devices are disclosed. One arrangement enables a content creator to select software with which that content should be rendered—assuring continuity between artistic intention and delivery. Another arrangement utilizes the camera of a smartphone to identify nearby subjects, and take actions based thereon. Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice). Some of the detailed technologies concern improvements to the user interfaces associated with such devices. Others involve use of these devices in connection with shopping, text entry, sign language interpretation, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines, and blackboard systems. Yet other technologies concern use of linked data in portable devices—some of which exploit GPU capabilities. Still other technologies concern computational photography. A great variety of other features and arrangements are also detailed.

Publication date: 31-05-2012

Miss buffer for a multi-threaded processor

Number: US20120137077A1
Assignee: Oracle International Corp

A multi-threaded processor configured to allocate entries in a buffer for instruction cache misses is disclosed. Entries in the buffer may store thread state information for a corresponding instruction cache miss for one of a plurality of threads executable by the processor. The buffer may include dedicated entries and dynamically allocable entries, where the dedicated entries are reserved for a subset of the plurality of threads and the dynamically allocable entries are allocable to a group of two or more of the plurality of threads. In one embodiment, the dedicated entries are dedicated for use by a single thread and the dynamically allocable entries are allocable to any of the plurality of threads. The buffer may store two or more entries for a given thread at a given time. In some embodiments, the buffer may help ensure none of the plurality of threads experiences starvation with respect to instruction fetches.
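The dedicated-plus-dynamic allocation policy above can be sketched in a few lines. The class and the "dedicated entry first, then shared pool" ordering are illustrative assumptions; the point is that a thread with a reserved entry can never be starved of at least one miss slot.

```python
class MissBuffer:
    """Sketch of the described buffer: each thread listed in
    `dedicated_threads` owns one reserved entry; `shared` entries are
    dynamically allocable to any thread (an assumed policy)."""

    def __init__(self, dedicated_threads, shared):
        self.dedicated = {t: None for t in dedicated_threads}
        self.shared = [None] * shared

    def allocate(self, thread, miss):
        # Use the thread's reserved entry first, if it has one and it's free.
        if thread in self.dedicated and self.dedicated[thread] is None:
            self.dedicated[thread] = miss
            return True
        # Otherwise fall back to the dynamically allocable pool.
        for i, slot in enumerate(self.shared):
            if slot is None:
                self.shared[i] = (thread, miss)
                return True
        return False  # buffer full: the fetch must stall
```

A thread can hold several entries at once (one dedicated plus shared slots), matching the abstract's "two or more entries for a given thread at a given time".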

Publication date: 28-06-2012

Performing predictive modeling of virtual machine relationships

Number: US20120167094A1
Author: John M. Suit
Assignee: Red Hat Inc

An exemplary method may include collecting performance data of present operating conditions of network components operating in an enterprise network, extracting ontological component data of the network components from the collected performance data, comparing the collected performance data with predefined service tier threshold parameters, and determining if the ontological component data represents operational relationships between the network components, and establishing direct and indirect relationships between the network components based on the determined operational relationships and establishing a business application service group based on the ontological component data.

Publication date: 12-07-2012

Adaptively preventing out of memory conditions

Number: US20120179889A1
Author: Kirk J. Krauss
Assignee: International Business Machines Corp

A computer-implemented method of preventing an out-of-memory condition can include evaluating usage of virtual memory of a process executing within a computer, detecting a low memory condition in the virtual memory for the process, and selecting at least one functional program component of the process according to a component selection technique. The method also can include sending a notification to each selected functional program component and, responsive to receiving the notification, each selected functional program component releasing at least a portion of a range of virtual memory reserved on behalf of the selected functional program component.
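The notify-and-release loop described above can be sketched as a small governor. The class, the byte-count bookkeeping, and the "largest reservation first" selection technique are assumptions; the patent only requires some component selection technique.

```python
class MemoryGovernor:
    """Sketch of adaptive OOM prevention: components register a callback
    that releases part of their reserved virtual memory; on a low-memory
    condition, selected components are notified until usage drops back
    under the limit. Selection order here is an assumption."""

    def __init__(self, limit):
        self.limit = limit
        self.components = {}  # name -> (reserved_bytes, release_callback)

    def register(self, name, reserved, release):
        self.components[name] = (reserved, release)

    def check(self, used):
        freed = 0
        if used < self.limit:
            return freed  # no low-memory condition detected
        # Notify components, biggest reservation first, until under limit.
        for name, (reserved, release) in sorted(
                self.components.items(), key=lambda kv: -kv[1][0]):
            freed += release()
            if used - freed < self.limit:
                break
        return freed
```

In a real process the callbacks would unmap or shrink reserved ranges; here they just report how many bytes they gave back.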

Publication date: 02-08-2012

System and Method for Massively Multi-Core Computing Systems

Number: US20120198465A1
Assignee: FutureWei Technologies Inc

A system and method for massively multi-core computing are provided. A method for computer management includes determining if there is a need to allocate at least one first resource to a first plane. If there is a need to allocate at least one first resource, the at least one first resource is selected from a resource pool based on a set of rules and allocated to the first plane. If there is not a need to allocate at least one first resource, it is determined if there is a need to de-allocate at least one second resource from a second plane. If there is a need to de-allocate at least one second resource, the at least one second resource is de-allocated. The first plane includes a control plane and/or a data plane and the second plane includes the control plane and/or the data plane. The resources are unchanged if there is not a need to allocate at least one first resource and if there is not a need to de-allocate at least one second resource.

Publication date: 09-08-2012

Universal architecture for client management extensions on monitoring, control, and configuration

Number: US20120203819A1
Assignee: International Business Machines Corp

Provided are techniques for, under control of an agent: receiving a request from a first database client to access a service from a set of services, wherein the agent is associated with the service; receiving a request from a second database client to access the service, wherein the agent is shared by the first database client and the second database client; combining information from the first database client and the second database client; and sending the combined information to the service using a single physical connection in a client-side Client Management Extension (CMX) connection, wherein the first database client and the second database client share the single physical connection.

Publication date: 23-08-2012

Semantic web technologies in system automation

Number: US20120215733A1
Assignee: International Business Machines Corp

A method includes maintaining descriptions of a plurality of information technology resources in a computer-readable storage medium. The method includes maintaining a plurality of evaluation strategies, wherein the evaluation strategies associate a plurality of rules with forms of changes to the plurality of information technology resources. Responsive to detecting a command to change a first property of the set of properties of a first information technology resource of the plurality of information technology resources, the method determines that a first of the evaluation strategies associates at least one of the plurality of rules with a form of the change to the first property of the first information technology resource. Also, responsive to detecting the command, the method evaluates the at least one of the plurality of rules and performs the operation of the at least one rule.

Publication date: 30-08-2012

Systems and methods for generating marketplace brokerage exchange of excess subscribed resources using dynamic subscription periods

Number: US20120221454A1
Assignee: Red Hat Inc

Embodiments relate to systems and methods for generating a marketplace brokerage exchange of excess subscribed resources using dynamic subscription periods. A set of aggregate usage history data can record consumption of processor, software, or other resources subscribed to by a set of users, in one cloud or across multiple clouds. An entitlement engine can analyze the usage history data to identify a subscription margin for the subscribed resources, reflecting collective under-consumption of resources by the set of users on a collective basis, over different and/or dynamically updated subscription periods. In aspects, the set of estimated resource contributions of different users can be aggregated over one or more dynamic resource contribution intervals to generate a bundled brokerage resource tender, in which the processor, operating system, and/or other resources of multiple users are combined to be offered to a cloud marketplace for one or more contribution intervals. The bundled resource offer can be structured to contain at least a threshold amount of resources over a minimum or other defined contribution interval, after which resources are released back to the contributing users.

Publication date: 06-09-2012

Method for executing virtual application delivery controllers having different application versions over a computing device

Number: US20120227039A1
Assignee: Radware Ltd

A method for executing virtual application delivery controllers (vADCs) having different application versions over a computing device. The method comprises installing a virtualization infrastructure in the computing device; creating by the virtualization infrastructure a plurality of vADCs having different application versions, wherein each vADC is created from a software image maintained in a hardware infrastructure of the computing device; gathering version information associated with each of the plurality of vADCs; independently executing the plurality of vADCs over an operating system of the computing device; and controlling the execution of the plurality of the vADCs over an operating system of the computing device using the virtualization infrastructure using in part the version information. In one embodiment, each of the plurality of vADCs does not execute its own guest operating system.

Publication date: 27-09-2012

Mobile device workload management for cloud computing using sip and presence to control workload and method thereof

Number: US20120246322A1
Assignee: International Business Machines Corp

A method is implemented in a computer infrastructure having computer executable code tangibly embodied on a computer readable storage medium having programming instructions. The programming instructions are operable to manage workload for cloud computing by transferring workload to at least one mobile device using Session Initiation Protocol (SIP).

Publication date: 11-10-2012

Resource Allocation Method and Device for Foreground Switch of J2ME Application

Number: US20120258722A1
Author: Gang Liu
Assignee: ZTE Corp

A resource allocation method and a resource allocation device for foreground switch of a J2ME (Java 2 Micro Edition) application are provided in the present invention. A JAVA application program receives a first message from a JAVA virtual machine when switching to background from foreground, wherein the first message carries information indicating that the JAVA application program needs to release partial resources; and the JAVA application program returns a first response message to the JAVA virtual machine so as to realize release of the resources, wherein the first response message carries information of resources to be released and/or information of resources to be reserved for restoring to an executing state. The user experience can be improved and the normal use of local applications can be ensured according to the technical solution provided by the present invention.

Publication date: 11-10-2012

Dynamic resource allocation method, system, and program

Number: US20120259982A1
Assignee: International Business Machines Corp

A dynamic resource allocation method and system. The method includes the steps of preparing a plurality of instances in different preparation states; receiving a request on a dynamic scheduling condition from the client computer; and launching some of the plurality of instances in the different preparation states in such a combination that the dynamic scheduling condition is satisfied. The method includes computer apparatus for accomplishing the above method. A tangible storage medium includes program steps which, when executed by computer apparatus, causes the computer apparatus to perform the above method.

Publication date: 22-11-2012

Scalable work load management on multi-core computer systems

Number: US20120297395A1
Assignee: EXLUDUS Inc

A system and method for managing the processing of work units being processed on a computer system having shared resources e.g. multiple processing cores, memory, bandwidth, etc. The system comprises a job scheduler for scheduling access to the shared resources for the work units, and an event trap for capturing resource related allocation events. The event trap is adapted to dynamically adjust the amount of availability associated with each shared resource identified by the resource related allocation event. The allocation event may define a resource release or a resource request. The event trap may increase the amount of availability for allocation events defining a resource release, and decrement the amount of availability for allocation events defining a resource request. The job scheduler allocates resources to the work units using a real time amount of availability of the shared resources in order to maximize a consumption of the shared resources.
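The event trap's bookkeeping reduces to simple counters: release events increment availability, request events decrement it. A minimal sketch (names and event shapes assumed, not from the patent):

```python
class EventTrap:
    """Sketch of the described event trap: maintain a real-time available
    amount per shared resource, adjusted by captured allocation events."""

    def __init__(self, capacities):
        self.available = dict(capacities)  # resource -> available amount

    def on_event(self, resource, kind, amount):
        if kind == "release":
            self.available[resource] += amount   # resource freed
        elif kind == "request":
            self.available[resource] -= amount   # resource consumed
        return self.available[resource]
```

A job scheduler would consult `available` when admitting work units, so allocation decisions track real-time rather than estimated consumption.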

Publication date: 29-11-2012

Programmatically determining an execution mode for a request dispatch utilizing historic metrics

Number: US20120304177A1
Assignee: International Business Machines Corp

A request dispatcher can automatically switch between processing request dispatches in a synchronous mode and an asynchronous mode. Each dispatch can be associated with a unique identification value such as a process ID or Uniform Resource Identifier (URI), historic metrics, and a ruleset. With each execution of the request dispatch, historic metrics can be collected. Metrics can include, but is not limited to, execution duration and/or execution frequency, processor load, memory usage, network input/output, number of dependent dispatches, and the like. Utilizing historic metrics, rules can be constructed for determining which mode to execute the subsequent execution of the dispatch. As such, runtime optimization of Web applications can be further improved.
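A toy rule of the kind the abstract describes, using only one historic metric. The threshold, the averaging rule, and the "sync by default" choice are assumptions; a real ruleset would also weigh processor load, memory usage, and dependent dispatches.

```python
def pick_mode(history, duration_threshold=0.05):
    """Sketch of metric-driven dispatch-mode selection: short-running
    dispatches stay synchronous (async bookkeeping would dominate),
    long-running ones are switched to asynchronous execution.
    `history` holds past execution durations in seconds (assumed)."""
    if not history:
        return "sync"  # no historic metrics yet: cheap default
    avg = sum(history) / len(history)
    return "async" if avg > duration_threshold else "sync"
```

After each execution the dispatcher appends the measured duration to `history`, so the mode decision adapts as the workload changes.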

Publication date: 06-12-2012

Storage system comprising microprocessor load distribution function

Number: US20120311204A1
Assignee: HITACHI LTD

Among a plurality of microprocessors 12, 32, when the load on a microprocessor 12 which performs I/O task processing of received I/O requests is equal to or greater than a first load, the microprocessor assigns at least an I/O task portion of the I/O task processing to another microprocessor 12 or 32, and the other microprocessor 12 or 32 executes at least the I/O task portion. The I/O task portion is a task processing portion comprising cache control processing, comprising the securing in cache memory 20 of a cache area, which is one area in cache memory 20, for storage of data.

Publication date: 10-01-2013

Reducing cross queue synchronization on systems with low memory latency across distributed processing nodes

Number: US20130014124A1
Assignee: International Business Machines Corp

A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected specific ones of the processing nodes or the processing units to be able to retrieve work from the work element in the GCQ.
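The APU bit mask above is an ordinary per-unit bitmask. A sketch of how a global command queue entry could gate work requests against it (function names are illustrative, not from the patent):

```python
def make_apu_mask(selected_units):
    """Build an allocated-processor-unit (APU) mask: one bit per
    processing unit, set for each unit selected to run the work element."""
    mask = 0
    for unit in selected_units:
        mask |= 1 << unit
    return mask

def may_retrieve(mask, unit):
    """A work request from `unit` against a GCQ entry is honoured only
    if the unit's bit is set in the entry's APU mask."""
    return bool(mask >> unit & 1)
```

Units whose bit is clear simply skip the entry and poll the next one, so no locking beyond the queue's own is needed to enforce the selection.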

Publication date: 24-01-2013

Multicore processor system, computer product, and control method

Number: US20130024588A1
Assignee: Fujitsu Ltd

A multicore processor system includes a core configured to detect a change in a state of assignment of a multicore processor; obtain, upon detecting the change in the state of assignment, number of accesses of a common resource shared by the multicore processor by each of process that are assigned to cores of the multicore processor; calculate an access ratio based on the obtained number of accesses; and notify an arbitration circuit of the calculated access ratio, the arbitration circuit arbitrating accesses of the common resource by the multicore processor.
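The access-ratio step above is straightforward; a sketch of what would be reported to the arbitration circuit (the function and its dict shape are assumptions):

```python
def access_ratios(access_counts):
    """Sketch of the ratio calculation: from per-core counts of accesses
    to the common resource, derive each core's share of total accesses,
    to be handed to the arbitration circuit."""
    total = sum(access_counts.values())
    if total == 0:
        return {core: 0.0 for core in access_counts}
    return {core: n / total for core, n in access_counts.items()}
```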

Publication date: 07-02-2013

Device, system and method for processing machine to machine/man service

Number: US20130035127A1
Author: PENG Wang
Assignee: ZTE Corp

The disclosure discloses a device, a method and a system for processing Machine to Machine/Man (M2M) service information to simplify processing courses of the M2M service and to improve processing efficiency of the M2M service. The device comprises: a receiving unit (410) configured to receive the M2M downlink service information transmitted to a first virtual terminal, wherein the first virtual terminal has at least one function of at least two machine terminals; a processing unit (420) configured to determine, according to a second corresponding relationship between a machine terminal and a function in the first virtual terminal, the machine terminal corresponding to each function contained in the M2M downlink service information and to split the M2M downlink service information into the service information of individual machine terminals to send to the corresponding machine terminal.

Publication date: 07-02-2013

On-chip memory (ocm) physical bank parallelism

Number: US20130036274A1
Assignee: Cavium LLC

According to an example embodiment, a processor is provided including an integrated on-chip memory device component. The on-chip memory device component includes a plurality of memory banks, and multiple logical ports, each logical port coupled to one or more of the plurality of memory banks, enabling access to multiple memory banks, among the plurality of memory banks, per clock cycle, each memory bank accessible by a single logical port per clock cycle and each logical port accessing a single memory bank per clock cycle.

Publication date: 07-03-2013

Process Management Views

Number: US20130061167A1
Assignee: Microsoft Corp

Two different process management views can be displayed, and a user can request to switch between the two views. The user can select a process in either view and have the selected process terminated. One view is a simplified view that identifies processes and whether they are non-responsive. The other view is an expanded view that identifies processes and the amount of various system resources used by each of those processes. Various additional information can be displayed in the expanded view, such as identifiers of various windows, tabs, and/or services associated with each of the processes.

Publication date: 07-03-2013

Systems and methods for generating reference results using parallel-processing computer system

Number: US20130061230A1

A method for debugging an application includes obtaining first and second fusible operation requests; if there is a break point between the first and the second operation request, generating a first set of compute kernels including programs corresponding to the first operation request, but not to the second operation request; and generating a second set of compute kernels including programs corresponding to the second operation request, but not to the first operation request; if there is no break point between the first and the second operation request, generating a third set of compute kernels which include programs corresponding to a merge of the first and second operation requests; and arranging for execution of either the first and second, or the third set of compute kernels, further including debugging the first or second set of compute kernels when there is a break point set between the first and second operation requests.

Publication date: 14-03-2013

Method and apparatus for multiple access of plural memory banks

Number: US20130067173A1
Assignee: Cavium LLC

A processor with on-chip memory including a plurality of physical memory banks is disclosed. The processor includes a method, and corresponding apparatus, of enabling multi-access to the plurality of physical memory banks. The method comprises selecting a subset of multiple access requests to be executed in at least one clock cycle over at least one of a number of access ports connected to the plurality of physical memory banks, the selected subset of access requests addressed to different physical memory banks, among the plurality of memory banks, and scheduling the selected subset of access requests, each over a separate access port.
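The per-cycle selection can be sketched as a greedy filter: admit pending requests as long as they hit pairwise different banks and ports remain. The greedy order is an assumption; hardware would typically use age- or priority-based arbitration.

```python
def schedule_cycle(requests, num_ports):
    """Sketch of one scheduling cycle: from pending `(request_id, bank)`
    pairs, pick a subset addressed to pairwise different physical banks,
    at most one request per access port; the rest wait for a later cycle."""
    chosen, used_banks = [], set()
    for req_id, bank in requests:
        if bank not in used_banks and len(chosen) < num_ports:
            chosen.append(req_id)
            used_banks.add(bank)
    return chosen
```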

Publication date: 28-03-2013

Dynamic route requests for multiple clouds

Number: US20130080613A1
Author: Jason Thireault
Assignee: Limelight Networks Inc

Aspects of the present invention include a method of dynamically routing requests within multiple cloud computing networks. The method includes receiving a request for an application from a user device, forwarding the request to an edge server within a content delivery network (CDN), and analyzing the request to gather metrics about responsiveness provided by the multiple cloud computing networks running the application. The method further includes analyzing historical data for the multiple cloud computing networks regarding performance of the application, based on the performance metrics and the historical data, determining an optimal cloud computing network within the multiple cloud computing networks to route the request, routing the request to the optimal cloud computing network, and returning the response from the optimal cloud computing network to the user device.

Publication date: 28-03-2013

Dynamic route requests for multiple clouds

Number: US20130080623A1
Author: Jason Thireault
Assignee: Limelight Networks Inc

Aspects of the present invention include a method of dynamically routing requests within multiple cloud computing networks. The method includes receiving a request for an application from a user device, forwarding the request to an edge server within a content delivery network (CDN), and analyzing the request to gather metrics about responsiveness provided by the multiple cloud computing networks running the application. The method further includes analyzing historical data for the multiple cloud computing networks regarding performance of the application, based on the performance metrics and the historical data, determining an optimal cloud computing network within the multiple cloud computing networks to route the request, routing the request to the optimal cloud computing network, and returning the response from the optimal cloud computing network to the user device.

Publication date: 28-03-2013

Distributed job scheduling in a multi-nodal environment

Number: US20130080824A1
Assignee: International Business Machines Corp

Techniques are described for decentralizing a job scheduler in a distributed system environment. Embodiments of the invention may generally include receiving a job to be performed by a multi-nodal system which includes a cluster of nodes. Instead of a centralized job scheduler assigning the job to a node or nodes, each node has a job scheduler which scans a shared-file system to determine what job to execute on the node. In a job requiring multiple nodes, one of the nodes that joined the multi-nodal job becomes the primary node which then assigns and monitors the job's execution on the multiple nodes.
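A sketch of the decentralized claim step, with a plain list of dicts standing in for the shared-file system (the job-record fields and the first-joiner-becomes-primary rule are modelled on the abstract; the data shapes are assumptions):

```python
def claim_job(shared_jobs, node_id):
    """Sketch of a node-local scheduler scan: each node walks the shared
    job table and joins the first job it is eligible for that still
    needs nodes; the first node to join becomes the primary."""
    for job in shared_jobs:
        if node_id in job["eligible"] and len(job["members"]) < job["nodes"]:
            job["members"].append(node_id)
            # First joiner becomes the primary that assigns and monitors.
            job["primary"] = job.get("primary") or node_id
            return job["name"]
    return None  # nothing runnable on this node right now
```

On a real shared file system the append-and-claim step would need an atomic primitive (e.g. lock files or rename) to keep two nodes from double-claiming.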

Publication date: 28-03-2013

Acquiring, presenting and transmitting tasks and subtasks to interface devices

Number: US20130081027A1
Assignee: ELWHA LLC

Computationally implemented methods and systems include acquiring one or more subtasks that correspond to portions of one or more tasks configured to be carried out by two or more discrete interface devices, presenting one or more representations corresponding to the one or more subtasks, wherein the one or more representations correspond to the one or more subtasks, and transmitting subtask data corresponding to one or more subtasks in response to selection of one of the one or more corresponding representations. In addition to the foregoing, other method aspects are described in the claims, drawings, and text.

Publication date: 28-03-2013

Analysis of operator graph and dynamic reallocation of a resource to improve performance

Number: US20130081046A1

An operator graph analysis mechanism analyzes an operator graph corresponding to an application for problems as the application runs, and determines potential reallocations from a reallocation policy. The reallocation policy may specify potential reallocations depending on whether one or more operators in the operator graph are compute bound, memory bound, communication bound, or storage bound. The operator graph analysis mechanism includes a resource reallocation mechanism that can dynamically change allocation of resources in the system at runtime to address problems detected in the operator graph. The operator graph analysis mechanism thus allows an application represented by an operator graph to dynamically evolve over time to optimize its performance at runtime.

1. A computer-implemented method executed by at least one processor for improving performance of an application at runtime, the method comprising the steps of: (A) displaying an operator graph that represents the application using a plurality of operators and a plurality of data flows between the plurality of operators; (B) analyzing the operator graph at runtime when the application runs; (C) detecting at least one problem in the operator graph based on at least one data bottleneck in the operator graph; and (D) performing at least one resource reallocation to improve performance of the application represented in the operator graph.
2. The method of wherein step (D) comprises allocating at least one hardware resource.
3. The method of wherein step (D) comprises allocating at least one operating system resource.
4. The method of wherein step (D) comprises allocating at least one network resource.
5. The method of wherein step (D) comprises allocating at least one storage resource.
6. The method of wherein the allocating of the at least one storage resource comprises allocating a local cache to a solid state drive.
7. The method of further comprising a reallocation policy specified by a user that specifies ...

Publication date: 02-05-2013

Dynamically splitting jobs across multiple agnostic processors in wireless system

Number: US20130111493A1
Assignee: BROADCOM CORPORATION

Dynamically splitting a job in a wireless system between a processor and other remote devices may involve evaluating a job that a wireless mobile communication (WMC) device may be requested to perform. The job may be made of one or more tasks. The WMC device may evaluate by determining the availability of at least one local hardware resource of the wireless mobile communication device in processing the requested job. The WMC device may apportion one or more tasks making up the requested job between the wireless mobile communication device and a remote device. The apportioning may be based on the availability of the at least one local hardware resource.

1. A method comprising: evaluating, at a wireless mobile communication device, availability of at least one local hardware resource of the wireless mobile communication device in processing a requested job; and apportioning, at the wireless mobile communication device, one or more tasks making up the requested job between the wireless mobile communication device and a remote device, based on the availability of the at least one local hardware resource.
2. The method according to claim 1, further comprising subdividing, at the wireless communication device, the requested job into the one or more tasks.
3. The method according to claim 1, wherein the apportioning is further based on at least one of capability of the wireless mobile communication device, connectivity between the wireless mobile communication device and the remote device, and combinations thereof.
4. The method according to claim 1, wherein the at least one local hardware resource of the wireless mobile communication comprises power in the wireless mobile communication device.
5. The method according to claim 1, wherein the evaluating takes into account a power requirement in processing the requested job at said wireless mobile communication device.
6. The method according to claim 1, wherein the evaluating takes into account a ...

More details
09-05-2013 publication date

Apparatuses, systems, and methods for distributed workload serialization

Number: US20130117755A1
Author: Christopher Bontempi
Assignee: McKesson Financial Holdings ULC

Apparatuses, systems, methods, and computer program products are provided for processing workload requests in a distributed computing system. In general, a cooperative workload serialization system is provided that includes a Message Queue that is configured to receive and hold workload requests from a number of requestors and a Request Manager that is in communication with the Message Queue and is configured to direct the processing of the workload requests. The system may include a Culler in communication with the Request Manager, where the Culler is configured to monitor the validity of the workload requests. The Request Manager, in turn, may be configured to remove an indicated workload request from the Message Queue based on information from the Culler that the indicated workload request is not valid.
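The cooperative serialization described above can be sketched in Python. The class names follow the abstract (Message Queue, Request Manager, Culler), but the dict-shaped requests and the validity predicate are illustrative assumptions, not the patented implementation:

```python
from collections import deque

class MessageQueue:
    """Holds workload requests from many requestors in arrival order."""
    def __init__(self):
        self._requests = deque()

    def enqueue(self, request):
        self._requests.append(request)

    def pending(self):
        return list(self._requests)

    def remove(self, request):
        self._requests.remove(request)

class Culler:
    """Monitors validity of queued requests (here: a simple predicate)."""
    def __init__(self, is_valid):
        self._is_valid = is_valid

    def invalid_requests(self, queue):
        return [r for r in queue.pending() if not self._is_valid(r)]

class RequestManager:
    """Directs processing; removes requests the Culler reports as invalid."""
    def __init__(self, queue, culler):
        self._queue = queue
        self._culler = culler

    def process_next(self):
        for stale in self._culler.invalid_requests(self._queue):
            self._queue.remove(stale)        # cull before dispatching
        if self._queue.pending():
            request = self._queue.pending()[0]
            self._queue.remove(request)
            return request                   # dispatched for processing
        return None

# Usage: request 2 is invalid and is culled, never dispatched.
queue = MessageQueue()
for req in ({"id": 1, "valid": True}, {"id": 2, "valid": False}, {"id": 3, "valid": True}):
    queue.enqueue(req)
manager = RequestManager(queue, Culler(lambda r: r["valid"]))
dispatched = [manager.process_next(), manager.process_next()]
```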

More details
23-05-2013 publication date

APPLICATION INTERFACE ON MULTIPLE PROCESSORS

Number: US20130132934A1
Assignee: Apple Inc.

A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors. The parallel computing program is stored in a memory to allocate threads between a host processor and a GPU. The programming language includes an API to allow an application to make calls using the API to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g. GPUs or CPUs, separate from the host processor. 1. A programming language system for a parallel computing architecture, the programming language system implemented by a parallel computing program stored in memory in a system having parallel processors, the system comprising: a host processor; a graphics processing unit (GPU) coupled to the host processor; a memory coupled to at least one of the host processor and the GPU, the parallel computing program being stored in the memory to allocate threads between the host processor and the GPU, and wherein the programming language includes an API to allow an application to make calls using the API to allocate execution of the threads between the host processor and the GPU. 2. The programming language system of claim 1, wherein the API is called by the application asynchronously. 3. The programming language system of wherein the GPU comprises graphics texture mapping hardware to map texture maps onto surfaces to be displayed on a display device. 4. The programming language system of claim 1, further comprising: a central processing unit (CPU) coupled to the host processor, wherein the threads are allocated to be executed in the CPU if the GPU is busy executing graphics processing threads. 5. The ...

More details
30-05-2013 publication date

Multi-core resource utilization planning

Number: US20130139173A1
Author: Stephen Carter
Assignee: Apple Inc

Techniques for multi-core resource utilization planning are provided. An agent is deployed on each core of a multi-core machine. The agents cooperate to perform one or more tests. The tests result in measurements for performance and thermal characteristics of each core and each communication fabric between the cores. The measurements are organized in a resource utilization map and the map is used to make decisions regarding core assignments for resources.

More details
13-06-2013 publication date

Scalable scheduling for distributed data processing

Number: US20130151707A1
Assignee: Microsoft Corp

A multi-tier scheduling approach includes a first tier comprising virtual cluster allocators that receive scheduling requests from processes and aggregate those requests and provide them to a second tier, namely a single resource distributor for the entire set of computing devices. The resource distributor, based on the requests from virtual cluster allocators, and also from information received from the computing devices themselves, generates a flow graph to identify an optimal scheduling of the assignment of resources to specific ones of the virtual clusters. Each virtual cluster allocator then, based on the assignment of resources assigned to it by the resource distributor, solves its own flow graph to identify an optimal scheduling of processes on the resources assigned. The scheduling of processes is performed iteratively by initially assigning resources to those processes having a high priority, and then, in subsequent iterations, assigning opportunistic resources to those processes having a lower priority.

More details
20-06-2013 publication date

SCHEDULER, MULTI-CORE PROCESSOR SYSTEM, AND SCHEDULING METHOD

Number: US20130160023A1
Assignee: FUJITSU LIMITED

In an embodiment, a scheduler coordinates the timings at which cores execute processes so that any two sequential processes can be executed consecutively. The processes are executed in the order scheduled by the scheduler by concentrating on a specific core the processes that obstruct consecutive execution, such as external interrupts and internal interrupts. The scheduler does not always cause processes of another application to be executed during all standby time periods; rather, the scheduler determines whether the length of a standby time period is shorter than a predetermined value and does not cause any process of the other application to be executed when the length is shorter than that value. 1. A scheduler causing a specific core in a multi-core processor to execute a process comprising: first detecting, from a group of processes constituting a program to be executed, a group of unset scheduling processes whose group of subsequent processes is common; second detecting, for each of the unset scheduling processes of the detected group of unset scheduling processes and from a group of preceding processes for the unset scheduling processes, a preceding process belonging to a group to which the unset scheduling processes belong, the group being among groups formed by grouping the processes that share same or related data to be accessed; allocating the unset scheduling processes respectively to a core in the multi-core processor to which the detected preceding process is allocated; calculating, for each of the unset scheduling processes, elapsed time of an execution time period of the unset scheduling process from an execution ending time at which the group of preceding processes for the unset scheduling processes completely ends; and setting, for each of the allocated unset scheduling processes, an execution starting time of the unset scheduling process at an allocation destination core to be a difference of a most recent calculated elapsed time less an execution time period of the ...

More details
11-07-2013 publication date

Virtual data center system

Number: US20130179550A1
Author: Atsushi Kakizaki
Assignee: Otsuka Corp

Provided is a virtual data center system comprising: data center systems; a provisioning system capable of sending information to and receiving information from the data center systems; and an operational statistics database for storing resource information regarding the data center systems. By utilizing the resource information, the provisioning system compares resource costs based on the resources of a desired generation of data center system, in accordance with the user system's usage of the data center systems, against resource costs based on the resources of another generation of data center system currently used by the user, and thereby calculates the resource costs that would be reduced if migration to the desired generation of data center system occurs. The data center systems currently used by the user are re-arranged to the desired generation of data center system in response to a control command issued from the user system.

More details
18-07-2013 publication date

Performance interference model for managing consolidated workloads in QoS-aware clouds

Number: US20130185433A1
Author: Qian Zhu, Teresa TUNG
Assignee: Accenture Global Services Ltd

The workload profiler and performance interference (WPPI) system uses a test suite of recognized workloads, a resource estimation profiler and influence matrix to characterize un-profiled workloads, and affiliation rules to identify optimal and sub-optimal workload assignments to achieve consumer Quality of Service (QoS) guarantees and/or provider revenue goals. The WPPI system uses a performance interference model to forecast the performance impact to workloads of various consolidation schemes usable to achieve cloud provider and/or cloud consumer goals, and uses the test suite of recognized workloads, the resource estimation profiler and influence matrix, affiliation rules, and performance interference model to perform off-line modeling to determine the initial assignment selections and consolidation strategy to use to deploy the workloads. The WPPI system uses an online consolidation algorithm, the offline models, and online monitoring to determine virtual machine to physical host assignments responsive to real-time conditions to meet cloud provider and/or cloud consumer goals.

More details
01-08-2013 publication date

Low-Power Multi-Standard Cryptography Processing Units with Common Flip-Flop/Register Banks

Number: US20130198530A1
Assignee: Intel Mobile Communications GmbH

A method, system, and apparatus for managing a plurality of cipher processor units. A cipher module may receive a cipher instruction indicating a cipher algorithm to be used. The cipher module may identify a cipher processing unit of the plurality of cipher processing units associated with the cipher algorithm. The cipher module may execute the cipher instruction using the cipher processing unit and the common register array. The cipher module may store a state of a common register array to be used by the cipher processing unit of the plurality of cipher processing units.

More details
05-09-2013 publication date

Cache performance prediction and scheduling on commodity processors with shared caches

Number: US20130232500A1
Assignee: VMware LLC

A method is described for scheduling in an intelligent manner a plurality of threads on a processor having a plurality of cores and a shared last level cache (LLC). In the method, a first and second scenario having a corresponding first and second combination of threads are identified. The cache occupancies of each of the threads for each of the scenarios are predicted. The predicted cache occupancies being a representation of an amount of the LLC that each of the threads would occupy when running with the other threads on the processor according to the particular scenario. One of the scenarios is identified that results in the least objectionable impacts on all threads, the least objectionable impacts taking into account the impact resulting from the predicted cache occupancies. Finally, a scheduling decision is made according to the one of the scenarios that results in the least objectionable impacts.
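The scenario-selection idea can be illustrated with a toy model. The proportional-sharing predictor below is an assumption made for the sketch (the abstract does not specify the occupancy model), and "least objectionable" is interpreted here as minimizing the worst per-thread shortfall:

```python
def predict_occupancy(threads, llc_size):
    """Naive proportional model: each thread's share of the shared LLC is
    proportional to its cache demand, scaled down when demand exceeds LLC size."""
    total = sum(t["demand"] for t in threads)
    scale = min(1.0, llc_size / total) if total else 0.0
    return {t["name"]: t["demand"] * scale for t in threads}

def least_objectionable(scenarios, llc_size):
    """Pick the scenario minimizing the worst per-thread shortfall
    (cache demand minus predicted occupancy)."""
    def worst_impact(threads):
        occupancy = predict_occupancy(threads, llc_size)
        return max(t["demand"] - occupancy[t["name"]] for t in threads)
    return min(scenarios, key=lambda s: worst_impact(s["threads"]))

# Scenario A oversubscribes the 8-unit LLC (8 + 8); scenario B fits (2 + 6).
scenario_a = {"name": "A", "threads": [{"name": "t1", "demand": 8},
                                       {"name": "t2", "demand": 8}]}
scenario_b = {"name": "B", "threads": [{"name": "t1", "demand": 2},
                                       {"name": "t3", "demand": 6}]}
best = least_objectionable([scenario_a, scenario_b], llc_size=8)
```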

More details
19-09-2013 publication date

SYSTEM AND METHOD OF CO-ALLOCATING A RESERVATION SPANNING DIFFERENT COMPUTE RESOURCES TYPES

Number: US20130247064A1
Author: Jackson David Brian
Assignee: Adaptive Computing Enterprises, Inc.

Co-allocating resources within a compute environment includes: receiving a request for a reservation for a first type of resource; analyzing constraints and guarantees associated with the first type of resource; identifying a first group of resources that meet the request for the first type of resource and storing them in a first list; receiving a request for a reservation for a second type of resource; analyzing constraints and guarantees associated with the second type of resource; identifying a second group of resources that meet the request for the second type of resource and storing them in a second list; calculating a co-allocation parameter between the first group of resources and the second group of resources; and reserving resources according to the calculated co-allocation parameter of the first group of resources and the second group of resources. The request may also request exclusivity of the reservation. 1. A method of co-allocating resources within a compute environment, the method comprising: receiving, via a processor, a first request for a first reservation for a first type of resource in a compute environment comprising a plurality of nodes; identifying a first group of resources that meets the first request for the first reservation; receiving a second request for a second reservation for a second type of resource in the compute environment; identifying a second group of resources that meets the second request for the second reservation, wherein the first type of resource and second type of resource span one or more servers in the compute environment, each having a homogeneous processor architecture; and generating a set of resources exclusive to at least one of the first request and the second request. 2. The method of claim 1, wherein the first request specifies exclusivity of the first group of resources for the first request. 3. The method of claim 2, wherein, if exclusivity is requested, the method comprises guaranteeing that the first request will ...

More details
26-09-2013 publication date

METHOD TO REDUCE QUEUE SYNCHRONIZATION OF MULTIPLE WORK ITEMS IN A SYSTEM WITH HIGH MEMORY LATENCY BETWEEN PROCESSING NODES

Number: US20130254776A1
Assignee: IBM CORPORATION

A method efficiently dispatches/completes a work element within a multi-node, data processing system that has a global command queue (GCQ) and at least one high latency node. The method comprises: at the high latency processor node, work scheduling logic establishing a local command/work queue (LCQ) in which multiple work items for execution by local processing units can be staged prior to execution; a first local processing unit retrieving via a work request a larger chunk size of work than can be completed in a normal work completion/execution cycle by the local processing unit; storing the larger chunk size of work retrieved in a local command/work queue (LCQ); enabling the first local processing unit to locally schedule and complete portions of the work stored within the LCQ; and transmitting a next work request to the GCQ only when all the work within the LCQ has been dispatched by the local processing units. 1. In a multi-node data processing system having at least one high latency processor node that exhibits high access latency to a global command queue (GCQ), a method for efficient dispatch of a work element within the GCQ, said method comprising: at the at least one high latency processor node, work scheduling logic establishing a local command/work queue (LCQ) in which multiple work items for execution by local processing units are staged prior to execution; a first local processing unit generating a work request for retrieval of work from the GCQ; retrieving via the work request a larger chunk size of work than can be completed in a normal work completion/execution cycle by the local processing unit, wherein the larger chunk size is larger than a standard chunk size that is retrieved when the processing node is a low latency processing node; storing the larger chunk size of work retrieved in a local command/work queue (LCQ); enabling the first local processing unit to locally schedule and complete portions of the work stored within the LCQ; and generating a ...
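The GCQ/LCQ interplay described above can be sketched in a few lines. The class names and the fixed chunk-size policy are illustrative assumptions; the point is that the node returns to the remote GCQ only after its local queue is drained:

```python
from collections import deque

class GlobalCommandQueue:
    """Remote queue; every grab() models one high-latency round trip."""
    def __init__(self, work_items):
        self._items = deque(work_items)

    def grab(self, chunk_size):
        chunk = []
        while self._items and len(chunk) < chunk_size:
            chunk.append(self._items.popleft())
        return chunk

class HighLatencyNode:
    """Stages a large chunk in a local queue (LCQ) and only returns to the
    GCQ once every locally staged item has been dispatched."""
    def __init__(self, gcq, chunk_size):
        self._gcq = gcq
        self._chunk_size = chunk_size
        self._lcq = deque()
        self.remote_requests = 0       # trips to the GCQ (to be minimized)
        self.completed = []

    def run_one(self):
        if not self._lcq:              # LCQ drained: fetch one big chunk
            self.remote_requests += 1
            self._lcq.extend(self._gcq.grab(self._chunk_size))
        if self._lcq:
            self.completed.append(self._lcq.popleft())
            return True
        return False                   # GCQ exhausted

# Ten work items, chunk size five: two productive fetches plus one final
# empty check, instead of ten per-item round trips.
gcq = GlobalCommandQueue(range(10))
node = HighLatencyNode(gcq, chunk_size=5)
while node.run_one():
    pass
```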

More details
31-10-2013 publication date

Method of managing computing tasks in a wind farm

Number: US20130289786A1
Author: Hun Yi Lock
Assignee: Vestas Wind Systems AS

A method of managing computing tasks in a wind farm is provided. The method comprises determining the status of a plurality of wind turbines in the wind farm, determining available computing resources based on the status of the plurality of wind turbines, and allocating a portion of the computing tasks to the available computing resources for processing.
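A minimal sketch of the claimed flow, in Python. The "idle" status value and the round-robin placement are illustrative assumptions; the abstract only requires that availability be derived from turbine status and that a portion of the tasks be allocated to the available resources:

```python
def available_resources(turbines):
    """Assume a turbine's controller is free for general computation
    while the turbine is idle (e.g. winds too low to generate)."""
    return [t for t in turbines if t["status"] == "idle"]

def allocate(tasks, turbines):
    """Round-robin the task list onto idle turbine controllers;
    tasks that cannot be placed stay queued."""
    free = available_resources(turbines)
    assignments = {t["id"]: [] for t in free}
    if not free:
        return assignments, list(tasks)      # nothing available: queue all
    for i, task in enumerate(tasks):
        assignments[free[i % len(free)]["id"]].append(task)
    return assignments, []

# Turbine t2 is generating power, so only t1 and t3 receive tasks.
turbines = [{"id": "t1", "status": "idle"},
            {"id": "t2", "status": "generating"},
            {"id": "t3", "status": "idle"}]
assignments, queued = allocate(["a", "b", "c"], turbines)
```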

More details
31-10-2013 publication date

DATA TRANSFER CONTROL METHOD OF PARALLEL DISTRIBUTED PROCESSING SYSTEM, PARALLEL DISTRIBUTED PROCESSING SYSTEM, AND RECORDING MEDIUM

Number: US20130290979A1
Assignee: Hitachi, Ltd.

A parallel distributed processing system includes multiple parallel distributed processing execution servers which store data blocks pre-divided in a storage device and execute tasks processing the data blocks in parallel, and a management computer controlling the multiple parallel distributed processing execution servers. The management computer collects resource use amounts of the multiple parallel distributed processing execution servers, acquires states of data blocks and tasks held by the multiple parallel distributed processing execution servers, selects a second parallel distributed processing execution server that transfers a data block to the first parallel distributed processing execution server, based on processing progress situations of the data blocks held by the multiple parallel distributed processing execution servers and the resource use amounts of the multiple parallel distributed processing execution servers, and transmits a command to transfer the data block to the first parallel distributed processing execution server, to the selected second parallel distributed processing execution server. 1. A data transmission control method of a parallel distributed processing system in which a management computer selects a second parallel distributed processing execution server which is a transmission source of a data block allocated to a task of a first parallel distributed processing execution server, in the parallel distributed processing system including a plurality of parallel distributed processing execution servers including a processor and a storage device, in which the storage device stores data blocks divided in advance as processing target data and the processor executes tasks processing the data blocks in parallel, and a management computer controlling the plurality of parallel distributed processing execution servers, the method comprising: a first step of receiving, by the management computer, a completion notification indicating completion ...

More details
21-11-2013 publication date

TASK ALLOCATION OPTIMIZATION SYSTEM, TASK ALLOCATION OPTIMIZATION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING TASK ALLOCATION OPTIMIZATION PROGRAM

Number: US20130312001A1
Author: SUZUKI Noriaki
Assignee:

A state evaluation function value generation unit generates a state evaluation function value for each operating state based on a state/task-set correspondence table indicating a list of an operating state of a system including a plurality of processor-cores and correspondence of a task set to be operated in each operating state, and a task set parameter indicating a characteristic of each task constituting the task set. An integrated evaluation function value generation unit generates an integrated evaluation function value in which the state evaluation function value of each operating state is integrated. An optimal allocation search unit optimizes allocation of a task to be allocated to each of the plurality of processor-cores based on the integrated evaluation function value. 1. A task allocation optimization system comprising: state evaluation function value generation means of generating a state evaluation function value based on a state/task-set correspondence table indicating a list of an operating state that changes according to an operating state of a system comprising a plurality of processor-cores and correspondence of a task set to be operated in each operating state, and a task set parameter that indicates a characteristic of each task constituting the task set and serves as reference information when a task is allocated, a task for each operating state being an evaluation value indicating a degree of quality of a position; integrated evaluation function value generation means of generating an integrated evaluation function value in which the state evaluation function value of each operating state is integrated, the integrated evaluation function value being an evaluation value indicating a degree of quality for a whole multi-core system; and optimal allocation search means of optimizing allocation of a task to be allocated to each of the plurality of processor-cores by searching for an allocation that maximizes the degree of quality of the integrated ...

More details
21-11-2013 publication date

SCHEDULING METHOD AND SCHEDULING SYSTEM

Number: US20130312002A1
Assignee: FUJITSU LIMITED

A scheduling method executed by a scheduler that manages multiple processors includes: detecting, based on an application information table when a first application is started up, a processor that executes a second application that is not executed concurrently with the first application; and assigning the first application to the processor. 1. A scheduling method executed by a scheduler that manages a plurality of processors, the scheduling method comprising: detecting, based on an application information table when a first application is started up, a processor that executes a second application that is not executed concurrently with the first application; and assigning the first application to the processor. 2. The scheduling method according to claim 1, wherein the application information table includes, for each application, context information of the application and information concerning another application that has a potential of being executed concurrently with the application. 3. The scheduling method according to claim 1, wherein the assigning includes, when the processor that executes a second application that is not executed concurrently with the first application is detected in plural at the detecting, assigning the first application to a processor having a load that is lowest among the detected processors. 4. A scheduling method executed by a scheduler that manages a plurality of processors, the scheduling method comprising: calculating, based on an application information table and when a first application is started up, a load consequent to switching a second application executed by at least one processor among the processors to the first application; and selecting, based on the load, a first processor to execute the first application. 5. The scheduling method according to claim 4, wherein the second application has a potential of being executed concurrently with the first application. 6. The scheduling method according to claim 4, wherein the ...
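The detection-and-assignment step can be sketched as follows. The shape of the application information table (a mapping from an application to the set of applications that may run concurrently with it) is an assumed simplification of what the abstract describes:

```python
def pick_processor(first_app, processors, app_info):
    """app_info maps an application to the set of applications that may run
    concurrently with it. Prefer a processor whose current application can
    never overlap with first_app; break ties by lowest load."""
    candidates = [p for p in processors
                  if p["running"] is not None
                  and first_app not in app_info.get(p["running"], set())]
    if candidates:
        return min(candidates, key=lambda p: p["load"])["id"]
    # Fallback: no non-concurrent host found, take the least-loaded processor.
    return min(processors, key=lambda p: p["load"])["id"]

# "browser" and "music" may run concurrently with "video", so only the
# processor running "camera" qualifies, despite not having the lowest load.
processors = [{"id": 0, "running": "browser", "load": 0.7},
              {"id": 1, "running": "camera",  "load": 0.4},
              {"id": 2, "running": "music",   "load": 0.2}]
app_info = {"browser": {"video"}, "camera": set(), "music": {"video"}}
chosen = pick_processor("video", processors, app_info)
```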

More details
02-01-2014 publication date

Allocating instantiated resources to an IT-service

Number: US20140006626A1
Assignee: International Business Machines Corp

Allocating an instance of a resource to an IT-service includes: analyzing a service model specifying the structure of an IT-service and comprising nodes and resource management rules specifying the management of said node's resource. For each node, the method includes: determining a resource type indicated by said node; determining one or more resource management rules assigned to said node; evaluating the resource management rules assigned to said node on a resource instance catalog and the determined resource type for computing selection criteria; applying the selection criteria on a service provider catalog for selecting one of the one or more resource managers, the service provider catalog being indicative of one or more of the resource managers respectively being operable to provide a resource instance of a given resource type to the IT-service; creating an instance of the resource provided by the selected resource manager; and allocating said instance to the IT-service.

More details
30-01-2014 publication date

PROCESSOR SCHEDULING METHOD AND SYSTEM USING DOMAINS

Number: US20140033221A1
Assignee: NETAPP, INC.

Aspects of the present invention concern a method and system for scheduling a request for execution on multiple processors. This scheduler divides processes from the request into a set of domains. Processes in the same domain can execute the instructions associated with the request in a serial manner on a processor without conflicts. A relative processor utilization for each domain in the set of the domains is based upon a workload corresponding to an execution of the request. If there are processors available then the present invention provisions a subset of available processors to fulfill an aggregate processor utilization. The aggregate processor utilization is created from a combination of the relative processor utilization associated with each domain in the set of domains. If processors are not needed then some processors may be shut down. Shutting down processors in accordance with the schedule saves energy without sacrificing performance. 1. A method of scheduling a request for execution on one or more processors, comprising: dividing processes from the request into a set of domains where processes in the same domain are executable in a serial manner on a processor without conflict; identifying a relative processor utilization for each domain from the set of the domains based upon a workload corresponding to an execution of the request; provisioning a subset of available processors to fulfill an aggregate processor utilization created from a combination of the relative processor utilization associated with each domain from the set of domains; and shutting down any remaining processors from the one or more processors not provisioned in the subset of available processors in order to reduce power consumption while the processes in the set of domains are scheduled for execution. 2. The method of further comprising: bringing online any processors from the one or more processors that have been provisioned in the subset of available processors but ...
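The provisioning decision can be sketched roughly as below, assuming relative utilization is expressed in fractions of one fully busy core (an assumption; the abstract does not fix the units):

```python
import math

def plan_processors(domain_utilizations, available_processors):
    """Sum per-domain relative utilizations (1.0 == one fully busy core),
    provision just enough whole processors to cover the aggregate, and
    report the rest as candidates for shutdown."""
    aggregate = sum(domain_utilizations.values())
    needed = min(len(available_processors), max(1, math.ceil(aggregate)))
    provisioned = available_processors[:needed]
    to_shut_down = available_processors[needed:]
    return provisioned, to_shut_down

# Three domains totalling 1.8 cores of work: two processors suffice,
# the other two can be powered down.
provisioned, to_shut_down = plan_processors(
    {"parse": 0.6, "index": 0.9, "flush": 0.3},
    ["p0", "p1", "p2", "p3"])
```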

More details
13-02-2014 publication date

Methods and Systems for Scalable Computing on Commodity Hardware for Irregular Applications

Number: US20140047452A1
Assignee:

A computing system for scalable computing on commodity hardware is provided. The computing system includes a first computing device communicatively connected to a second computing device. The first computing device includes a processor, a physical computer-readable medium, and program instructions stored on the physical computer-readable medium and executable by the processor to perform functions. The functions include determining a first task associated with the second computing device and a second task associated with the second computing device are to be executed, assigning execution of the first task and the second task to the processor of the first computing device, generating an aggregated message that includes (i) a first message including an indication corresponding to the execution of the first task and (ii) a second message including an indication corresponding to the execution of the second task, and sending the aggregated message to the second computing device. 1. A computing system comprising a first computing device that is communicatively connected to a second computing device, wherein the first computing device comprises: at least one processor; a physical computer-readable medium; and program instructions stored on the physical computer-readable medium and executable by the at least one processor to perform functions comprising: determining that a first task associated with the second computing device and a second task associated with the second computing device are to be executed; assigning execution of the first task and the second task to the at least one processor of the first computing device; generating an aggregated message that comprises (i) a first message that comprises an indication corresponding to the execution of the first task and (ii) a second message that comprises an indication corresponding to the execution of the second task; and sending the aggregated message to the second computing device. 2. The computing system of ...
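The message aggregation can be sketched like this; the JSON envelope and field names are illustrative choices, not part of the claims, which only require that per-task messages be bundled into one aggregated message:

```python
import json

def aggregate_messages(task_results, destination):
    """Bundle one message per executed task into a single aggregated
    message, so only one send to the destination node is needed."""
    return json.dumps({
        "to": destination,
        "messages": [{"task": t, "status": "executed"} for t in task_results],
    })

def unpack(aggregated):
    """Recover the individual per-task messages on the receiving side."""
    return json.loads(aggregated)["messages"]

# One wire-level send carries both task notifications.
agg = aggregate_messages(["taskA", "taskB"], "node-2")
```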

More details
13-02-2014 publication date

METHOD OF PROCESSING DATA IN AN SAP SYSTEM

Number: US20140047453A1
Author: Oliver Craig
Assignee:

A method of processing data in an SAP system comprising dividing data to be processed following a request from a user endpoint into a number of intervals, providing the intervals consecutively to one or more data processors selected to service the request, and storing the output of a data processor when it has processed the interval. 1. A method of data processing in a data-processing SAP system, the data-processing SAP system comprising a memory and a plurality of data processors, the method comprising the steps of: partitioning the data, or a part of the data, in the memory into a plurality of data intervals; allocating a data processor one of the plurality of intervals to process; the data processor processing its allocated interval to produce a result; the data processor outputting the result; and storing the result in a persistent memory. 2. The method as claimed in wherein the data-processing system further comprises a central processor and the method further comprises the step of selecting a set of the plurality of data processors to service the request, and the step of allocating a data processor comprises each of the data processors in the set of data processors being allocated a data interval. 3. The method as claimed in further comprising the step of allocating the data processor a further one of the plurality of data intervals when the data processor has output the processed data interval. 4. The method as claimed in further comprising the step of collating the results. 5. The method of further comprising the step of storing the collated results in a memory. 6. The method as claimed in further comprising the step of outputting the collated data. 7. The method as claimed in further comprising the step of generating a report in response to a request from a user. 8. The method as claimed in wherein the step of collating the results occurs prior to all the data intervals being processed. 9. The method as claimed in wherein the method further comprises, receiving a ...
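The interval partitioning and persistent storage of per-interval results can be sketched as follows; summing stands in for arbitrary interval processing, and the dict-backed store is an assumed stand-in for real persistent memory:

```python
def partition(data, num_intervals):
    """Split data into near-equal contiguous intervals."""
    size, rem = divmod(len(data), num_intervals)
    intervals, start = [], 0
    for i in range(num_intervals):
        end = start + size + (1 if i < rem else 0)
        intervals.append(data[start:end])
        start = end
    return intervals

def process_request(data, num_intervals, persistent_store):
    """Feed intervals consecutively to a (simulated) processor and
    persist each result as soon as the interval finishes."""
    for idx, interval in enumerate(partition(data, num_intervals)):
        result = sum(interval)          # stand-in for real interval processing
        persistent_store[idx] = result  # stored before the next interval starts
    return sum(persistent_store.values())   # collate the stored results

# Ten records split into three intervals; each partial sum is persisted.
store = {}
total = process_request(list(range(10)), 3, store)
```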

More details
27-02-2014 publication date

SUPPORT SERVER FOR REDIRECTING TASK RESULTS TO A WAKE-UP SERVER

Number: US20140059152A1
Author: Chu Thomas P.
Assignee: ALCATEL-LUCENT

Various exemplary embodiments relate to a method and related network node including one or more of the following: receiving, at a wake-up server, an indication that an agent device will be suspended, including at least one criterion for reestablishing the agent device; determining that the at least one criterion has been met; and, in response, reestablishing the agent device. Various exemplary embodiments relate to a method and related network node including one or more of the following: transmitting, by an agent device to a support server, a request message; transmitting, to a wake-up server, an indication that resources associated with the agent device will be released, including at least one criterion for reestablishing the agent device; transmitting, to the support server, an instruction to transmit a result message associated with the request message to the wake-up server; and releasing the system resources associated with the agent device. 1. A method performed by a support server for redirecting results, the method comprising: receiving, by the support server from an agent device, a request message; receiving, by the support server from the agent device, an instruction to transmit a result message associated with the request message to a wake-up server; processing the request message to generate result data; generating a result message based on the result data; and transmitting the result data to the wake-up server. 2. The method of claim 1, wherein the result message includes at least a portion of the result data. 3. The method of claim 1, wherein the result message does not include all of the result data, the method further comprising transmitting at least a portion of the result data to the agent device. 4. The method of claim 1, wherein the request message includes a request for at least one of the performance of a database query and the performance of a processing task. 5. The method of claim 1, wherein at least two of the support server, ...
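The redirect flow in this entry — the agent registers, before suspending, an instruction telling the support server to deliver results to a wake-up server instead of back to the agent — can be sketched as below. All class, method, and field names are invented for illustration; this is not the patented implementation.

```python
class SupportServer:
    """Processes an agent's request and, if the agent registered a redirect
    before suspending, delivers the result to its wake-up server instead.
    A minimal in-memory sketch; message formats are invented."""

    def __init__(self):
        self.redirects = {}   # agent_id -> wake-up server inbox (a list here)
        self.direct = {}      # agent_id -> results delivered to the agent itself

    def redirect_results(self, agent_id, wakeup_inbox):
        # The agent's "instruction to transmit the result message to the wake-up server".
        self.redirects[agent_id] = wakeup_inbox

    def handle_request(self, agent_id, request):
        # Stand-in for "processing the request message to generate result data".
        result = {"request": request, "result": request.upper()}
        if agent_id in self.redirects:
            self.redirects[agent_id].append(result)   # routed to the wake-up server
        else:
            self.direct.setdefault(agent_id, []).append(result)
        return result
```

With a redirect registered, results for the suspended agent land in the wake-up server's inbox while other agents are served directly.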

06-03-2014 publication date

Task execution & management in a clustered computing environment

Number: US20140068620A1
Assignee: International Business Machines Corp

Machines, systems and methods for task management in a computer implemented system. The method comprises registering a task with brokers residing on one or more nodes to manage the execution of a task to completion, wherein a first broker is accompanied by a first set of worker threads co-located on the node on which the first broker is executed, wherein the first broker assigns responsibility of execution for the task to the one or more worker threads in the first set of co-located worker threads, wherein in response to a failure associated with a first worker thread in the first set, the first broker reassigns the responsibility of execution for the task to a second worker thread in the first set, wherein in response to a failure associated with the first broker, a second broker assigns responsibility of execution for the task to one or more co-located worker threads.
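The failover behavior described in the abstract — a broker driving a registered task to completion and re-assigning responsibility when the worker thread executing it fails — might look roughly like this sketch. A single broker with an in-process queue stands in for the clustered setup; names and the retry policy are assumptions.

```python
import queue
import threading

class Broker:
    """Drives registered tasks to completion; a task whose worker fails is
    re-queued so another co-located worker thread can assume responsibility."""

    def __init__(self, num_workers=2):
        self.tasks = queue.Queue()
        self.results = {}
        self._lock = threading.Lock()
        for _ in range(num_workers):
            threading.Thread(target=self._work, daemon=True).start()

    def register(self, task_id, fn):
        self.tasks.put((task_id, fn))

    def _work(self):
        while True:
            task_id, fn = self.tasks.get()
            try:
                out = fn()
            except Exception:
                # Worker "failed" on this task: re-queue it so a peer thread
                # in the same set takes over execution.
                self.tasks.put((task_id, fn))
            else:
                with self._lock:
                    self.results[task_id] = out
            finally:
                self.tasks.task_done()

    def wait(self):
        # Blocks until every registered task has completed.
        self.tasks.join()
```

A task that raises is simply retried by whichever worker thread dequeues it next, which is the essence of the reassignment described above.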

06-03-2014 publication date

DATA PROCESSING SYSTEMS

Number: US20140068625A1
Author: Winser Paul
Assignee:

A data processing system is described in which a hardware unit is added to a cluster of processors for explicitly handling assignment of available tasks and sub-tasks to available processors. 1-19. (canceled) 20. A data processing system for processing data items in a wireless communications system, the data processing system comprising: a plurality of processing resources operable to process an incoming data stream in accordance with received task information, such task information relating to tasks concerned with wireless signal processing; and a hardware task assignment unit including: a first list unit comprising a plurality of task registers operable to store first list items relating to respective allocatable tasks, each first list item including information relating to at least one characteristic of a processing resource suitable for carrying out the task concerned, and containing information relating to task timing information for the task concerned; and a second list unit operable to store second list items relating to available processing resources; the hardware task assignment unit being operable to cause an allocatable task to be transferred to an available processing resource in dependence upon such first and second list items, wherein at least one of the processing resources is operable to store such first list items in the first list unit in dependence upon a processing result generated by the processing resource concerned, wherein each of the processing resources is operable to store such second items in the second list unit in order to indicate the availability of the processing resource concerned, and wherein the hardware task assignment unit is operable to cause a processing resource to move from a dormant state to a processing state by allocation of a task to that processing resource. 21. The data processing system as claimed in claim 20, wherein the tasks are selected from a group including extracting signal quality characteristics from such a data ...

13-03-2014 publication date

System and method for implementing application functionality within a network infrastructure

Number: US20140074981A1
Assignee: Circadence Corp

A system and method for implementing functionality within a network on behalf of first and second devices communicating with each other through the network. A front-end device is provided within the network that communicates data traffic with the first device. A back-end device is also implemented within the network and communicates data traffic with the second device. A communication channel couples the front-end device and the back-end device. Data traffic may be encoded into a different type or protocol for transport through the communication channel by the front-end device and back-end device. The front-end device and back-end device exchange quality of service information and may alter characteristics of the data traffic through the communication channel according to the quality of service information.

13-03-2014 publication date

METHOD, DEVICE, AND SYSTEM FOR IMPLEMENTING COMMUNICATION AFTER VIRTUAL MACHINE MIGRATION

Number: US20140074997A1
Author: Zhu Guojun
Assignee: Huawei Technologies Co., Ltd.

The present disclosure provides a method, a device, and a system for implementing communication after virtual machine migration. The method includes: constructing, after migration of a virtual machine, a dynamic host configuration protocol request message carrying address information of the virtual machine after the migration; and sending the dynamic host configuration protocol request message to a switch, so that the switch establishes a binding relationship between the address information of the virtual machine after the migration and a port accessed by the virtual machine. 1. A method for implementing communication after virtual machine migration , comprising:constructing, after migration of a virtual machine, a dynamic host configuration protocol request message including address information of the virtual machine after the migration; andsending the dynamic host configuration protocol request message to a switch, to enable the switch to establish a binding relationship between the address information of the virtual machine after the migration and a port accessed by the virtual machine; wherein the address information comprises an IP address and a MAC address.2. The method according to claim 1 , wherein the dynamic host configuration protocol request message is constructed after a virtual machine server or a virtual machine monitor detects the migration of the virtual machine.3. The method according to claim 1 , wherein the method further comprises:receiving a dynamic host configuration protocol response message from the switch and which includes the address information of the virtual machine after the migration; andupdating, according to the dynamic host configuration protocol response message, a lease time of the IP address of the virtual machine after the migration.4. The method according to claim 1 , wherein the dynamic host configuration protocol request message is a unicast renewal request message.5. A method for implementing communication after virtual ...
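The switch-side effect of the claimed DHCP request — establishing a binding between the migrated VM's address information (IP and MAC) and the port it now arrives on — can be modeled minimally as below. The binding table is a plain dict and no actual DHCP packets are involved; this is only an illustration of the rebinding idea.

```python
def handle_dhcp_request(bindings, port, ip, mac):
    """Rebind a VM's (IP, MAC) to the switch port its DHCP request arrived on.
    `bindings` maps port -> (ip, mac); a simplified model, not the DHCP wire
    protocol."""
    # Drop any stale binding left over from the pre-migration port.
    stale = [p for p, pair in bindings.items() if pair == (ip, mac)]
    for p in stale:
        del bindings[p]
    bindings[port] = (ip, mac)
    return bindings
```

After migration, the same (IP, MAC) pair re-appears on a different port and the old binding is replaced, which is what lets traffic reach the VM at its new location.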

13-03-2014 publication date

Modifying memory space allocation for inactive tasks

Number: US20140075139A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for modifying memory space allocation for inactive tasks. Information is maintained on computational resources consumed by tasks running in the computer system allocated memory space in the memory. The information on the computational resources consumed by the tasks is used to determine inactive tasks of the tasks. The allocation of the memory space allocated to at least one of the determined inactive tasks is modified.
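A minimal sketch of the idea above: keep per-task consumption information, use it to determine which tasks are inactive, then modify (here: halve) their memory allocation. The thresholds, units, and names are invented, not taken from the patent.

```python
import time

class TaskMemoryManager:
    """Tracks per-task activity and trims the memory allocation of tasks that
    have been idle longer than a threshold (all numbers are illustrative)."""

    def __init__(self, idle_threshold_s=60.0, trim_factor=0.5):
        self.idle_threshold_s = idle_threshold_s
        self.trim_factor = trim_factor
        self.tasks = {}  # task_id -> {"alloc_mb": float, "last_active": float}

    def record_activity(self, task_id, alloc_mb, now=None):
        now = time.monotonic() if now is None else now
        self.tasks[task_id] = {"alloc_mb": alloc_mb, "last_active": now}

    def inactive_tasks(self, now=None):
        now = time.monotonic() if now is None else now
        return [tid for tid, t in self.tasks.items()
                if now - t["last_active"] >= self.idle_threshold_s]

    def trim(self, now=None):
        """Modify (shrink) the allocation of every inactive task."""
        trimmed = {}
        for tid in self.inactive_tasks(now):
            t = self.tasks[tid]
            t["alloc_mb"] *= self.trim_factor
            trimmed[tid] = t["alloc_mb"]
        return trimmed
```

An active task keeps its allocation untouched; only tasks whose last recorded activity is older than the threshold are shrunk.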

20-03-2014 publication date

PARALLEL COMPUTE FRAMEWORK

Number: US20140082627A1
Author: MANJAREKAR CHETAN
Assignee: SYNTEL, INC.

A computerized system, method and program product for executing tasks in parallel, including but not limited to executing tasks in combination on multiple processors of multiple computers and/or multiple cores of a processor on a single computer and/or combinations thereof. The framework utilizes parallel computing design principles, but hides the complexities of multi-threading and multi-core programming from the programmer. 1. A computerized system comprising: a non-transitory computer-readable medium having a computer program code stored thereon; a database having stored thereon one or more records that establish a parallel compute framework configuration; and a processor in communication with the computer-readable memory configured to carry out instructions in accordance with the computer program code, wherein the computer program code, when executed by the processor, causes the processor to perform operations comprising: receiving a request to execute a computing task in parallel by invoking a parallel computer framework ("PCF") task launcher, wherein the request passes one or more parameters about the computing task to the PCF task launcher; validating whether the parameters passed to the PCF task launcher are valid based, at least in part, on the parallel compute framework configuration; responsive to determining the parameters passed to the PCF task launcher are invalid, invoking exception handling to halt execution of the PCF task launcher; and responsive to determining the parameters passed to the PCF task launcher are valid: partitioning the computing task into a plurality of discrete sub-tasks; distributing the plurality of discrete sub-tasks to a plurality of processors for execution; and returning result data from executing the computing task. 2. The computerized system as recited in claim 1, wherein distribution to the plurality of processors is handled based on the parallel compute framework configuration in the database. 3. The computerized ...
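The launcher steps recited in claim 1 — validate the parameters, partition the computing task into discrete sub-tasks, distribute them for execution, and return the result data — can be sketched as follows. A thread pool stands in for the multiple processors/cores, and the validation rule is a made-up placeholder; this is not the PCF product's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def launch_parallel_task(items, sub_task, num_partitions=4):
    """Validate, partition into sub-tasks, execute in parallel, merge results."""
    if not items or num_partitions < 1:
        # The "exception handling" path for invalid launcher parameters.
        raise ValueError("invalid PCF launcher parameters")
    # Partition the task into discrete, roughly equal sub-tasks.
    chunk = -(-len(items) // num_partitions)  # ceiling division
    partitions = [items[i:i + chunk] for i in range(0, len(items), chunk)]
    # Distribute the sub-tasks across workers and collect result data in order.
    with ThreadPoolExecutor(max_workers=num_partitions) as pool:
        partials = list(pool.map(lambda part: [sub_task(x) for x in part],
                                 partitions))
    return [r for part in partials for r in part]
```

Because `Executor.map` preserves input order, the merged result matches a sequential run of the same sub-task over the same items.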

03-04-2014 publication date

Allocating instantiated resources to an IT-service

Number: US20140095720A1
Assignee: International Business Machines Corp

Allocating an instance of a resource to an IT-service includes: analyzing a service model specifying the structure of an IT-service and comprising nodes and resource management rules specifying the management of said node's resource. For each node, the method includes: determining a resource type indicated by said node; determining one or more resource management rules assigned to said node; evaluating the resource management rules assigned to said node on a resource instance catalog and the determined resource type for computing selection criteria; applying the selection criteria on a service provider catalog for selecting one of the one or more resource managers, the service provider catalog being indicative of one or more of the resource managers respectively being operable to provide a resource instance of a given resource type to the IT-service; creating an instance of the resource provided by the selected resource manager; and allocating said instance to the IT-service.

06-01-2022 publication date

OPTIMIZATION OF VIRTUAL AGENT UTILIZATION

Number: US20220004440A1
Assignee:

An approach to optimizing utilization of virtual agents within a virtual agent system. The approach may include monitoring the processing loads of virtual agents and identifying highly utilized virtual agents. The approach may also include configuring a pathway which directs a user query to the identified highly utilized virtual agent and allowing the highly utilized virtual agent to respond to the user query if the highly utilized virtual agent is capable of generating a satisfactory response. Additionally, the approach may include sending the user query to one or more other virtual agents if the highly utilized virtual agent is unable to generate a response above a confidence threshold. 1. A computer-implemented method for optimizing virtual agent utilization, the method comprising: monitoring, by one or more processors, processing loads of virtual agents; determining, by the one or more processors, if a processing load of a first virtual agent of the virtual agents is above a first threshold; and responsive to determining the processing load of the first virtual agent is above the first threshold, configuring, by the one or more processors, a pathway to direct an incoming user query to the first virtual agent. 2. The computer-implemented method of claim 1, further comprising: receiving, by the one or more processors, a user query; and determining, by the one or more processors, if the first virtual agent can generate a response to the user query above a predetermined confidence threshold. 3. The computer-implemented method of claim 2, further comprising: responsive to determining the first virtual agent can generate a response to the user query above a predetermined confidence threshold, generating, by the one or more processors, a response to the user query by the first virtual agent; and sending, by the one or more processors, the response to the user. 4.
The computer-implemented method of claim 2, further comprising: responsive to determining the virtual agent with ...
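The routing logic of this approach — prefer the highly utilized agent, but fan out to other agents when its response does not clear the confidence threshold — might be sketched like this. The thresholds and the agent representation are invented for illustration.

```python
def route_query(query, agents, load_threshold=0.8, confidence_threshold=0.7):
    """Prefer highly utilized agents; fall back to the others when the
    preferred agent's best response misses the confidence threshold.
    A sketch of the idea, not the patent's exact algorithm."""
    hot = [a for a in agents if a["load"] >= load_threshold]
    candidates = hot + [a for a in agents if a not in hot]
    for agent in candidates:
        confidence, answer = agent["respond"](query)
        if confidence >= confidence_threshold:
            return agent["name"], answer
    return None, None
```

A highly loaded agent is tried first (keeping its warm capacity busy), and the query only travels further when that agent cannot answer confidently.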

06-01-2022 publication date

TRANSACTION-ENABLED METHODS FOR PROVIDING PROVABLE ACCESS TO A DISTRIBUTED LEDGER WITH A TOKENIZED INSTRUCTION SET

Number: US20220004927A1
Author: Cella Charles Howard
Assignee:

Transaction-enabled methods for providing provable access to a distributed ledger with a tokenized instruction set for polymer production processes are described. A method may include accessing a distributed ledger comprising an instruction set for a polymer production process and tokenizing the instruction set. The method may further include interpreting an instruction set access request and providing a provable access to the instruction set. The method may further include providing commands to a production tool of the polymer production process and recording the transaction on the distributed ledger. 1. A method , comprising:accessing a distributed ledger comprising an instruction set, wherein the instruction set comprises an instruction set for a polymer production process;tokenizing the instruction set;interpreting an instruction set access request;in response to the instruction set access request, providing a provable access to the instruction set;providing commands to a production tool of the polymer production process in response to the instruction set access request; andrecording a transaction on the distributed ledger in response to the providing commands to the production tool.2. The method of claim 1 , wherein the instruction set comprises an instruction set for a chemical synthesis subprocess of the polymer production process.3. The method of claim 2 , further comprising providing commands to a production tool of the chemical synthesis subprocess of the polymer production process in response to the instruction set access request and recording a transaction on the distributed ledger in response to the providing commands to the production tool of the chemical synthesis subprocess of the polymer production process.4. The method of claim 1 , wherein the instruction set comprises a field programmable gate array (FPGA) instruction set.5. The method of claim 1 , wherein the instruction set further includes an application programming interface (API).6. The ...

05-01-2017 publication date

SYSTEM AND METHOD FOR ASSOCIATION AWARE EXECUTOR SERVICE IN A DISTRIBUTED COMPUTING ENVIRONMENT

Number: US20170004015A1
Assignee:

A system and method for supporting an association-aware executor service in a distributed computing environment. The system can provide an executor service associated with a thread pool, the thread pool containing a plurality of threads. The system can receive, at the executor service, a plurality of work requests, each work request being associated with a key of a plurality of keys. The system can define groups of work requests, each group of work requests comprising one or more work requests having a same key. The system can queue, on the plurality of threads in the thread pool, the groups of work requests, each group of work requests being queued on a different thread. All work requests in a particular group are executed on the same thread. 1. A method for supporting execution of tasks in a distributed computing environment, the method comprising: providing an executor service associated with a thread pool, the thread pool containing a plurality of threads; receiving, at the executor service, a plurality of work requests, each work request being associated with a key of a plurality of keys; defining a plurality of groups of work requests, each group of work requests comprising one or more work requests having a same key; and queueing, on the plurality of threads in the thread pool, the groups of work requests, each group of work requests being queued on a different thread. 2. The method of claim 1, further comprising: receiving the plurality of work requests in a plurality of messages each comprising one of the plurality of work requests and one of the plurality of keys. 3. The method of claim 1, wherein each of the plurality of keys represents a particular datum in the distributed data grid. 4.
The method of claim 1, wherein: each of the plurality of keys represents a particular datum in the distributed data grid; and wherein the executor service ensures that each of said plurality of work requests associated with a particular datum are processed on a same ...
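The key-affinity guarantee described above — every work request carrying the same key is queued on, and executed by, the same thread, in submission order — can be sketched with per-thread queues selected by hashing the key. This illustrates the idea only; it is not the actual executor service API.

```python
import queue
import threading

class AssociationAwareExecutor:
    """All work requests sharing a key run on the same worker thread, in order.
    Within one process run, hash(key) is stable, so a key always maps to the
    same queue."""

    def __init__(self, num_threads=4):
        self.queues = [queue.Queue() for _ in range(num_threads)]
        for q in self.queues:
            threading.Thread(target=self._run, args=(q,), daemon=True).start()

    def _thread_for(self, key):
        return hash(key) % len(self.queues)

    def submit(self, key, fn):
        # Same key -> same queue -> same thread -> serial, ordered execution.
        self.queues[self._thread_for(key)].put(fn)

    def shutdown(self):
        for q in self.queues:
            q.join()

    def _run(self, q):
        while True:
            fn = q.get()
            try:
                fn()
            finally:
                q.task_done()
```

Because one thread drains each FIFO queue, requests for one key can never interleave, which is the property the claims describe.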

05-01-2017 publication date

SEMANTIC WEB TECHNOLOGIES IN SYSTEM AUTOMATION

Number: US20170004016A1
Assignee:

Descriptions of a plurality of information technology resources are maintained in a computer-readable storage medium. A plurality of evaluation strategies are maintained, wherein the evaluation strategies associate a plurality of rules with forms of changes to the plurality of information technology resources. Responsive to detecting a command to change a first property of the set of properties of a first information technology resource of the plurality of information technology resources, the method determines that a first of the evaluation strategies associates at least one of the plurality of rules with a form of the change to the first property of the first information technology resource. Also, responsive to detecting the command, at least one of the plurality of rules is evaluated and the operation of the at least one rule is performed. 1. A computer program product for managing a plurality of information technology resources, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: maintain descriptions of a plurality of information technology resources in a computer readable storage medium, wherein the descriptions indicate a set of one or more properties and settings of the set of one or more properties for the plurality of information technology resources, wherein at least a first of the set of one or more properties for each of the plurality of information technology resources is set to indicate at least one resource class that defines resource behavior; maintain a plurality of evaluation strategies, wherein the plurality of evaluation strategies associate rules with forms of changes to the plurality of information technology resources, wherein each of the plurality of rules comprises a Boolean statement and an operation to perform based on outcome of the Boolean statement; responsive to detection of a first command ...

05-01-2017 publication date

Concurrent Program Execution Optimization

Number: US20170004017A1
Author: Mark Henrik Sandstrom
Assignee: Individual

An architecture for load-balanced groups of multi-stage manycore processors shared dynamically among a set of software applications, with capabilities for destination-task-defined intra-application prioritization of inter-task communications (ITC), for architecture-based ITC performance isolation between the applications, as well as for prioritizing application task instances for execution on cores of manycore processors based at least in part on which of the task instances have available the input data, such as ITC data, that they need for executing.
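The scheduling criterion in the abstract — prefer, when filling the available cores, those task instances that already have the input (e.g. ITC) data they need — reduces to a simple two-level selection in this sketch. The instance representation is invented; the patent describes a hardware architecture, not this code.

```python
def select_for_cores(instances, num_cores):
    """Pick up to num_cores task instances, favoring those whose input data
    (e.g. inter-task communication data) is already available; within each
    group, higher priority wins. Illustrative only."""
    ready = sorted((t for t in instances if t["input_items"] > 0),
                   key=lambda t: -t["priority"])
    waiting = sorted((t for t in instances if t["input_items"] == 0),
                     key=lambda t: -t["priority"])
    # Instances with data ready are scheduled before any that would stall.
    return [t["name"] for t in (ready + waiting)[:num_cores]]
```

Note that a high-priority instance with no input data is passed over until a core would otherwise go idle, which captures the "has its input data available" preference.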

07-01-2021 publication date

Software Acceleration Platform for Supporting Decomposed, On-Demand Network Services

Number: US20210004214A1
Assignee: Hyperblox Inc

An example embodiment may involve obtaining one or more blueprint files. The blueprint files may collectively define a system of processing nodes, a call flow involving a sequence of messages exchanged by the processing nodes, and message formats of the messages exchanged by the processing nodes. The example embodiment may also involve compiling the blueprint files into machine executable code. The machine executable code may be capable of: representing the processing nodes as decomposed, dynamically invoked units of logic, and transmitting the sequence of messages between the units of logic in accordance with the message formats. The units of logic may include a respective controller and one or more respective workers for each type of processing node.

07-01-2021 publication date

DEPLOYING SERVICE CONTAINERS IN AN ADAPTER DEVICE

Number: US20210004245A1
Assignee:

In one implementation, an adapter device includes a processor and a storage medium including instructions. The instructions are executable by the processor to: deploy a composer container in the adapter device, wherein the adapter device is coupled to a host device; receive, by the composer container, a plurality of adapter service requests from the host device; and in response to the plurality of service requests, deploy, by the composer container, a plurality of service containers in the adapter device, wherein each service container is to provide a particular adapter service to the host device, and wherein each service container is allocated a subset of the plurality of computing resources of the adapter device. 1. An adapter device comprising: a processor; and a storage medium including instructions executable by the processor to: deploy a composer container in the adapter device, wherein the adapter device is coupled to a host device; receive, by the composer container, a plurality of adapter service requests from the host device; and in response to the plurality of service requests, deploy, by the composer container, a plurality of service containers in the adapter device, wherein each service container is to provide a particular adapter service to the host device, and wherein each service container is allocated a subset of the plurality of computing resources of the adapter device. 2. The adapter device of claim 1, wherein the composer container is to, for each service container of the plurality of service containers: determine requirements of the service container; determine available computing resources of the adapter device; and in response to a determination that the available computing resources of the adapter device can support the requirements of the service container, deploy the service container on the adapter device. 3. The adapter device of claim 1, wherein the composer container is to configure a service container type of each ...
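Claim 2's composer-container check — deploy a service container only when the adapter device's remaining resources can support its requirements, and allocate that subset to it — can be sketched as below. The resource names and the accounting scheme are illustrative, not taken from the patent.

```python
def deploy_service_containers(requests, available):
    """For each requested service container: check its requirements against
    the adapter device's remaining resources; deploy and allocate a subset of
    the resources if they suffice, otherwise reject. Mutates `available`."""
    deployed, rejected = [], []
    for req in requests:
        need = req["resources"]
        if all(available.get(k, 0) >= v for k, v in need.items()):
            for k, v in need.items():
                available[k] -= v  # allocate a subset of the device's resources
            deployed.append(req["service"])
        else:
            rejected.append(req["service"])
    return deployed, rejected
```

Containers are considered in request order, so an earlier large container can crowd out a later one even if a different order would have fit both; a real composer might reorder or reclaim.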

07-01-2021 publication date

Method and Apparatus for Creating Virtual Machine

Number: US20210004258A1
Author: Guan Yanjie, Liu Tiesheng
Assignee:

A method for creating a virtual machine includes receiving a virtual machine creation request comprising parameter information of a virtual network interface card occupied by a to-be-created virtual machine, obtaining current resource usage information of the network interface card resource pools of at least one computing node, wherein the at least one computing node is deployed on a cloud platform, each computing node comprises a network interface card resource pool comprising physical network interface cards, determining a target computing node, in the at least one computing node based on the parameter information and the current resource usage information, and invoking the target computing node to create the virtual machine. 1. A method for creating a virtual machine , comprising:receiving a virtual machine creation request comprising parameter information of a virtual network interface card, wherein the virtual network interface card comprises a to-be-created virtual machine;obtaining current resource usage information of a network interface card resource pool of at least one computing node on a cloud platform, wherein the network interface card resource pool comprises one or more physical network interface cards on the computing node;determining, from the at least one computing node based on the parameter information and the current resource usage information of the network interface card resource pool, a target computing node that is used to create a virtual machine according to the virtual machine creation request; andinvoking the target computing node to create the virtual machine.2. 
The method of claim 1, wherein the parameter information comprises at least one of a quantity of virtual network interface cards, a bandwidth of the virtual network interface card, and affinity information of the virtual network interface card, and wherein the affinity information indicates whether different virtual network interface cards for a same ...
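The node-selection step — determine a target computing node whose NIC resource pool can satisfy the requested virtual NICs, given the current resource usage — might look like this sketch. The patent does not specify a scoring rule, so preferring the feasible node with the most free bandwidth is an assumption, as are the field names.

```python
def pick_target_node(nodes, needed_nics, needed_bw):
    """Select a computing node whose NIC resource pool can satisfy the
    requested virtual NIC count and bandwidth; returns None if no node can."""
    feasible = [n for n in nodes
                if n["free_nics"] >= needed_nics and n["free_bw_gbps"] >= needed_bw]
    if not feasible:
        return None
    # Assumed tie-break: prefer the pool with the most spare bandwidth.
    return max(feasible, key=lambda n: n["free_bw_gbps"])["name"]
```

The VM creation request's parameter information supplies `needed_nics` and `needed_bw`; the current resource usage information supplies the per-node free figures.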

02-01-2020 publication date

APPARATUSES, METHODS, AND SYSTEMS FOR CONDITIONAL OPERATIONS IN A CONFIGURABLE SPATIAL ACCELERATOR

Number: US20200004538A1
Assignee:

Systems, methods, and apparatuses relating to conditional operations in a configurable spatial accelerator are described. In one embodiment, a hardware accelerator includes an output buffer of a first processing element coupled to an input buffer of a second processing element via a first data path that is to send a first dataflow token from the output buffer of the first processing element to the input buffer of the second processing element when the first dataflow token is received in the output buffer of the first processing element; an output buffer of a third processing element coupled to the input buffer of the second processing element via a second data path that is to send a second dataflow token from the output buffer of the third processing element to the input buffer of the second processing element when the second dataflow token is received in the output buffer of the third processing element; a first backpressure path from the input buffer of the second processing element to the first processing element to indicate to the first processing element when storage is not available in the input buffer of the second processing element; a second backpressure path from the input buffer of the second processing element to the third processing element to indicate to the third processing element when storage is not available in the input buffer of the second processing element; and a scheduler of the second processing element to cause storage of the first dataflow token from the first data path into the input buffer of the second processing element when both the first backpressure path indicates storage is available in the input buffer of the second processing element and a conditional token received in a conditional queue of the second processing element from another processing element is a first value. 1.
An apparatus comprising: an output buffer of a first processing element coupled to an input buffer of a second processing element via a first data path that is ...

02-01-2020 publication date

Shared local memory tiling mechanism

Number: US20200004548A1
Assignee: Intel Corp

An apparatus to facilitate memory tiling is disclosed. The apparatus includes a memory, one or more execution units (EUs) to execute a plurality of processing threads via access to the memory and tiling logic to apply a tiling pattern to memory addresses for data stored in the memory.
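A tiling pattern of the kind the abstract mentions maps a linear (x, y) element position to an address at which whole tiles are stored contiguously, improving locality for 2D access. The sketch below uses arbitrary 4×4 tiles and row-major order inside each tile; it is an illustration of tiled addressing in general, not Intel's actual tiling logic.

```python
def tiled_offset(x, y, width, tile_w=4, tile_h=4):
    """Map the (x, y) element of a width-wide 2D surface to its offset in a
    tiled layout: tiles stored one after another, row-major within each tile.
    Assumes width is a multiple of tile_w (a simplification)."""
    tiles_per_row = width // tile_w
    tile_index = (y // tile_h) * tiles_per_row + (x // tile_w)
    within = (y % tile_h) * tile_w + (x % tile_w)
    return tile_index * (tile_w * tile_h) + within
```

Neighboring elements of one tile land in one contiguous span of memory, which is why threads working on the same tile hit the same cache lines.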

07-01-2021 publication date

METHODS, APPARATUSES AND COMPUTER READABLE MEDIUMS FOR NETWORK BASED MEDIA PROCESSING

Number: US20210004273A1
Assignee: NOKIA TECHNOLOGIES OY

A network apparatus distributes a first plurality of tasks for processing first media content among at least a first slicing window and a second slicing window based on a connection map included in a workflow description from a network based media processing (NBMP) source. The first slicing window includes at least one first task from among the first plurality of tasks and the second slicing window includes at least one second task from among the first plurality of tasks. The network apparatus provisions the first media content to at least a first of the one or more media sinks by deploying the at least one first task to one or more first media processing entities, and deploying the at least one second task to one or more of the first media processing entities in response to receiving an indication that the at least one first task has been deployed successfully. 1-20. (canceled) 21. A method for provisioning media content to one or more media sinks, the method comprising: distributing a first plurality of tasks for processing first media content among at least a first slicing window and a second slicing window based on a connection map included in a workflow description from a network based media processing (NBMP) source, the first slicing window including at least one first task from among the first plurality of tasks and the second slicing window including at least one second task from among the first plurality of tasks; and provisioning the first media content to at least a first of the one or more media sinks by: deploying the at least one first task to one or more first media processing entities, and deploying the at least one second task to one or more of the first media processing entities in response to receiving an indication that the at least one first task has been deployed successfully. 22. The method of claim 21, wherein the connection map includes a flow control parameter indicating whether a respective pair of tasks among the first plurality of ...
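The window-by-window provisioning rule — tasks of the second slicing window are deployed only after the first window's tasks report successful deployment — generalizes to the sketch below. Function and task names are invented; real NBMP deployment talks to media processing entities rather than calling a function.

```python
def provision_windows(windows, deploy):
    """Deploy tasks slicing-window by slicing-window: a later window is only
    reached after every task of the current one deploys successfully.
    `deploy(task)` returns True on success."""
    deployed = []
    for window in windows:
        for task in window:
            if not deploy(task):
                # A prerequisite window failed: stop before later windows.
                return deployed, False
            deployed.append(task)
    return deployed, True
```

A failed deployment in the first window therefore never triggers the second window at all, matching the "in response to receiving an indication" ordering in the claim.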

07-01-2021 publication date

High Availability Multi-Single-Tenant Services

Number: US20210004275A1
Assignee: Google LLC

A method of maintaining availability of service instances on a distributed system includes executing a pool of primary virtual machine (VM) instances, each primary VM instance executing a corresponding individual service instance and including a rate of unavailability. The method also includes determining a number of secondary VM instances required to maintain availability of the individual service instances when one or more of the primary VM instances are unavailable based on the number of primary VM instances in the pool of primary VM instances and the rate of unavailability. The method also includes instantiating a pool of secondary VM instances based on the number of secondary VM instances required to maintain availability of the individual service instances. 1. A method of maintaining availability of service instances on a distributed system, the method comprising: executing, by data processing hardware of the distributed system, a pool of primary virtual machine (VM) instances, each primary VM instance executing a corresponding individual service instance and comprising a rate of unavailability; determining, by the data processing hardware, a number of secondary VM instances required to maintain availability of the individual service instances when one or more of the primary VM instances are unavailable based on the number of primary VM instances in the pool of primary VM instances and the respective rate of unavailability; and instantiating, by the data processing hardware, a pool of secondary VM instances based on the number of secondary VM instances required to maintain availability of the individual service instances. 2. The method of claim 1, wherein a number of secondary VM instances in the pool of secondary VM instances is less than the number of primary VM instances in the pool of primary VM instances. 3. The method of claim 1, further comprising: identifying, by the data processing hardware, unavailability of one of the primary VM instances in the pool of ...
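The sizing step — derive the secondary (standby) VM pool from the primary pool size and its rate of unavailability — can be sketched as below. The abstract does not fix an exact formula, so rounding the expected number of simultaneously unavailable primaries up to a whole VM is an assumption.

```python
import math

def secondary_vm_count(num_primary, unavailability_rate):
    """Size the standby pool to cover the expected number of unavailable
    primary VM instances (assumed formula: ceil(N * rate))."""
    if not 0.0 <= unavailability_rate <= 1.0:
        raise ValueError("unavailability_rate must be in [0, 1]")
    return math.ceil(num_primary * unavailability_rate)
```

Note that the result is always smaller than the primary pool for any rate below 1, consistent with claim 2's requirement that the secondary pool be the smaller of the two.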

07-01-2021 publication date

LOCALIZED DEVICE COORDINATOR WITH MUTABLE ROUTING INFORMATION

Номер: US20210004281A1
Принадлежит:

Systems and methods are described for implementing a coordinator within a coordinated environment, which environment includes a set of coordinated devices managed by the coordinator. The coordinator can be provisioned with a set of tasks, each task corresponding to a segment of code that can be executed by the coordinator, such as to manage the coordinated devices. The coordinator can further be provisioned with event flow information designating a routing of inputs to the coordinator computing device to destinations, such as task executions or coordinated devices. On receiving input, the coordinator can reference the event flow information to pass the input to an appropriate destination. 1. A coordinator computing device configured to manage one or more coordinated devices within a coordinated environment, the coordinator computing device distinct from the one or more coordinated devices and comprising: a non-transitory data store including one or more tasks to manage operation of the one or more coordinated devices, individual tasks corresponding to executable computer code executable by the coordinator computing device, and event flow information designating a routing of inputs to the coordinator computing device to destinations, wherein the event flow information comprises a set of routes, each route identifying one or more criteria and a destination to which input obtained at the coordinator computing device matching the one or more criteria of the route is to be transmitted; wherein the coordinator computing device is configured to: obtain a configuration package for the coordinator computing device, the configuration package identifying the one or more coordinated devices, the one or more tasks, and the event flow information; retrieve the one or more tasks, as identified in the configuration package, from a network-accessible data store; generate an isolated execution environment on the coordinator computing device in which the computer code of the at least one task will be executed; provision the ...
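The route-matching behavior described above can be sketched as follows; the criteria shape, route names, and destinations are illustrative assumptions:

```python
# Each route pairs matching criteria with a destination (a task execution
# or a coordinated device); the first matching route wins.

def matches(criteria: dict, event: dict) -> bool:
    # A route matches when every criterion key/value appears in the event.
    return all(event.get(k) == v for k, v in criteria.items())

def route_event(routes: list, event: dict):
    # Return the destination of the first route whose criteria match.
    for route in routes:
        if matches(route["criteria"], event):
            return route["destination"]
    return None  # no route: the coordinator drops or queues the input

routes = [
    {"criteria": {"topic": "thermostat/reading"}, "destination": "task:log_temperature"},
    {"criteria": {"topic": "switch/command"}, "destination": "device:light-1"},
]

print(route_event(routes, {"topic": "switch/command", "value": "on"}))
# -> device:light-1
```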

Publication date: 02-01-2020

MANAGING GLOBAL AND LOCAL EXECUTION PHASES

Number: US20200004577A1
Assignee:

A method of running a computer program comprising concurrent threads, wherein: at any time, the program is in a current global execution phase, GEP; each thread is divided into a sequence of local execution phases, LEPs, each corresponding to a different GEP, wherein the thread is in a current LEP that cannot progress beyond the LEP corresponding to the current GEP; any of the threads is able to advance the GEP if the current LEP of all threads has reached the LEP corresponding to the current GEP; one thread comprises code to perform an internal acquire to acquire a lock on its respective LEP; and at least one other thread comprises code to perform an external release to force advancement of the current LEP of said one thread, but wherein the external release will be blocked if said thread has performed the internal acquire. 1. A method of running a program comprising a plurality of concurrent threads on a computer, wherein: at any given time the program is in a current one of a sequence of global execution phases; each of the threads is divided into a respective sequence of local execution phases each corresponding to a different corresponding one in the sequence of global execution phases, wherein at any given time the thread is in a current one of the respective sequence of local execution phases, and the current local execution phase is not allowed to progress beyond the local execution phase in the respective sequence that corresponds to the current global execution phase; any of the threads is able to advance the global execution phase to the next in the sequence of global execution phases on condition that the current local execution phase of all of the threads has reached the local execution phase in the respective sequence that corresponds to the current global execution phase; one of the threads comprises code to perform an internal acquire to acquire a lock on its respective local execution phase; and at least one other of the threads comprises code to ...
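The phase rules above can be modeled in a small single-process sketch; the class and method names are assumptions, and a real implementation would use condition variables rather than plain flags:

```python
# Rules modeled: a local phase may not pass the global phase; the global
# phase advances only when every local phase has caught up; an internal
# acquire pins a thread's local phase against an external release.

class PhaseManager:
    def __init__(self, num_threads: int):
        self.global_phase = 0
        self.local_phase = [0] * num_threads
        self.locked = [False] * num_threads

    def try_advance_global(self) -> bool:
        # Any thread may advance the global phase once all locals caught up.
        if all(p == self.global_phase for p in self.local_phase):
            self.global_phase += 1
            return True
        return False

    def internal_acquire(self, tid: int):
        self.locked[tid] = True   # lock this thread's local phase

    def internal_release(self, tid: int):
        self.locked[tid] = False

    def external_release(self, target: int) -> bool:
        # Force the target's local phase forward; blocked while the target
        # holds its internal lock.
        if self.locked[target]:
            return False
        if self.local_phase[target] < self.global_phase:
            self.local_phase[target] += 1
        return True

pm = PhaseManager(2)
pm.try_advance_global()        # both locals at phase 0 -> global becomes 1
pm.internal_acquire(0)
print(pm.external_release(0))  # -> False (blocked by the internal acquire)
pm.internal_release(0)
print(pm.external_release(0))  # -> True (local phase of thread 0 is now 1)
```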

Publication date: 02-01-2020

OPPORTUNISTIC DATA ANALYTICS USING MEMORY BANDWIDTH IN DISAGGREGATED COMPUTING SYSTEMS

Number: US20200004593A1

Respective memory devices are assigned to respective processor devices in a disaggregated computing system, the disaggregated computing system having at least a pool of the memory devices and a pool of the processor devices. An iterative learning algorithm is used to define data boundaries of a dataset for performing an analytic function on the dataset simultaneous to a primary compute task, unrelated to the analytic function, being performed on the dataset in the pool of memory devices using memory bandwidth not currently committed to the primary compute task, thereby efficiently employing the unused memory bandwidth to prevent underutilization of the pool of memory devices. 1. A method for optimizing memory bandwidth in a disaggregated computing system, by a processor device, comprising: assigning respective memory devices to respective processor devices in the disaggregated computing system, the disaggregated computing system having at least a pool of the memory devices and a pool of the processor devices; and using an iterative learning algorithm to define data boundaries of a dataset for performing an analytic function on the dataset simultaneous to a primary compute task, unrelated to the analytic function, being performed on the dataset in the pool of memory devices using memory bandwidth not currently committed to the primary compute task such that the analytic function is performed on the dataset being resident in the pool of memory devices using unused memory bandwidth otherwise allocated to the primary compute task associated with the dataset resident in the pool of memory devices to extract information to aid in processing the data resident in the pool of memory devices, thereby efficiently employing the unused memory bandwidth to prevent underutilization of the pool of memory devices. 2. The method of claim 1, further including receiving user input for designating certain processor devices of the pool of processor devices to be used to perform ...

Publication date: 02-01-2020

Client-Server Architecture for Multicore Computer System to Realize Single-Core-Equivalent View

Number: US20200004594A1
Author: WANG Qixin, Wang Zhu
Assignee:

A client-server architecture is used in a multicore computer system to realize a single-core-equivalent (SCE) view. In the system, plural stacks, each having a core and a local cache subsystem coupled thereto, are divided into a client stack for running client threads, and server stacks each for running server threads. A shared cache having shared cache blocks, each coupled to the client stack and to one or more server stacks, is also used. The core of an individual server stack is configured such that computing resources utilizable in executing the server thread(s) are confined to the individual server stack and the shared cache block coupled thereto, isolating an inter-core interference caused by the server thread(s) to the client thread(s) to within the individual server stack, the shared cache block coupled thereto, any server stack coupled to this shared cache block, and the client stack to thereby realize the SCE view. 2. The multicore computer system of further comprising:an additional cache subsystem coupled to the client stack, the additional cache subsystem being configured to provide next one or more levels of cache memory to the client stack.3. The multicore computer system of claim 1 , wherein the local cache subsystem of each stack is configured to provide only one level of cache memory.4. The multicore computer system of claim 1 , wherein a scratchpad memory and/or other fast-storage is used to serve the purpose of the one or more shared cache blocks.5. The multicore computer system of claim 1 , wherein each core is a general-purpose processing unit or a specialized processing unit.6. The multicore computer system of claim 5 , wherein the specialized processing unit is a graphics processing unit (GPU) or a secure crypto processor.7. 
The multicore computer system of further comprising:one or more additional computing resources coupled to the client stack, wherein the one or more additional computing resources may include one or more buses, main memory, ...

Publication date: 02-01-2020

FAULT-TOLERANT ACCELERATOR BASED INFERENCE SERVICE

Number: US20200004595A1
Assignee:

Implementations detailed herein include description of a computer-implemented method. In an implementation, the method at least includes attaching a first set of one or more accelerator slots of an accelerator appliance to an application instance of a multi-tenant provider network according to an application instance configuration, the application instance configuration to define per accelerator slot capabilities to be used by an application of the application instance, wherein the multi-tenant provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances, and wherein the first set of one or more accelerator slots is implemented using physical accelerator resources accessible to the application instance; while performing inference using the loaded machine learning model of the application using the first set of one or more accelerator slots on the attached accelerator appliance, managing resources of the accelerator appliance using an accelerator appliance manager of the accelerator appliance. 1. 
A computer-implemented method , comprising:attaching a first set of one or more graphical processing unit (GPU) slots of an accelerator appliance to an application instance of a multi-tenant provider network according to an application instance configuration, the application instance configuration to define per GPU slot capabilities to be used by an application of the application instance, wherein the multi-tenant provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances, and wherein the first set of one or more GPU slots is implemented using physical GPU resources accessible to the application instance over a network;loading the machine learning model onto the first set of one or more GPU slots; andwhile performing inference using the loaded machine learning model of the application using the first set of one or more GPU slots on the attached ...

Publication date: 13-01-2022

SYSTEM FOR COMPUTATIONAL RESOURCE PREDICTION AND SUBSEQUENT WORKLOAD PROVISIONING

Number: US20220012089A1
Assignee:

The present disclosure describes a system, a method, and a product for computational resource prediction of user tasks and subsequent workload provisioning. The computational resource prediction for a user task is achieved using a twin machine learning and AI system based on probabilistic programming. The workload scheduling and assignment of the user task in a computing cluster with components having diverse hardware architectures are further managed by an automatic and intelligent assignment/provisioning engine based on various machine learning and AI models and reinforcement learning. The automatic workload scheduling and assignment engine is further configured to handle unpredicted uncertainty and adapt to constantly evolving system queues of the tasks submitted by the users to generate queuing/re-queuing, running/termination, and resource allocation/reallocation actions for user tasks. 1. A system comprising: a computer cluster including a set of computing platforms each corresponding to one of a plurality of different computing architectures; and an automatic and adaptive resource prediction and assignment circuitry configured to: receive input data, the input data comprising platform-independent task parameters and target metrics associated with a user task; automatically generate a computing architecture and computing platform selection and computing resource prediction based on the input data and a hardware profile of the computer cluster using a computing resource prediction engine; automatically map the user task to one or more of the set of computing platforms and to a scheduling action among a set of scheduling actions using a trained intelligent computing resource assignment engine; automatically schedule the user task according to the scheduling action; and automatically adjust the trained intelligent computing resource assignment engine by applying reinforcement learning based on newly acquired data associated with performing the scheduling action. ...

Publication date: 13-01-2022

CLOUD ACCESS METHOD FOR IOT DEVICES, AND DEVICES PERFORMING THE SAME

Number: US20220012101A1
Author: LEE Joo Chul
Assignee:

A cloud access method of an internet of things (IoT) device and devices performing the cloud access method are disclosed. The cloud access method using a cloud proxy function includes receiving a first resource retrieval request of a client device from a cloud, extracting, from the first resource retrieval request, a device identification (ID) of a device including a resource for which a resource retrieval is requested, and transmitting a second resource retrieval request of the client device to the device based on the device ID. 1. A cloud access method using a cloud proxy function, comprising: receiving, from a cloud, a first resource retrieval request of a client device; extracting, from the first resource retrieval request, a device identification (ID) of a device comprising a resource for which a resource retrieval is requested; and transmitting, to the device, a second resource retrieval request of the client device based on the device ID. 2. The cloud access method of claim 1, wherein: the first resource retrieval request comprises a proxy-uniform resource identifier (URI) option header, and the proxy-URI option header is removed from the second resource retrieval request. 3. The cloud access method of claim 2, wherein the proxy-URI option header comprises the device ID. 4. The cloud access method of claim 1, wherein the device is a device that does not have a cloud interworking function. 5. The cloud access method of claim 1, wherein the transmitting comprises: searching for the device ID and endpoint information of the device from a mapping list that is used for managing device-to-device (D2D) devices that do not have a cloud interworking function for access to the cloud. 6. The cloud access method of claim 1, further comprising: requesting the cloud for registration of a cloud proxy device that supports the cloud proxy function; and receiving an access token for the cloud proxy device that is issued in response to the registration request. 7. The cloud access method of ...
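The proxying step above can be sketched as follows; all names and the option layout are illustrative, loosely CoAP-flavored assumptions: the proxy extracts the device ID from the proxy-URI option, looks up the device's endpoint in its mapping list of devices lacking a cloud interworking function, and forwards a second request with the proxy-URI option removed.

```python
device_map = {"lamp-01": "coap://192.168.0.12:5683"}  # device ID -> endpoint

def forward_request(request: dict) -> dict:
    proxy_uri = request["options"]["proxy_uri"]        # e.g. "lamp-01/oic/res"
    device_id, _, resource_path = proxy_uri.partition("/")
    endpoint = device_map[device_id]                   # KeyError if unknown
    # Build the second resource retrieval request without the proxy-URI option.
    options = {k: v for k, v in request["options"].items() if k != "proxy_uri"}
    return {"uri": f"{endpoint}/{resource_path}", "options": options}

fwd = forward_request({"options": {"proxy_uri": "lamp-01/oic/res", "accept": "cbor"}})
print(fwd["uri"])      # -> coap://192.168.0.12:5683/oic/res
print(fwd["options"])  # -> {'accept': 'cbor'}  (proxy-URI option removed)
```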

Publication date: 02-01-2020

CORE MAPPING

Number: US20200004721A1
Assignee:

The disclosed technology is generally directed to peripheral access. In one example of the technology, stored configuration information is read. The stored configuration information is associated with mapping a plurality of independent execution environments to a plurality of peripherals such that the peripherals of the plurality of peripherals have corresponding independent execution environments of the plurality of independent execution environments. A configurable interrupt routing table is programmed based on the configuration information. An interrupt is received from a peripheral. The interrupt is routed to the corresponding independent execution environment based on the configurable interrupt routing table. 120-. (canceled)21. An apparatus , comprising:a plurality of processing cores;a plurality of peripherals; anda configurable interrupt routing table that selectively maps each of the plurality of peripherals to an individual processing core of the plurality of processing cores, wherein the mapping of each of the plurality of peripherals to the individual processing core is configurable while a lock bit of the apparatus is not set, wherein the mapping of each of the plurality of peripherals to the individual processing core is locked in response to the lock bit of the apparatus being set, and wherein, once locked, the mapping of each of the plurality of peripherals to the individual processing core remains locked until a reboot of the apparatus.22. The apparatus of claim 21 , wherein the configurable interrupt routing table includes a plurality of configuration registers.23. The apparatus of claim 21 , wherein a first processing core of the plurality of processing cores is associated with at least two independent execution environments.24. 
The apparatus of claim 23 , wherein a first independent execution environment associated with the first processing core is a Secure World operating environment of the first processing core claim 23 , and wherein a second ...
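The lock-bit rule described above (mappings are mutable until the lock bit is set, then frozen until reboot) can be modeled behaviorally; the real mechanism is hardware configuration registers, so this class and its names are purely illustrative:

```python
class InterruptRoutingTable:
    def __init__(self):
        self.mapping = {}      # peripheral id -> processing core id
        self.lock_bit = False

    def map_peripheral(self, peripheral: str, core: int):
        if self.lock_bit:
            raise PermissionError("routing table is locked until reboot")
        self.mapping[peripheral] = core

    def set_lock(self):
        self.lock_bit = True   # once set, mappings stay locked until reboot

    def route(self, peripheral: str) -> int:
        # Route an interrupt from a peripheral to its mapped core.
        return self.mapping[peripheral]

table = InterruptRoutingTable()
table.map_peripheral("uart0", core=1)
table.set_lock()
print(table.route("uart0"))  # -> 1
```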

Publication date: 13-01-2022

Scatter and Gather Streaming Data through a Circular FIFO

Number: US20220012201A1
Assignee:

Systems, apparatuses, and methods for performing scatter and gather direct memory access (DMA) streaming through a circular buffer are described. A system includes a circular buffer, producer DMA engine, and consumer DMA engine. After the producer DMA engine writes or skips over a given data chunk of a first frame to the buffer, the producer DMA engine sends an updated write pointer to the consumer DMA engine indicating that a data credit has been committed to the buffer and that the data credit is ready to be consumed. After the consumer DMA engine reads or skips over the given data chunk of the first frame from the buffer, the consumer DMA engine sends an updated read pointer to the producer DMA engine indicating that the data credit has been consumed and that space has been freed up in the buffer to be reused by the producer DMA engine. 1. An apparatus comprising:a shared buffer;a producer direct memory access (DMA) engine; anda consumer DMA engine;wherein responsive to producing a given data chunk of a dataset, the producer DMA engine is configured to send an updated write pointer to the consumer DMA engine that identifies a location in the buffer and indicates that data is ready to be consumed; andwherein responsive to consuming the given data chunk of the dataset from the buffer, the consumer DMA engine is configured to send an updated read pointer to the producer DMA engine that identifies a location in the buffer and indicates that space has been freed up in the buffer.2. The apparatus as recited in claim 1 , wherein the dataset is a first frame of a video sequence and the buffer is a circular buffer with a size that is smaller than a size of the first frame claim 1 , and wherein the producer DMA engine is configured to:write data to a first buffer location one or more data credits in advance of a current location of the updated write pointer; andafter writing data to the first buffer location, increment the write pointer by multiple data credits.3. The ...
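The credit protocol above can be illustrated with a toy model: the producer advances a write pointer after committing a chunk, the consumer advances a read pointer after consuming one, and the pointer distance gives each side its available credits. Monotonic counters and a buffer size in whole chunks are simplifying assumptions.

```python
class CircularFifo:
    def __init__(self, num_chunks: int):
        self.num_chunks = num_chunks
        self.write_ptr = 0   # chunks committed by the producer
        self.read_ptr = 0    # chunks consumed by the consumer

    def produce(self, chunk, storage):
        assert self.write_ptr - self.read_ptr < self.num_chunks, "buffer full"
        storage[self.write_ptr % self.num_chunks] = chunk
        self.write_ptr += 1  # "updated write pointer" sent to the consumer

    def consume(self, storage):
        assert self.read_ptr < self.write_ptr, "no credit to consume"
        chunk = storage[self.read_ptr % self.num_chunks]
        self.read_ptr += 1   # "updated read pointer" frees space for reuse
        return chunk

storage = [None] * 4
fifo = CircularFifo(4)
for i in range(4):
    fifo.produce(f"chunk{i}", storage)   # buffer is now full
print(fifo.consume(storage))             # -> chunk0, freeing one slot
fifo.produce("chunk4", storage)          # reuses the freed slot
```

The buffer being smaller than a frame, as in claim 2, is exactly why the freed-slot reuse matters: the producer can stream a large frame through a small buffer as long as credits keep flowing back.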

Publication date: 13-01-2022

APPARATUS AND METHOD FOR MATRIX MULTIPLICATION USING PROCESSING-IN-MEMORY

Number: US20220012303A1
Author: Zheng Qilin
Assignee:

Embodiments of apparatus and method for matrix multiplication using processing-in-memory (PIM) are disclosed. In an example, an apparatus for matrix multiplication includes an array of PIM blocks in rows and columns, a controller, and an accumulator. Each PIM block is configured into a computing mode or a memory mode. The controller is configured to divide the array of PIM blocks into a first set of PIM blocks each configured into the memory mode and a second set of PIM blocks each configured into the computing mode. The first set of PIM blocks are configured to store a first matrix, and the second set of PIM blocks are configured to store a second matrix and calculate partial sums of a third matrix based on the first and second matrices. The accumulator is configured to output the third matrix based on the partial sums of the third matrix. 1. An apparatus for matrix multiplication , comprising:an array of processing-in-memory (PIM) blocks in rows and columns, each of which is configured into a computing mode or a memory mode;a controller configured to divide the array of PIM blocks into a first set of PIM blocks each configured into the memory mode and a second set of PIM blocks each configured into the computing mode, wherein the first set of PIM blocks are configured to store a first matrix, and the second set of PIM blocks are configured to store a second matrix and calculate partial sums of a third matrix based on the first and second matrices; andan accumulator configured to output the third matrix based on the partial sums of the third matrix.2. The apparatus of claim 1 , wherein the first set of PIM blocks consists of a row of the array of PIM blocks claim 1 , and the second set of PIM blocks consists of a remainder of the array of PIM blocks.3. 
The apparatus of claim 2 , wherein dimensions of the first set of PIM blocks are smaller than dimensions of the first matrix claim 2 , and the controller is configured to map the first matrix to the first set of PIM ...
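The partial-sum accumulation above has a simple software analogue: each "computing mode" block contributes a partial sum of C = A × B over a slice of the shared dimension, and the accumulator adds the slices. The block layout and pure-Python form are illustrative assumptions.

```python
def matmul_with_partial_sums(A, B, num_blocks):
    n, k = len(A), len(A[0])
    m = len(B[0])
    step = k // num_blocks
    C = [[0] * m for _ in range(n)]          # accumulator output
    for b in range(num_blocks):              # one "computing" PIM block each
        lo = b * step
        hi = (b + 1) * step if b < num_blocks - 1 else k
        for i in range(n):
            for j in range(m):
                # Partial sum over this block's slice of the shared dimension.
                C[i][j] += sum(A[i][p] * B[p][j] for p in range(lo, hi))
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_with_partial_sums(A, B, num_blocks=2))  # -> [[19, 22], [43, 50]]
```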

Publication date: 02-01-2020

METHOD AND SYSTEM FOR IMPLEMENTING PARALLEL DATABASE QUERIES

Number: US20200004861A1
Assignee: ORACLE INTERNATIONAL CORPORATION

Described is an improved approach to implement parallel queries where session states are saved for parallelization resources. When work needs to be performed in the parallel query system for a given session, a search can be performed to identify a resource (from among the pool of available resources) that had previously been used by that session, and which had saved a session state object for that previous connection to the session. Instead of incurring the entirety of setup costs each time workload is assigned to a resource, the saved session state can be used to re-set the context for the resource to the configuration requirements for that session. 1. A method for implementing parallelization of queries in a database system , comprising:maintaining saved session states for one or more parallelization resources in a database system;maintaining a lookup structure that maps sessions to the parallelization resources;receiving a parallelization work request to assign work to one or more of the parallelization resources;accessing the lookup structure to identify a matching parallelization resource that has a matching saved session state corresponding to the work request;assigning the matching parallelization resource to process the work request; andconfiguring the matching parallelization resource using the matching saved session state instead of initializing a new session state for the work request.2. The method of claim 1 , wherein the lookup structure comprises a mapping of session IDs to slave IDs.3. The method of claim 2 , further comprising a mapping of slave IDs to database sub-portion IDs.4. 
The method of claim 1 , further comprising:determining whether the work request corresponds to a full match of the saved session states, a partial match of the saved session states, or zero match of the saved session states;initializing a partial set of the new session state upon identification of the partial match; andinitializing a full set of the new session state upon ...
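The lookup described above can be sketched with a map from session ID to the parallel worker that last served it; if that worker is free, its saved session state is reused instead of paying the full setup cost again. The class and its names are assumptions, not the patented implementation.

```python
class WorkerPool:
    def __init__(self, worker_ids):
        self.free = set(worker_ids)
        self.session_to_worker = {}   # lookup structure: session ID -> worker ID

    def assign(self, session_id):
        worker = self.session_to_worker.get(session_id)
        if worker is not None and worker in self.free:
            reused = True             # full match: reuse the saved session state
        else:
            worker = next(iter(self.free))  # zero match: initialize a new state
            reused = False
        self.free.discard(worker)
        self.session_to_worker[session_id] = worker  # save state for next time
        return worker, reused

    def release(self, worker):
        self.free.add(worker)

pool = WorkerPool(["w1", "w2"])
w1, reused1 = pool.assign("sess-A")   # first assignment: new session state
pool.release(w1)
w2, reused2 = pool.assign("sess-A")   # same session again: state is reused
print(w1 == w2, reused1, reused2)     # -> True False True
```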

Publication date: 13-01-2022

NEURAL NETWORK ACCELERATOR AND NEURAL NETWORK ACCELERATION METHOD BASED ON STRUCTURED PRUNING AND LOW-BIT QUANTIZATION

Number: US20220012593A1
Assignee:

The present invention discloses a neural network accelerator and a neural network acceleration method based on structured pruning and low-bit quantization. The neural network accelerator includes a master controller, an activations selection unit, an extensible calculation array, a multifunctional processing element, a DMA, a DRAM and a buffer. The present invention makes full use of the data reusability during inference operation of a neural network, reduces the power consumption of selecting input activation and weights of effective calculations, and relieves the high transmission bandwidth pressure between the activations selection unit and the extensible calculation array through structured pruning and data sharing on the extensible calculation array, reduces the number of weight parameters and the storage bit width by combining the low-bit quantization technology, and further improves the throughput rate and energy efficiency of the convolutional neural network accelerator. 1. A neural network accelerator based on structured pruning and low-bit quantization , comprising:a master controller, an activations selection unit, an extensible calculation array, a multifunctional processing element, a DMA (Direct Memory Access), a DRAM (Dynamic Random Access Memory) and a buffer, wherein the master controller is respectively connected with the activations selection unit, the extensible calculation array and the DMA; the DMA is respectively connected with the buffer and the DRAM; the buffer is respectively connected with the multifunctional processing element and the activations selection unit; and the extensible calculation array is respectively connected with the activations selection unit and the buffer;the master controller is used for parsing an instruction set to generate a first storage address of input activation and weights, a storage address of output activation and control signals;the buffer is used for storing the input activation, the output activation and 
...
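The low-bit quantization mentioned above can be sketched with uniform symmetric quantization of float weights to signed n-bit integers, which is what cuts both the number of stored bits per parameter and the datapath width; the max-based scale choice is an assumption, not necessarily the patent's scheme.

```python
def quantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1           # e.g. 7 for 4-bit signed codes
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.9, -0.45, 0.225, 0.0]
q, scale = quantize(w, bits=4)
print(q)                     # integer codes in [-7, 7]
print(dequantize(q, scale))  # approximate reconstruction of the weights
```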

Publication date: 03-01-2019

Method to Optimize Core Count for Concurrent Single and Multi-Thread Application Performance

Number: US20190004861A1
Assignee: Dell Products LP

A system, method, and computer-readable medium are disclosed for performing a core optimization operation, comprising: enabling all of a plurality of processor cores of a processor; selectively turning off at least one of the plurality of processor cores, the selectively turning off the at least one of the plurality of processor cores being based upon an application to be executed by the processor, the selectively turning off being performed dynamically during runtime of the processor; and, controlling process thread distribution to the plurality of processor cores via an operating system executing on the processor, the process thread distribution not distributing threads to the turned off at least one of the plurality of processor cores.
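The scheduling rule above can be illustrated with a small sketch: build the set of still-enabled cores after selectively disabling some, then place threads only on those cores. The round-robin policy and core IDs are illustrative assumptions, not the patented mechanism.

```python
def enabled_cores(total_cores, disabled):
    # Cores remaining after the dynamic turn-off step.
    return [c for c in range(total_cores) if c not in disabled]

def distribute(threads, cores):
    # Round-robin thread placement that never lands on a disabled core.
    return {t: cores[i % len(cores)] for i, t in enumerate(threads)}

cores = enabled_cores(8, disabled={2, 3})
placement = distribute(["t0", "t1", "t2"], cores)
print(placement)  # -> {'t0': 0, 't1': 1, 't2': 4}
```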

Publication date: 03-01-2019

METHODS AND APPARATUS FOR DEPLOYING A DISTRIBUTED SYSTEM USING OPERATING SYSTEM VIRTUALIZATION

Number: US20190004865A1
Assignee:

Methods and apparatus are disclosed for deploying a distributed system using operating system or container virtualization. An example apparatus includes a management container including a configuration manager and a container manager. The example configuration manager is to receive an instruction for a desired deployment state and is to apply a first change to a first current deployment state of the management container based on the desired deployment state. The example container manager is to apply a second change to a second current deployment state of a deployed container based on the desired deployment state. The container manager is to return information indicative of the desired deployment state to the configuration manager when the second change from the second current deployment state to the desired deployment state is achieved. 1. An apparatus comprising: a processor and a memory to implement at least a management container including a configuration manager and a container manager, the configuration manager to receive an instruction for a desired deployment state and to apply a first change to a first current deployment state of the management container based on the desired deployment state, and the container manager to apply a second change to a second current deployment state of a deployed container based on the desired deployment state, the container manager to return information indicative of the desired deployment state to the configuration manager when the second change from the second current deployment state to the desired deployment state is achieved.
The apparatus of claim 3 , wherein the container manager is to report runtime information ...
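The desired-state reconciliation described above can be sketched as a diff-and-apply step: compare a container's current deployment state with the desired state and apply only the drifted fields. The dictionary shape and field names are assumptions.

```python
def reconcile(current: dict, desired: dict):
    # Return the new state and the changes that had to be applied.
    changes = {k: v for k, v in desired.items() if current.get(k) != v}
    new_state = {**current, **changes}
    return new_state, changes

current = {"image": "svc:1.0", "replicas": 2}
desired = {"image": "svc:1.1", "replicas": 2}
new_state, changes = reconcile(current, desired)
print(changes)               # -> {'image': 'svc:1.1'}  (only the drifted field)
print(new_state == desired)  # -> True: the desired deployment state is achieved
```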

Publication date: 03-01-2019

HIERARCHICAL PROCESS GROUP MANAGEMENT

Number: US20190004870A1
Assignee:

Management of hierarchical process groups is provided. Aspects include creating a group identifier having an associated set of resource limits for shared resources of a processing system. A process is associated with the group identifier. A hierarchical process group is created including the process as a parent process and at least one child process spawned from the parent process, where the at least one child process inherits the group identifier. A container is created to store resource usage of the hierarchical process group and the set of resource limits of the group identifier. The set of resource limits associated with the hierarchical process group is used to collectively monitor resource usage of processes. A resource allocation adjustment action is performed in the processing system based on determining that an existing process exceeds a process resource limit or the hierarchical process group exceeds at least one of the set of resource limits. 1. A computer-implemented method for managing hierarchical process groups, the method comprising: creating, by a hierarchical process group service of a processing system, a group identifier having an associated set of resource limits for shared resources of a processing system; associating, by the hierarchical process group service, a process with the group identifier; creating, by the hierarchical process group service, a hierarchical process group comprising the process as a parent process and at least one child process spawned from the parent process, wherein the at least one child process inherits the group identifier; creating, by the hierarchical process group service, a container to store resource usage of the hierarchical process group and the set of resource limits of the group identifier; using the set of resource limits associated with the hierarchical process group to collectively monitor resource usage of a plurality of processes in the hierarchical process group; monitoring, by the hierarchical process group ...
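The collective accounting above can be modeled with a toy group: children spawned from a parent inherit its group ID, and usage is checked against the group's limits as a whole rather than per process. The limit names and class shape are assumptions.

```python
class ProcessGroup:
    def __init__(self, group_id, limits):
        self.group_id = group_id
        self.limits = limits          # e.g. {"memory_mb": 512}
        self.members = {}             # pid -> per-process usage

    def spawn(self, pid, usage):
        self.members[pid] = usage     # the child inherits self.group_id

    def usage(self, resource):
        # Collective usage across every process in the hierarchical group.
        return sum(m.get(resource, 0) for m in self.members.values())

    def over_limit(self):
        return [r for r, lim in self.limits.items() if self.usage(r) > lim]

group = ProcessGroup("grp-1", {"memory_mb": 512})
group.spawn(100, {"memory_mb": 300})   # parent process
group.spawn(101, {"memory_mb": 250})   # child process, inherits grp-1
print(group.over_limit())  # -> ['memory_mb']  (550 MB > 512 MB collectively)
```

Note that neither process alone exceeds the limit; only the group-level view triggers the resource allocation adjustment.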

Publication date: 03-01-2019

Multi-tenant data service in distributed file systems for big data analysis

Number: US20190005066A1
Assignee: International Business Machines Corp

Configuration of a multi-tenant distributed file system on a node. Various tenants and tenant clusters are correlated to a distributed file systems, and the distributed file system communicates with various tenants through a connector service. The entire distributed file system exists on a physical node.

Publication date: 07-01-2016

RESOURCE SERVER PROVIDING A RAPIDLY CHANGING RESOURCE

Number: US20160006673A1
Assignee:

A computer-readable medium is provided that causes a computing device to serve data resources. A nozzle is instantiated for a resource based on a media type associated with both the nozzle and the resource and starts a subscriber thread and a rendering thread. The subscriber thread receives a block of streamed data from a publishing device, stores the block in a queue, and receives a request to drain the queue. The block includes a unique identifier of an event associated with the media type. The rendering thread reads the block from the queue, renders the block, and stores the rendered block in a pre-allocated block of memory based on the unique identifier. A reference to the pre-allocated block of memory is stored in a tree map based on the unique identifier. The instantiated nozzle sends the rendered block to a requesting event client system. 1. A non-transitory computer-readable medium having stored thereon computer-readable instructions that when executed by a computing device cause the computing device to:instantiate a nozzle for a resource based on a media type associated with both the nozzle and the resource;start, by the instantiated nozzle, a subscriber thread and a rendering thread;receive, by the started subscriber thread, a block of streamed data from a publishing device, wherein the block includes a unique identifier of an event associated with the media type;store, by the started subscriber thread, the received block in a queue;receive, by the started subscriber thread, a request to drain the queue;read, by the started rendering thread, the received block from the queue;render, by the started rendering thread, the read, received block;store, by the started rendering thread, the rendered block in a pre-allocated block of memory based on the unique identifier, wherein a reference to the pre-allocated block of memory is stored in a tree map based on the unique identifier;receive, by the instantiated nozzle, a request for an update for the resource based 
...
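The subscriber/renderer pipeline this abstract describes can be sketched as a two-thread producer/consumer in Python; the `Nozzle` class name is taken from the abstract, but the uppercase "rendering" step and the plain dict standing in for the tree map are illustrative assumptions, not the patent's implementation:

```python
import queue
import threading

class Nozzle:
    def __init__(self):
        self.q = queue.Queue()        # blocks queued by the subscriber
        self.rendered = {}            # event id -> rendered block (tree-map stand-in)
        self._stop = object()         # sentinel ending the rendering thread

    def _subscribe(self, blocks):
        # Subscriber thread: receive streamed blocks and enqueue them.
        for event_id, payload in blocks:
            self.q.put((event_id, payload))
        self.q.put(self._stop)

    def _render_loop(self):
        # Rendering thread: drain the queue, render each block, and store
        # the result in memory keyed by the block's unique event identifier.
        while True:
            item = self.q.get()
            if item is self._stop:
                break
            event_id, payload = item
            self.rendered[event_id] = payload.upper()  # toy "rendering"

    def run(self, blocks):
        sub = threading.Thread(target=self._subscribe, args=(blocks,))
        ren = threading.Thread(target=self._render_loop)
        sub.start(); ren.start()
        sub.join(); ren.join()

    def latest(self, event_id):
        # Serve the most recently rendered block for a requesting event client.
        return self.rendered.get(event_id)

nozzle = Nozzle()
nozzle.run([("evt-1", "frame a"), ("evt-2", "frame b"), ("evt-1", "frame c")])
```

Because the queue is FIFO and a single renderer drains it, a later block for the same event id overwrites the earlier rendered block, which matches the "update for the resource" behavior.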

Publication date: 03-01-2019

Data plane interface network quality of service in multi-tenant data centers

Number: US20190007280A1
Assignee: Intel Corp

Methods, apparatus, and systems for data plane interface network Quality of Service (QoS) in multi-tenant data centers. Data plane operations including packet generation and encapsulation are performed in software running in virtual machines (VMs) or containers hosted by a compute platform. Control plane operations, including QoS traffic classification, are implemented in hardware by a network controller. Work submission and work completion queues are implemented in software for each VM or container. Work elements (WEs) defining work to be completed by the network controller are generated by software and processed by the network controller to classify packets associated with the WEs into QoS traffic classes, wherein packets belonging to a given traffic flow are classified to the same QoS traffic class. The network controller is also configured to perform scheduling of packet egress as a function of the packet's QoS traffic classification, to transmit packets that are scheduled for egress onto the network, and to DMA indicia to the work completion queues to indicate the work associated with WEs has been completed.
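A minimal software model of the flow-affine classification and class-based egress scheduling described here; the sum-based hash, the class count, and the queue layout are assumptions for illustration, not the controller's actual hardware:

```python
from collections import deque
import heapq

NUM_CLASSES = 4

def classify(flow):
    # Stand-in classifier: a deterministic hash of the flow tuple, so
    # every packet of a given flow lands in the same QoS traffic class.
    return sum(flow) % NUM_CLASSES

submission_q = deque()    # software-side work submission queue
completion_q = deque()    # work-completion indicia "DMA'd" back to software

def submit(we_id, flow, payload):
    # Software generates a work element (WE) describing a packet.
    submission_q.append((we_id, flow, payload))

def controller_pass():
    # "Hardware" pass: classify each WE, schedule egress strictly by
    # traffic class (lower class number = higher priority, ties kept in
    # submission order), then signal completion for each WE.
    pending, order = [], 0
    while submission_q:
        we_id, flow, payload = submission_q.popleft()
        heapq.heappush(pending, (classify(flow), order, we_id, payload))
        order += 1
    egress = []
    while pending:
        tc, _, we_id, payload = heapq.heappop(pending)
        egress.append((tc, payload))
        completion_q.append(we_id)   # completion indicium for this WE
    return egress

submit(1, (10, 80), "pkt-a")
submit(2, (7, 22), "pkt-b")
submit(3, (10, 80), "pkt-c")
wire = controller_pass()
```

Note that `pkt-a` and `pkt-c` share a flow and therefore a class, while egress order follows class priority rather than submission order.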

Publication date: 20-01-2022

METHOD OF STORING ELECTRONIC DATA, RESOURCE RESERVATION SYSTEM, AND TERMINAL APPARATUS

Number: US20220019472A1
Author: NOROTA Ken
Assignee: RICOH COMPANY, LTD.

A method of storing electronic data performed by a terminal apparatus communicable with an information processing terminal is provided. The method includes: receiving, during a use of a first resource, a notification indicating that reservation of a second resource selected by a user is completed, from the information processing terminal; and in response to receiving the notification indicating that the reservation of the second resource is completed, starting a storing process of storing electronic data output by an electronic device during the use of the first resource. 1. A method of storing electronic data performed by a terminal apparatus communicable with an information processing terminal , the method comprising:receiving, during a use of a first resource, a notification indicating that reservation of a second resource selected by a user is completed, from the information processing terminal; andin response to receiving the notification indicating that the reservation of the second resource is completed, starting a storing process of storing electronic data output by an electronic device during the use of the first resource.2. The method of claim 1 , further comprisingin response to receiving, from the information processing terminal provided at the second resource, a notification indicating that a use of the second resource is started, transmitting the electronic data stored in association with reservation information of the first resource to the information processing terminal provided at the second resource.3. The method of claim 2 , further comprising:acquiring the reservation information from an information processing apparatus in which the reservation information is registered via a network; andstoring in a memory of the terminal apparatus the electronic data in association with the acquired reservation information including an event name.4. The method of claim 3 , further comprisingdisplaying, together with the event name included in the acquired ...
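The trigger described in this abstract, namely starting to record during use of the first resource once the second resource's reservation is confirmed, might look like this in Python (the notification fields and the `recorder` state dict are hypothetical):

```python
def on_notification(note, recorder):
    # Terminal-side trigger: once the next resource's reservation is
    # confirmed, begin storing what the electronic device outputs
    # during the current resource's use.
    if note.get("type") == "reservation_completed":
        recorder["recording"] = True
        recorder["reservation"] = note["reservation"]  # e.g. event name, room

recorder = {"recording": False}
on_notification(
    {"type": "reservation_completed",
     "reservation": {"event": "weekly sync", "resource": "room-B"}},
    recorder,
)
```

Keeping the reservation info alongside the recording flag mirrors claim 3, which stores the data in association with the acquired reservation information.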

Publication date: 20-01-2022

METHODS AND APPARATUS TO MANAGE WORKLOAD DOMAINS IN VIRTUAL SERVER RACKS

Number: US20220019474A1
Assignee:

Methods and apparatus to manage workload domains in virtual server racks are disclosed. An example apparatus includes processor circuitry to, in response to detecting that a number of available physical racks satisfies a threshold number of physical racks, apply a first resource allocation technique by reserving requested resources by exhausting first available resources of a first physical rack before using second available resources of a second physical rack; in response to detecting that the number of available physical racks does not satisfy the threshold number of physical racks, apply a second resource allocation technique by reserving the requested resources using a portion of the first available resources without exhausting the first available resources and using a portion of the second available resources without exhausting the second available resources; and execute one or more workload domains associated with a number of requested resources. 1. A non-transitory computer readable storage medium comprising instructions which , when executed , cause one or more processors to at least:in response to detecting that a number of available physical racks satisfies a threshold number of physical racks, apply a first resource allocation technique by reserving requested resources by exhausting first available resources of a first physical rack before using second available resources of a second physical rack;in response to detecting that the number of available physical racks does not satisfy the threshold number of physical racks, apply a second resource allocation technique by reserving the requested resources using a portion of the first available resources of the first physical rack without exhausting the first available resources and using a portion of the second available resources of the second physical rack without exhausting the second available resources; andexecute one or more workload domains associated with a number of requested resources in accordance 
...
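The two allocation policies can be sketched as one function switching on the rack-count threshold; the "leave one unit of headroom" rule for the spread policy is an assumption (the claim only requires that no rack be exhausted), and the request is assumed to fit in the available capacity:

```python
def reserve(racks, requested, threshold):
    # racks: free-capacity count per physical rack; returns per-rack take.
    take = [0] * len(racks)
    if len(racks) >= threshold:
        # Policy 1: exhaust each rack's resources before using the next.
        remaining = requested
        for i, free in enumerate(racks):
            take[i] = min(free, remaining)
            remaining -= take[i]
            if remaining == 0:
                break
    else:
        # Policy 2: spread the request round-robin so that no rack is
        # ever exhausted (assumes the request fits within total headroom).
        remaining, i = requested, 0
        while remaining:
            if take[i] < racks[i] - 1:   # keep headroom on every rack
                take[i] += 1
                remaining -= 1
            i = (i + 1) % len(racks)
    return take
```

With three racks and a threshold of three, a 12-unit request drains the first rack before touching the second; with only two racks, the same request is spread evenly and both racks keep spare capacity.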

Publication date: 20-01-2022

AUXILIARY RESOURCE EXCHANGE PLATFORM USING A REAL TIME EXCHANGE NETWORK

Number: US20220019483A1
Assignee: Bank of America Corporation

Real-time interaction processing may allow for the exchange of resources and/or supplemental resources. The flexibility of the real-time resource exchange network allows supplemental resources to be transferred between resources pools of a single entity or between resource pools of different entities, either within a single organization or located at different organizations. The type of supplemental resources and number of supplemental resources may be converted based on the types of resource pools from which and to which the supplemental resources are being transferred. 1. A system for resource exchanges over a real-time exchange network , the system comprising:one or more memory devices with computer-readable program code stored thereon; and receive a request to enter into an interaction between a first resource pool and a second resource pool;', 'receive an indication from a first entity to utilize supplemental resources for the interaction;', 'transfer the supplemental resources from the first resource pool to the second resource pool; and', 'settle the transfer of the supplemental resources between the first resource pool and the second resource pool;', 'wherein the supplemental resources are transferred through the real-time exchange network directly between the first resource pool and the second resource pool., 'one or more processing devices operatively coupled to the one or more memory devices, wherein the one or more processing devices are configured to execute the computer-readable program code to2. The system of claim 1 , wherein the first resource pool and the second resource pool are both resource pools of the first entity.3. The system of claim 1 , wherein the request to enter into the interaction is made by the first entity through a first entity computer system.4. 
The system of claim 1 , wherein the first resource pool is a first entity resource pool of the first entity and the second resource pool is a second entity resource pool of a second entity ...
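One possible reading of the type-dependent conversion of supplemental resources, as a sketch only: the rate table, pool fields, and unit types are invented for illustration and are not from the disclosure:

```python
# Hypothetical conversion table keyed by (source type, destination type):
# the number of supplemental resources is converted when pool types differ.
RATES = {("points", "points"): 1.0, ("points", "cash"): 0.01}

def transfer(src, dst, amount):
    # Debit the source pool, convert by the pool-type pair, credit the
    # destination pool; settled directly, with no intermediary pool.
    if src["balance"] < amount:
        raise ValueError("insufficient supplemental resources")
    rate = RATES[(src["type"], dst["type"])]
    src["balance"] -= amount
    dst["balance"] += amount * rate

a = {"type": "points", "balance": 500}
b = {"type": "cash", "balance": 20.0}
transfer(a, b, 300)
```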

Publication date: 02-01-2020

COMPUTERIZED METHODS AND SYSTEMS FOR MAINTAINING AND MODIFYING CLOUD COMPUTER SERVICES

Number: US20200007418A1
Assignee:

Systems, methods, and other embodiments associated with modifying a computer-implemented service are described. In one embodiment, a method includes constructing pre-provisioned instances of a service within a first pool and constructing pre-orchestrated instances of the service within a second pool. In response to receiving a request to modify the service, a POM instance of the service is created with a modified version of executable code and assigned to a third pool. The pre-orchestrated instances within the second pool are then replaced using the POM instances from the third pool. 1. A non-transitory computer-readable medium storing computer-executable instructions that when executed by a processor of a computing device causes the processor to:construct pre-provisioned instances of a service within a first pool of a zone of computing resources, wherein the service executes executable code using the computing resources, and wherein a pre-provisioned instance comprises a computing environment of computing resources configured for subsequent installation and execution of the executable code of the service;construct pre-orchestrated instances of the service within a second pool of the zone, wherein a pre-orchestrated instance comprises a pre-provisioned instance within which the executable code of the service is installed in a non-executing state; and (i) create pre-orchestrated maintenance (POM) instances of the service by applying a modified version of the executable code for the service to one or more pre-provisioned instances from the first pool;', '(ii) assign the POM instances to a third pool and remove the one or more pre-provisioned instances from the first pool; and', '(iii) replace the pre-orchestrated instances within the second pool using the POM instances from the third pool making the POM instances available for provisioning., 'in response to receiving a request to modify the service2. 
The non-transitory computer-readable medium of claim 1 , wherein the ...
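The three-pool maintenance flow reads as a straightforward pool shuffle: take pre-provisioned instances from the first pool, install the modified code to create POM instances in a third pool, then replace the second pool's contents. A sketch, with dicts standing in for instances:

```python
def modify_service(pre_provisioned, pre_orchestrated, new_code, count):
    # Create POM instances by applying the modified executable code to
    # pre-provisioned instances removed from the first pool...
    pom_pool = []
    for _ in range(count):
        inst = pre_provisioned.pop()     # environment, no code installed yet
        inst["code"] = new_code          # apply the modified version
        pom_pool.append(inst)
    # ...then replace the second pool's pre-orchestrated instances with
    # the POM instances, making them available for provisioning.
    pre_orchestrated.clear()
    pre_orchestrated.extend(pom_pool)
    return pom_pool

pool1 = [{"id": 0}, {"id": 1}, {"id": 2}]          # pre-provisioned
pool2 = [{"id": "old", "code": "v1"}]              # pre-orchestrated
modify_service(pool1, pool2, "v2", 2)
```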

Publication date: 20-01-2022

DEEP NEURAL NETWORK TRAINING ACCELERATOR AND OPERATION METHOD THEREOF

Number: US20220019897A1

A deep neural network training accelerator includes an operational unit sequentially performing first and second operations on a plurality of input data of a sub-set according to a mini-batch gradient descent, a determination unit determining each of the input data as one of skip data and training data based on a confidence matrix obtained by the first operation, and a control unit controlling the operational unit to skip the second operation with respect to the skip data. 1. A deep neural network training accelerator comprising:an operational unit sequentially performing first and second operations on a plurality of input data of a sub-set according to a mini-batch gradient descent;a determination unit determining each of the input data as one of skip data and training data based on a confidence matrix obtained by the first operation; anda control unit controlling the operational unit to skip the second operation with respect to the skip data.2. The deep neural network training accelerator of claim 1 , wherein the operational unit performs the second operation with respect to the training data after a predetermined time lapses from a time point at which the first operation is performed.3. The deep neural network training accelerator of claim 1 , wherein the first operation is a first training stage of the mini-batch gradient descent claim 1 , which uses a forward propagation algorithm.4. The deep neural network training accelerator of claim 1 , wherein the second operation is a second training stage of the mini-batch gradient descent claim 1 , which sequentially uses a backward propagation algorithm and a weight update algorithm.5. The deep neural network training accelerator of claim 1 , wherein the determination unit is implemented as a comparator that compares a largest element of the confidence matrix with a predetermined threshold value.6. 
The deep neural network training accelerator of claim 5 , wherein the comparator outputs a low signal corresponding to the ...
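The skip rule, namely run the forward pass for every sample but drop the backward/weight-update pass whenever the largest confidence element already clears the threshold, can be sketched as follows; the `forward`/`backward` callables and the two-class confidence vector are placeholders for a real model:

```python
def train_minibatch(batch, forward, backward, threshold):
    trained = skipped = 0
    for sample in batch:
        confidence = forward(sample)        # first operation: forward pass
        if max(confidence) > threshold:     # comparator on the largest element
            skipped += 1                    # determined to be skip data
            continue
        backward(sample)                    # second operation: backprop + update
        trained += 1
    return trained, skipped

# Placeholder model: the "confidence matrix" is just a two-class vector.
result = train_minibatch(
    batch=[0.9, 0.5, 0.95],
    forward=lambda x: [x, 1 - x],
    backward=lambda x: None,
    threshold=0.8,
)
```

High-confidence samples (0.9 and 0.95 here) contribute little gradient, so skipping their second training stage saves compute without touching the forward pass.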

Publication date: 27-01-2022

ASSIGNING COMPUTING RESOURCES TO EXECUTION OF EVALUATORS FOR IMAGE ANIMATION

Number: US20220027172A1
Assignee: Weta Digital Limited

An example method facilitates adjusting or enhancing performance of a process of a graphics program or animation software package, and includes providing a first User Interface (UI) control for allowing user assignment of or selection of one or more processor types, e.g., Graphics Processing Unit (GPU) or Central Processing Unit (CPU), and/or associated memory types, e.g., GPU memory and/or CPU memory, to one or more computing resources, such as variables and/or associated functions or evaluators. A drop-down menu or other control may be provided in a first UI to allow for user specification of or assignment of one or more computing resources, e.g., CPU or GPU processors and/or memory, to one or more variables, data structures, associated functions or other executable code. In a specific implementation, one UI control facilitates user specification of one or more evaluators of a plugin, wherein the one or more evaluators are usable by a host application of the plugin. A second UI control enables user specification of whether or not the one or more evaluators should use data on a CPU or GPU. 1. A method for executing a function to animate an animation control rig in a computer graphics application, the method comprising:accepting a first signal from a user input device to select a data structure displayed in a Graphical User Interface (GUI);accepting a second signal from the user input device to assign the data structure to one or more computing resources;indicating in the GUI that the data structure has been assigned to the one or more computing resources; andusing the one or more computing resources to implement the function's accessing of the data structure, wherein the function pertains to a process used by a host application, such as the computer graphics application.2. The method of claim 1, further including:accepting a third signal from a user input device to compile the function, resulting in compiled code; ...
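A toy model of the two UI controls' effect: an evaluator-to-device assignment table that the host application consults before running the evaluator. All names, the default-to-CPU rule, and the stand-in evaluator are hypothetical:

```python
assignments = {}   # evaluator / data-structure name -> "cpu" or "gpu"

def assign(name, device):
    # What the second UI control records: should this evaluator's data
    # live on the CPU or the GPU?
    if device not in ("cpu", "gpu"):
        raise ValueError("unknown device")
    assignments[name] = device

def evaluate(name, data):
    # The host consults the assignment before running the evaluator;
    # a real host would copy `data` into GPU memory when "gpu" is chosen.
    device = assignments.get(name, "cpu")    # unassigned work defaults to CPU
    return device, sum(data)                 # stand-in evaluator

assign("deform_rig", "gpu")                  # hypothetical evaluator name
```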

Publication date: 27-01-2022

CONCURRENT MEMORY MANAGEMENT IN A COMPUTING SYSTEM

Number: US20220027264A1
Assignee:

An example method of memory management in a computing system having a plurality of processors includes: receiving a first memory allocation request at a memory manager from a process executing on a processor of the plurality of processors in the computing system; allocating a local memory pool for the processor from a global memory pool for the plurality of processors in response to the first memory allocation request; and allocating memory from the local memory pool for the processor in response to the first memory allocation request without locking the local memory pool. 1. A method of memory management in a computing system having a plurality of processors , the method comprising:receiving a first memory allocation request at a memory manager from a process executing on a processor of the plurality of processors in the computing system;allocating a local memory pool for the processor from a global memory pool for the plurality of processors in response to the first memory allocation request; andallocating memory from the local memory pool for the processor in response to the first memory allocation request without locking the local memory pool.2. The method of claim 1 , wherein the step of allocating the local memory pool comprises:locking the global memory pool;allocating an amount of memory from the global memory pool to the local memory pool; andreducing the global memory pool by the amount.3. The method of claim 1 , wherein the step of allocating the local memory pool comprises:determining insufficient memory in the global memory pool to satisfy allocation of the local memory pool;adding a request for allocation of the local memory pool to a global wait queue; andallocating an amount of memory from the global memory pool to the local memory pool in response to the request in the global wait queue and in response to sufficient memory becoming available in the global memory pool.4. 
The method of claim 1 , further comprising:receiving a second memory allocation ...
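The two-level scheme, in which the global pool is locked only to refill a per-processor local pool and every subsequent allocation is served locally without locking, in sketch form (unit counts and the chunk size are illustrative):

```python
import threading

GLOBAL_POOL = {"free": 1024}     # shared by all processors; lock required
GLOBAL_LOCK = threading.Lock()
CHUNK = 256                      # refill granularity for a local pool

local_pools = {}                 # cpu id -> free units; each entry is touched
                                 # only by its own cpu, so no lock is needed

def alloc(cpu, size):
    if local_pools.get(cpu, 0) < size:
        with GLOBAL_LOCK:                          # global pool is locked
            grab = min(CHUNK, GLOBAL_POOL["free"])
            GLOBAL_POOL["free"] -= grab
        local_pools[cpu] = local_pools.get(cpu, 0) + grab
    if local_pools[cpu] < size:
        raise MemoryError("request would join the global wait queue")
    local_pools[cpu] -= size                       # lock-free local allocation
    return size

alloc(0, 100)   # refills cpu 0's local pool under the lock, then serves 100
alloc(0, 100)   # served entirely from the local pool, no locking at all
```

The second allocation never touches the global lock, which is the point of the design: lock contention is paid once per chunk, not once per allocation.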

Publication date: 12-01-2017

Adaptive Resource Management in a Pipelined Arbiter

Number: US20170010986A1
Author: Coddington John Deane
Assignee:

A resource arbiter in a system with multiple shared resources and multiple requestors may implement an adaptive resource management approach that takes advantage of time-varying requirements for granting access to at least some of the shared resources. For example, due to pipelining, signal timing issues, or a lack of information, more resources than are required to perform a task may need to be available for allocation to a requestor before its request for the needed resources is granted. The requestor may request only the resources it needs, relying on the arbiter to determine whether additional resources are required in order to grant the request. The arbiter may park a high priority requestor on idle resources, thus allowing requests for those resources by the high priority requestor to be granted on the first clock cycle of a request. Other requests may not be granted until at least a second clock cycle. 1. A system , comprising:one or more shared resources; anda resource arbiter; receive a request from one of a plurality of requestors for a given portion of the one or more shared resources;', 'determine whether or not an amount of the one or more shared resources whose availability is required in order to grant the request during a given one of multiple clock cycles is available, wherein the amount of the one or more shared resources whose availability is required in order to grant the request during the given one of the multiple clock cycles is different than an amount of the one or more shared resources whose availability is required in order to grant the request during another one of the multiple clock cycles; and', 'grant the request during the given clock cycle in response to determining that the amount of the one or more shared resources whose availability is required in order to grant the request during the given one of the multiple clock cycles is available during the given clock cycle., 'wherein the resource arbiter is configured to2. 
The system of ...
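The time-varying grant rule might be modeled as follows; the two-resource pipelining margin and the return convention (earliest grantable cycle, 0 for "cannot grant yet") are assumptions, not the patent's numbers:

```python
def cycles_to_grant(requested, free, parked, margin=2):
    # On the first cycle of a request the arbiter needs `requested + margin`
    # resources free (pipelining / signal-timing slack), unless the
    # high-priority requestor is already parked on the idle resources.
    if parked and free >= requested:
        return 1            # parked requestor: granted on the first cycle
    if free >= requested + margin:
        return 1            # enough slack to grant immediately
    if free >= requested:
        return 2            # grantable, but not until a later cycle
    return 0                # cannot grant yet
```

The requestor only ever asks for what it needs; the margin, and the parking shortcut that waives it, live entirely inside the arbiter.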

Publication date: 08-01-2015

Performance Interference Model for Managing Consolidated Workloads In Qos-Aware Clouds

Number: US20150012634A1
Author: Qian Zhu, Teresa TUNG
Assignee: Accenture Global Services Ltd

The workload profiler and performance interference (WPPI) system uses a test suite of recognized workloads, a resource estimation profiler and influence matrix to characterize un-profiled workloads, and affiliation rules to identify optimal and sub-optimal workload assignments to achieve consumer Quality of Service (QoS) guarantees and/or provider revenue goals. The WPPI system uses a performance interference model to forecast the performance impact to workloads of various consolidation schemes usable to achieve cloud provider and/or cloud consumer goals, and uses the test suite of recognized workloads, the resource estimation profiler and influence matrix, affiliation rules, and performance interference model to perform modeling to determine the initial assignment selections and consolidation strategy to use to deploy the workloads. The WPPI system uses an online consolidation algorithm, the offline models, and online monitoring to determine virtual machine to physical host assignments responsive to real-time conditions to meet cloud provider and/or cloud consumer goals.
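One plausible way an influence matrix feeds an interference forecast, shown as a sketch; the multiplicative model and the factor values are illustrative assumptions, not the WPPI system's actual model:

```python
def predicted_runtime(standalone, co_resident, influence):
    # Scale the workload's standalone runtime by one interference factor
    # per co-resident workload (1.0 means no interference); a consolidation
    # scheme is acceptable if the result still meets the QoS guarantee.
    t = standalone
    for other in co_resident:
        t *= influence[other]
    return t

influence = {"db": 1.2, "batch": 1.5}   # illustrative factors only
t = predicted_runtime(10.0, ["db", "batch"], influence)
```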

Publication date: 14-01-2016

Configurable Per-Task State Counters For Processing Cores In Multi-Tasking Processing Systems

Number: US20160011907A1
Assignee:

Configurable per-task state counters for processing cores in multi-tasking processing systems are disclosed along with related methods. In part, the disclosed embodiments include a work scheduler and a plurality of processing cores. The work scheduler assigns tasks to the processing cores, and the processing cores concurrently process multiple assigned tasks using a plurality of processing states. Further, task state counters are provided for each assigned task, and these task state counters are incremented for each cycle that the task stays within selected processing states to generate per-task state count values for the assigned tasks. These per-task state count values are reported back to the work scheduler when processing for the task ends. The work scheduler can then use one or more of the per-task state count values to adjust how new tasks are assigned to the processing cores. 1. A method for operating a multi-tasking processing system , comprising:assigning tasks to a plurality of processing cores for a multi-tasking processing system, each processing core being configured to process multiple assigned tasks using a plurality of processing states; and concurrently processing the multiple assigned tasks using the plurality of processing states; and', 'for each task, counting processing cycles that the task stays within a selected set of the plurality of processing states to generate a per-task state count value for the task., 'at each processing core2. The method of claim 1 , wherein at each processing core claim 1 , the method further comprises reporting the per-task state count values to a work scheduler for the multi-tasking processing system.3. The method of claim 1 , further comprising selecting the set of the plurality of processing states for each processing core.4. The method of claim 3 , further comprising adjusting the set of the plurality of processing states for each processing core based upon at least one of an operational state of the multi- ...
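The per-task counting itself is simple to model: treat execution as one (task, state) sample per clock cycle and count the cycles each task spends in the selected states (the trace representation is an assumption for illustration):

```python
def count_task_states(trace, counted_states):
    # `trace` is one (task id, processing state) sample per clock cycle;
    # a task's counter increments only while it sits in a counted state.
    counters = {}
    for task, state in trace:
        if state in counted_states:
            counters[task] = counters.get(task, 0) + 1
    return counters   # per-task values reported back to the work scheduler

trace = [("t1", "wait"), ("t1", "run"), ("t2", "wait"), ("t1", "wait")]
counts = count_task_states(trace, counted_states={"wait"})
```

A scheduler could then steer new tasks away from cores whose tasks accumulate large counts in, say, a stall or wait state.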
