
Total found: 11670. Displayed: 100.

Publication date: 26-01-2012

Determining whether a given diagram is a conceptual model

Number: US20120023499A1
Assignee: International Business Machines Corp

Systems and methods for scheduling events in a virtualized computing environment are provided. In one embodiment, the method comprises scheduling one or more events in a first event queue implemented in a computing environment, in response to determining that the number of events in the first event queue is greater than a first threshold value, wherein the first event queue comprises a first set of events received for the purpose of scheduling, wherein said first set of events remains unscheduled; mapping the one or more events in the first event queue to one or more server resources in a virtualized computing environment; and receiving a second set of events included in a second event queue, wherein one or more events in the second set of events are defined as having a higher priority than one or more events in the first event queue that have or have not yet been scheduled.
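The two-queue flow in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented method: the function name, the round-robin event-to-server mapping, and the convention that a lower number means higher priority are all assumptions.

```python
def schedule(first_queue, second_queue, threshold, servers):
    """Drain higher-priority events from the second queue first, then the
    backlog from the first queue once it exceeds the threshold, mapping
    each event to a server round-robin."""
    # second_queue holds (priority, event) pairs; lower number = higher priority.
    order = [ev for _, ev in sorted(second_queue)]
    if len(first_queue) > threshold:
        order += list(first_queue)  # first-queue events stay unscheduled until now
    return [(ev, servers[i % len(servers)]) for i, ev in enumerate(order)]
```

For example, with a three-event backlog, two prioritized events, and two servers, the prioritized events are placed first and the backlog follows once the threshold is crossed.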

Publication date: 29-03-2012

Energy efficient heterogeneous systems

Number: US20120079298A1
Assignee: NEC Laboratories America Inc

Low-power systems and methods are disclosed for executing an application software on a general purpose processor and a plurality of accelerators with a runtime controller. The runtime controller splits a workload across the processor and the accelerators to minimize energy. The system includes building one or more performance models in an application-agnostic manner; and monitoring system performance in real-time and adjusting the workload splitting to minimize energy while conforming to a target quality of service (QoS).
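The workload-splitting idea can be illustrated by a brute-force search over candidate splits, keeping only those that meet the QoS target and picking the lowest-energy one. This is a sketch under strong assumptions (one CPU, one accelerator, fixed rates in items/s and power draws in watts); the patent's runtime controller and performance models are not reproduced here.

```python
def best_split(total_items, cpu_rate, acc_rate, cpu_power, acc_power, qos_seconds):
    """Try sending 0..100% of the work to the accelerator; among splits whose
    makespan meets the QoS target, return (percent, energy) with minimal energy."""
    best = None
    for pct in range(101):
        acc_items = total_items * pct / 100.0
        cpu_items = total_items - acc_items
        t_cpu = cpu_items / cpu_rate
        t_acc = acc_items / acc_rate
        makespan = max(t_cpu, t_acc)          # devices run in parallel
        if makespan > qos_seconds:
            continue                          # violates the QoS target
        energy = t_cpu * cpu_power + t_acc * acc_power
        if best is None or energy < best[1]:
            best = (pct, energy)
    return best
```

With an accelerator that is both faster and more power-efficient, the search pushes all work to it, which matches the intuition behind energy-aware splitting.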

Publication date: 12-04-2012

Distributed processing system, operation device, operation control device, operation control method, method of calculating completion probability of operation task, and program

Number: US20120089430A1
Assignee: Sony Corp

A distributed processing system includes a plurality of operation devices that perform an operation using power derived from natural energy; and an operation control device that includes a task assigning unit that assigns the same operation task to the plurality of operation devices, and an operation control unit that controls the plurality of operation devices to perform the operation task assigned by the task assigning unit.

Publication date: 19-04-2012

Availability management for reference data services

Number: US20120096093A1
Assignee: Microsoft Corp

Various aspects for scaling an availability of information are disclosed. In one aspect, a response performance associated with responding to data consumption requests is monitored. A characterization of the response performance is ascertained, and a scaling of resources is facilitated based on the characterization. In another aspect, a data consumption status indicative of data consumed is ascertained. Here, a scalability interface is provided, which displays aspects of the status, and receives an input from a content provider. An allocation of resources is then modified in response to the input. In yet another aspect, a response performance associated with responding to data consumption requests is monitored. An application programming interface (API) call is generated based on a characterization of the response performance, and transmitted to a content provider. An API response is then received from the content provider indicating whether a scaling of resources for hosting the data was performed.

Publication date: 10-05-2012

Parallel Processing Of Data Sets

Number: US20120117008A1
Assignee: Microsoft Corp

Systems, methods, and devices are described for implementing learning algorithms on data sets. A data set may be partitioned into a plurality of data partitions that may be distributed to two or more processors, such as a graphics processing unit. The data partitions may be processed in parallel by each of the processors to determine local counts associated with the data partitions. The local counts may then be aggregated to form a global count that reflects the local counts for the data set. The partitioning may be performed by a data partition algorithm and the processing and the aggregating may be performed by a parallel collapsed Gibbs sampling (CGS) algorithm and/or a parallel collapsed variational Bayesian (CVB) algorithm. In addition, the CGS and/or the CVB algorithms may be associated with the data partition algorithm and may be parallelized to train a latent Dirichlet allocation model.
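The local-count/global-count aggregation described above can be sketched with stock Python concurrency; thread workers stand in for the GPUs, and the striped partitioning is an arbitrary choice:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def parallel_count(data, n_partitions=4):
    """Partition the data, count each partition independently (the 'local
    counts'), then merge the local counts into a single global count."""
    parts = [data[i::n_partitions] for i in range(n_partitions)]
    with ThreadPoolExecutor(max_workers=n_partitions) as pool:
        local_counts = list(pool.map(Counter, parts))
    global_count = Counter()
    for c in local_counts:
        global_count += c  # aggregation step
    return global_count
```

The same shape (local statistics per partition, then a reduction) is what parallel CGS/CVB samplers compute per iteration, with token-topic counts in place of this simple frequency count.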

Publication date: 05-07-2012

Seamless scaling of enterprise applications

Number: US20120173709A1
Author: Li Li, Thomas Woo
Assignee: Alcatel Lucent SAS

Various exemplary embodiments relate to a method of scaling resources of a computing system. The method may include: setting a threshold value for a metric of system performance; determining an ideal resource load for at least one resource based on the threshold value for the metric; distributing a system work load among the computing system resources; and adjusting the number of resources based on the system work load, the ideal resource load, and a current number of resources. Various exemplary embodiments also relate to a computing system for scaling cloud resources. The computing system may include: internal resources; a load balancer; a performance monitor; a communication module; a job dispatching module; and a controller. Various exemplary embodiments also relate to a method of detecting dynamic bottlenecks during resource scaling using a resource performance metric, and a method of detecting scaling choke points using historical system performance metrics.
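The adjustment step (work load plus ideal per-resource load plus current count giving a new resource count) can be sketched as below. The `max_step` damping is an assumption added to avoid thrashing, not something taken from the abstract.

```python
import math

def target_instances(system_load, ideal_load_per_resource, current, max_step=2):
    """Compute how many resources the load calls for, then move toward
    that target by at most max_step per adjustment cycle."""
    needed = max(1, math.ceil(system_load / ideal_load_per_resource))
    delta = needed - current
    delta = max(-max_step, min(max_step, delta))  # clamp the change
    return current + delta
```

For instance, a load of 950 units against an ideal load of 100 per resource calls for 10 resources, but a cluster of 5 only grows to 7 in one cycle.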

Publication date: 05-07-2012

Dynamic Application Placement Under Service and Memory Constraints

Number: US20120173734A1
Assignee: International Business Machines Corp

An optimization problem models the dynamic placement of applications on servers under two types of simultaneous resource requirements, those that are dependent on the loads placed on the applications and those that are independent. The demand (load) for applications changes over time and the goal is to satisfy all the demand while changing the solution (assignment of applications to servers) as little as possible.

Publication date: 19-07-2012

Optimizing The Deployment Of A Workload On A Distributed Processing System

Number: US20120185867A1
Assignee: International Business Machines Corp

Optimizing the deployment of a workload on a distributed processing system, the distributed processing system having a plurality of nodes, each node having a plurality of attributes, including: profiling during operations on the distributed processing system attributes of the nodes of the distributed processing system; selecting a workload for deployment on a subset of the nodes of the distributed processing system; determining specific resource requirements for the workload to be deployed; determining a required geometry of the nodes to run the workload; selecting a set of nodes having attributes that meet the specific resource requirements and arranged to meet the required geometry; deploying the workload on the selected nodes.
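The node-selection step can be sketched as a filter over profiled node attributes. This is a simplification: the patent's "required geometry" is reduced here to a plain node count, and the least-loaded tie-break is an invented illustrative policy.

```python
def select_nodes(nodes, requirements, count):
    """Keep nodes whose profiled attributes meet every requirement, then
    take `count` of them, preferring the least-loaded ones."""
    eligible = [n for n in nodes
                if all(n["attrs"].get(k, 0) >= v for k, v in requirements.items())]
    if len(eligible) < count:
        return None  # the workload cannot be placed
    eligible.sort(key=lambda n: n["attrs"].get("load", 0))
    return [n["name"] for n in eligible[:count]]
```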

Publication date: 02-08-2012

System and Method for Enforcing Future Policies in a Compute Environment

Number: US20120198467A1
Author: David B. Jackson
Assignee: Adaptive Computing Enterprises Inc

A disclosed system receives a request for resources, generates a credential map for each credential associated with the request, the credential map including a first type of resource mapping and a second type of resource mapping. The system generates a resource availability map, generates a first composite intersecting map that intersects the resource availability map with a first type of resource mapping of all the generated credential maps and generates a second composite intersecting map that intersects the resource availability map and a second type of resource mapping of all the generated credential maps. With the first and second composite intersecting maps, the system can allocate resources within the compute environment for the request based on at least one of the first composite intersecting map and the second composite intersecting map.
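Representing the maps as sets of resource identifiers, the composite intersection described above reduces to set intersection; this sketch collapses the two map types into one for brevity, which is a deliberate simplification of the patent's scheme.

```python
def allocate(availability, credential_maps, request_size):
    """Intersect the resource-availability map with every credential's
    resource mapping, then allocate from the composite intersection."""
    composite = set(availability)
    for cred in credential_maps:
        composite &= set(cred)
    if len(composite) < request_size:
        return None  # the request cannot be satisfied under these credentials
    return sorted(composite)[:request_size]
```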

Publication date: 02-08-2012

Compact node ordered application placement in a multiprocessor computer

Number: US20120198470A1
Assignee: Cray Inc

A multiprocessor computer system comprises a plurality of nodes, wherein the nodes are ordered using a snaking dimension-ordered numbering. An application placement module is operable to place an application in nodes with preference given to nodes ordered near one another.
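A snaking (boustrophedon) dimension-ordered numbering for a 2-D grid of nodes can be sketched as follows; consecutive numbers always land on physically adjacent nodes, which is what makes "place near one another" meaningful for this ordering.

```python
def snake_order(width, height):
    """Number a width x height grid in snaking dimension order: even rows
    left-to-right, odd rows right-to-left."""
    order = {}
    n = 0
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            order[(x, y)] = n
            n += 1
    return order
```

On a 3x2 grid the numbering runs 0,1,2 across the first row and then 3,4,5 back across the second, so nodes 2 and 3 sit in the same column.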

Publication date: 27-09-2012

Parallel computer system, control device, and controlling method

Number: US20120246512A1
Author: Hidetoshi Iwashita
Assignee: Fujitsu Ltd

The control device detects failed nodes, in which a failure has occurred, among the computation nodes included in the computation units of the parallel computer. Based on the number of computation nodes needed to execute the program, the control device chooses execution nodes from the computation nodes of the parallel computer, excluding the detected failed nodes. The control device then selects paths to connect the chosen execution nodes from the links connecting adjacent computation units, where each link comprises a plurality of paths that connect the computation nodes of two adjacent computation units in a one-to-one manner, excluding any path connected to a detected failed node.

Publication date: 11-10-2012

System and method for fast server consolidation

Number: US20120259963A1
Assignee: Infosys Ltd

System and computer-implemented method for determining optimal combinations of elements having multiple dimensions, including determining the optimal number of destination servers for server consolidation, wherein existing servers are evaluated across multiple dimensions.
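Multi-dimensional server consolidation is, at its core, vector bin packing. The sketch below uses first-fit decreasing over (cpu, mem) usage vectors as an illustrative heuristic; the patent's actual method for determining the optimal destination-server count is not reproduced here.

```python
def consolidate(servers, capacity):
    """Pack (cpu, mem) usage vectors into destination servers of the given
    capacity with first-fit decreasing; return (bin_count, placements)."""
    bins = []        # remaining (cpu, mem) capacity of each destination
    placements = []  # (source_index, destination_bin)
    for idx, (cpu, mem) in sorted(enumerate(servers),
                                  key=lambda p: -(p[1][0] + p[1][1])):
        for b, (rc, rm) in enumerate(bins):
            if cpu <= rc and mem <= rm:
                bins[b] = (rc - cpu, rm - mem)
                placements.append((idx, b))
                break
        else:
            bins.append((capacity[0] - cpu, capacity[1] - mem))
            placements.append((idx, len(bins) - 1))
    return len(bins), sorted(placements)
```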

Publication date: 08-11-2012

Workload-aware placement in private heterogeneous clouds

Number: US20120284408A1
Assignee: International Business Machines Corp

Systems determine workload resource usage patterns of a computerized workload, using a computerized device. Such systems use the computerized device to place the computerized workload with a computer server cluster within a private cloud computing environment. Also, systems herein place the computerized workload on a selected computer server within the computer server cluster that has a resource usage pattern complementary to the workload resource usage profile, also using the computerized device. The complementary resource usage pattern peaks at different times from the workload resource usage patterns.

Publication date: 08-11-2012

Scheduling for Parallel Processing of Regionally-Constrained Placement Problem

Number: US20120284733A1
Assignee: International Business Machines Corp

Scheduling of parallel processing for regionally-constrained object placement selects between different balancing schemes. For a small number of movebounds, computations are assigned by balancing the placeable objects. For a small number of objects per movebound, computations are assigned by balancing the movebounds. If there are large numbers of movebounds and objects per movebound, both objects and movebounds are balanced amongst the processors. For object balancing, movebounds are assigned to a processor until an amortized number of objects for the processor exceeds a first limit above an ideal number, or the next movebound would raise the amortized number of objects above a second, greater limit. For object and movebound balancing, movebounds are sorted into descending order, then assigned in the descending order to host processors in successive rounds while reversing the processor order after each round. The invention provides a schedule in polynomial-time while retaining high quality of results.
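The combined object-and-movebound balancing step (sort descending, deal to processors in rounds, reverse the processor order after each round) can be sketched directly; movebound sizes are object counts, and the zip-based round handling is an illustrative choice.

```python
def balance_movebounds(movebound_sizes, n_procs):
    """Serpentine assignment: movebounds sorted by descending object count
    are dealt to processors round by round, reversing processor order
    after each round to even out the load."""
    order = sorted(range(len(movebound_sizes)),
                   key=lambda i: -movebound_sizes[i])
    assignment = [[] for _ in range(n_procs)]
    forward = True
    for r in range(0, len(order), n_procs):
        chunk = order[r:r + n_procs]
        procs = range(n_procs) if forward else range(n_procs - 1, -1, -1)
        for p, mb in zip(procs, chunk):
            assignment[p].append(mb)
        forward = not forward
    return assignment
```

With sizes [10, 9, 8, 7, 6, 5] on three processors, every processor ends up with exactly 15 objects.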

Publication date: 22-11-2012

Utilizing signatures to discover and manage derelict assets of an information technology environment

Number: US20120297053A1
Assignee: International Business Machines Corp

A set of asset signatures can be analyzed. Each asset signature can be associated with an asset. Derelict assets can be discovered based on the asset signatures. The asset can represent a fundamental structural unit of an information technology (IT) environment. A multi-stage screening process can be performed to discover derelict assets. In a first stage, assets having a normal state are able to be changed to a suspect state based on results of analyzing the corresponding asset signature. In a second stage, assets having a suspect state are able to be selectively changed in state to a normal state or to a derelict state. An asset management system record can be maintained for each of the set of assets. Each record of the asset management system can be a configuration item (CI), which indicates whether each of the set of assets is in a normal state, a suspect state, or a derelict state. The asset management system can periodically reclaim resources consumed by derelict assets.

Publication date: 22-11-2012

Combining profiles based on priorities

Number: US20120297380A1
Assignee: VMware LLC

Techniques for combining profiles, based on priorities associated therewith, to create an effective profile are provided. A plurality of profiles defining one or more rules that are applicable to a functional computing object are identified. A priority corresponding to each applicable profile is determined. The applicable profiles are combined by the computing device based on the corresponding priorities to create an effective profile that includes no conflicting rules.
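With profiles modeled as rule dictionaries, priority-based combination reduces to an ordered merge. The convention that a larger number means higher priority is an assumption of this sketch.

```python
def effective_profile(profiles):
    """Merge rule dicts so that for any rule set by several profiles the
    highest-priority profile's value wins, leaving no conflicting rules."""
    merged = {}
    # Apply lowest priority first so higher priorities overwrite.
    for prof in sorted(profiles, key=lambda p: p["priority"]):
        merged.update(prof["rules"])
    return merged
```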

Publication date: 13-12-2012

Processor bridging in heterogeneous computer system

Number: US20120317321A1
Author: Teng-Chang Chang
Assignee: INSTITUTE FOR INFORMATION INDUSTRY

A bridge logic device for a heterogeneous computer system that has at least one performance processor, a processor supporting logic supporting the at least one performance processor to execute tasks of the software, and a hypervisor processor consuming less power than the at least one performance processor is disclosed. The bridge logic device comprises a hypervisor operation logic that maintains status of the system under the at least one performance processor; a processor language translator logic that translates between processor languages of the at least one performance and the hypervisor processors; and a high-speed bus switch that has first, second and third ports for relaying data across any two of the three ports bidirectionally. The switch is connected to the at least one performance processor, the hypervisor processor via the processor language translator logic, and to the processor supporting logic respectively at the first, second, and third port.

Publication date: 20-12-2012

Software virtual machine for content delivery

Number: US20120324448A1
Assignee: Ucirrus Corp

In general, this disclosure is directed to a software virtual machine that provides high-performance transactional data acceleration optimized for multi-core computing platforms. The virtual machine utilizes an underlying parallelization engine that seeks to maximize the efficiencies of multi-core computing platforms to provide a highly scalable, high-performance (lowest-latency) virtual machine. In some embodiments, the virtual machine may be viewed as an in-memory virtual machine with an ability in its operational state to self-organize and self-seek, in real time, available memory work boundaries to automatically optimize maximum available throughput for data processing acceleration and content delivery of massive amounts of data.

Publication date: 10-01-2013

Reducing cross queue synchronization on systems with low memory latency across distributed processing nodes

Number: US20130014124A1
Assignee: International Business Machines Corp

A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected specific ones of the processing nodes or the processing units to be able to retrieve work from the work element in the GCQ.

Publication date: 17-01-2013

Multi-core processor system, memory controller control method, and computer product

Number: US20130019069A1
Assignee: Fujitsu Ltd

A multi-core processor system includes a memory controller that includes multiple ports and shared memory that includes physical address spaces divided among the ports. A CPU acquires from a parallel degree information table, the number of CPUs to which software that is to be executed by the multi-core processor system, is to be assigned. After this acquisition, the CPU determines the CPUs to which the software to be executed is to be assigned and sets for each CPU, physical address spaces corresponding to logical address spaces defined by the software to be executed. After this setting, the CPU notifies an address converter of the addresses and notifies the software to be executed of the start of execution.

Publication date: 24-01-2013

Automatic Zone-Based Management of a Data Center

Number: US20130024559A1
Assignee: Hewlett Packard Development Co LP

Automatic zone-based management of a data center. Nodes are assigned to a first zone, and one of the nodes is selected as zone leader. The load ratio of the zone leader is monitored; if it exceeds a predetermined maximum, nodes are identified for shedding and assigned to a new zone, and one of the nodes in the new zone is selected as its zone leader. The load ratio of each zone leader is then monitored in the same way, with identified nodes assigned to additional new zones, and the zone leaders negotiate for reassignment of loads.
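The shedding step can be sketched as follows. The policy of moving the most heavily loaded node first and the per-node capacity of 1.0 are assumptions of this sketch, not details from the abstract.

```python
def shed_nodes(zone_loads, max_ratio, capacity=1.0):
    """zone_loads maps node -> load. While the zone's aggregate load ratio
    (total load / total capacity) exceeds max_ratio, move the most loaded
    node into a new zone; return (remaining_zone, new_zone)."""
    zone = dict(zone_loads)
    new_zone = {}
    while zone and sum(zone.values()) / (len(zone) * capacity) > max_ratio:
        node = max(zone, key=zone.get)
        new_zone[node] = zone.pop(node)
    return zone, new_zone
```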

Publication date: 21-02-2013

Dynamically migrating computer networks

Number: US20130046874A1
Author: Daniel T. Cohn
Assignee: Amazon Technologies Inc

Techniques are described for providing capabilities to dynamically migrate computing nodes between two or more computer networks while the computer networks are in use, such as to dynamically and incrementally migrate an entire originating first computer network to a destination second computer network at a remote location. For example, the first computer network may include one or more physically connected computer networks, while the second computer network may be a virtual computer network at a remote geographical location (e.g., under control of a network-accessible service available to remote users). The provided capabilities may further include facilitating the ongoing operations of the originating first computer network while a subset of the first computer network computing nodes have been migrated to the remote destination second computer network, such as by forwarding communications between the first and second computer networks in a manner that is transparent to the various computing nodes.

Publication date: 28-02-2013

System and methods for performing medical physics calculation

Number: US20130054670A1
Assignee: STC UNM

A method of calculating radiation fluence and energy deposition distributions on a networked virtual computational cluster is presented. With this method, complex Monte Carlo simulations that require expansive equipment, personnel, and financial resources can be done efficiently and inexpensively by hospitals and clinics requiring radiation therapy dose calculations.

Publication date: 11-04-2013

Method and apparatus for device dynamic addition processing, and method and apparatus for device dynamic removal processing

Number: US20130091313A1
Author: Hanjun GUO, JIANG Liu, Wei Wang
Assignee: Huawei Technologies Co Ltd

A method and an apparatus for device dynamic addition processing, and a method and an apparatus for device dynamic removal processing, are provided. A dynamic addition dependency relationship list may be obtained from a BIOS, and dynamic addition processing is performed on a device to be dynamically added according to the dynamic addition dependency relationship list; the user is then prompted to dynamically add the target device. When there is a device to be dynamically removed, a dynamic removal dependency relationship list and a dynamic addition dependency relationship list of the corresponding device may be obtained from the BIOS as needed, and dynamic removal analysis and processing are performed according to the combination of the two lists, so as to prompt the user to dynamically remove the target device.

Publication date: 23-05-2013

Optimizing distributed data analytics for shared storage

Number: US20130132967A1
Assignee: NetApp Inc

Methods, systems, and computer executable instructions for performing distributed data analytics are provided. In one exemplary embodiment, a method of performing a distributed data analytics job includes collecting application-specific information in a processing node assigned to perform a task to identify data necessary to perform the task. The method also includes requesting a chunk of the necessary data from a storage server based on location information indicating one or more locations of the data chunk and prioritizing the request relative to other data requests associated with the job. The method also includes receiving the data chunk from the storage server in response to the request and storing the data chunk in a memory cache of the processing node which uses a same file system as the storage server.

Publication date: 27-06-2013

Job scheduling based on map stage and reduce stage duration

Number: US20130167151A1
Assignee: Hewlett Packard Development Co LP

A plurality of job profiles is received. Each job profile describes a job to be executed, and each job includes map tasks and reduce tasks. An execution duration for a map stage including the map tasks and an execution duration for a reduce stage including the reduce tasks of each job is estimated. The jobs are scheduled for execution based on the estimated execution duration of the map stage and the estimated execution duration of the reduce stage of each job.
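The map-then-reduce structure resembles a two-machine flow shop, so one classic way to order such jobs by their two stage durations is Johnson's rule, sketched below. This is an illustrative heuristic with the same inputs, not the scheduling method claimed by the patent.

```python
def johnson_order(jobs):
    """jobs: dict name -> (map_duration, reduce_duration). Johnson's rule:
    jobs whose map stage is not longer than their reduce stage go first in
    increasing map time; the rest go last in decreasing reduce time."""
    front = sorted((n for n, (m, r) in jobs.items() if m <= r),
                   key=lambda n: jobs[n][0])
    back = sorted((n for n, (m, r) in jobs.items() if m > r),
                  key=lambda n: -jobs[n][1])
    return front + back
```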

Publication date: 01-08-2013

Full exploitation of parallel processors for data processing

Number: US20130198753A1
Assignee: International Business Machines Corp

For full exploitation of parallel processors for data processing, a set of parallel processors is partitioned into disjoint subsets according to the indices of the parallel processors. The size of each disjoint subset corresponds to the number of processors assigned to the processing of the data chunks at one of the layers. Each processor is assigned to a different layer in different data chunks, such that every processor is busy and the data chunks are fully processed within a number of time steps equal to the number of layers. A transition function is devised from the indices of the parallel processors at one time step to their indices at the following time step.

Publication date: 22-08-2013

Parallel distributed processing method and computer system

Number: US20130218943A1
Author: Ryo Kawai
Assignee: Hitachi, Ltd.

Provided is a parallel distributed processing method executed by a computer system comprising a parallel-distributed-processing control server, a plurality of extraction processing servers, and a plurality of aggregation processing servers. The managed data includes at least first and second data items, each including a value. The method includes a step of extracting data from one of a plurality of chunks according to the value in the second data item, to thereby group the data; a step of merging groups having the same value in the second data item, based on the order of the value in the first data item of data contained in the groups; and a step of processing the data in a group obtained through the merging by following the order of the value in the first data item.

Publication date: 05-09-2013

METHODOLOGY FOR SECURE APPLICATION PARTITIONING ENABLEMENT

Number: US20130232502A1

A computer implemented method, data processing system, and computer program product for configuring a partition with needed system resources to enable an application to run and process in a secure environment. Upon receiving a command to create a short lived secure partition for a secure application, a short lived secure partition is created in the data processing system. This short lived secure partition is inaccessible by superusers or other applications. System resources comprising physical resources and virtual allocations of the physical resources are allocated to the short lived secure partition. Hardware and software components needed to run the secure application are loaded into the short lived secure partition.

1. A computer implemented method for configuring a short lived secure partition in a data processing system, the computer implemented method comprising: receiving a command to create a short lived secure partition for a secure application; creating the short lived secure partition in the data processing system, the short lived secure partition being inaccessible by superusers or other applications; allocating system resources comprising physical resources and virtual allocations of the physical resources to the short lived secure partition; and loading hardware and software components needed to run the secure application in the short lived secure partition.
2. The computer implemented method of claim 1, further comprising: responsive to the short lived secure application completing its processing, freeing up space allocated to the short lived secure partition and system resources.
3. The computer implemented method of claim 2, wherein the receiving, creating, allocating, loading, and freeing steps are performed by a hardware management console.
4. The computer implemented method of claim 3, wherein freeing up space allocated to the short lived secure partition includes removing the short lived secure partition from a ...

Publication date: 03-10-2013

Managing distributed analytics on device groups

Number: US20130262645A1
Assignee: Microsoft Corp

Methods of managing distributed analytics on device groups are described. In an embodiment, a management service within a distributed analytics system provides an interface to allow a user to define a group of devices based on a property of the devices. When the property of a device in the system satisfies the criterion specified by the user, the device is added to the group and the device may subsequently be removed from the group if the device no longer satisfies the criterion. Once a group has been defined, the management service enables users to specify management operations, such as creating, starting, stopping or deleting queries or management operations relating to other entities of end devices, which are to be implemented on all the devices in the group and the management service propagates the operation to all devices in the group, irrespective of their current connectivity status.
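The group mechanics (membership re-evaluated against a user-defined criterion, operations propagated regardless of connectivity) can be sketched as below; the queuing of operations for offline devices is an assumption consistent with, but not spelled out by, the abstract.

```python
def refresh_group(devices, criterion):
    """Re-evaluate the criterion over all devices: devices satisfying it
    are (or stay) in the group, others are removed."""
    return sorted(d["id"] for d in devices if criterion(d))

def propagate(group_ids, operation, connectivity):
    """Send the management operation to every group member; mark it queued
    for devices that are currently offline."""
    return {dev: (operation if connectivity.get(dev) else ("queued", operation))
            for dev in group_ids}
```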

Publication date: 17-10-2013

Control of java resource runtime usage

Number: US20130275965A1
Assignee: International Business Machines Corp

A method for providing control of Java resource runtime usage may include establishing communication with one or more Java virtual machines (JVMs) forming a hive via a hive communication channel where the hive comprises a plurality of JVMs configured to enable utilization of at least one shared resource, receiving, via the hive communication channel, environmental information indicative of hive activity relative to the at least one shared resource from at least one of the one or more JVMs, and adapting, via processing circuitry, operations associated with use of the at least one shared resource based on the environmental information.

Publication date: 31-10-2013

Method and device for determining parallelism of tasks of a program

Number: US20130290975A1
Assignee: Intel Corp

A method and device for determining parallelism of tasks of a program comprises generating a task data structure to track the tasks and assigning a node of the task data structure to each executing task. Each node includes a task identification number and a wait number. The task identification number uniquely identifies the corresponding task from other currently executing tasks and the wait number corresponds to the task identification number of a node corresponding to the last descendant task of the corresponding task that was executed prior to a wait command. The parallelism of the tasks is determined by comparing the relationship between the tasks.

Publication date: 31-10-2013

Computational Resource Allocation System And A Method For Allocating Computational Resources For Executing A Scene Graph Based Application At Run Time

Number: US20130290977A1
Assignee: SIEMENS AG

A computational resource allocation system for allocating computational resources to various modules of a scene graph based application while the application is being executed may include a module mapper for receiving a set of modules to be used in the scene graph based application from a module repository and a set of computational resources available to process the modules from a computational resource repository, and mapping the set of modules onto the set of computational resources to generate a mapping, and an allocation manager configured to allocate the modules to the set of computational resources based on the mapping.

Publication date: 21-11-2013

Apparatus for enhancing performance of a parallel processing environment, and associated methods

Number: US20130311543A1
Author: Kevin D. Howard
Assignee: Massively Parallel Technologies Inc

Parallel Processing Communication Accelerator (PPCA) systems and methods for enhancing performance of a Parallel Processing Environment (PPE). In an embodiment, a Message Passing Interface (MPI) devolver enabled PPCA is in communication with the PPE and a host node. The host node executes at least a parallel processing application and an MPI process. The MPI devolver communicates with the MPI process and the PPE to improve the performance of the PPE by offloading MPI process functionality to the PPCA. Offloading MPI processing to the PPCA frees the host node for other processing tasks, for example, executing the parallel processing application, thereby improving the performance of the PPE.

05-12-2013 publication date

System and method for shared execution of mixed data flows

Number: US20130326538A1
Assignee: International Business Machines Corp

A method, computer program product, and computer system for shared execution of mixed data flows, performed by one or more computing devices, comprises identifying one or more resource sharing opportunities across a plurality of parallel tasks. The plurality of parallel tasks includes zero or more relational operations and at least one non-relational operation. The plurality of parallel tasks relative to the relational operations and the at least one non-relational operation are executed. In response to executing the plurality of parallel tasks, one or more resources of the identified resource sharing opportunities is shared across the relational operations and the at least one non-relational operation.

12-12-2013 publication date

Efficient partitioning techniques for massively distributed computation

Number: US20130332446A1
Assignee: Microsoft Corp

A repartitioning optimizer identifies alternative repartitioning strategies and selects optimal ones, accounting for network transfer utilization and partition sizes in addition to traditional metrics. If prior partitioning was hash-based, the repartitioning optimizer can determine whether a hash-based repartitioning can result in not every computing device providing data to every other computing device. If prior partitioning was range-based, the repartitioning optimizer can determine whether a range-based repartitioning can generate similarly sized output partitions while aligning input and output partition boundaries, increasing the number of computing devices that do not provide data to every other computing device. Individual computing devices, as they are performing a repartitioning, assign a repartitioning index to each individual data element, which represents the computing device to which such a data element is destined. The indexed data is sorted by such repartitioning indices, thereby grouping together all like data, and then stored in a sequential manner.
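The per-element mechanism in the last two sentences can be sketched as follows. The hash-modulo destination rule and `key_fn` are assumptions for illustration; a range-based scheme would map key ranges to indices instead.

```python
def repartition(elements, key_fn, num_outputs):
    # Assign each data element a repartitioning index: the index of the
    # computing device (output partition) the element is destined for.
    indexed = [(key_fn(e) % num_outputs, e) for e in elements]
    # Sorting by the repartitioning index groups together all data bound
    # for the same destination, ready for sequential storage.
    indexed.sort(key=lambda pair: pair[0])
    groups = {}
    for idx, e in indexed:
        groups.setdefault(idx, []).append(e)
    return groups
```

With `key_fn=lambda x: x` and three outputs, `repartition([10, 3, 7, 4, 1, 6], ...)` groups 3 and 6 under index 0 and the remaining elements under index 1; no element maps to index 2, so that device receives no data.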

26-12-2013 publication date

System and Method for Enforcing Future Policies in a Compute Environment

Number: US20130346995A1
Author: Jackson David Brian
Assignee:

A disclosed system receives a request for resources, generates a credential map for each credential associated with the request, the credential map including a first type of resource mapping and a second type of resource mapping. The system generates a resource availability map, generates a first composite intersecting map that intersects the resource availability map with a first type of resource mapping of all the generated credential maps and generates a second composite intersecting map that intersects the resource availability map and a second type of resource mapping of all the generated credential maps. With the first and second composite intersecting maps, the system can allocate resources within the compute environment for the request based on at least one of the first composite intersecting map and the second composite intersecting map. 1. A method comprising:establishing a standing reservation via a workload manager for compute resources within a compute environment, wherein the standing reservation is periodic and comprises a first reservation of a group of compute resources at a first time to yield a first group of reserved compute resources and a second reservation of the group of compute resources at a second time to yield a second group of reserved compute resources;receiving a request for compute resources to process a job from a user;receiving, with the request, an optimization request for the job;modifying one of the first group of reserved compute resources and the second group of reserved compute resources according to the optimization request to yield a modified group of compute resources; andinserting the job into the modified group of compute resources for processing.2. The method of claim 1 , wherein the workload manager receives requests for the compute resources in the compute environment and makes reservations for the compute resources to accommodate the requests.3. 
The method of claim 1 , wherein the standing reservation is created and ...

02-01-2014 publication date

Model for managing hosted resources using logical scopes

Number: US20140007178A1
Assignee: Microsoft Corp

A hosted resource management system is described herein that provides systems and methods whereby a cloud-based tenant can define a logical model that allows the tenant to work with cloud-based entities in a manner that aligns with the tenant's own purpose and thinking. The system then reflects this model in a set of management tools and access paradigms that are provided to the cloud-based tenant. Each division in the logical model is termed a scope, and can include various types of cloud-based entities. Each of these scopes may contain similar cloud-based entity types, but because of the organization provided by scopes the tenant can manage these cloud-based entities according to the view and model that the tenant defines. Thus, the hosted resource management system provides a way of managing cloud-based entities that is intuitive for cloud-based tenants and facilitates easier management of large-scale applications with many cloud-based entities.

09-01-2014 publication date

Method and system for distributed task dispatch in a multi-application environment based on consensus

Number: US20140012808A1
Assignee: International Business Machines Corp

A method and system for distributing tasks from an external application among concurrent database application server instances in a database system for optimum load balancing, based on consensus among the instances. Each application instance identifies a task partition ownership by those in a membership group based on a time window and generates a new membership group and partition ownership based on the current partition ownership. The instance makes the new membership group and partition ownership known to other members by recording them in the membership table and partition map. Each participation by an instance in the membership group is identified by a random number. The new membership group and partition ownership are generated and adjusted based on an average partition allocation to achieve consensus among the instances.
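The "average partition allocation" step can be illustrated as below. The deterministic (sorted) member ordering is an assumption, chosen so that every instance computing this locally reaches the same answer without coordination.

```python
def target_counts(num_partitions, members):
    # Each application instance should own roughly the average number of
    # partitions; the remainder is handed out one-per-member in a fixed
    # (sorted) order so that every instance, evaluating this function
    # independently from the shared membership table, derives the same
    # ownership targets -- consensus by construction.
    base, extra = divmod(num_partitions, len(members))
    return {m: base + (1 if i < extra else 0)
            for i, m in enumerate(sorted(members))}
```

Ten partitions across instances a, b, and c yield targets of 4, 3, and 3; each instance then releases or claims partitions until its ownership matches its target.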

16-01-2014 publication date

Method to determine patterns represented in closed sequences

Number: US20140019569A1
Assignee: RAM TECHLINEAGE SOFTWARE PVT Ltd

Embodiments herein disclose a process to find patterns represented by closed sequences with temporal ordering in time series data by converting the time series data into transactions. A distributed transaction handling unit continuously finds closed sequences with mutual confidence and the lowest possible support thresholds from the data. The transaction handling unit distributes the data to be processed on multiple slave computers and uses data structures to store the statistics of the discovered patterns, which are kept up to date in real time. The transaction handling unit partitions the work into independent tasks so that the overhead of inter-process and inter-thread communication is kept to a minimum. The transaction handling unit creates multiple checkpoints at user-defined time intervals, on demand, or at the time of shutdown, and is capable of using any of the available checkpoints to resume processing further data in an incremental manner.

23-01-2014 publication date

Varying a characteristic of a job profile relating to map and reduce tasks according to a data size

Number: US20140026147A1
Assignee: Hewlett Packard Development Co LP

A job profile is received that includes characteristics of a job to be executed, where the characteristics of the job profile relate to map tasks and reduce tasks of the job. The map tasks produce intermediate results based on input data, and the reduce tasks produce an output based on the intermediate results. The characteristics of the job profile include at least one particular characteristic that varies according to a size of data to be processed. The at least one particular characteristic of the job profile is set based on the size of the data to be processed.
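As one concrete instance of a size-dependent profile characteristic, the number of map tasks can be derived from the input size and a split size. The characteristic chosen, the field names, and the 64 MB default split are assumptions for illustration, not the patent's specific scheme.

```python
import math

def scale_profile(profile, data_size_mb, split_mb=64):
    # Size-independent characteristics are carried over as-is; the
    # size-dependent one -- here, the number of map tasks, one per input
    # split -- is set from the size of the data to be processed.
    scaled = dict(profile)
    scaled["num_map_tasks"] = math.ceil(data_size_mb / split_mb)
    return scaled
```

A 1000 MB input with 64 MB splits yields 16 map tasks, while the reduce-task count from the profile is left unchanged.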

30-01-2014 publication date

Scheduling for Parallel Processing of Regionally-Constrained Placement Problem

Number: US20140033154A1
Assignee: International Business Machines Corp

Scheduling of parallel processing for regionally-constrained object placement selects between different balancing schemes. For a small number of movebounds, computations are assigned by balancing the placeable objects. For a small number of objects per movebound, computations are assigned by balancing the movebounds. If there are large numbers of movebounds and objects per movebound, both objects and movebounds are balanced amongst the processors. For object balancing, movebounds are assigned to a processor until an amortized number of objects for the processor exceeds a first limit above an ideal number, or the next movebound would raise the amortized number of objects above a second, greater limit. For object and movebound balancing, movebounds are sorted into descending order, then assigned in the descending order to host processors in successive rounds while reversing the processor order after each round. The invention provides a schedule in polynomial-time while retaining high quality of results.
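The object-and-movebound balancing pass described above (sort movebounds into descending order, then deal them to processors in successive rounds, reversing the processor order after each round) can be sketched as follows; the input representation is an assumption.

```python
def assign_movebounds(object_counts, num_procs):
    # object_counts[i] is the number of placeable objects in movebound i.
    # Sort movebound indices into descending order of object count.
    order = sorted(range(len(object_counts)), key=lambda i: -object_counts[i])
    assignment = {p: [] for p in range(num_procs)}
    procs = list(range(num_procs))
    # Deal movebounds to processors round by round, reversing the
    # processor order after each round ("snake" order) so that both the
    # movebound count and the object count stay balanced per processor.
    for start in range(0, len(order), num_procs):
        for p, mb in zip(procs, order[start:start + num_procs]):
            assignment[p].append(mb)
        procs.reverse()
    return assignment
```

Six movebounds with object counts [9, 7, 5, 3, 2, 1] on two processors land as {0: [0, 3, 4], 1: [1, 2, 5]}, i.e. 14 versus 13 objects, and the whole schedule is produced in polynomial time.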

30-01-2014 publication date

Managing array computations during programmatic run-time in a distributed computing environment

Number: US20140033214A1
Assignee: Hewlett Packard Development Co LP

A plurality of array partitions are defined for use by a set of tasks of the program run-time. The array partitions can be determined from one or more arrays that are utilized by the program at run-time. Each of the plurality of computing devices are assigned to perform one or more tasks in the set of tasks. By assigning each of the plurality of computing devices to perform one or more tasks, an objective to reduce data transfer amongst the plurality of computing devices can be implemented.

13-03-2014 publication date

Execution allocation cost assessment for computing systems and environments including elastic computing systems and environments

Number: US20140074763A1
Assignee: SAMSUNG ELECTRONICS CO LTD

Techniques for allocating individually executable portions of executable code for execution in an Elastic computing environment are disclosed. In an Elastic computing environment, scalable and dynamic external computing resources can be used in order to effectively extend the computing capabilities beyond that which can be provided by internal computing resources of a computing system or environment. Machine learning can be used to automatically determine whether to allocate each individual portion of executable code (e.g., a Weblet) for execution to either internal computing resources of a computing system (e.g., a computing device) or external resources of a dynamically scalable computing resource (e.g., a Cloud). By way of example, status and preference data can be used to train a supervised learning mechanism to allow a computing device to automatically allocate executable code to internal and external computing resources of an Elastic computing environment.

20-03-2014 publication date

Service Process Integration Systems and Methods

Number: US20140081698A1
Assignee: SolveDirect Service Management GmbH

In some embodiments, multiple heterogeneous information technology service management (ITSM) applications of different IT service partners (customers and service providers) are integrated via a service process integration grid employing a set of standard workflows and associated standard transaction types and data structures. Once a service partner's workflows and data structures have been mapped to the standard grid workflows and data structures, integration with a first and new service partners is relatively fast and convenient. Analysis of real-life ITSM applications led to the development of particular standardized workflows classified according to whether they are initiated by service provider or customer, and according to whether they do or do not include ownership-transfer transactions allowing a service partner (customer or provider) to transfer ownership of the service process to its counterpart for further action by the counterpart.

06-01-2022 publication date

TECHNIQUES FOR CONTAINER SCHEDULING IN A VIRTUAL ENVIRONMENT

Number: US20220004431A1
Assignee: VMWARE, INC.

The present disclosure relates generally to virtualization, and more particularly to techniques for deploying containers in a virtual environment. The container scheduling can be based on information determined by a virtual machine scheduler. For example, a container scheduler can receive a request to deploy a container. The container scheduler can send container information to the virtual machine scheduler. The virtual machine scheduler can use the container information along with resource utilization of one or more virtual machines to determine an optimal virtual machine for the container. The virtual machine scheduler can send an identification of the optimal virtual machine back to the container scheduler so that the container scheduler can deploy the container on the optimal virtual machine. 1. A method , comprising:transmitting, to a first scheduling process from a second scheduling process, first information identifying a plurality of virtual machines executing on a plurality of physical hosts, wherein the first scheduling process has access to resource utilization data for only virtual machines of the plurality of virtual machines that are executing at least one container;receiving, from the first scheduling process by the second scheduling process, second information identifying one or more virtual machine of the plurality of virtual machines as a virtual machine candidate on which to deploy a first container; anddeploying the first container on a virtual machine of the one or more virtual machines of the plurality of virtual machines, wherein the second scheduling process has access to the resource utilization data for each physical host of the plurality of physical hosts.2. The method of claim 1 , wherein the second scheduling process is a virtual machine scheduling process claim 1 , and wherein the first scheduling process is a container scheduling process.3. The method of claim 1 , wherein the first information includes a resource requirement of a ...

06-01-2022 publication date

CLUSTER IDENTIFIER REMAPPING FOR ASYMMETRIC TOPOLOGIES

Number: US20220004439A1
Assignee: Intel Corporation

A first plurality of integrated circuit blocks of a first chip are connected to a second plurality of integrated circuit blocks of a second chip. A cluster remapping table is provided on the second chip and is to be programmed to identify a desired asymmetric topology of the connections between the first plurality of integrated circuit blocks and the second plurality of integrated circuit blocks. Logic is to discover the actual topology of the connections between the first plurality of integrated circuit blocks and the second plurality of integrated circuit blocks and determine whether the actual topology matches the desired topology as described in the cluster remapping table. 1. A non-transitory machine-readable storage medium with instructions stored thereon , the instructions executable by a machine to cause the machine to:access a cluster remapping register stored in computer memory;determine, from the cluster remapping register, a mapping of a first integrated circuit block in a first chip to a first cluster identifier, wherein the first cluster identifier is different than an assigned cluster identifier for the first integrated circuity block;determine, from the cluster remapping register, a mapping of a second integrated circuit block in the first chip to a second cluster identifier from the cluster remapping register;identify a first interconnect link to couple the first integrated circuit block in the first chip to a third integrated circuit block in a second chip;identify a second interconnect link to couple the second integrated circuit block in the first chip to a fourth integrated circuit block in the second chip; anddetermine whether connections made by the first and second interconnect links match connections defined in the cluster remapping register.2. The storage medium of claim 1 , wherein the instructions are further executable to:identify a mapping of the third integrated circuit block to a third cluster identifier;identify a mapping of the ...

07-01-2016 publication date

Method of projecting a workspace and system using the same

Number: US20160004512A1
Assignee: U3D Ltd

A method of projecting a workspace includes the following steps. Firstly, a projectable space instance which is instantiated from a unified script is provided through a URI (uniform resource identifier). The unified script is defined to configure at least one of a matterizer, information and tool to model a workspace. The projectable space instance is used for building a projected workspace corresponding to the workspace so as to provide an interface for operating at least one of the matterizer, the information and the tool to perform a task. Then, a projector is used to parse the projectable space instance and build a working environment to configure at least one of the matterizer, the information and the tool. Consequently, the projected workspace is executed for providing interaction between at least one user and the projected workspace.

02-01-2020 publication date

Method and defragmentation module for defragmenting resources

Number: US20200004421A1
Assignee: Telefonaktiebolaget LM Ericsson AB

A method and a defragmentation module for defragmenting resources of a hardware system. The defragmentation module identifies a set of structures. Each structure of the set of structures partially hosts a respective set of host machines. Respective resources of each host machine of the respective set of host machines are allocated in at least two structures of the set of structures. The defragmentation module selects, from the respective resources of a host machine of the respective set of host machines, a remote resource of a first structure being different from a second structure partially hosting the host machine. A remote amount of the remote resource is less than an amount of available resources of the second structure. The defragmentation module assigns the remote amount of the available resources of the second structure to the host machine instead of the remote resource.

02-01-2020 publication date

DYNAMIC DISTRIBUTED DATA CLUSTERING

Number: US20200004449A1
Assignee:

Techniques are described for clustering data at the point of ingestion for storage using scalable storage resources. The clustering techniques described herein are used to cluster time series data in a manner such that data that is likely to be queried together is localized to a same partition, or to a minimal set of partitions if the data set is large, where the partitions are mapped to physical storage resources where the data is to be stored for subsequent processing. Among other benefits, the clustered storage of the data at the physical storage resources can reduce an amount of data that needs to be filtered by many types of queries, thereby improving the performance of any applications or processes that rely on querying the data. 1. A computer-implemented method comprising:generating a partition table used to cluster a plurality of time series data points during ingestion of the plurality of time series data points, the partition table including a first set of partitions defining a first segmentation of a total range of ordered values, wherein each partition of the first set of partitions can be mapped to a single attribute value of a plurality of attribute values associated with the plurality of time series data points;receiving a time series data point generated by a computing device;identifying an attribute value associated with the time series data point;determining that the first set of partitions does not include a partition mapped to the attribute value and that there is not an available partition of the first set of partitions to which the attribute value can be mapped;generating a second set of partitions by bifurcating a subrange associated with each partition of the first set of partitions, the second set of partitions defining a second segmentation of the total range of ordered values;mapping the attribute value to an available partition of the second set of partitions;mapping the available partition of the second set of partitions to one or more 
...
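The bifurcation step above, which doubles the partition count by halving every subrange when no partition is free for a newly seen attribute value, reduces to the following sketch (half-open numeric subranges are an assumed representation):

```python
def bifurcate(partitions):
    # Split each partition's subrange at its midpoint, producing a new
    # segmentation of the same total range with twice as many
    # partitions, so the new attribute value can be mapped to one of
    # the freed-up slots.
    out = []
    for lo, hi in partitions:
        mid = (lo + hi) / 2
        out.append((lo, mid))
        out.append((mid, hi))
    return out
```

Bifurcating a two-partition segmentation of [0, 1) yields four quarter-range partitions covering the same total range, each of which can then be mapped to a single attribute value and, in turn, to physical storage resources.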

04-01-2018 publication date

TECHNIQUES FOR DISTRIBUTED PROCESSING TASK PORTION ASSIGNMENT

Number: US20180004578A1
Author: JIN Jun, Qiao Nan, You Liang
Assignee: Intel Corporation

Various embodiments are generally directed to techniques for assigning portions of a task among individual cores of one or more processor components of each processing device of a distributed processing system. An apparatus to assign processor component cores to perform task portions includes a processor component; an interface to couple the processor component to a network to receive data that indicates available cores of base and subsystem processor components of processing devices of a distributed processing system, the subsystem processor components made accessible on the network through the base processor components; and a core selection component for execution by the processor component to select cores from among the available cores to execute instances of task portion routines of a task based on a selected balance point between compute time and power consumption needed to execute the instances of the task portion routines. Other embodiments are described and claimed. 1. An apparatus to assign processor component cores to perform task portions comprising:a processor component;an interface to couple the processor component to a network to receive data that indicates available cores of base and subsystem processor components of processing devices of a distributed processing system, the subsystem processor components made accessible on the network through the base processor components; anda core selection component for execution by the processor component to select cores from among the available cores to execute instances of task portion routines of a task based on a selected balance between compute time and power consumption needed to execute the instances of the task portion routines.2. 
The apparatus of claim 1 , comprising a resource component for execution by the processor component to receive resource data that indicates a quantity of cores present in at least one base processor component and a quantity of cores present in at least one subsystem processor ...

04-01-2018 publication date

Adaptive rebuilding rates based on sampling and inference

Number: US20180004604A1
Assignee: International Business Machines Corp

A method for execution by one or more processing modules of a dispersed storage network (DSN), the method begins by monitoring an encoded data slice access rate to produce an encoded data slice access rate for an associated rebuilding rate of a set of rebuilding rates. The method continues by applying a learning function to the encoded data slice access rate based on a previous encoded data slice access rate associated with the rebuilding rate to produce an updated previous encoded data slice access rate of a set of previous encoded data slice access rates. The method continues by updating a score value associated with the updated previous encoded data slice access rate and the rebuilding rate and selecting a slice access scheme based on the updated score value where a rebuild rate selection will maximize a score value associated with an expected slice access rate.
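One learning function matching the description, blending a newly observed slice access rate with the previous estimate kept for the same rebuilding rate, is an exponential moving average. This interpretation and the `alpha` tuning parameter are assumptions; the patent does not fix the function.

```python
def update_rate_estimate(observed_rate, previous_rate, alpha=0.5):
    # Blend the newly monitored encoded-data-slice access rate with the
    # previous estimate for this rebuilding rate; a larger alpha weights
    # recent observations more heavily. The updated estimate then feeds
    # the score used to pick the rebuild rate maximizing expected access.
    return alpha * observed_rate + (1 - alpha) * previous_rate
```

An observed rate of 100 against a previous estimate of 60 updates the estimate to 80 with the default alpha.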

13-01-2022 publication date

APPARATUS AND METHOD FOR MATRIX MULTIPLICATION USING PROCESSING-IN-MEMORY

Number: US20220012303A1
Author: Zheng Qilin
Assignee:

Embodiments of apparatus and method for matrix multiplication using processing-in-memory (PIM) are disclosed. In an example, an apparatus for matrix multiplication includes an array of PIM blocks in rows and columns, a controller, and an accumulator. Each PIM block is configured into a computing mode or a memory mode. The controller is configured to divide the array of PIM blocks into a first set of PIM blocks each configured into the memory mode and a second set of PIM blocks each configured into the computing mode. The first set of PIM blocks are configured to store a first matrix, and the second set of PIM blocks are configured to store a second matrix and calculate partial sums of a third matrix based on the first and second matrices. The accumulator is configured to output the third matrix based on the partial sums of the third matrix. 1. An apparatus for matrix multiplication , comprising:an array of processing-in-memory (PIM) blocks in rows and columns, each of which is configured into a computing mode or a memory mode;a controller configured to divide the array of PIM blocks into a first set of PIM blocks each configured into the memory mode and a second set of PIM blocks each configured into the computing mode, wherein the first set of PIM blocks are configured to store a first matrix, and the second set of PIM blocks are configured to store a second matrix and calculate partial sums of a third matrix based on the first and second matrices; andan accumulator configured to output the third matrix based on the partial sums of the third matrix.2. The apparatus of claim 1 , wherein the first set of PIM blocks consists of a row of the array of PIM blocks claim 1 , and the second set of PIM blocks consists of a remainder of the array of PIM blocks.3. 
The apparatus of claim 2 , wherein dimensions of the first set of PIM blocks are smaller than dimensions of the first matrix claim 2 , and the controller is configured to map the first matrix to the first set of PIM ...
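A software analogue of the scheme, in which each computing-mode block produces a partial sum of the output from one slice of the operands and the accumulator adds the partial sums, might look like the sketch below. The k-slice decomposition is an assumed stand-in for the PIM block layout.

```python
def blocked_matmul(A, B, block=2):
    # A plays the role of the first matrix held in memory-mode PIM
    # blocks and B the second matrix in computing-mode blocks; each
    # k-slice stands in for one computing block's partial sum, which
    # the accumulator adds into the output matrix C.
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    for k0 in range(0, k, block):
        for i in range(n):
            for j in range(m):
                C[i][j] += sum(A[i][kk] * B[kk][j]
                               for kk in range(k0, min(k0 + block, k)))
    return C
```

Because each slice contributes an independent partial sum, the per-slice work maps naturally onto separate computing-mode blocks running concurrently, with only the accumulation serialized.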

07-01-2021 publication date

System and method for provisioning of artificial intelligence accelerator (aia) resources

Number: US20210004658A1
Assignee: Solidrun Ltd

A system and method for provisioning of artificial intelligence accelerator (AIA) resources. The method includes receiving a request for an NPU allocation from a client device; determining an available NPU based on scanning a network to discover NPU resources; and allocating the available NPU to the client device.

03-01-2019 publication date

HASH-BASED PARTITIONING SYSTEM

Number: US20190004863A1
Assignee:

In various embodiments, methods and systems for implementing hash-based partitioning in distributed computing systems are provided. At a high level, a distributed computing system having an underlying range-based partitioning architecture for storage may be configured as a hash-based partitioning system, for example, a hybrid range-hash table storage. An operations engine of the hash-based partitioning system receives a tenant request to provision input/output operations per second (IOPS). The tenant request comprises a requested number of IOPS. Based on the tenant request, a provisioning operation to provision IOPS in a hybrid range-hash table storage with hash-based partitioning is determined. The provisioning operation is selected from one of the following: a table creation provisioning operation, an IOPS increase provisioning operation, and an IOPS decrease provisioning operation. The selected provisioning operation is executed for a corresponding table. A user request for data is processed using the table associated with the requested number of IOPS. 2. The system of claim 1 , wherein the table creation provisioning operation comprises:determining that the tenant request is associated with a tenant account that operates based on a hash partitioning model;creating the table associated with the tenant request, wherein the table operates based on a hash partitioning model in a hybrid range-hash table storage;identifying hash algorithm metadata for processing user requests; andstoring the hash algorithm metadata in a table container.3. The system of claim 1 , further comprising a front end of the operations engine configured to:determine that a second user request corresponds to a table operating based on a range partitioning model; andbased on the second user request, accessing data associated with the table operating based on the range partitioning model.4. The system of claim 1 , further comprising a table master of the operation engine configured to:track a ...
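One way to realize hash partitioning on top of an underlying range-based partitioning architecture, consistent with the "hybrid range-hash" description, is to prefix each user key with a hash bucket so the buckets spread uniformly across key ranges. The bucket count, key format, and hash choice below are assumptions standing in for the patent's hash algorithm metadata.

```python
import hashlib

def hybrid_key(user_key, num_buckets=16):
    # Derive a stable hash bucket from the user's key and prepend it,
    # so that a purely range-partitioned store sees the keyspace split
    # into num_buckets disjoint key ranges with keys spread uniformly
    # among them -- hash partitioning emulated on range partitions.
    digest = hashlib.sha256(user_key.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % num_buckets
    return f"{bucket:02d}|{user_key}"
```

Because the bucket is derived deterministically from the key, a front end can recompute it from the hash algorithm metadata when serving a user request and route the request to the right range partition.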

03-01-2019 publication date

METHOD AND SYSTEM FOR SUPPORTING STREAM PROCESSING FRAMEWORK FUNCTIONALITY

Number: US20190004864A1
Assignee:

A method for supporting stream processing framework functionality in a stream processing system, the stream processing system including one or more input modules, a stream processing platform, and computing nodes, includes deploying, by the stream processing platform using the input modules, tasks of at least one stream processing topology on the computing nodes based on both stream processing topology-related information and stream processing topology-external information. The method additionally includes preparing and executing, by the stream processing platform, the tasks of the at least one stream processing topology on the computing nodes. 1: A method for supporting stream processing framework functionality in a stream processing system , the stream processing system including one or more input modules , a stream processing platform , and computing nodes , the method comprising:deploying, by the stream processing platform using the input modules, tasks of at least one stream processing topology on the computing nodes based on both stream processing topology-related information and stream processing topology-external information; andpreparing and executing, by the stream processing platform using the input modules, the tasks of the at least one stream processing topology on the computing nodes.2: The method according to claim 1 , wherein the stream processing topology-related information includes stream processing topology settings claim 1 , system characteristics claim 1 , network link characteristics claim 1 , and/or computing node characteristics.3: The method according to claim 1 , wherein the stream processing topology-external information includes information on interactions between the tasks and topology-external entities.4: The method according to claim 1 , wherein the stream processing topology-external information includes information about characteristics and/or requirements of topology-external entities.5: The method according to claim 4 , wherein 
...
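The deployment step described in this abstract combines topology-related information (e.g. node capacity) with topology-external information (e.g. proximity to external entities the tasks talk to). A minimal Python sketch of one such placement rule follows; the data model, field names, and greedy scoring are invented for illustration and are not the patented method.

```python
# Hypothetical greedy placement of topology tasks onto computing nodes.
# Topology-related info: each node's free CPU capacity.
# Topology-external info: each node's latency to external entities.

def place_tasks(tasks, nodes):
    """tasks: list of dicts with 'name', 'cpu', 'external' (entity id or None).
    nodes: list of dicts with 'name', 'free_cpu', 'latency' (entity id -> ms).
    Returns a mapping task name -> node name."""
    placement = {}
    for task in sorted(tasks, key=lambda t: -t["cpu"]):  # place biggest first
        candidates = [n for n in nodes if n["free_cpu"] >= task["cpu"]]
        if not candidates:
            raise RuntimeError(f"no node can host {task['name']}")

        # Lower score is better: prefer low latency to the task's external entity.
        def score(node):
            ext = task["external"]
            return node["latency"].get(ext, 0.0) if ext else 0.0

        best = min(candidates, key=score)
        best["free_cpu"] -= task["cpu"]
        placement[task["name"]] = best["name"]
    return placement
```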

More
03-01-2019 publication date

VIRTUAL CPU CONSOLIDATION TO AVOID PHYSICAL CPU CONTENTION BETWEEN VIRTUAL MACHINES

Number: US20190004866A1
Assignee:

Various systems and methods for virtual CPU consolidation to avoid physical CPU contention between virtual machines are described herein. A processor system that includes multiple physical processors (PCPUs) includes a first virtual machine (VM) that includes multiple first virtual processors (VCPUs); a second VM that includes multiple second VCPUs; and a virtual machine monitor (VMM) to map individual ones of the first VCPUs to run on at least one of, individual PCPUs of a first subset of the PCPUs and individual PCPUs of a set of PCPUs that includes the first subset of the PCPUs and a second subset of the PCPUs, based at least in part upon compute capacity of the first subset of the PCPUs to run the first VCPUs, and to map individual ones of the second VCPUs to run on individual ones of the second subset of the PCPUs. 1.-25. (canceled) 26. A processor system that includes multiple physical processors (PCPUs) comprising: a first virtual machine (VM) that includes multiple first virtual processors (VCPUs); a second VM that includes multiple second VCPUs; and a virtual machine monitor (VMM) to map individual ones of the first VCPUs to run on at least one of, individual PCPUs of a first subset of the PCPUs and individual PCPUs of a set of PCPUs that includes the first subset of the PCPUs and a second subset of the PCPUs, based at least in part upon compute capacity of the first subset of the PCPUs to run the first VCPUs, and to map individual ones of the second VCPUs to run on individual ones of the second subset of the PCPUs. 27. The system of claim 26, wherein the VMM determines the compute capacity of the first subset of the PCPUs to run the first VCPUs based at least in part upon workload of the first VCPUs. 28. The system of claim 26, wherein the VMM determines workload of the first VCPUs and determines the compute capacity of the first subset of the PCPUs to run the first VCPUs based at least in part upon the determined workload of the first VCPUs. 29.
The system of ...
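The consolidation idea in this abstract can be sketched in a few lines: keep the first VM's VCPUs on a small PCPU subset while that subset has enough compute capacity for their workload, and spill to the full PCPU set otherwise. The capacity model and round-robin assignment below are illustrative assumptions, not the claimed mechanism.

```python
# Hypothetical sketch: map a VM's VCPUs onto a PCPU subset when capacity
# allows, otherwise onto the full PCPU set.

def map_vcpus(first_vcpu_load, subset_capacity, first_subset, full_set):
    """first_vcpu_load: per-VCPU utilization (0..1 of one PCPU each).
    subset_capacity: total capacity of the first PCPU subset, in PCPUs.
    Returns a mapping VCPU index -> PCPU id."""
    demand = sum(first_vcpu_load)
    allowed = first_subset if demand <= subset_capacity else full_set
    # Round-robin the VCPUs over the allowed PCPUs.
    return {i: allowed[i % len(allowed)] for i in range(len(first_vcpu_load))}
```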

More
03-01-2019 publication date

HIERARCHICAL PROCESS GROUP MANAGEMENT

Number: US20190004870A1
Assignee:

Management of hierarchical process groups is provided. Aspects include creating a group identifier having an associated set of resource limits for shared resources of a processing system. A process is associated with the group identifier. A hierarchical process group is created including the process as a parent process and at least one child process spawned from the parent process, where the at least one child process inherits the group identifier. A container is created to store resource usage of the hierarchical process group and the set of resource limits of the group identifier. The set of resource limits associated with the hierarchical process group is used to collectively monitor resource usage of processes. A resource allocation adjustment action is performed in the processing system based on determining that an existing process exceeds a process resource limit or the hierarchical process group exceeds at least one of the set of resource limits. 1. A computer-implemented method for managing hierarchical process groups, the method comprising: creating, by a hierarchical process group service of a processing system, a group identifier having an associated set of resource limits for shared resources of a processing system; associating, by the hierarchical process group service, a process with the group identifier; creating, by the hierarchical process group service, a hierarchical process group comprising the process as a parent process and at least one child process spawned from the parent process, wherein the at least one child process inherits the group identifier; creating, by the hierarchical process group service, a container to store resource usage of the hierarchical process group and the set of resource limits of the group identifier; using the set of resource limits associated with the hierarchical process group to collectively monitor resource usage of a plurality of processes in the hierarchical process group; monitoring, by the hierarchical process group ...
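The core bookkeeping here — children inheriting a group identifier, a container accumulating group-wide usage, and a check against the group's limits — can be sketched as a small class. A single "memory" resource and the method names are assumptions for brevity.

```python
# Minimal sketch of a hierarchical process group with collective limits.

class ProcessGroup:
    def __init__(self, group_id, limits):
        self.group_id = group_id          # shared by parent and children
        self.limits = limits              # e.g. {"memory": 100}
        self.usage = {}                   # pid -> {"memory": n}

    def spawn(self, pid, memory):
        # A spawned child inherits the group identifier by being accounted
        # in this group's container of usage records.
        self.usage[pid] = {"memory": memory}

    def over_limit(self):
        # Collective check: sum usage across all processes in the group.
        total = sum(u["memory"] for u in self.usage.values())
        return total > self.limits["memory"]
```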

More
01-01-2015 publication date

ESTABLISHING CONNECTIVITY OF MODULAR NODES IN A PRE-BOOT ENVIRONMENT

Number: US20150006700A1
Author: Wanner Christopher C.
Assignee:

Establishing connectivity of nodes. First data is received at a resource manager from a first base management controller (BMC) associated with a first node, wherein the resource manager is associated with a server computer system. Second data is received at the resource manager from a second BMC associated with a second node. A classification of the first node and the second node is determined, as is a compatibility of the first node with the second node, based on the first data and the second data. A topology is generated, at the resource manager, of the first node and the second node. 1. A method for establishing connectivity of nodes, said method comprising: receiving a first data at a resource manager from a first base management controller (BMC) associated with a first node, wherein said resource manager is associated with a server computer system; receiving a second data at said resource manager from a second BMC associated with a second node; determining a classification of said first node and said second node and a compatibility of said first node with said second node based on said first data and said second data; and generating a topology, at said resource manager, of said first node and said second node. 2. The method of wherein said first node and said second node each comprise an optical transceiver and are connected via an optical connection. 3. The method of wherein said first node and said second node each comprise a main component and said receiving said first data and said receiving said second data occur before said main component of said first node and said second node boot up. 4. The method of claim 1, further comprising: sending said first node and said second node authorization to boot up a main component, wherein said authorization is based on said compatibility. 5. The method of claim 1, further comprising: providing said classification of said first node and said second node, said compatibility, and said topology to another entity. 6.
The method of claim 1 ...
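The resource manager's pre-boot decision — classify each node from its BMC-reported data, test compatibility, and emit a topology — can be sketched as follows. The field names (`id`, `class`, `link`) and the link-matching compatibility rule are invented for the example.

```python
# Illustrative sketch of a resource manager pairing two nodes from
# BMC-reported data before their main components boot.

def build_topology(node_a, node_b):
    """node_a/node_b: BMC-reported dicts with 'id', 'class', 'link'.
    Returns (compatible, topology)."""
    compatible = node_a["link"] == node_b["link"]   # assumed compatibility rule
    topology = {
        "nodes": [(node_a["id"], node_a["class"]),
                  (node_b["id"], node_b["class"])],
        # Only compatible nodes get an edge (and, per the claims, would be
        # authorized to boot their main components).
        "edges": [(node_a["id"], node_b["id"])] if compatible else [],
    }
    return compatible, topology
```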

More
20-01-2022 publication date

MODIFYING COMPUTATIONAL GRAPHS

Number: US20220019896A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for modifying a computational graph to include send and receive nodes. Communication between unique devices performing operations of different subgraphs of the computational graph can be handled efficiently by inserting send and receive nodes into each subgraph. When executed, the operations that these send and receive nodes represent may enable pairs of unique devices to conduct communication with each other in a self-sufficient manner. This shifts the burden of coordinating communication away from the backend, which affords the system that processes this computational graph representation the opportunity to perform one or more other processes while devices are executing subgraphs. 1.-18. (canceled) 19. A method, comprising: obtaining a computational graph for a neural network, the computational graph comprising a plurality of nodes representing respective operations of the neural network and a plurality of directed edges, each directed edge connecting a respective pair of nodes of the plurality of nodes and representing that an output of the operation represented by one node in the pair is processed as input by the operation represented by the other node in the pair; partitioning the computational graph into at least a first subgraph and a second subgraph, the first subgraph comprising a first subset of the plurality of nodes, the second subgraph comprising a different, second subset of the plurality of nodes; identifying that the first subgraph includes a first node representing an operation that produces output for processing as input by an operation represented by a second node in the second subgraph; inserting a send node into the first subgraph and a receive node into the second subgraph, the send node configured to receive the output from the first node and point to the receive node, the receive node configured to receive the output from the first node via the send ...
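The rewrite described here — finding edges that cross a partition boundary and splicing a send/receive pair onto each — is mechanical enough to sketch directly. The edge-list graph representation and node-naming scheme below are invented for illustration.

```python
# Toy sketch: insert send/receive node pairs on cross-subgraph edges.

def insert_send_recv(edges, partition):
    """edges: list of (src, dst) node names.
    partition: node name -> subgraph id.
    Returns a new edge list with send/recv nodes on crossing edges."""
    new_edges = []
    for src, dst in edges:
        if partition[src] == partition[dst]:
            new_edges.append((src, dst))            # stays within one subgraph
        else:
            send, recv = f"send_{src}_{dst}", f"recv_{src}_{dst}"
            # src -> send (in src's subgraph), send -> recv (device channel),
            # recv -> dst (in dst's subgraph).
            new_edges += [(src, send), (send, recv), (recv, dst)]
    return new_edges
```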

More
08-01-2015 publication date

Optimized multi-component co-allocation scheduling with advanced reservations for data transfers and distributed jobs

Number: US20150012659A1
Author: David Brian Jackson
Assignee: Adaptive Computing Enterprises Inc

Disclosed are systems, methods, computer readable media, and compute environments for establishing a schedule for processing a job in a distributed compute environment. The method embodiment comprises converting a topology of a compute environment to a plurality of endpoint-to-endpoint paths, based on the plurality of endpoint-to-endpoint paths, mapping each replica resource of a plurality of resources to one or more endpoints where each respective resource is available, iteratively identifying schedule costs associated with a relationship between endpoints and resources, and committing a selected schedule cost from the identified schedule costs for processing a job in the compute environment.
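The cost-iteration step described here — mapping each replica resource to candidate endpoints and searching endpoint assignments for the cheapest schedule — can be sketched with a brute-force search over endpoint combinations. The pairwise path-cost model and all names are illustrative assumptions.

```python
# Hedged sketch: pick the endpoint assignment with the lowest total
# endpoint-to-endpoint path cost for a set of required resources.

from itertools import product

def cheapest_schedule(required, replicas, path_cost):
    """required: list of resource names.
    replicas: resource -> list of endpoints holding a replica.
    path_cost: dict keyed by (endpoint, endpoint) -> transfer cost."""
    best, best_cost = None, float("inf")
    for combo in product(*(replicas[r] for r in required)):
        # Candidate schedule cost: sum of transfer costs between every
        # ordered pair of distinct chosen endpoints.
        cost = sum(path_cost[a, b] for a in combo for b in combo if a != b)
        if cost < best_cost:
            best, best_cost = dict(zip(required, combo)), cost
    return best, best_cost
```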

More
14-01-2016 publication date

SYSTEM AND METHOD FOR CONTROLLING DIVIDED SCREENS INDEPENDENTLY THROUGH MAPPING WITH INPUT DEVICES

Number: US20160011776A1
Author: SONG Dong Seop
Assignee:

Disclosed herein is a system and method for controlling divided screens, on which applications corresponding to input signals received from input devices are executed, independently through mapping with the input devices of different input types. If a first input signal for controlling a part of the divided screens is received from a first input device, or if a second input signal for controlling the screen is received from a second input device during reception of the first input signal, the system controls the divided screens mapped with the respective signals based on the received input signals. Accordingly, efficient function control can be performed through mapping of the divided screens with the input devices to meet the purposes and functions of the respective applications, and it becomes possible to enable a plurality of users to control the divided screens of the display device independently. 1. A system for controlling divided screens independently through mapping with input devices , comprising:a signal reception unit configured to receive different input signals from the input devices; anda control unit configured to determine applications corresponding to respective input signals and to control the divided screens related to the corresponding applications according to the input signals.2. The system according to claim 1 , wherein the control unit is further configured to control a first divided screen related to an application corresponding to a first input device according to a first input signal from the first input device and to control a second divided screen related to an application corresponding to a second input device that is different from the first input device according to a second input signal from the second input device.3. The system according to claim 1 , further comprising a data storage unit configured to store a program for controlling the applications corresponding to the respective input signals.4. 
The system according to claim 2 , ...

More
14-01-2016 publication date

MANAGING PARALLEL PROCESSES FOR APPLICATION-LEVEL PARTITIONS

Number: US20160011911A1
Assignee: ORACLE INTERNATIONAL CORPORATION

Various techniques are described herein for creating data partition process schedules and executing such partition schedules using multiple parallel process instances. Data processing tasks initiated by or for applications may be executed by creating and executing partition schedules, in which a number of different process instances are created and each assigned a subset of data to process. Partition schedules may be used to determine a number of process instances to be created, and each process instance may be assigned a unique set of run-time data values corresponding to a unique set of parameters within the data set to be processed by the application. The process instances may operate independently and in parallel to retrieve and process separate partitions of the data required for the overall data processing task initiated by/for the application. 1. A process scheduling and management system comprising: a processing unit comprising one or more processors; and identify a plurality of parameters within a data set comprising one or more data tables stored in a backend data store; for each parameter of the identified parameters, determine a number of unique values for the parameter within the data set; determine a number of process instances to create of a data processing executable component, said determining comprising multiplying the number of unique values for each of the identified parameters; create the determined number of process instances of the data processing executable component; and provide to each of the process instances data corresponding to a unique combination of values of the identified parameters within the data set, wherein the unique combinations of values for the process instances are determined independently of the backend data store storing the data tables, and wherein each of the process instances is configured to retrieve a unique set of target data from the data tables, based on the unique combination of values provided to the
...
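The partitioning arithmetic in this claim is concrete: count the unique values of each identified parameter, multiply the counts to get the number of process instances, and give each instance one unique combination of parameter values. A sketch, assuming the data set is a list of row dicts:

```python
# Sketch of the partition-schedule arithmetic: instances = product of
# unique-value counts, one parameter-value combination per instance.

from itertools import product

def plan_partitions(rows, parameters):
    """rows: list of dicts (one per table row); parameters: column names.
    Returns (number of instances, list of per-instance value combos)."""
    uniques = {p: sorted({row[p] for row in rows}) for p in parameters}
    # Number of instances = product of the unique-value counts.
    n_instances = 1
    for p in parameters:
        n_instances *= len(uniques[p])
    combos = [dict(zip(parameters, c))
              for c in product(*(uniques[p] for p in parameters))]
    return n_instances, combos
```

Each combo would be handed to one process instance, which then retrieves and processes only the rows matching its values.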

More
10-01-2019 publication date

USER REQUEST PROCESSING METHOD AND DEVICE

Number: US20190012365A1
Assignee:

The present disclosure provides user request processing methods and devices. One exemplary method includes: determining a first container corresponding to a user request after the user request is received; determining a logical container corresponding to the first container by using a preset relationship between the first container and the logical container; acquiring a container cluster corresponding to the logical container by using a logical address corresponding to the logical container, wherein the container cluster includes at least two second containers; and processing the user request by using the second containers. The user request can be simultaneously processed by the first container and the at least two second containers. The first container can call the second containers, so that different containers can share data and handle the same application together. Further, scale-out of second containers in the container cluster can be implemented, thus improving user request processing capability. 1. A user request processing method , comprising:determining a first container corresponding to a user request;determining a logical container corresponding to the first container based on a preset relationship between the first container and the logical container;acquiring a container cluster corresponding to the logical container based on a logical address corresponding to the logical container, wherein the container cluster includes at least two second containers; andprocessing the user request by using at least one of the second containers.2. The method according to claim 1 , wherein determining the first container corresponding to the user request comprises:parsing out at least one of domain name information and IP address information from the user request, anddetermining a first container matching the domain name information or the IP address information.3. The method according to claim 1 , wherein determining the logical container corresponding to the first ...
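The routing chain in this method — request to first container, first container to logical container via a preset mapping, logical container to a cluster of second containers — can be sketched with three lookup tables and a round-robin fan-out. All tables and names here are invented for illustration.

```python
# Illustrative request router: domain -> first container -> logical
# container -> cluster of second containers (round-robin).

import itertools

class Router:
    def __init__(self, domain_to_first, first_to_logical, logical_to_cluster):
        self.domain_to_first = domain_to_first
        self.first_to_logical = first_to_logical
        self.logical_to_cluster = logical_to_cluster
        self._rr = {}  # logical container -> round-robin iterator

    def route(self, domain):
        first = self.domain_to_first[domain]          # parse/match step
        logical = self.first_to_logical[first]        # preset relationship
        cluster = self.logical_to_cluster[logical]    # >= two second containers
        cycle = self._rr.setdefault(logical, itertools.cycle(cluster))
        return first, next(cycle)                     # first + chosen second
```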

More
14-01-2021 publication date

DATA PROCESSING

Number: US20210011951A1
Author: MORAN Brendan James
Assignee:

A computer-processor-implemented data processing method comprises: a computer processor executing instances of one or more processing functions, each instance of a processing function having an associated function-call identifier; and in response to initiation of execution by the computer processor of a given processing function instance configured to modify one or more pointers of a partitioned acyclic data structure: the computer processor storing the function-call identifier for that processing function instance in a memory at a storage location associated with the partitioned acyclic data structure; for a memory location which stores data representing a given pointer of the partitioned acyclic data structure, the computer processor defining a period of exclusive access to at least that memory location by applying and subsequently releasing an exclusive tag for at least that memory location; and the computer processor selectively processing the given pointer during the period of exclusive access in dependence upon whether the function-call identifier of the prevailing processing function instance is identical to the function-call identifier stored in association with the partitioned acyclic data structure. 1. A computer-processor-implemented data processing method comprising: a computer processor executing instances of one or more processing functions, each instance of a processing function having an associated function-call identifier; and the computer processor storing the function-call identifier for that processing function instance in a memory at a storage location associated with the partitioned acyclic data structure; for a memory location which stores data representing a given pointer of the partitioned acyclic data structure, the computer processor defining a period of exclusive access to at least that memory location by applying and subsequently releasing an exclusive tag for at least that memory location; and the computer processor selectively ...

More
12-01-2017 publication date

Method and device for computing resource scheduling

Number: US20170012892A1
Assignee: Alibaba Group Holding Ltd

A method, apparatus and device for scheduling resources of a cluster comprising a plurality of hosts, each running at least one instance, including acquiring a resource parameter of the cluster; calculating the number of predicted hosts in the cluster according to the resource parameter; determining to-be-migrated hosts and target hosts from the current hosts in the cluster when the number of current hosts in the cluster is greater than the number of predicted hosts; and migrating instances running on the to-be-migrated host to the target host.
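The consolidation step here — compute the number of predicted hosts from a resource parameter, and if fewer hosts would suffice, pick migration sources and targets — can be sketched with simple arithmetic. The capacity model (a fixed number of instances per host) and the drain-the-emptiest heuristic are assumptions for illustration.

```python
# Back-of-envelope sketch of host consolidation: predict the host count
# from total instance load, then drain the least-loaded hosts.

import math

def plan_migration(host_instances, per_host_capacity):
    """host_instances: host name -> number of instances running on it.
    per_host_capacity: assumed instances one host can run."""
    total = sum(host_instances.values())
    predicted = math.ceil(total / per_host_capacity)
    current = len(host_instances)
    if current <= predicted:
        return [], []                                 # nothing to consolidate
    by_load = sorted(host_instances, key=host_instances.get)
    to_migrate = by_load[: current - predicted]       # drain the emptiest hosts
    targets = by_load[current - predicted:]
    return to_migrate, targets
```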

More
10-01-2019 publication date

DYNAMIC OPTIMIZATION OF SIMULATION RESOURCES

Number: US20190014012A1
Assignee:

The present invention dynamically optimizes computing resources allocated to a simulation task while it is running. It satisfies application-imposed constraints and enables the simulation application performing the simulation task to resolve inter-instance (including inter-server) dependencies inherent in executing the simulation task in a parallel processing or other HPC environment. An intermediary server platform, between the user of the simulation task and the hardware providers on which the simulation task is executed, includes a cluster service that provisions computing resources on hardware provider platforms, an application service that configures the simulation application in accordance with application-imposed constraints, an application monitoring service that monitors execution of the simulation task for computing resource change indicators (including computing resource utilization and application-specific information extracted from output files generated by the simulation application) as well as restart files, and a computing resource evaluation engine that determines when a change in computing resources is warranted. 1. A method for dynamically optimizing the provisioning of computing resources for execution of an application that performs a task having inter-instance dependencies , wherein the application , in order to execute properly and resolve the inter-instance dependencies , includes one or more application-imposed constraints requiring a pre-configuration specification of at least one of the computing resources allocated to the application , the method comprising the following steps:(a) provisioning a cluster of computing resources on a hardware provider platform for executing the application;(b) configuring the application in accordance with the application-imposed constraints, and initiating execution of the application on the provisioned cluster;(c) monitoring execution of the application for (i) a plurality of computing resource change ...

Подробнее
10-01-2019 дата публикации

DYNAMIC OPTIMIZATION OF SIMULATION RESOURCES

Номер: US20190014013A1
Принадлежит:

The present invention dynamically optimizes computing resources allocated to a simulation task while it is running. It satisfies application-imposed constraints and enables the simulation application performing the simulation task to resolve inter-instance (including inter-server) dependencies inherent in executing the simulation task in a parallel processing or other HPC environment. An intermediary server platform, between the user of the simulation task and the hardware providers on which the simulation task is executed, includes a cluster service that provisions computing resources on hardware provider platforms, an application service that configures the simulation application in accordance with application-imposed constraints, an application monitoring service that monitors execution of the simulation task for computing resource change indicators (including computing resource utilization and application-specific information extracted from output files generated by the simulation application) as well as restart files, and a computing resource evaluation engine that determines when a change in computing resources is warranted. 1. 
A method for facilitating hardware and software metering of tasks performed by a plurality of applications on behalf of a plurality of users , the method comprising the following steps:(a) provisioning a first cluster of computing resources on one of a plurality of hardware provider platforms for executing an application on behalf of a user and, in the event a change in computing resources is warranted while the application is running, provisioning a second cluster of computing resources and terminating the application on the first cluster;(b) configuring the application and initiating execution of the application on the provisioned first cluster and, in the event the second cluster is provisioned by the cluster service, reconfigures the application and resumes execution of the application on the second cluster;(c) authenticating each ...

More
10-01-2019 publication date

METHOD FOR PROVIDING SCHEDULERS IN A DISTRIBUTED STORAGE NETWORK

Number: US20190014192A1
Assignee:

A method for optimizing scheduler selection by a distributed storage (DS) unit of a dispersed storage network (DSN) begins with a DS unit receiving a dispersed storage error function from a DSN user and querying DS elements to determine measured throughput and measured latency. The method resumes when the DS unit receives measured throughput and measured latency from the DSN elements and selects a scheduler based on the measured throughput and measured latency. The method continues with the DS unit receiving a different updated measured throughput and measured latency from the DSN elements and selecting a different scheduler. 1. A method for execution by a distributed storage (DS) unit of a dispersed storage network (DSN), the method comprises: receiving, from a user of the DSN, a dispersed storage error function for execution by the DS unit; querying, by the DS unit, one or more computing devices of a set of computing devices associated with the DSN for a measured throughput of the DSN; receiving, by the DS unit, information sufficient to determine the measured throughput of the DSN; querying, by the DS unit, the one or more computing devices of the set of computing devices associated with the DSN for a measured latency of the DSN; receiving, by the DS unit, information sufficient to determine the measured latency of the DSN; based on the measured throughput and the measured latency, determining, by the DS unit, whether to use a scheduler to execute the dispersed storage error function; in response to determining to use a scheduler to execute the dispersed storage error function, selecting a first scheduler of a plurality of schedulers, wherein the first scheduler is based on both the measured throughput and the measured latency; querying, by the DS unit, one or more computing devices of a set of computing devices associated with the DSN for an updated measured throughput of the DSN; receiving, by the DS unit, the updated measured throughput of the DSN; querying, by the DS
...
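Selecting a scheduler from measured throughput and measured latency can be sketched as a table lookup over candidate schedulers, each advertising the operating region it suits. The candidate set, thresholds, and ranking order below are invented for illustration.

```python
# Sketch: pick the first scheduler whose operating region matches the
# measured throughput and latency; fall back to a default otherwise.

SCHEDULERS = [
    # (name, min_throughput, max_latency_ms)
    ("batch",       500.0, float("inf")),  # high-throughput, latency-tolerant
    ("low_latency",   0.0, 50.0),          # favors fast turnaround
    ("default",       0.0, float("inf")),  # fallback
]

def select_scheduler(measured_throughput, measured_latency_ms):
    for name, min_tp, max_lat in SCHEDULERS:
        if measured_throughput >= min_tp and measured_latency_ms <= max_lat:
            return name
    return "default"
```

Re-running the selection on updated measurements naturally yields a different scheduler when the DSN's behavior shifts, as the claim describes.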

More
03-02-2022 publication date

TASK INTEGRATION

Number: US20220035653A1
Assignee: FUJITSU LIMITED

A method may include obtaining a set of successful responses from one or more tasks, where the tasks include a specific implementation of an API call. The method may also include generating a first schema of a first response of the set of successful responses by extracting information from the first response, the first response responsive to a first task of the one or more tasks. The method may additionally include clustering the first schema with a second schema of a second response of the set of successful responses by applying a learning model to the first and second schemas, where the clustering is based on inputs to the first task and the first response, and creating an integrated task that includes the first task and a second task to which the second response is responsive based on the clustering. 1. A method comprising: obtaining a set of successful responses from one or more tasks, the one or more tasks including a specific implementation of an application programming interface (API) call; generating a first schema of a first response of the set of successful responses by extracting information from the first response, the first response responsive to a first task of the one or more tasks; clustering the first schema with a second schema of a second response of the set of successful responses by applying a learning model to the first schema and the second schema, the clustering based at least on inputs to the first task and the first response; and creating an integrated task that includes the first task and a second task to which the second response is responsive based on the clustering of the first schema and the second schema. 2. The method of claim 1, wherein the second task is configured to receive second user input to replace a second placeholder in the second task; and the integrated task utilizes the second response based on the second user input as a first input in the first task to replace a first placeholder in the first task. 3.
The method of claim 2 , ...
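The schema-generation step — extracting a type-level schema from a successful response and grouping responses whose schemas agree — can be sketched with exact-match clustering. A real system would apply a learned similarity model over schemas and task inputs, as the abstract describes; the exact-match rule here is a simplifying assumption.

```python
# Toy sketch: derive a type-level schema from a JSON-like response and
# cluster responses whose schemas match exactly.

def schema_of(response):
    if isinstance(response, dict):
        return {k: schema_of(v) for k, v in sorted(response.items())}
    return type(response).__name__

def cluster_by_schema(responses):
    clusters = {}
    for resp in responses:
        key = repr(schema_of(resp))        # hashable schema fingerprint
        clusters.setdefault(key, []).append(resp)
    return list(clusters.values())
```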

More
19-01-2017 publication date

Systems and methods relating to host configuration optimization in a data center environment

Number: US20170017513A1
Assignee: Virtustream IP Holding Co Inc

Systems and methods are disclosed for calculating and utilizing a variable CPU weighting factor for host configuration optimization in a data center environment. According to one illustrative embodiment, implementations may utilize actual workload profiles to generate variable CPU weighting factor(s) to optimize host configurations.

More
19-01-2017 publication date

Methods and systems for assigning resources to a task

Number: US20170017522A1
Assignee: Conduent Business Services LLC

According to embodiments illustrated herein there is provided a method for assigning one or more resources to a task. The method includes determining one or more workflows, comprising one or more sub-tasks in a sequence, utilizable to process the task. The method further includes determining a set of scores for each sub-task associated with each workflow based on at least a set of performance attributes of a set of resources who are available for processing each sub-task. The disclosed method further includes assigning at least a resource from the set of resources, available for processing each sub-task, based on at least one of the determined set of scores and one or more predefined requisites associated with each sub-task.
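The assignment rule described here — score each available resource on the sub-task's performance attributes, filter by the sub-task's predefined requisites, and pick the best — can be sketched as a weighted sum over attributes. The attribute names, weights, and skill-based requisite are illustrative assumptions.

```python
# Sketch: assign the best-scoring eligible resource to a sub-task.

def assign_resource(subtask, resources):
    """subtask: {'required_skill': str, 'weights': {attr: weight}}.
    resources: list of {'name', 'skills': set, <attr>: value, ...}.
    Returns the chosen resource's name, or None if none is eligible."""
    eligible = [r for r in resources if subtask["required_skill"] in r["skills"]]
    if not eligible:
        return None

    def score(r):
        # Weighted sum over the sub-task's performance attributes.
        return sum(w * r.get(attr, 0.0)
                   for attr, w in subtask["weights"].items())

    return max(eligible, key=score)["name"]
```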

More
21-01-2016 publication date

System and method for electronic work prediction and dynamically adjusting server resources

Number: US20160019094A1
Assignee: Thomson Reuters Global Resources ULC

A computer-implemented system and method facilitate dynamically allocating server resources. The system and method include determining a current queue distribution, referencing historical information associated with execution of at least one task, and predicting, based on the current queue distribution and the historical information, a total number of tasks of various task types that are to be executed during the time period in the future. Based on this prediction, a resource manager determines a number of servers that should be instantiated for use during the time period in the future.
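The prediction-and-sizing step — combine the current queue distribution with historical arrival information per task type, then derive how many servers to instantiate — reduces to simple arithmetic once a per-server processing rate is assumed. The rate model and all numbers below are illustrative.

```python
# Sketch of the sizing arithmetic: predicted task volume / per-server rate.

import math

def servers_needed(queue_by_type, historical_arrivals, per_server_rate):
    """queue_by_type / historical_arrivals: task type -> count expected
    during the planning window; per_server_rate: tasks one server can
    finish in that window."""
    predicted_total = sum(
        queue_by_type.get(t, 0) + historical_arrivals.get(t, 0)
        for t in set(queue_by_type) | set(historical_arrivals))
    # Always keep at least one server instantiated.
    return max(1, math.ceil(predicted_total / per_server_rate))
```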

More
21-01-2016 publication date

SINGLE, LOGICAL, MULTI-TIER APPLICATION BLUEPRINT USED FOR DEPLOYMENT AND MANAGEMENT OF MULTIPLE PHYSICAL APPLICATIONS IN A CLOUD INFRASTRUCTURE

Number: US20160019096A1
Assignee:

A deployment system enables a developer to define a logical, multi-tier application blueprint that can be used to create and manage (e.g., redeploy, upgrade, backup, patch) multiple applications in a cloud infrastructure. In the application blueprint, the developer models an overall application architecture, or topology, that includes individual and clustered nodes (e.g., VMs), logical templates, cloud providers, deployment environments, software services, application-specific code, properties, and dependencies between top-tier and second-tier components. The application can be deployed according to the application blueprint, which means any needed VMs are provisioned from the cloud infrastructure, and application components and software services are installed.

More
03-02-2022 publication date

METHODS AND APPARATUS FOR MAPPING SOURCE LOCATION FOR INPUT DATA TO A GRAPHICS PROCESSING UNIT

Number: US20220036498A1
Assignee:

The present disclosure relates to methods and apparatus for mapping a source location of input data for processing by a graphics processing unit. The apparatus can configure a processing element of the graphics processing unit with a predefined rule for decoding a data source parameter for executing a task by the graphics processing unit. Moreover, the apparatus can store the parameter in local storage of the processing element and configure the processing element to decode the parameter according to the at least one predefined rule to determine a source location of the input data and at least one relationship between invocations of the task. The apparatus can also load, to the local storage of the processing element, the input data from a plurality of memory addresses of the source location determined by the parameter. At least one logic unit can then execute the task on the loaded input data. 1. A method for mapping a source location of input data for processing by a graphics processing unit, the method comprising: configuring a processing element of the graphics processing unit with at least one predefined rule for decoding a data source parameter for a task to be executed by the graphics processing unit; storing the data source parameter in local storage of the processing element; decoding, by the processing element, the data source parameter according to the at least one predefined rule to determine a source location of the input data and at least one relationship between invocations of the task; loading, to the local storage of the processing element, the input data from a plurality of memory addresses of the source location that are determined by the decoded data source parameter; and executing the task, by at least one logic unit of the processing element, on the loaded input data to generate output data. 2. The method of claim 1, further comprising configuring the at least one logic unit to generate the output data based on a matrix multiply operation. 3.
The method of ...
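The address mapping this abstract describes can be illustrated with a toy decoding rule. The packed field layout (base address, per-invocation stride, element count) and the bit widths below are assumptions for illustration, not the patent's actual encoding:

```python
# A toy decoding rule of the kind the abstract describes: a packed data source
# parameter tells each invocation of a task where its input lives.
# Field layout (an assumption): base(24 bits) | stride(16 bits) | count(8 bits).

def decode_parameter(param):
    """Unpack the packed data source parameter into its three fields."""
    base = (param >> 24) & 0xFFFFFF
    stride = (param >> 8) & 0xFFFF
    count = param & 0xFF
    return base, stride, count

def source_addresses(param, invocation_id):
    """Memory addresses one invocation loads, per the decoded rule:
    consecutive invocations are related by a fixed stride."""
    base, stride, count = decode_parameter(param)
    start = base + invocation_id * stride
    return [start + i for i in range(count)]

param = (0x1000 << 24) | (16 << 8) | 4   # base 0x1000, stride 16, 4 elements
print([hex(a) for a in source_addresses(param, invocation_id=2)])
# ['0x1020', '0x1021', '0x1022', '0x1023']
```

Each invocation derives its own load addresses from the same parameter, which is what lets the rule also encode the relationship between invocations.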

17-01-2019 publication date

Scheduling of Micro-Service Instances

Number: US20190018709A1
Assignee: SAP SE

Embodiments facilitate the efficient handling of service requests by a Platform-as-a-Service (PaaS) infrastructure. The platform may comprise a central controller communicating with a plurality of execution agents on one or more hosts. The central controller may parse client requests manipulating application state (e.g., scale, start, stop app, clear) into a sequence of fine-grained instance requests (e.g., start, stop, stop all, clear) that are distributed to the application program interfaces (API) of execution agents on the platform. The central controller may assign a priority to the fine-grained requests. The priority may take into consideration one or more factors including but not limited to: request creator (user, system); operation type (start, stop, stop all, clear); instance number; sequence number of the fine grained request within the original received request; hierarchy level (organization, space); and application. Fine-grained requests may be distributed by a scheduler to a queue of the execution agent. 1. A computer-implemented method comprising:receiving, on a platform infrastructure comprising a host, a request to manipulate a state of an application running on the platform infrastructure;parsing the request into a fine-grained request;assigning a priority to the fine grained request;storing the priority; anddistributing the fine grained request and the priority to an execution agent of the host to affect an instance of the application.2. A method as in wherein the assigning the priority comprises affording a higher priority to the request manually received from an application user.3. A method as in wherein the assigning the priority comprises affording a higher priority based upon a request type.4. A method as in wherein the request type comprises stopping the instance.5. A method as in wherein the request type comprises starting the instance.6. 
A method as in wherein the assigning the priority comprises affording a higher priority based upon a ...
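The parse-and-prioritize flow described above can be sketched with a standard priority queue. The weight tables and the scale-request parser below are illustrative assumptions; the abstract names the priority factors (request creator, operation type, sequence number) but not concrete values:

```python
import heapq
from itertools import count

# Hypothetical priority weights: lower sorts first. The abstract lists the
# factors (creator, operation type, sequence) but not these numbers.
CREATOR_WEIGHT = {"user": 0, "system": 1}   # manual user requests come first
OP_WEIGHT = {"stop_all": 0, "stop": 1, "start": 2, "clear": 3}

def parse_scale_request(app, current, target, creator="user"):
    """Split a coarse 'scale' request into fine-grained instance requests,
    each tagged with (creator weight, op weight, sequence number)."""
    if target > current:
        ops = [("start", app, i) for i in range(current, target)]
    else:
        ops = [("stop", app, i) for i in range(target, current)]
    return [(CREATOR_WEIGHT[creator], OP_WEIGHT[op], seq, (op, app, idx))
            for seq, (op, app, idx) in enumerate(ops)]

queue = []
tie = count()  # tie-breaker so heap entries never compare payloads
for prio in parse_scale_request("billing", current=2, target=5):
    heapq.heappush(queue, (prio[:3], next(tie), prio[3]))

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # three 'start' requests for instances 2, 3, 4, in sequence order
```

A scheduler draining this heap would distribute each fine-grained request to the queue of the execution agent on the target host.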

17-01-2019 publication date

PERFORMING HASH JOINS USING PARALLEL PROCESSING

Number: US20190018855A1
Assignee:

Data records are joined using a computer. Data records in a first plurality of data records and a second plurality of data records are hashed. The data records in the first and second pluralities are respectively assigned to first and second groupings based on the hashes. Associated pairs of groupings from the first and second groupings are provided to a thread executing on a computer processor, and different pairs are provided to different threads. The threads operate on the pairs of groupings in parallel to determine whether to join the records in the groupings. A thread joins two data records under consideration if the hashes associated with the data records match. The joined data records are output. 1. A method of joining data records using a computer , the method comprising:identifying a first plurality of data records and a second plurality of data records;computing a hash for each data record in the first and second pluralities of data records;assigning data records of the first plurality of data records to groupings from a first set of groupings based on the computed hashes;assigning data records of the second plurality of records to groupings from a second set of groupings based on the computed hashes, each grouping in the second set of groupings associated with a respective grouping in the first set of groupings;determining, based on the hash values, whether to join respective data records of a grouping from the first set of groupings with respective data records of an associated grouping from the second set of groupings; andresponsive to determining to join respective data records of the grouping from the first set of groupings with respective data records of the associated grouping from the second set of groupings, joining the respective data records; andoutputting the joined data records.2. The method of claim 1 , wherein the data records comprise a plurality of fields having values and wherein computing a hash for each data record in the first and ...
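A minimal sketch of the described hash join, assuming four grouping pairs and Python's built-in `hash` as the hash function; as in the abstract, two records under consideration are joined when their hashes match:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

N_GROUPS = 4  # number of associated grouping pairs; an assumption

def partition(records, key):
    """Assign each record to a grouping based on the hash of its join key."""
    groups = defaultdict(list)
    for rec in records:
        h = hash(rec[key])
        groups[h % N_GROUPS].append((h, rec))
    return groups

def join_pair(left_group, right_group):
    """Join two associated groupings: records match when their hashes match."""
    by_hash = defaultdict(list)
    for h, rec in left_group:
        by_hash[h].append(rec)
    return [{**l, **r} for h, r in right_group for l in by_hash.get(h, [])]

left = [{"id": i, "name": f"n{i}"} for i in range(6)]
right = [{"id": i, "score": i * 10} for i in range(3, 9)]

lg, rg = partition(left, "id"), partition(right, "id")
# each associated pair of groupings goes to its own thread
with ThreadPoolExecutor(max_workers=N_GROUPS) as pool:
    futures = [pool.submit(join_pair, lg.get(g, []), rg.get(g, []))
               for g in range(N_GROUPS)]
    joined = [row for f in futures for row in f.result()]

print(sorted(r["id"] for r in joined))  # [3, 4, 5]
```

Because records with equal hashes always land in the same grouping pair, no thread ever needs to see another thread's groupings, which is what makes the pairs independently joinable.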

17-01-2019 publication date

Simulation Systems and Methods Using Query-Based Interest

Number: US20190018915A1
Assignee:

Methods, systems, computer-readable media, and apparatuses for query-based interest in a simulation are presented. Entities may be simulated on workers, and each entity may comprise one or more components. A simulation system may run bridges on one or more machines, and the bridges may be configured to facilitate data communications between the workers and one or more entity databases. Each worker may be assigned to a different bridge. The system may modify one or more entities to include an interest component, and the interest component may indicate a query subscription to the one or more entity databases, which may affect the communication between bridges and workers. The interest component may also or alternatively indicate a frequency for receiving, from the one or more entity databases, updates for the query subscription. 1. One or more non-transitory computer readable media storing computer executable instructions that , when executed , cause a system to perform a simulation by:simulating a plurality of entities on a plurality of workers, wherein each entity of the plurality of entities comprises one or more components;running, on one or more machines, a plurality of bridges, wherein the plurality of bridges are configured to facilitate data communications between the plurality of workers and one or more entity databases;assigning each worker of the plurality of workers to a different bridge of the plurality of bridges; andmodifying an entity of the plurality of entities to include an interest component, wherein the interest component indicates: a query subscription to the one or more entity databases, and a frequency for receiving, from the one or more entity databases, updates for the query subscription.2. The one or more non-transitory computer readable media of claim 1 , wherein modifying the entity to include the interest component is performed at a time corresponding to a time that the entity is created.3. The one or more non-transitory computer ...

16-01-2020 publication date

INTELLIGENT THREAD DISPATCH AND VECTORIZATION OF ATOMIC OPERATIONS

Number: US20200019401A1
Assignee: Intel Corporation

A mechanism is described for facilitating intelligent dispatching and vectorizing at autonomous machines. A method of embodiments, as described herein, includes detecting a plurality of threads corresponding to a plurality of workloads associated with tasks relating to a graphics processor. The method may further include determining a first set of threads of the plurality of threads that are similar to each other or have adjacent surfaces, and physically clustering the first set of threads close together using a first set of adjacent compute blocks. 21. An apparatus comprising:one or more processors including a graphics processor; anda memory for storage of cache data for threads to be processed by the one or more processors;wherein the one or more processors are to: schedule a plurality of threads corresponding to a plurality of workloads for the one or more processors, load the plurality of threads for processing, prefetch data for one or more threads of the plurality of threads during loading of the one or more threads by the one or more processors, and store the prefetched data for the one or more threads in the memory.22. The apparatus of claim 21 , wherein the prefetched data includes one or more values that are currently constant as the one or more threads are loaded.23. The apparatus of claim 21 , wherein the one or more processors are further to generate information regarding data to be used by the one or more threads, wherein the prefetching of data is based at least in part on the information.24. The apparatus of claim 23 , wherein generating information regarding data to be used by the one or more threads includes obtaining addresses for a data block to be processed by the one or more threads.25. The apparatus of claim 21 , wherein the one or more threads are to be loaded into one or more shader cores.26. 
The apparatus of claim 21 , wherein the one or more threads are loaded in a streaming multiprocessor (SM) of a plurality of SMs of ...

21-01-2021 publication date

Reducing database system query transaction delay

Number: US20210019321A1
Assignee: AT&T INTELLECTUAL PROPERTY I LP

A processing system including at least one processor may obtain a first set of performance records of a database system, train a machine learning model in accordance with the first set of performance records, where the machine learning model that is trained in accordance with the first set of performance records is configured to predict a latency of a query transaction for a designated time period, present a user interface with a plurality of settings of the database system that are user-adjustable, where the plurality of settings is associated with at least a portion of the first set of performance records, calculate a first predicted latency of a query transaction at the designated time period via the machine learning model in accordance with a set of values of the plurality of settings, and present the first predicted latency via the user interface.

21-01-2021 publication date

Distributing tensor computations across computing devices

Number: US20210019626A1
Author: Noam M. Shazeer
Assignee: Google LLC

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing tensor computations across computing devices. One of the methods includes: receiving specification data that specifies a distribution of tensor computations among a plurality of computing devices, wherein each tensor computation (i) is defined to receive, as input, one or more respective input tensors each having one or more respective input dimensions, (ii) is defined to generate, as output, one or more respective output tensors each having one or more respective output dimensions, or both, wherein the specification data specifies a respective layout for each input and output tensor that assigns each dimension of the input or output tensor to one or more of the plurality of computing devices; assigning, based on the layouts for the input and output tensors, respective device-local operations to each of the computing devices; and causing the tensor computations to be executed.
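A simplified picture of such a layout, using NumPy arrays in place of devices: one input dimension of a matrix multiply is assigned across two "devices", each of which runs the same device-local operation. The splitting helper and shapes are illustrative assumptions, not the patent's specification format:

```python
import numpy as np

def split_layout(tensor, axis, n_devices):
    """Assign one dimension of a tensor across devices (a simplified layout)."""
    return np.array_split(tensor, n_devices, axis=axis)

# y = x @ w, with the batch dimension of x laid out across 2 "devices";
# w is replicated. Each device runs the same device-local matmul.
x = np.arange(12, dtype=float).reshape(4, 3)
w = np.ones((3, 2))

shards = split_layout(x, axis=0, n_devices=2)
local_results = [shard @ w for shard in shards]   # device-local operations
y = np.concatenate(local_results, axis=0)

assert np.allclose(y, x @ w)  # distributed result matches single-device result
```

Splitting along an output-preserving dimension like this needs no cross-device communication; layouts that split a contracted dimension would additionally require a reduction across devices.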

17-01-2019 publication date

DYNAMIC OPTIMIZATION OF SIMULATION RESOURCES

Number: US20190020552A1
Assignee:

The present invention dynamically optimizes computing resources allocated to a simulation task while it is running. It satisfies application-imposed constraints and enables the simulation application performing the simulation task to resolve inter-instance (including inter-server) dependencies inherent in executing the simulation task in a parallel processing or other HPC environment. An intermediary server platform, between the user of the simulation task and the hardware providers on which the simulation task is executed, includes a cluster service that provisions computing resources on hardware provider platforms, an application service that configures the simulation application in accordance with application-imposed constraints, an application monitoring service that monitors execution of the simulation task for computing resource change indicators (including computing resource utilization and application-specific information extracted from output files generated by the simulation application) as well as restart files, and a computing resource evaluation engine that determines when a change in computing resources is warranted. 1. 
A system that facilitates hardware and software metering of tasks performed by a plurality of applications on behalf of a plurality of users , the system comprising:(a) a cluster service that provisions a first cluster of computing resources on one of a plurality of hardware provider platforms for executing an application on behalf of a user and, in the event a change in computing resources is warranted while the application is running, provisions a second cluster of computing resources and terminates the application on the first cluster;(b) an application service that configures the application and initiates execution of the application on the provisioned first cluster and, in the event the second cluster is provisioned by the cluster service, reconfigures the application and resumes execution of the application on the second cluster;( ...

17-04-2014 publication date

Dynamically allocated computing method and system for distributed node-based interactive workflows

Number: US20140108485A1
Assignee: Disney Enterprises Inc

A system and method for leveraging grid computing for node based interactive workflows is disclosed. A server system spawns a server process that receives node graph data and input attributes from a computing device, processes the data, caches the processed data, and transmits the processed data over a network to a computing device. The computing device runs a node graph application instance comprising proxy nodes configured to initiate a request to process node graph data at the server system. The server processed node graph data is displayed on the computing device. A plurality of computing devices may collaborate on a complex node graph where the node graph data processing is distributed over a plurality of servers.

25-01-2018 publication date

Task Scheduling and Resource Provisioning System and Method

Number: US20180024863A1
Author: Pradeep Jagadeesh
Assignee: Huawei Technologies Co Ltd

A computing system is provided for providing task schedules, comprising an agent manager, a schedule information database configured to store resource and/or task information, at least one configurable agent, a scheduler, wherein the agent manager is configured to submit configuration instructions to the at least one configurable agent based on configuration information received by the agent manager, wherein the at least one configurable agent is configured to monitor resources used and/or tasks executed in the computing system depending on the configuration instructions and to store resource and/or task information derived from the monitored resources and/or tasks in the schedule information database, and wherein the scheduler is configured to generate and output a task schedule based on the resource and/or task information stored in the schedule information database.

10-02-2022 publication date

FPGA ACCELERATION FOR SERVERLESS COMPUTING

Number: US20220043673A1
Assignee:

In one embodiment, a method for FPGA accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function is not able to be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement. 1. A method for hardware-accelerated serverless computing , the method comprising:receiving a definition of a serverless computing task comprising a first portion and a second portion, at least the second portion able to be accelerated by a hardware accelerator;placing the serverless computing task entirely with a first host of a plurality of hosts;executing on the first host at least the first portion of the serverless computing task;identifying, during the executing of the first portion, a second host of the plurality of hosts having a hardware accelerator to execute the second portion;placing the second portion with the identified second host; andexecuting the second portion on the hardware accelerator on the identified second host.2. The method of claim 1 , wherein the hardware accelerator comprises a graphics processing unit.3. The method of claim 1 , wherein the hardware accelerator comprises a field programmable gate array.4. The method of claim 1 , wherein the second portion of the serverless computing task comprises a machine learning script.5. The method of claim 1 , wherein the first host lacks a hardware accelerator suitable for accelerating the second portion of the ...
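The two-stage placement can be sketched as follows. The host names, the load-based scoring used for the initial placement, and the `wants` field on each function are all invented for illustration:

```python
# Toy scheduler following the abstract: place the whole task on a first host,
# then give any function that host cannot accelerate a supplemental placement
# on a host that can. Host specs and selection criteria are assumptions.

hosts = {
    "host-a": {"load": 0.2, "accelerators": set()},     # lightly loaded, no FPGA
    "host-b": {"load": 0.9, "accelerators": {"fpga"}},  # busy, has an FPGA
}

task = [
    {"name": "parse", "wants": None},
    {"name": "train", "wants": "fpga"},  # benefits from FPGA acceleration
]

def initial_placement(hosts):
    """First optimal host for the task as a whole: here, the least loaded."""
    return min(sorted(hosts), key=lambda h: hosts[h]["load"])

def supplemental_placements(task, hosts, first):
    """Re-place each function the first host cannot accelerate."""
    placements = {}
    for f in task:
        need = f["wants"]
        if need and need not in hosts[first]["accelerators"]:
            for h, spec in hosts.items():
                if need in spec["accelerators"]:
                    placements[f["name"]] = h
                    break
    return placements

first = initial_placement(hosts)
extra = supplemental_placements(task, hosts, first)
print(first, extra)  # host-a {'train': 'host-b'}
```

The task as a whole stays on `host-a`, while only the accelerable function migrates, which mirrors the initial-plus-supplemental split in the abstract.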

10-02-2022 publication date

Heterogeneous Scheduling for Sequential Compute DAG

Number: US20220043688A1
Author: Guofang Jiao, Shouwen Lai
Assignee: Huawei Technologies Co Ltd

Embodiments of this disclosure provide techniques for splitting a DAG computation model and constructing sub-DAG computation models for inter-node parallel processing. In particular, a method is provided where a plurality of processors split the DAG computation into a plurality of non-interdependent sub-nodes within each respective node of the DAG computation model. The plurality of processors includes at least two different processing unit types. The plurality of processors construct a plurality of sub-DAG computations, each sub-DAG computation including at least a non-interdependent sub-node from different nodes of the DAG computation. The plurality of processors process each of the plurality of sub-DAG computations in parallel.
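One way to picture the split: each node's work items are divided into non-interdependent sub-nodes, and sub-DAG *i* collects the *i*-th sub-node from every node of the original DAG. The 2-way split and the node contents below are assumptions for illustration:

```python
# Sketch of splitting each node of a sequential DAG into non-interdependent
# sub-nodes and regrouping them into sub-DAGs that can run in parallel on
# different processor types (CPU, GPU, ...).

def split_node(node, n_parts):
    """Split a node's work items into n non-interdependent sub-nodes
    (round-robin split; assumes the items are independent)."""
    return [node[i::n_parts] for i in range(n_parts)]

def build_sub_dags(dag, n_parts):
    """Sub-DAG i takes the i-th sub-node from every node of the original DAG,
    so each sub-DAG preserves the original sequential structure."""
    split = [split_node(node, n_parts) for node in dag]
    return [[node_parts[i] for node_parts in split] for i in range(n_parts)]

# a 3-node sequential DAG; each node is a list of independent work items
dag = [["a0", "a1"], ["b0", "b1"], ["c0", "c1"]]
sub_dags = build_sub_dags(dag, n_parts=2)
print(sub_dags)  # [[['a0'], ['b0'], ['c0']], [['a1'], ['b1'], ['c1']]]
```

Each sub-DAG is still sequential internally, but the two sub-DAGs have no cross-dependencies, so they can be processed in parallel, one per processing unit type.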

10-02-2022 publication date

PARALLELIZED SEGMENT GENERATION VIA KEY-BASED SUBDIVISION IN DATABASE SYSTEMS

Number: US20220043690A1
Assignee: Ocient Holdings LLC

A method for execution by a record processing and storage system includes assigning each of a plurality of key space sub-intervals of a cluster key domain to a corresponding one of a plurality of processing core resources, and generating a plurality of segments from the set of records via the plurality of processing core resources. Each processing core resource in the plurality of processing core resources generates a subset of the plurality of segments by identifying a proper subset of the set of records based on having cluster key values included in a corresponding one of the plurality of key space sub-intervals, and by generating the subset of the plurality of segments to include the proper subset of the set of records. 1. A method for execution by a record processing and storage system , comprising:assigning each of a plurality of key space sub-intervals of a cluster key domain spanned by a plurality of cluster key values of a set of records to a corresponding one of a plurality of processing core resources; andgenerating a plurality of segments from the set of records via the plurality of processing core resources, wherein each processing core resource in the plurality of processing core resources generates a subset of the plurality of segments by: identifying, via each processing core resource, a proper subset of the set of records based on having cluster key values included in a corresponding one of the plurality of key space sub-intervals; and generating, via the each processing core resource, the subset of the plurality of segments to include the proper subset of the set of records.2. The method of claim 1 , further comprising segregating the cluster key domain into the plurality of key space sub-intervals.3. The method of claim 2 , further comprising:determining a selected number of key space sub-intervals to be generated based on a number of processing core resources in the plurality of processing core resources;wherein the cluster key domain is ...
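A sketch of the key-based subdivision, assuming an integer cluster key, equal-width sub-intervals, and a fixed segment size (none of which the abstract specifies):

```python
from concurrent.futures import ThreadPoolExecutor

SEGMENT_SIZE = 3  # records per segment; an illustrative choice

def sub_intervals(lo, hi, n_cores):
    """Segregate the cluster key domain into equal key space sub-intervals,
    one per processing core resource."""
    step = (hi - lo) / n_cores
    return [(lo + i * step, lo + (i + 1) * step) for i in range(n_cores)]

def build_segments(records, interval):
    """One core's work: keep only records whose cluster key falls in its
    sub-interval, then pack them, in key order, into fixed-size segments."""
    lo, hi = interval
    mine = sorted((r for r in records if lo <= r["key"] < hi),
                  key=lambda r: r["key"])
    return [mine[i:i + SEGMENT_SIZE] for i in range(0, len(mine), SEGMENT_SIZE)]

records = [{"key": k} for k in range(10)]
intervals = sub_intervals(0, 10, n_cores=2)
with ThreadPoolExecutor(max_workers=2) as pool:
    per_core = pool.map(lambda iv: build_segments(records, iv), intervals)
    segments = [seg for core_segs in per_core for seg in core_segs]

print(len(segments))  # 4 segments: keys [0,1,2], [3,4], [5,6,7], [8,9]
```

Because each record's cluster key determines exactly one sub-interval, the cores never contend for the same records and can build their segments fully in parallel.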

10-02-2022 publication date

Intelligent scaling in microservice-based distributed systems

Number: US20220043699A1
Assignee: Kyndryl Inc

In an approach to intelligent scaling in a cloud platform, an attribute template is stored for one or more target services based on one or more system data. One or more request metrics for each target service is stored, wherein the request metrics are based on an analysis of one or more incoming requests of one or more service call chains. Responsive to receiving a request for a target service in a service call chain, the target service is scaled based on the attribute template of the target service and the request metrics of the target service.

24-01-2019 publication date

FPGA acceleration for serverless computing

Number: US20190026150A1
Assignee: Cisco Technology Inc

In one embodiment, a method for FPGA accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function is not able to be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.

24-01-2019 publication date

SYSTEM AND METHOD FOR OBTAINING APPLICATION INSIGHTS THROUGH SEARCH

Number: US20190026295A1
Assignee:

A system and method includes receiving, by a search computing system of a virtual computing system, a search query, converting the search query into a structured query, and identifying at least one of a configured metric, a learned metric, and a correlation from the structured query. The configured metric, learned metric, and correlation are based upon a particular metric associated with a component of the virtual computing system. The configured metric is obtained by applying filters to the particular metric, the learned metric is based upon a frequency of presence of the particular metric in the search query, and the correlation is based upon a pattern formed by the search query in conjunction with a subset of prior search queries. The system and method further include displaying data related to the particular metric, such that the data is based upon the configured metric, the learned metric, and the correlation. 1. A method comprising:receiving, by a search computing system of a virtual computing system, a search query via a search interface;converting, by the search computing system, the search query into a structured query;identifying, by the search computing system, at least one of a configured metric, a learned metric, and a correlation from the structured query, wherein the configured metric, the learned metric, and the correlation are based upon a particular metric associated with a component of the virtual computing system, andwherein the configured metric is obtained by applying one or more filters to the particular metric, the learned metric is based upon a frequency of presence of the particular metric in the search query, and the correlation is based upon a pattern formed by the search query in conjunction with a subset of prior search queries; anddisplaying, by the search computing system, data related to the particular metric on the search interface, wherein the data is based upon the configured metric, the learned metric, and the correlation ...

23-01-2020 publication date

ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF

Number: US20200026977A1
Author: KIM Jaedeok, LEE Jongryul
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method for controlling an electronic apparatus includes storing a plurality of artificial intelligence models in a first memory, based on receiving a control signal for loading a first artificial intelligence model among the plurality of stored artificial intelligence models into a second memory, identifying an available memory size of the second memory, and based on a size of the first artificial intelligence model being larger than the available memory size of the second memory, obtaining a first compression artificial intelligence model by compressing the first artificial intelligence model based on the available memory size of the second memory, and loading the first compression artificial intelligence model into the second memory. 1. A method for controlling an electronic apparatus , the method comprising:storing a plurality of artificial intelligence models in a first memory;based on receiving a control signal for loading a first artificial intelligence model among the plurality of stored artificial intelligence models into a second memory, identifying an available memory size of the second memory; andbased on a size of the first artificial intelligence model being larger than the available memory size of the second memory, obtaining a first compression artificial intelligence model by compressing the first artificial intelligence model based on the available memory size of the second memory, and loading the first compression artificial intelligence model into the second memory.2. The method as claimed in claim 1 , wherein the loading comprises:identifying whether a performance of the first compression artificial intelligence model satisfies a predetermined condition;based on the performance of the first compression artificial intelligence model satisfying the predetermined condition, loading the first compression artificial intelligence model into the second memory; andbased on the first compression artificial intelligence model not satisfying the ...
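The control flow of the abstract, reduced to a toy model loader. The sizes, the compression stand-in, and the accuracy threshold used as the "predetermined condition" are all assumptions:

```python
# Toy version of the control flow: before loading a model into the "second
# memory", check the available size and compress the model if it is too big,
# then verify the compressed model still meets a performance condition.

class Memory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.loaded = {}

    def available(self):
        return self.capacity - sum(self.loaded.values())

def compress(model, target_size):
    """Stand-in for pruning/quantization: shrink to the target size at some
    accuracy cost (the 0.02 penalty is an invented number)."""
    return {**model, "size": target_size, "accuracy": model["accuracy"] - 0.02}

def load_model(model, mem, min_accuracy=0.9):
    if model["size"] > mem.available():
        model = compress(model, mem.available())
        if model["accuracy"] < min_accuracy:   # the predetermined condition
            raise MemoryError("compressed model no longer meets the target")
    mem.loaded[model["name"]] = model["size"]
    return model

mem = Memory(capacity=100)
m = load_model({"name": "asr", "size": 160, "accuracy": 0.95}, mem)
print(m["size"], round(m["accuracy"], 2))  # 100 0.93
```

The 160-unit model does not fit in the 100-unit memory, so it is compressed to fit and loaded only because its post-compression accuracy still clears the threshold.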

28-01-2021 publication date

SCHEDULING OF A PLURALITY OF GRAPHIC PROCESSING UNITS

Number: US20210026696A1
Author: CHEN Qingcha, Zhang Wenjin
Assignee:

The present disclosure provides a method and apparatus for scheduling a plurality of available graphics processing units (GPUs). Multiple GPU pools may be set, wherein each GPU pool is configured to serve one or more jobs requiring the same number of GPUs. Available GPUs may be assigned to each GPU pool. A job and job information related to the job may be received, wherein the job information indicates a number of GPUs required for performing the job. A corresponding GPU pool may be selected from the multiple GPU pools based at least on the job information. Available GPUs to be scheduled to the job in the selected GPU pool may be determined based at least on the job information. In addition, the determined available GPUs may be scheduled to the job. 1. A method for scheduling a plurality of available graphic processing units (GPUs) , the method comprising:setting multiple GPU pools, wherein each GPU pool is configured to serve one or more jobs requiring the same number of GPUs;assigning available GPUs to each GPU pool;receiving a job and job information related to the job, wherein the job information indicates a number of GPUs required for performing the job;selecting a corresponding GPU pool from the multiple GPU pools based at least on the job information;determining available GPUs to be scheduled to the job in the selected GPU pool based at least on the job information; andscheduling the determined available GPUs to the job.2. The method of claim 1 , whereinthe multiple GPU pools includes a reserved pool, andassigning available GPUs to each GPU pool further comprises: assigning at least one available GPU to the reserved pool as reserved GPUs, wherein the reserved GPUs are configured to be dedicated to serve jobs with high priority and/or configured to be shared by the reserved pool and other pools in the multiple GPU pools.3. The method of claim 2 , further comprising:when the reserved GPUs are configured to be shared by the reserved pool and the ...
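A minimal sketch of the pooling scheme, with one pool per required-GPU count plus a reserved pool for high-priority jobs; the pool sizes and GPU names are arbitrary:

```python
# Each pool serves jobs that require the same number of GPUs; a "reserved"
# pool is dedicated to high-priority jobs. Pool assignments are assumptions.

class GpuPools:
    def __init__(self, assignments):
        # e.g. {1: ["g0", "g1"], 2: ["g2", ...], "reserved": ["g6"]}
        self.pools = {k: list(v) for k, v in assignments.items()}

    def schedule(self, gpus_needed, high_priority=False):
        """Select the pool matching the job info, then grant GPUs from it."""
        pool_key = "reserved" if high_priority else gpus_needed
        pool = self.pools.get(pool_key, [])
        if len(pool) < gpus_needed:
            return None  # no capacity in the selected pool
        granted, self.pools[pool_key] = pool[:gpus_needed], pool[gpus_needed:]
        return granted

pools = GpuPools({1: ["g0", "g1"],
                  2: ["g2", "g3", "g4", "g5"],
                  "reserved": ["g6"]})
first_grant = pools.schedule(2)                      # from the 2-GPU pool
reserved_grant = pools.schedule(1, high_priority=True)  # from the reserved pool
print(first_grant, reserved_grant)  # ['g2', 'g3'] ['g6']
```

Keeping same-sized requests in the same pool avoids the fragmentation that occurs when jobs of mixed GPU counts carve GPUs out of one shared free list.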

28-01-2021 publication date

MANAGING A CONTAINERIZED APPLICATION IN A CLOUD SYSTEM BASED ON USAGE

Number: US20210026700A1
Assignee:

Examples described relate to managing a containerized application in a cloud system based on usage. In an example, a determination may be made whether a containerized application in a cloud system has not been used for a pre-defined period of time. In response to the determination, the containerized application may be deleted. Information related to the containerized application may be stored in a database. In response to a future user request to access the containerized application, a new containerized application may be created in the cloud system based on the information related to the containerized application in the database. A new public service endpoint may be associated with the new containerized application in the cloud system. User access to the new containerized application in the cloud system may be allowed via the new public service endpoint. 1. A method , comprising:determining whether a containerized application in a cloud system has not been used for a pre-defined period of time;in response to a determination that the containerized application in the cloud system has not been used for the pre-defined period of time, deleting the containerized application in the cloud system;storing information related to the containerized application in a database;in response to a future user request to access the containerized application, creating a new containerized application in the cloud system based on the information related to the containerized application in the database;associating a new public service endpoint with the new containerized application in the cloud system; andallowing user access to the new containerized application in the cloud system via the new public service endpoint.2. The method of claim 1 , wherein the information related to the containerized application comprises information related to a public service endpoint associated with the containerized application.3. 
The method of claim 1 , wherein the information related to ...

28-01-2021 publication date

COMPUTER-IMPLEMENTED METHOD AND APPARATUS FOR PLANNING RESOURCES

Number: US20210026701A1
Assignee:

A computer-implemented method for planning resources, in particular computing-time resources, of a computing device having at least one computing core, for execution of tasks. The method includes the following steps: furnishing a plurality of containers, a priority being associatable or associated with each container; associating at least one task with at least one of the containers; and associating each container with the at least one computing core. 1. A computer-implemented method for planning computing-time resources of a computing device having at least one computing core for execution of tasks , comprising the following steps:furnishing a plurality of containers, a priority being associated with each of the containers;associating at least one of the tasks with at least one of the containers; andassociating each of the containers with the at least one computing core.2. The method as recited in claim 1 , wherein each of the containers has a respective static priority associated with it.3. The method as recited in claim 1 , wherein at least one of the containers has a resource budget associated with it, the resource budget characterizing computing-time resources for tasks associated with the container.4. The method as recited in claim 1 , wherein at least one of the containers has a budget replenishment strategy associated with it.5. The method as recited in claim 3 , wherein the at least one of the containers has a budget replenishment strategy associated with it, the budget replenishment strategy characterizing at least one of the following elements: a) a point in time of a replenishment of the resource budget associated with the container; b) an extent of the replenishment of the resource budget associated with the container.6. The method as recited in claim 3 , wherein the resource budget is replenished periodically and/or at static, specified points in time and/or depending on a previous consumption of computing-time resources ...
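The container model can be sketched as a tiny scheduler: each container carries a static priority and a computing-time budget that a replenishment strategy resets periodically. All numbers below are illustrative assumptions:

```python
# Sketch of container-based computing-time planning: tasks live in containers;
# the scheduler always runs the highest-priority container that still has
# budget, and a replenishment strategy refills budgets at period boundaries.

class Container:
    def __init__(self, name, priority, budget):
        self.name, self.priority = name, priority      # static priority
        self.budget = self.initial_budget = budget     # computing-time budget
        self.tasks = []

def pick(containers):
    """Choose the highest-priority container (lowest number) with budget
    left and at least one runnable task."""
    ready = [c for c in containers if c.budget > 0 and c.tasks]
    return min(ready, key=lambda c: c.priority) if ready else None

def replenish(containers):
    """Budget replenishment strategy: full refill at the period boundary."""
    for c in containers:
        c.budget = c.initial_budget

a = Container("control", priority=0, budget=2)
b = Container("logging", priority=5, budget=4)
a.tasks, b.tasks = ["t1"], ["t2"]

ran = []
for _ in range(4):          # 4 scheduling ticks, one budget unit each
    c = pick([a, b])
    ran.append(c.name)
    c.budget -= 1

replenish([a, b])           # at the period boundary, budgets reset
print(ran, a.budget)  # ['control', 'control', 'logging', 'logging'] 2
```

The budget is what keeps the high-priority container from starving the low-priority one: once `control` exhausts its two units, `logging` runs until the next replenishment.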

10-02-2022 publication date

Simple Integration of an On-Demand Compute Environment

Number: US20220045965A1
Author: Jackson David B.
Assignee: III Holdings 12, LLC

Disclosed are a system and method of integrating an on-demand compute environment into a local compute environment. The method includes receiving a request from an administrator to integrate an on-demand compute environment into a local compute environment and, in response to the request, automatically integrating local compute environment information with on-demand compute environment information to make available resources from the on-demand compute environment to requesters of resources in the local compute environment such that policies of the local environment are maintained for workload that consumes on-demand compute resources. 1-17. (canceled) 18. A method comprising: detecting passing of a service-based threshold or a policy-based threshold in a local compute environment; upon the detecting of the passing of the service-based threshold or the policy-based threshold, causing local workload information to be routed to a remote compute environment to enable selection of compute resources of the remote compute environment that are compatible with the local workload information; and causing workload associated with a job to be transferred from the local compute environment to the remote compute environment to yield transferred workload. 19. The method of claim 18, wherein the compute resources from the remote compute environment are utilized and appear locally. 20. The method of claim 18, further comprising providing notice to a requestor that the workload associated with the job has been transferred to the remote compute environment. 21. The method of claim 18, wherein the request is a one-click request received via an input device, the request being for access to the remote compute environment. 22. The method of claim 18, further comprising: causing a plurality of options related to available modifications to the compute resources within the remote compute environment to be presented on an output device. 23. The method of claim 22, further comprising ...
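The threshold-triggered routing of claim 18 can be illustrated with a small sketch. The `route_workload` function and its keep-as-much-as-fits-under-the-threshold policy are hypothetical stand-ins for whatever policies the local environment actually enforces.

```python
def route_workload(local_load, threshold, jobs):
    """Return (local_jobs, remote_jobs). Once the service-based or
    policy-based threshold is passed, overflow jobs are routed to the
    remote (on-demand) compute environment."""
    if local_load <= threshold:
        return jobs, []
    # Illustrative policy: keep the fraction of jobs that fits under the
    # threshold locally; transfer the rest.
    keep = max(0, int(len(jobs) * threshold / local_load))
    return jobs[:keep], jobs[keep:]

# Local load 50% over threshold: one of three jobs is transferred.
local, remote = route_workload(local_load=1.5, threshold=1.0,
                               jobs=["j1", "j2", "j3"])
assert local == ["j1", "j2"] and remote == ["j3"]
```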

28-01-2021 publication date

Simulation Systems and Methods

Number: US20210027000A1
Assignee:

Methods, systems, computer-readable media, and apparatuses for performing, providing, managing, executing, and/or running a spatially-optimized simulation are presented. In one or more embodiments, the spatially-optimized simulation may comprise a plurality of worker modules performing the simulation, a plurality of entities being simulated among the plurality of worker modules, a plurality of bridge modules facilitating communication between workers and an administrative layer including a plurality of chunk modules, at least one receptionist module, and at least one oracle module. The spatially-optimized simulation may be configured to provide a distributed, persistent, fault-tolerant, and spatially-optimized simulation environment. In some embodiments, load balancing and fault tolerance may be performed using transfer scores and/or tensile energies determined among the candidates for transferring simulation entities among workers. In some embodiments, the plurality of bridge modules may expose an application programming interface (API) for communicating with the plurality of worker modules. 1.
A spatially-optimized simulation system comprising: at least one computer processor controlling some operations of the system; a plurality of entities being simulated, wherein each entity of the plurality of entities comprises one or more components, and wherein each of the one or more components comprises one or more properties; a plurality of worker modules that each performs a subset of the spatially-optimized simulation, wherein each worker module of the plurality of worker modules is configured to instantiate a subset of the plurality of entities and is further configured to update the one or more properties of at least one component of the one or more components of each entity in the subset of the plurality of entities instantiated by the worker module; and a plurality of chunk modules, wherein each chunk module of the plurality ...
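One way such a system might partition entities among workers by spatial chunk can be sketched as follows. The `chunk_of` and `assign_entities` helpers and the round-robin chunk ownership are illustrative assumptions for this example, not the system's actual API.

```python
def chunk_of(position, chunk_size):
    """Map a 2-D position to the integer coordinates of its spatial chunk."""
    x, y = position
    return (int(x // chunk_size), int(y // chunk_size))

def assign_entities(entities, workers, chunk_size):
    """Assign each entity to the worker owning its chunk; chunks are handed
    out round-robin to workers as they are first seen."""
    assignment = {}
    chunk_owner = {}
    for name, pos in entities.items():
        c = chunk_of(pos, chunk_size)
        if c not in chunk_owner:
            chunk_owner[c] = workers[len(chunk_owner) % len(workers)]
        assignment[name] = chunk_owner[c]
    return assignment

entities = {"e1": (1.0, 1.0), "e2": (1.5, 0.5), "e3": (9.0, 9.0)}
a = assign_entities(entities, workers=["w0", "w1"], chunk_size=4.0)
assert a["e1"] == a["e2"]   # same chunk, so same worker
assert a["e3"] != a["e1"]   # far-away entity lands on another worker
```

Keeping spatially adjacent entities on one worker is what lets each worker module simulate its subset with mostly local interactions.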

28-01-2021 publication date

PARALLEL PROCESSING FOR SIGNAL GENERATION NEURAL NETWORKS

Number: US20210027153A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for executing a signal generation neural network on parallel processing hardware. One of the methods includes receiving weight matrices of a layer of a signal generation neural network. Rows of a first matrix for the layer are interleaved by assigning groups of rows of the first matrix to respective thread blocks of a plurality of thread blocks. A first subset of rows of the one or more other weight matrices are assigned to a first subset of the plurality of thread blocks and a second subset of rows of the one or more other weight matrices are assigned to a second subset of the plurality of thread blocks. The first matrix operation is performed substantially in parallel by the plurality of thread blocks. The other matrix operations are performed substantially in parallel by the plurality of thread blocks. 1. A method comprising: receiving weight matrices of a layer of a plurality of layers of a signal generation neural network, wherein each layer of one or more layers in the neural network has a residual connection to a subsequent layer, and wherein each layer of the plurality of layers has a skip connection, wherein for each layer a respective first weight matrix comprises values for a first matrix operation of the layer, and one or more other weight matrices comprise values for one or more other matrix operations of the layer, wherein the one or more other matrix operations depend on a result of the first matrix operation; interleaving rows of the first weight matrix for the layer by assigning groups of rows of the first weight matrix to respective thread blocks of a plurality of thread blocks, each thread block being a computation unit for execution by an independent processing unit of a plurality of independent processing units of a parallel processing device, each independent processing unit being a streaming multiprocessor; receiving, by the layer, an input ...
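The row interleaving the abstract describes can be mimicked in plain Python. The round-robin `interleave_rows` assignment and the sequential loop standing in for parallel streaming multiprocessors are assumptions of this sketch; real thread blocks would execute their row groups concurrently on the GPU.

```python
def interleave_rows(num_rows, num_blocks):
    """Assign rows of the first weight matrix round-robin, so each thread
    block holds an interleaved group of rows."""
    return {b: list(range(b, num_rows, num_blocks)) for b in range(num_blocks)}

def blockwise_matvec(W, x, num_blocks):
    """Each 'thread block' computes only its interleaved rows of W @ x.
    The blocks run sequentially here, standing in for parallel hardware."""
    y = [0.0] * len(W)
    for block, rows in interleave_rows(len(W), num_blocks).items():
        for r in rows:
            y[r] = sum(w * v for w, v in zip(W[r], x))
    return y

W = [[1.0, 0.0], [0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
x = [10.0, 1.0]
# Two blocks: block 0 owns rows [0, 2], block 1 owns rows [1, 3].
assert interleave_rows(4, 2) == {0: [0, 2], 1: [1, 3]}
assert blockwise_matvec(W, x, num_blocks=2) == [10.0, 1.0, 23.0, 45.0]
```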

24-01-2019 publication date

METHODS AND APPARATUS TO OPTIMIZE MEMORY ALLOCATION IN RESPONSE TO A STORAGE REBALANCING EVENT

Number: US20190028400A1
Assignee:

Methods and apparatus to optimize memory allocation in response to a storage rebalancing event are disclosed. An example apparatus includes a telematics agent to detect a rebalancing event based on metadata; and a decision engine to identify a cluster corresponding to the rebalancing event by processing the metadata and increase a number of jumbo buffers in a network switch corresponding to the cluster in response to the rebalancing event. 1. An apparatus comprising: a telematics agent to detect a rebalancing event based on metadata; and a decision engine to identify a cluster corresponding to the rebalancing event by processing the metadata, and increase a number of jumbo buffers in a network switch corresponding to the cluster in response to the rebalancing event. 2. The apparatus of claim 1, wherein the telematics agent is to detect the rebalancing event by: intercepting metadata packets from the network switches; and analyzing the metadata packets to identify a rebalance signature. 3. The apparatus of claim 2, wherein the telematics agent is to transmit the rebalance signature to the decision engine. 4. The apparatus of claim 3, wherein the decision engine is to identify the cluster based on the rebalance signature. 5. The apparatus of claim 1, further including a resource configuration agent, the decision engine to increase the number of jumbo buffers in the network switch corresponding to the cluster by transmitting instructions to the resource configuration agent to reallocate buffer resources of the network switch to increase the first number of jumbo buffers and decrease a second number of non-jumbo buffers. 6. The apparatus of claim 1, wherein the decision engine is to update a system profile in response to increasing the number of jumbo buffers, the system profile including at least one of a quality of service profile or an equal-cost multi-path profile. 7. The apparatus of claim 1, wherein the decision engine is to, in ...
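A toy version of the telematics-agent/decision-engine split might look like the sketch below. The `OBJECT_REBALANCE` signature string, the packet dictionaries, and the double-the-jumbo-count policy with a fixed buffer total are all invented for illustration; the patent does not specify these details.

```python
REBALANCE_SIGNATURE = "OBJECT_REBALANCE"   # hypothetical metadata marker

def detect_rebalance(metadata_packets):
    """Telematics-agent side: scan intercepted metadata packets for the
    rebalance signature and report the affected clusters."""
    return [p["cluster"] for p in metadata_packets
            if p.get("event") == REBALANCE_SIGNATURE]

def adjust_buffers(switch, total=48):
    """Decision-engine side: grow jumbo buffers and shrink non-jumbo
    buffers so the total buffer count on the switch stays fixed."""
    switch["jumbo"] = min(total, switch["jumbo"] * 2)
    switch["non_jumbo"] = total - switch["jumbo"]
    return switch

packets = [{"event": "HEARTBEAT", "cluster": "c1"},
           {"event": REBALANCE_SIGNATURE, "cluster": "c2"}]
switches = {"c2": {"jumbo": 8, "non_jumbo": 40}}
for cluster in detect_rebalance(packets):
    adjust_buffers(switches[cluster])
assert switches["c2"] == {"jumbo": 16, "non_jumbo": 32}
```

Favoring jumbo buffers during a rebalance matches the intuition that storage rebalancing traffic consists mostly of large frames.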

24-01-2019 publication date

COMPUTER-IMPLEMENTED SYSTEMS AND METHODS OF ANALYZING DATA IN AN AD-HOC NETWORK FOR PREDICTIVE DECISION-MAKING

Number: US20190028534A1
Assignee: TransVoyant, Inc.

A computer-implemented system and method of predictive decision-making in an ad hoc network. The computer-implemented method includes receiving a set of rules into the ad hoc network and identifying a data set for each rule. The computer-implemented method also includes selecting a first and second node from the ad hoc network to process a first and second rule as a function of the identified data set according to an optimizing algorithm. The computer-implemented method also selects a third node to receive the processed results from the first and second nodes. An indication is provided of the processed results by the third node. 1-20. (canceled) 21. A computer-implemented method for predictive decision-making, comprising: receiving a set of rules into an ad-hoc network having distributed nodes, wherein each distributed node is capable of processing data as a function of one or more rules in the set of rules; for each rule of the received set of rules, identifying a respective potential candidate data set including spatial, temporal, or contextual data elements that are a respective potential candidate for the respective rule; selecting at least two nodes from the distributed nodes as a function of the respective identified potential candidate data set for each rule of the received set of rules; identifying, using at least one of the selected nodes, candidate spatial, temporal, or contextual data elements received by the respective selected node for at least one rule of the received set of rules; spatially, temporally, or contextually indexing, using at least one of the selected nodes, the respective identified candidate data elements for the at least one rule in memory as a function of the received set of rules; and changing a display of at least one of the plurality of distributed nodes based on the indexed candidate data elements. 22. The method of claim 21, wherein the selected node performing the step of identifying candidate spatial, temporal, or ...
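Selecting nodes as a function of their candidate data sets could, under a simple greedy reading of the optimizing algorithm, look like the sketch below. The rule names, node names, and the pick-the-node-with-the-most-matching-elements heuristic are hypothetical; the patent only requires selection "as a function of" the identified data sets.

```python
def select_nodes(rules, node_data):
    """For each rule, greedily pick the node holding the most candidate
    data elements (spatial/temporal/contextual) for that rule."""
    selection = {}
    for rule, needed in rules.items():
        best = max(node_data, key=lambda n: len(needed & node_data[n]))
        selection[rule] = best
    return selection

# Two rules, two nodes with different locally held data elements.
rules = {"geofence": {"lat", "lon"}, "deadline": {"eta", "timestamp"}}
node_data = {"n1": {"lat", "lon", "speed"}, "n2": {"eta", "timestamp"}}
assert select_nodes(rules, node_data) == {"geofence": "n1", "deadline": "n2"}
```

Processing each rule where its data already resides is what avoids shipping raw spatial/temporal streams across the ad hoc network.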

23-01-2020 publication date

PERFORMING OPTIMIZED COLLECTIVE OPERATIONS IN AN IRREGULAR SUBCOMMUNICATOR OF COMPUTE NODES IN A PARALLEL COMPUTER

Number: US20200028891A1
Assignee:

In a parallel computer, performing optimized collective operations in an irregular subcommunicator of compute nodes may be carried out by: identifying, within the irregular subcommunicator, regular neighborhoods of compute nodes; selecting, for each neighborhood from the compute nodes of the neighborhood, a local root node; assigning each local root node to a node of a neighborhood-wide tree topology; mapping, for each neighborhood, the compute nodes of the neighborhood to a local tree topology having, at its root, the local root node of the neighborhood; and performing a one way, rooted collective operation within the subcommunicator including: performing, in one phase, the collective operation within each neighborhood; and performing, in another phase, the collective operation amongst the local root nodes. 1. A method of performing optimized collective operations in an irregular subcommunicator of compute nodes in a parallel computer, the method comprising: identifying, within the irregular subcommunicator, regular neighborhoods of compute nodes, wherein identifying regular neighborhoods of compute nodes comprises establishing, by each respective compute node of the irregular subcommunicator, at least one logical plane that includes the respective compute node, wherein establishing the at least one logical plane comprises: identifying, in a positive direction of a first dimension, each logical plane that includes the respective compute node and a first compute node of the irregular subcommunicator that is one or more hops away from the respective compute node in a positive direction of a second dimension, wherein the second dimension is orthogonal to the first dimension; and identifying, in a negative direction of the first dimension, each logical plane that includes the respective compute node and a second compute node of the irregular subcommunicator that is one or more hops away from the respective compute node in the positive direction of the second dimension; ...
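The two-phase rooted collective described above (reduce within each regular neighborhood onto its local root, then reduce across the local roots) can be sketched for a sum reduction. The neighborhood lists are illustrative; in the patented scheme each inner list would live on the compute nodes of one regular neighborhood.

```python
def two_phase_sum(neighborhoods):
    """Two-phase rooted reduction: phase 1 reduces the values within each
    regular neighborhood onto its local root; phase 2 reduces across the
    local roots to produce the subcommunicator-wide result."""
    local_root_values = [sum(values) for values in neighborhoods]  # phase 1
    return sum(local_root_values)                                  # phase 2

# Three regular neighborhoods of an irregular subcommunicator.
neighborhoods = [[1, 2, 3], [4, 5], [6]]
assert two_phase_sum(neighborhoods) == 21
```

Splitting the reduction this way keeps phase 1 entirely on the regular, well-shaped neighborhoods, so only the small set of local roots has to deal with the subcommunicator's irregular shape.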
