
Search form

Supports entering multiple search phrases (one per line). The search supports Russian and English morphology.

Total found: 6439. Displayed: 100.
19-07-2012 publication date

Optimizing The Deployment Of A Workload On A Distributed Processing System

Number: US20120185867A1
Assignee: International Business Machines Corp

Optimizing the deployment of a workload on a distributed processing system, the distributed processing system having a plurality of nodes, each node having a plurality of attributes, including: profiling during operations on the distributed processing system attributes of the nodes of the distributed processing system; selecting a workload for deployment on a subset of the nodes of the distributed processing system; determining specific resource requirements for the workload to be deployed; determining a required geometry of the nodes to run the workload; selecting a set of nodes having attributes that meet the specific resource requirements and arranged to meet the required geometry; deploying the workload on the selected nodes.
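To make the node-selection step concrete, here is a minimal Python sketch: filter nodes on per-node resource requirements, then test subsets against a required geometry. The node records, the same-rack geometry rule, and all numbers are invented stand-ins, not the patent's actual data model.

```python
# Hypothetical illustration of the selection step: choose a subset of
# profiled nodes that meets the workload's per-node resource requirements
# and a required geometry (simplified here to "all nodes on one rack").
from itertools import combinations

nodes = [
    {"name": "n1", "cpus": 8, "mem_gb": 32, "rack": "A"},
    {"name": "n2", "cpus": 4, "mem_gb": 16, "rack": "A"},
    {"name": "n3", "cpus": 8, "mem_gb": 64, "rack": "B"},
]

def select_nodes(nodes, count, min_cpus, min_mem_gb):
    """Return the first subset of `count` nodes meeting the resource
    requirements and the same-rack geometry constraint."""
    eligible = [n for n in nodes
                if n["cpus"] >= min_cpus and n["mem_gb"] >= min_mem_gb]
    for subset in combinations(eligible, count):
        if len({n["rack"] for n in subset}) == 1:  # geometry check
            return list(subset)
    return None

print(select_nodes(nodes, count=2, min_cpus=4, min_mem_gb=16))  # n1 + n2
```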

26-07-2012 publication date

Automated cloud workload management in a map-reduce environment

Number: US20120192197A1
Assignee: International Business Machines Corp

A computing device associated with a cloud computing environment identifies a first worker cloud computing device from a group of worker cloud computing devices with available resources sufficient to meet required resources for a highest-priority task associated with a computing job including a group of prioritized tasks. A determination is made as to whether an ownership conflict would result from an assignment of the highest-priority task to the first worker cloud computing device based upon ownership information associated with the computing job and ownership information associated with at least one other task assigned to the first worker cloud computing device. The highest-priority task is assigned to the first worker cloud computing device in response to determining that the ownership conflict would not result from the assignment of the highest-priority task to the first worker cloud computing device.
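A minimal sketch of the assignment check under a trivial resource model; the worker/task fields and both helper functions are hypothetical names for illustration only.

```python
# Hypothetical sketch: find a worker with enough free resources for the
# highest-priority task, then assign only if no ownership conflict exists
# with tasks already placed on that worker.
def find_worker(workers, task):
    for w in workers:
        if w["free_cpus"] >= task["cpus"]:
            return w
    return None

def has_ownership_conflict(worker, task):
    # Conflict if a task with a different owner already runs on this worker.
    return any(t["owner"] != task["owner"] for t in worker["tasks"])

workers = [{"id": "w1", "free_cpus": 4, "tasks": [{"owner": "jobA"}]}]
task = {"owner": "jobA", "cpus": 2}

w = find_worker(workers, task)
if w is not None and not has_ownership_conflict(w, task):
    w["tasks"].append(task)
    w["free_cpus"] -= task["cpus"]
print(w)  # task assigned: no conflicting owner on w1
```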

20-09-2012 publication date

Server method and system for executing applications on a wireless device

Number: US20120239735A1
Author: Philippe Clavel
Assignee: Individual

A server implemented method for facilitating execution of an application for a wireless device. The server selects a plurality of scene components, which comprise at least one functional unit operable to execute functions associated with the scene components. The functional units may be a portion of an application operable to be executed by the wireless device. The server selects a plurality of assets, which may be used in rendering a scene utilizing the plurality of scene components. The server determines a partition of functionality of the application which defines a server behavior module for executing on the server and a client behavior module for executing on the wireless device. The server customizes the plurality of scene components and the plurality of assets for the wireless device, which are then sent to the wireless device for execution and rendering.

04-10-2012 publication date

Information Handling System Application Decentralized Workload Management

Number: US20120254437A1
Assignee: Individual

A cloud application management infrastructure models biological swarm behaviors to assign application resources to physical processing resources in a decentralized manner. A balanced and highly automated management of cloud infrastructure has a predictable and reliable response to changing resource loads by using a limited local rule set to define how application instances interact with available resources. Digital pheromone signals at physical resources are applied locally by a swarm module to determine if the physical resources provide an acceptable environment for an application and, if not, the application swarms to other environments until a suitable environment is found.

03-01-2013 publication date

Collaborating with resources residing in multiple information devices

Number: US20130007136A1
Assignee: International Business Machines Corp

An appliance, user information device, method, and computer program product for collaborating with resources residing in multiple information devices. The user information device may communicate with the appliance, and the appliance may further communicate with a first assisting device, wherein the first assisting device has access to a first resource capable of performing a first operation. The user information device includes a device communication interface, a processor configured to execute at least one application, the at least one application configured to generate a first command associated with the first operation via the processor, and a resource agent program executable by the processor, the resource agent program configured to send the first command to the appliance via the device communication interface, the first command operable for enabling performance of the first operation using the first resource when the appliance sends the first command to the first assisting device.

03-01-2013 publication date

Processing workloads using a processor hierarchy system

Number: US20130007762A1
Assignee: International Business Machines Corp

Workload processing is facilitated by use of a processor hierarchy system. The processor hierarchy system includes a plurality of processor hierarchies, each including one or more processors (e.g., accelerators). Each processor hierarchy has associated therewith a set of characteristics that define the processor hierarchy, and the processors of the hierarchy also have a set of characteristics associated therewith. Workloads are assigned to processors of processor hierarchies depending on characteristics of the workload, characteristics of the processor hierarchies and/or characteristics of the processors.
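One way to read the matching rule is as a subset test between what a workload needs and what a hierarchy offers. The trait names and hierarchy layout below are assumed for the sketch, not taken from the patent.

```python
# Hypothetical sketch: assign a workload to the first processor whose
# hierarchy characteristics cover the workload's characteristics.
hierarchies = [
    {"level": 0, "traits": {"fp64", "low_latency"}, "processors": ["acc0", "acc1"]},
    {"level": 1, "traits": {"fp32"}, "processors": ["acc2"]},
]

def assign(workload_traits):
    for h in hierarchies:
        if workload_traits <= h["traits"]:  # characteristics match
            return h["processors"][0]
    return None

print(assign({"fp64"}))  # -> "acc0"
```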

21-03-2013 publication date

Virtual machine placement within a server farm

Number: US20130073730A1
Assignee: International Business Machines Corp

Disclosed herein are methods, systems, and computer program products for the placement of a virtual machine within a plurality of cache-coherent NUMA servers. According to an aspect, an example method includes determining a resource requirement of the virtual machine. The example method may also include determining a resource availability of one or more nodes of the plurality of servers. Further, the example method may include selecting placement of the virtual machine within one or more nodes of the plurality of cache-coherent NUMA servers based on the determined resource requirement and the determined resource availability.

20-06-2013 publication date

Allocating Compute Kernels to Processors in a Heterogeneous System

Number: US20130160016A1
Assignee: Advanced Micro Devices Inc

System and method embodiments for optimally allocating compute kernels to different types of processors, such as CPUs and GPUs, in a heterogeneous computer system are disclosed. These include comparing a kernel profile of a compute kernel to respective processor profiles of a plurality of processors in a heterogeneous computer system, selecting at least one processor from the plurality of processors based upon the comparing, and scheduling the compute kernel for execution in the selected at least one processor.
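A toy illustration of the profile comparison, assuming profiles are numeric feature vectors and the comparison is a dot-product score; the feature names and scoring rule are invented.

```python
# Hypothetical sketch: score each processor profile against the kernel
# profile and schedule the kernel on the best scorer.
kernel_profile = {"parallelism": 0.9, "branching": 0.1}
processors = {
    "gpu0": {"parallelism": 1.0, "branching": 0.2},
    "cpu0": {"parallelism": 0.3, "branching": 0.9},
}

def score(kernel, proc):
    # Higher when the processor's strengths match the kernel's needs.
    return sum(kernel[k] * proc[k] for k in kernel)

best = max(processors, key=lambda p: score(kernel_profile, processors[p]))
print(best)  # -> "gpu0"
```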

18-07-2013 publication date

System and method for rendering an image

Number: US20130182000A1
Author: Jang Hee Kim
Assignee: Macrograph Co Ltd

An image rendering method includes receiving, by a central processing unit (CPU), a plurality of first primitives, dividing, by the CPU, each of the plurality of first primitives into each of a plurality of grids, collecting, by the CPU, the plurality of grids as a grid set, transmitting the grid set from the CPU to a graphic processing unit (GPU) when a size of the grid set is greater than a threshold value, and shading, by the GPU, the grid set.
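A small sketch of the batching rule: the CPU accumulates grids and hands the set to the GPU only once it exceeds a size threshold. The tessellation stand-in and the threshold value are assumed; the patent leaves both open.

```python
# Hypothetical sketch of the CPU-side batching loop.
THRESHOLD = 4  # assumed grid-set size threshold

def divide_into_grids(primitive):
    return [f"{primitive}/grid{i}" for i in range(2)]  # stand-in for tessellation

def shade_on_gpu(grid_set):
    print(f"GPU shades {len(grid_set)} grids")  # stand-in for CPU->GPU transmit

grid_set = []
for prim in ["p0", "p1", "p2"]:
    grid_set.extend(divide_into_grids(prim))
    if len(grid_set) > THRESHOLD:
        shade_on_gpu(grid_set)
        grid_set = []
```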

15-08-2013 publication date

Shared resources in a docked mobile environment

Number: US20130212586A1
Assignee: International Business Machines Corp

First and second data handling systems provide for shared resources in a docked mobile environment. The first data handling system maintains a set of execution tasks within the first data handling system having a system dock interface to physically couple to the second data handling system. The first data handling system assigns a task to be executed by the second data handling system while the two systems are physically coupled.

05-09-2013 publication date

Cache performance prediction and scheduling on commodity processors with shared caches

Number: US20130232500A1
Assignee: VMware LLC

A method is described for scheduling in an intelligent manner a plurality of threads on a processor having a plurality of cores and a shared last level cache (LLC). In the method, a first and second scenario having a corresponding first and second combination of threads are identified. The cache occupancies of each of the threads for each of the scenarios are predicted. The predicted cache occupancies are a representation of an amount of the LLC that each of the threads would occupy when running with the other threads on the processor according to the particular scenario. One of the scenarios is identified that results in the least objectionable impacts on all threads, the least objectionable impacts taking into account the impact resulting from the predicted cache occupancies. Finally, a scheduling decision is made according to the one of the scenarios that results in the least objectionable impacts.
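Under a min-max reading of "least objectionable impacts", the scenario choice can be sketched as below; the occupancy predictions and demand numbers are fabricated for illustration.

```python
# Hypothetical sketch: pick the co-scheduling scenario whose worst-hit
# thread falls least far below its desired LLC share.
scenarios = {
    "s1": {"t1": 0.40, "t2": 0.60},  # predicted LLC occupancy per thread
    "s2": {"t1": 0.55, "t2": 0.45},
}
demand = {"t1": 0.60, "t2": 0.40}    # LLC share each thread would like

def worst_impact(occupancy):
    # Impact = shortfall between a thread's demand and predicted occupancy.
    return max(demand[t] - occupancy[t] for t in occupancy)

best = min(scenarios, key=lambda s: worst_impact(scenarios[s]))
print(best)  # -> "s2"
```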

12-09-2013 publication date

METHOD AND SYSTEM FOR AN ATOMIZING FUNCTION OF A MOBILE DEVICE

Number: US20130239118A1
Assignee: BROADCOM CORPORATION

Systems, apparatuses and methods are disclosed for apportioning tasks among devices. One such method is performed in a handheld wireless communication device (HWCD). The method includes discovering available resources in a network and dynamically assessing cost functions for performing a task on the HWCD and on each of the discovered resources. Each of the respective cost functions is based on performance factors associated with the HWCD or with one of the devices. Based on change in the cost functions, the task is apportioned for local execution by the HWCD or remote execution by the available resources.

1. A method comprising: in a handheld wireless communication device: discovering one or more available resources in a communication network; dynamically assessing respective cost functions for performing a task on the handheld wireless communication device and on each of the discovered one or more available resources, each of the respective cost functions based on one or more performance factors associated with one of the discovered one or more available resources or with the handheld wireless communication device; detecting a change in the dynamically assessed respective cost functions; and apportioning, based on the detected change in the dynamically assessed respective cost functions, the task for one or both of local execution by the handheld wireless communication device and remote execution by the discovered one or more available resources.
2. The method according to claim 1, wherein the dynamically assessed respective cost functions are dependent on one or more factors comprising available communication bandwidth, available memory space, available CPU processing power, and available battery power.
3. The method according to claim 2, wherein the one or more factors are each weighted by a weighting factor.
4. The method according to claim 3, wherein at least one of the weighting factors is based at least in part on a user ...
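Claims 2 and 3 point at a weighted sum over bandwidth, memory, CPU, and battery factors. A minimal sketch under that reading follows; the weights and availability numbers are invented.

```python
# Hypothetical weighted cost function: lower availability of a factor on a
# device makes running the task there more expensive.
weights = {"bandwidth": 0.4, "memory": 0.1, "cpu": 0.3, "battery": 0.2}

def cost(availability):
    return sum(weights[k] * (1.0 - availability[k]) for k in weights)

local  = {"bandwidth": 1.0, "memory": 0.2, "cpu": 0.3, "battery": 0.1}
remote = {"bandwidth": 0.5, "memory": 0.9, "cpu": 0.9, "battery": 1.0}

place = "local" if cost(local) <= cost(remote) else "remote"
print(place, round(cost(local), 2), round(cost(remote), 2))  # remote 0.47 0.24
```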

26-09-2013 publication date

Collaboration processing apparatus, collaboration processing system, and program

Number: US20130254297A1
Author: Kazunori Kobayashi
Assignee: Ricoh Co Ltd

A collaboration processing apparatus that is connected to plural electronic apparatuses, receives a request from an application installed in the collaboration processing apparatus, and controls the electronic devices based on the received request to perform a collaboration process by causing the application and the electronic devices to collaborate, includes a capability information providing unit which receives an acquisition request for information related to capability of the electronic device via a previously defined interface and provides the information related to the capability of the electronic devices in response to the received acquisition request; and an execution controlling unit which receives the execution request based on the information of the capability from the application, to which the information related to the capability is provided by the capability information providing unit, via the previously defined interface, and controls the electronic devices based on the received execution request.

10-10-2013 publication date

Method for managing services on a network

Number: US20130268648A1
Assignee: Thales SA

The invention relates to a method for managing services on a network, comprising: at least two interconnected computer sites, each of which is capable of implementing at least one service that can be accessed from the network; at least one service implemented on a network site; a means for transferring a service from an initial site to a separate destination site. Each site and service is associated with security attributes, and the method includes transferring at least one service from an initial site to a destination site of the network following a predetermined transfer sequence which depends on the security attributes.

21-11-2013 publication date

Apparatus for enhancing performance of a parallel processing environment, and associated methods

Number: US20130311543A1
Author: Kevin D. Howard
Assignee: Massively Parallel Technologies Inc

Parallel Processing Communication Accelerator (PPCA) systems and methods for enhancing performance of a Parallel Processing Environment (PPE). In an embodiment, a Message Passing Interface (MPI) devolver enabled PPCA is in communication with the PPE and a host node. The host node executes at least a parallel processing application and an MPI process. The MPI devolver communicates with the MPI process and the PPE to improve the performance of the PPE by offloading MPI process functionality to the PPCA. Offloading MPI processing to the PPCA frees the host node for other processing tasks, for example, executing the parallel processing application, thereby improving the performance of the PPE.

12-12-2013 publication date

Control flow in a heterogeneous computer system

Number: US20130332702A1
Author: Pierre Boudier
Assignee: Advanced Micro Devices Inc

Methods, apparatuses, and computer readable media are disclosed for control flow on a heterogeneous computer system. The method may include a first processor of a first type, for example a CPU, requesting a first kernel be executed on a second processor of a second type, for example a GPU, to process first work items. The method may include the GPU executing the first kernel to process the first work items. The first kernel may generate second work items. The GPU may execute a second kernel to process the generated second work items. The GPU may dispatch producer kernels when space is available in a work buffer. The GPU may dispatch consumer kernels to process work items in the work buffer when the work buffer has available work items. The GPU may be configured to determine a number of processing elements to execute the first kernel and the second kernel.
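The dispatch rule reduces to two buffer checks per step. Plain Python queues stand in for GPU kernel dispatch in this sketch; none of the names below are the patent's actual API.

```python
# Hypothetical sketch: dispatch producers while the work buffer has space,
# consumers while it has items.
from collections import deque

BUF_CAP = 4
work_buffer = deque()

def dispatch_producer_kernel():
    work_buffer.append("work-item")

def dispatch_consumer_kernel():
    work_buffer.popleft()

for step in range(10):
    if len(work_buffer) < BUF_CAP:
        dispatch_producer_kernel()  # space available -> produce
    if work_buffer:
        dispatch_consumer_kernel()  # items available -> consume
print(len(work_buffer))  # buffer drained each step -> 0
```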

19-12-2013 publication date

Apparatus, system and method for heterogeneous data sharing

Number: US20130339979A1
Author: Ronald N. Hilton
Assignee: Proximal Systems Corp

An apparatus, system, and method are disclosed for offloading data processing. An offload task hosted on a first data processing system provides internal functionality substantially equivalent to that of a second task hosted on a second data processing system of a potentially different architecture. A proxy task hosted on the second data processing system provides an external interface substantially equivalent to that of the second task. A communication mechanism between the first and second data processing systems may comprise a network, shared storage, and shared memory. The proxy task substantially replaces the second task, delegating the internal functionality of the second task to the offload task via mapping of arguments and accessing and translating of input and output data as required.

13-03-2014 publication date

Execution allocation cost assessment for computing systems and environments including elastic computing systems and environments

Number: US20140074763A1
Assignee: SAMSUNG ELECTRONICS CO LTD

Techniques for allocating individually executable portions of executable code for execution in an Elastic computing environment are disclosed. In an Elastic computing environment, scalable and dynamic external computing resources can be used in order to effectively extend the computing capabilities beyond that which can be provided by internal computing resources of a computing system or environment. Machine learning can be used to automatically determine whether to allocate each individual portion of executable code (e.g., a Weblet) for execution to either internal computing resources of a computing system (e.g., a computing device) or external resources of a dynamically scalable computing resource (e.g., a Cloud). By way of example, status and preference data can be used to train a supervised learning mechanism to allow a computing device to automatically allocate executable code to internal and external computing resources of an Elastic computing environment.
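A toy stand-in for the supervised learner: nearest-neighbour over (battery, network) status paired with past local/cloud decisions. The features and training points are invented; the patent does not fix a particular learning mechanism beyond "supervised".

```python
# Hypothetical sketch: allocate a weblet like the most similar past case.
train = [
    ((0.9, 0.2), "local"),  # full battery, poor network -> ran locally
    ((0.2, 0.9), "cloud"),  # low battery, good network -> offloaded
    ((0.8, 0.8), "cloud"),
]

def allocate(status):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda example: dist(example[0], status))[1]

print(allocate((0.3, 0.8)))  # -> "cloud"
```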

07-01-2016 publication date

SEMICONDUCTOR DEVICE

Number: US20160003910A1
Author: ISHIMI Koichi
Assignee:

The disclosed invention provides a semiconductor device that enables early discovery of a sign of aged deterioration that occurs locally. An LSI has a plurality of modules and a delay monitor cluster including a plurality of delay monitors. Each delay monitor includes a ring oscillator having a plurality of gate elements. Each delay monitor measures a delay time of the gate elements. A CPU #0 determines if a module proximate to a delay monitor suffers from aged deterioration, based on the delay time measured by the delay monitor.

1. A semiconductor device comprising: a plurality of modules; a plurality of delay monitors, wherein each delay monitor includes a ring oscillator having a plurality of gate elements and measures a delay time of said gate elements; and a control unit that determines whether or not the delay time measured by said delay monitor exceeds a predetermined reference value, wherein, of said delay monitors, every two delay monitors disposed near to each other form one pair, wherein a ring oscillator in one delay monitor of said pair continues to oscillate except for a predetermined number of cycles before and after a delay time measurement period and a ring oscillator in the other delay monitor of said pair oscillates only during a delay time measurement period, and said control unit determines a difference between a delay time measured by said one delay monitor and a delay time measured by said other delay monitor.
2. The semiconductor device according to claim 1, wherein said control unit issues an alert, if having determined that said module suffers from aged deterioration.
3. The semiconductor device according to claim 1, wherein said control unit performs a built-in self test of said semiconductor device, if having determined that said module suffers from aged deterioration.
4. The semiconductor device according to claim 1, wherein said control unit decreases the power supply voltage to the module proximate to said delay monitor, if ...

07-01-2016 publication date

DYNAMIC PREDICTION OF HARDWARE TRANSACTION RESOURCE REQUIREMENTS

Number: US20160004556A1
Assignee:

A transactional memory system dynamically predicts the resource requirements of hardware transactions. A processor of the transactional memory system predicts resource requirements of a first hardware transaction to be executed based on any one of a resource hint and a previous execution of a prior hardware transaction. The processor allocates resources for the first hardware transaction based on the predicted resource requirements. The processor executes the first hardware transaction. The processor saves resource usage information of the first hardware transaction for future prediction.

1. A method for dynamically predicting the resource requirements of hardware transactions, the method comprising: predicting, by a processor, resource requirements of a first hardware transaction to be executed based on any one of a resource hint and a previous execution of a prior hardware transaction; allocating, by the processor, resources for the first hardware transaction based on the predicted resource requirements; executing, by the processor, the first hardware transaction; and saving, by the processor, resource usage information of the first hardware transaction for future prediction.
2. The method of claim 1, wherein the prior hardware transaction is the same transaction as the first hardware transaction, and wherein the prediction of the resource requirements is based on the address of the beginning of the first hardware transaction being the same as the address of the prior hardware transaction.
3. The method of claim 1, wherein the resources allocated comprise any one of cache lines, storage buffer size, and functional units.
4. The method of claim 1, wherein the first hardware transaction is preemptively aborted based on the predicted resource requirements being insufficient.
5. The method of claim 1, wherein the first hardware transaction is preemptively aborted based on the predicted resource requirements conflicting with a second hardware ...
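A minimal sketch keyed on the transaction's begin address, as claim 2 describes; the resource names, default values, and `execute` helper are assumed placeholders.

```python
# Hypothetical sketch: predict resources from the last run at the same
# begin address (or a hint), allocate, then record actual usage.
history = {}  # begin address -> resources used last time
DEFAULT = {"cache_lines": 8, "store_buffer": 4}

def predict(addr, hint=None):
    return hint or history.get(addr, DEFAULT)

def execute(addr, actual_use, hint=None):
    allocated = predict(addr, hint)
    print(f"tx@{addr:#x}: allocated {allocated}")
    history[addr] = actual_use  # save usage for future prediction

execute(0x400, {"cache_lines": 32, "store_buffer": 6})
execute(0x400, {"cache_lines": 30, "store_buffer": 6})  # uses history now
```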

07-01-2016 publication date

METHOD FOR ASSIGNING PRIORITY TO MULTIPROCESSOR TASKS AND ELECTRONIC DEVICE SUPPORTING THE SAME

Number: US20160004569A1
Assignee:

A method for determining task priorities in an electronic device is provided. The method includes receiving, at the electronic device, a request to perform a task, identifying a threshold parameter and a weighted value in accordance with a type of the requested task, measuring the threshold parameter of the task based on the identified weighted value, and assigning the requested task to one of a first operational unit and a second operational unit based on the measured threshold parameter and weighted value.

1. A method for determining task priorities in an electronic device, the method comprising: receiving, at the electronic device, a request to perform a task; identifying a threshold parameter and a weighted value in accordance with a type of the requested task; measuring the threshold parameter of the task based on the identified weighted value; and assigning the requested task to one of a first operational unit and a second operational unit based on the measured threshold parameter and weighted value.
2. The method of claim 1, wherein the first operational unit includes at least one first core processor and the second operational unit includes at least one second core processor having lower performance than the first core processor.
3. The method of claim 2, wherein assigning the requested task to the first operational unit comprises assigning the requested task to one of the at least first core processors of the first operational unit.
4. The method of claim 2, wherein assigning the requested task to the second operational unit comprises assigning the requested task to one of the at least second core processors of the second operational unit.
5. The method of claim 1, further comprising migrating, when the threshold parameter of the assigned task, which has been assigned to the second operational unit, is greater than a preset threshold value, the assigned task from the second operational unit to the first operational unit.
6. ...
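A minimal sketch of the weighted-threshold routing, assuming per-type weights and a cutoff value; both are invented, since the abstract leaves them open.

```python
# Hypothetical sketch: weight the task's measured parameter by its type and
# route it to the fast or slow operational unit.
weights = {"ui": 1.5, "background": 0.5}  # assumed per-type weights
THRESHOLD = 1.0                           # assumed cutoff

def assign(task_type, measured):
    weighted = measured * weights[task_type]
    return "first unit (fast cores)" if weighted >= THRESHOLD \
        else "second unit (slow cores)"

print(assign("ui", 0.8))          # 1.2 -> first unit (fast cores)
print(assign("background", 0.8))  # 0.4 -> second unit (slow cores)
```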

04-01-2018 publication date

TECHNIQUES FOR HYBRID COMPUTER THREAD CREATION AND MANAGEMENT

Number: US20180004554A1
Assignee:

A technique for operating a computer system to support an application, a first application server environment, and a second application server environment includes intercepting a work request relating to the application issued to the first application server environment prior to execution of the work request. A thread adapted for execution in the first application server environment is created. A context is attached to the thread that non-disruptively modifies the thread into a hybrid thread that is additionally suitable for execution in the second application server environment. The hybrid thread is returned to the first application server environment.

1. A method of operating a computer system to support an application, a first application server environment, and a second application server environment, the method comprising: intercepting, by a request interceptor component executing on the computer system, a work request relating to the application issued to the first application server environment prior to execution of the work request; responsive to the request interceptor component, creating, using the computer system, a thread adapted for execution in the first application server environment by an executor component; responsive to the executor component, attaching to the thread, by a thread dispatcher component executing on the computer system, a context to non-disruptively modify the thread into a hybrid thread that is additionally suitable for execution in the second application server environment; and responsive to the thread dispatcher component, returning the hybrid thread to the first application server environment by a catcher component executing on the computer system.
2. The method of claim 1, wherein the context comprises transactional control data of one of the application server environments, security control data of one of the application server environments, monitoring control data of one of the application server environments ...

07-01-2021 publication date

LOADING MODELS ON NODES HAVING MULTIPLE MODEL SERVICE FRAMEWORKS

Number: US20210004268A1
Author: LI Jiliang, WANG Yueming
Assignee:

This disclosure relates to model loading. In one aspect, a method includes determining, based on a preset execution script and resource information of multiple execution nodes, loading-tasks corresponding to the execution nodes. Each execution node is deployed on a corresponding cluster node. Loading requests are sent to the execution nodes, thereby causing the execution nodes to start execution processes based on the corresponding loading requests. The execution processes start multiple model service frameworks on each cluster node. Multiple models are loaded onto each of the model service frameworks. Each loading request includes loading-tasks corresponding to the execution node to which the loading request was sent. The execution processes include a respective execution process for each model service framework.

1. A computer-implemented method, comprising: determining, based on a preset execution script and resource information of multiple execution nodes, loading-tasks corresponding to the execution nodes, wherein each execution node is deployed on a corresponding cluster node; and sending loading requests to the execution nodes, thereby causing the execution nodes to start execution processes based on the corresponding loading requests, wherein: the execution processes start multiple model service frameworks on each cluster node; multiple models are loaded onto each of the model service frameworks; each loading request comprises loading-tasks corresponding to the execution node to which the loading request was sent; and the execution processes comprise a respective execution process for each model service framework.
2. The computer-implemented method of claim 1, wherein determining, based on a preset execution script and resource information of multiple execution nodes, loading-tasks corresponding to the execution nodes comprises determining a quantity of models corresponding to each execution node based on a total quantity of models ...
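A rough sketch of the control-node step, under one simple assumption: models are split across execution nodes in proportion to how many model service frameworks each node runs. The split rule is invented for illustration.

```python
# Hypothetical sketch: derive per-node loading tasks, then "send" requests.
models = [f"model{i}" for i in range(7)]
nodes = {"exec1": 2, "exec2": 1}  # node -> number of model service frameworks

def build_loading_tasks(models, nodes):
    total_slots = sum(nodes.values())
    tasks, start = {}, 0
    for node, frameworks in nodes.items():
        share = round(len(models) * frameworks / total_slots)
        tasks[node] = models[start:start + share]
        start += share
    tasks[node].extend(models[start:])  # any remainder goes to the last node
    return tasks

for node, task_list in build_loading_tasks(models, nodes).items():
    print(f"loading request -> {node}: {task_list}")
```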

07-01-2021 publication date

LOADING MODELS ON NODES HAVING MULTIPLE MODEL SERVICE FRAMEWORKS

Number: US20210004269A1
Author: LI Jiliang, WANG Yueming
Assignee: Advanced New Technologies Co., Ltd.

This disclosure relates to model loading. In one aspect, a method includes determining, based on a preset execution script and resource information of multiple execution nodes, loading-tasks corresponding to the execution nodes. Each execution node is deployed on a corresponding cluster node. Loading requests are sent to the execution nodes, thereby causing the execution nodes to start execution processes based on the corresponding loading requests. The execution processes start multiple model service frameworks on each cluster node. Multiple models are loaded onto each of the model service frameworks. Each loading request includes loading-tasks corresponding to the execution node to which the loading request was sent. The execution processes include a respective execution process for each model service framework.

1-20. (canceled)
21. A computer-implemented method, comprising: receiving, by an execution node from a control node, a loading request comprising loading tasks corresponding to execution nodes, wherein the loading tasks are determined by the control node based on a preset execution script and resource information for each of a plurality of execution nodes, and wherein different execution nodes are deployed on different cluster nodes; and starting, by the execution node, multiple execution processes based on the loading request, wherein the multiple execution processes start multiple model service frameworks and multiple models are loaded onto each of the model service frameworks.
22. The computer-implemented method of claim 21, wherein the execution processes are in one-to-one correspondence with the model service frameworks.
23. The computer-implemented method of claim 21, further comprising: detecting that a target execution process has been lost; and in response to detecting that the target execution process has been lost, re-establishing the target execution process.
24. The computer-implemented method of claim 21, wherein: the loading request further ...

02-01-2020 publication date

ATTACHED ACCELERATOR BASED INFERENCE SERVICE

Number: US20200004596A1
Assignee:

Implementations detailed herein include description of a computer-implemented method. In an implementation, the method at least includes receiving an application instance configuration, an application of the application instance to utilize a portion of an attached accelerator during execution of a machine learning model and the application instance configuration including: an indication of the central processing unit (CPU) capability to be used, an arithmetic precision of the machine learning model to be used, an indication of the accelerator capability to be used, a storage location of the application, and an indication of an amount of random access memory to use.

1. A computer-implemented method, comprising: receiving, in a multi-tenant web services provider, an application instance configuration, an application of the application instance to utilize a portion of an attached graphics processing unit (GPU) during execution of a machine learning model and the application instance configuration including: an indication of the central processing unit (CPU) capability to be used, an arithmetic precision of the machine learning model to be used, an indication of the GPU capability to be used, a storage location of the application, and an indication of an amount of random access memory to use; provisioning the application instance and the portion of the GPU attached to the application instance, wherein the application instance is implemented using a physical compute instance in a first instance location, wherein the portion of the GPU is implemented using a physical GPU in the second location, and wherein the physical GPU is accessible to the physical compute instance over a network; attaching the portion of the GPU to the application instance; loading the machine learning model onto the attached portion of the GPU; and performing inference using the loaded machine learning model of the application using the portion of the GPU on the attached GPU.
2. The method ...

02-01-2020 publication date

ATTACHED ACCELERATOR SCALING

Number: US20200004597A1
Assignee:

Implementations detailed herein include description of a computer-implemented method. In an implementation, the method at least includes provisioning an application instance and portions of at least one accelerator attached to the application instance to execute a machine learning model of an application of the application instance; loading the machine learning model onto the portions of the at least one accelerator; receiving scoring data in the application; and utilizing each of the portions of the attached at least one accelerator to perform inference on the scoring data in parallel and only using one response from the portions of the accelerator.

1. A computer-implemented method, comprising: receiving, in a multi-tenant web services provider, an application instance configuration, an application of the application instance to utilize a plurality of portions of at least one attached graphics processing unit (GPU) during execution of a machine learning model; provisioning the application instance and the portions of the at least one GPU attached to the application instance; loading the machine learning model onto the portions of the at least one GPU; receiving scoring data in the application; and utilizing each of the portions of the attached at least one GPU to perform inference on the scoring data in parallel and only using one response from the portions of the GPU.
2. The method of claim 1, wherein the one response to use is a temporally first response.
3. The method of claim 1, further comprising: tracking timing of each response from the portions of the attached at least one GPU; and altering the provisioning of the plurality of portions of at least one GPU based on the tracked timing.
4. A computer-implemented method, comprising: provisioning an application instance and portions of at least one accelerator attached to the application instance to execute a machine learning model of an application of the application instance; loading the machine learning ...
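The "only one response" rule maps naturally onto a first-completed pattern (claim 2 makes it the temporally first response). Threads stand in for accelerator portions in this hypothetical sketch.

```python
# Hypothetical sketch: run the same inference on several portions and keep
# whichever responds first; the rest are ignored.
import concurrent.futures
import random
import time

def infer(portion, scoring_data):
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for accelerator latency
    return portion, f"score({scoring_data})"

with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(infer, p, "x") for p in ("gpu0/a", "gpu0/b", "gpu1/a")]
    first = next(concurrent.futures.as_completed(futures))
    print("using:", first.result())  # temporally first response wins
```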

02-01-2020 publication date

CORE MAPPING

Number: US20200004721A1
Assignee:

The disclosed technology is generally directed to peripheral access. In one example of the technology, stored configuration information is read. The stored configuration information is associated with mapping a plurality of independent execution environments to a plurality of peripherals such that the peripherals of the plurality of peripherals have corresponding independent execution environments of the plurality of independent execution environments. A configurable interrupt routing table is programmed based on the configuration information. An interrupt is received from a peripheral. The interrupt is routed to the corresponding independent execution environment based on the configurable interrupt routing table.

1-20. (canceled)
21. An apparatus, comprising: a plurality of processing cores; a plurality of peripherals; and a configurable interrupt routing table that selectively maps each of the plurality of peripherals to an individual processing core of the plurality of processing cores, wherein the mapping of each of the plurality of peripherals to the individual processing core is configurable while a lock bit of the apparatus is not set, wherein the mapping of each of the plurality of peripherals to the individual processing core is locked in response to the lock bit of the apparatus being set, and wherein, once locked, the mapping of each of the plurality of peripherals to the individual processing core remains locked until a reboot of the apparatus.
22. The apparatus of claim 21, wherein the configurable interrupt routing table includes a plurality of configuration registers.
23. The apparatus of claim 21, wherein a first processing core of the plurality of processing cores is associated with at least two independent execution environments.
24. The apparatus of claim 23, wherein a first independent execution environment associated with the first processing core is a Secure World operating environment of the first processing core, and wherein a second ...
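A minimal model of the routing table and lock-bit behaviour described in claim 21; the class and method names are invented.

```python
# Hypothetical sketch: mappings are configurable until the lock bit is set,
# then frozen until reboot.
class InterruptRouter:
    def __init__(self):
        self.table = {}      # peripheral -> processing core
        self.locked = False  # lock bit

    def map(self, peripheral, core):
        if self.locked:
            raise PermissionError("mapping locked until reboot")
        self.table[peripheral] = core

    def route(self, peripheral):
        return self.table[peripheral]  # deliver interrupt to the mapped core

router = InterruptRouter()
router.map("uart0", core=1)
router.locked = True           # set the lock bit
print(router.route("uart0"))   # -> 1
# router.map("uart0", core=2) would now raise PermissionError
```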

07-01-2021 publication date

System and method for provisioning of artificial intelligence accelerator (AIA) resources

Number: US20210004658A1
Assignee: Solidrun Ltd

A system and method for provisioning of artificial intelligence accelerator (AIA) resources. The method includes receiving a request for an NPU allocation from a client device; determining an available NPU based on a scanning of a network to discover NPU resources; and allocating the available NPU to the client device.

03-01-2019 publication date

Method to Optimize Core Count for Concurrent Single and Multi-Thread Application Performance

Number: US20190004861A1
Assignee: Dell Products LP

A system, method, and computer-readable medium are disclosed for performing a core optimization operation, comprising: enabling all of a plurality of processor cores of a processor; selectively turning off at least one of the plurality of processor cores, the selectively turning off the at least one of the plurality of processor cores being based upon an application to be executed by the processor, the selectively turning off being performed dynamically during runtime of the processor; and controlling process thread distribution to the plurality of processor cores via an operating system executing on the processor, the process thread distribution not distributing threads to the turned off at least one of the plurality of processor cores.

20-01-2022 publication date

COMPUTING RESOURCE ALLOCATION

Number: US20220019475A1
Assignee:

There is provided a method of computing resource allocation. The method comprises allocating a first bounded amount of computing resources forming a first set of computing resources; exclusively assigning the first set of computing resources to a first process of a computer program; receiving a request from the first process for additional computing resources; in response to the request from the first process, allocating a second bounded amount of computing resources forming a second set of computing resources; and spawning a second process from the first process and exclusively assigning the second set of computing resources to the second process; wherein this method may be repeated indefinitely by the first process, second process, or any other process created according to this method. By following this method, a process does not control the amount of computing resources allocated to that process (i.e., itself), but instead controls the amount of computing resources allocated to its child processes.

1-15. (canceled)
16. A method of computing resource allocation comprising: allocating a first bounded amount of computing resources forming a first set of computing resources; exclusively assigning the first set of computing resources to a first process of a computer program; receiving a request from the first process for additional computing resources; in response to the request from the first process, allocating a second bounded amount of computing resources forming a second set of computing resources; and spawning a second process from the first process and exclusively assigning the second set of computing resources to the second process.
17. The method of claim 16, wherein: the first and second set of computing resources are provided by a first node of a computing system; or the first and second set of computing resources are respectively provided by a first and a second node of a computing system.
18. The method of claim 16, wherein the first set of computing resources is ...
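A minimal sketch of the spawning rule: a process never grows its own bounded set; asking for more resources spawns a child that exclusively owns the new set. The resource unit is an arbitrary integer here.

```python
# Hypothetical sketch of bounded allocation via child processes.
class Process:
    def __init__(self, name, resources):
        self.name = name
        self.resources = resources  # exclusively assigned, bounded set
        self.children = []

    def request_more(self, amount):
        # The parent controls its children's allocations, never its own.
        child = Process(f"{self.name}.child{len(self.children)}", amount)
        self.children.append(child)
        return child

root = Process("p0", resources=4)
worker = root.request_more(2)
print(root.name, root.resources, "->", worker.name, worker.resources)
```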

10-01-2019 publication date

RESOURCE OPTIMIZATION IN VEHICLES

Number: US20190009790A1
Assignee:

This disclosure describes various embodiments for resource optimization in a vehicle. In an embodiment, a system for resource optimization in a vehicle is described. The system may comprise a memory; a processor coupled to the memory; and a resource optimization module. The resource optimization module may be configured to: monitor usage of local computing resources of the vehicle, the local computing resources comprising the processor and available bandwidth of a transmission medium; determine an availability of the local computing resources; evaluate data captured by one or more sensors of the vehicle; and determine whether to process the data locally or remotely based, at least in part, on the availability of the local computing resources and the data captured by the one or more sensors.

1. A system for resource optimization in a vehicle, the system comprising: a memory; a processor coupled to the memory; and a resource optimization module configured to: monitor usage of local computing resources of the vehicle, the local computing resources comprising the processor and available bandwidth of a transmission medium; determine an availability of the local computing resources; evaluate data captured by one or more sensors of the vehicle; and determine whether to process the data locally or remotely based, at least in part, on the availability of the local computing resources and the data captured by the one or more sensors.
2. The system of claim 1, wherein the resource optimization module is further configured to create metadata describing characteristics of the data.
3. The system of claim 1, wherein the processor is configured to process the data based upon the resource optimization module determining the availability of local computing resources is adequate for processing the data.
4. The system of claim 3, wherein the processor is further configured to store: a result comprising processed data; the data; and metadata comprising a tag indicating the data ...

27-01-2022 publication date

METHOD FOR REPOINTING RESOURCES BETWEEN HOSTS

Number: US20220027209A1
Assignee: VMWARE, INC.

Techniques are disclosed for reallocating host resources in a virtualized computing environment when certain criteria have been met. In some embodiments, a system identifies a host disabling event. In view of the disabling event, the system identifies a resource for reallocation from a first host to a second host. Based on the identification, the computer system disassociates the identified resource's virtual identifier from the first host device and associates the virtual identifier with the second host device. Thus, the techniques disclosed significantly reduce a system's planned and unplanned downtime.

1. A method for managing reallocation of resources between host devices, the method comprising: detecting an event to disable a first host device selected from a plurality of host devices; and in response to detecting an event to disable the first host device: identifying a plurality of available resources at the first host device; determining whether reallocation criteria have been met for at least one of the available resources at the first host device; as a result of the determination that the reallocation criteria for the at least one of the available resources at the first host device have been met: selecting the at least one of the available resources at the first host device for reallocation to a second host device; and reallocating the selected available resource from the first host device to the second host device; and as a result of the determination that the reallocation criteria for the at least one of the available resources have not been met: forgoing reallocating the at least one of the available resources from the first host device to the second host device; and replicating the at least one of the available resources from the first host device to a location distinct from the first host device.
2. The method of claim 1, wherein the reallocation criteria for the at least one of the available resources at the first host device includes a ...

27-01-2022 publication date

VIRTUALIZING GRAPHICS PROCESSING IN A PROVIDER NETWORK

Number: US20220028351A1
Assignee: Amazon Technologies, Inc.

Methods, systems, and computer-readable media for virtualizing graphics processing in a provider network are disclosed. A virtual compute instance is provisioned from a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. A virtual GPU is attached to the virtual compute instance. The virtual GPU is implemented using a physical GPU, and the physical GPU is accessible to the virtual compute instance over a network. An application is executed using the virtual GPU on the virtual compute instance. Executing the application generates virtual GPU output that is provided to a client device.

1-20. (canceled)
21. A system, comprising: a plurality of virtual compute instances, implemented using central processing unit (CPU) resources and memory resources of one or more physical compute instances of a provider network; a host comprising one or more physical graphics processing units (GPUs) that implement a plurality of virtual GPUs that are accessible to the virtual compute instances over a network, the one or more physical GPUs distinct from the one or more physical compute instances that implements the virtual compute instances; and one or more computing devices configured to implement an elastic graphics service configured to attach each of the plurality of the virtual GPUs to respective virtual compute instances of the plurality of virtual compute instances; wherein one or more of the virtual compute instances are configured to execute an application using a respective attached virtual GPU of the plurality of virtual GPUs.
22. The system as recited in claim 21, wherein the provider network comprises another host comprising one or more other physical graphics processing units (GPUs) that implement another plurality of virtual GPUs; wherein the other plurality of virtual GPUs are organized into a plurality of virtual GPU classes, each class having distinct ...

12-01-2017 publication date

TOPOLOGY-AWARE PROCESSOR SCHEDULING

Number: US20170010920A1
Assignee:

In an example embodiment, a method of operating a task scheduler for one or more processors is provided. A topology of one or more processors is obtained, the topology indicating a plurality of execution units and physical resources associated with each of the plurality of execution units. A task to be performed by the one or more processors is received. Then a plurality of available execution units from the plurality of execution units is identified. An optimal execution unit is then determined, from the plurality of execution units, to which to assign the task, based on the topology. The task is then assigned to the optimal execution unit, after which the task is sent to the optimal execution unit for execution.

1. A method of operating a task scheduler for one or more processors, the method comprising: obtaining a topology of the one or more processors, the topology indicating a plurality of execution units and physical resources associated with each of the plurality of execution units; receiving a task to be performed by the one or more processors; identifying a plurality of available execution units from the plurality of execution units; determining an optimal execution unit, from the plurality of execution units, to which to assign the task, based on the topology and one or more user-specified rules; assigning the task to the optimal execution unit; and sending the task to the optimal execution unit for execution.
2. The method of claim 1, wherein the determining an optimal execution unit includes: analyzing a nature of the task; comparing the nature of the task to natures of one or more other tasks previously assigned to execution units of the one or more processors; and determining an optimal execution unit, from the plurality of execution units, to which to assign the task, based on the topology and based on the comparison of the nature of the task to natures of one or more other tasks previously assigned to execution units of the one or more processors.
3. The ...

14-01-2016 publication date

TASK ALLOCATION IN A COMPUTING ENVIRONMENT

Number: US20160011908A1
Assignee:

A method comprises receiving, at each of a plurality of computing devices, a task execution estimation request message from a central server, the task execution estimation request message comprising a worst-case execution time (WCET) corresponding to the computing device. The method further comprises computing, by each of the plurality of computing devices, an estimate task execution time for the task based on the WCET and a state transition model corresponding to the computing device, wherein the state transition model indicates available processing resources corresponding to the computing device. Further, the method comprises transmitting, by each of the plurality of computing devices, the estimate task execution time to the central server for allocation of the task to a computing device from amongst the plurality of computing devices based on the estimate task execution time corresponding to the computing device.

1. A method for allocating a task in a computing environment, the method comprising: receiving, at each of a plurality of computing devices, a task execution estimation request message from a central server, wherein the task execution estimation request message comprises a worst-case execution time (WCET) corresponding to the computing device; computing, by each of the plurality of computing devices, an estimate task execution time for the task based on the WCET and a state transition model corresponding to the computing device, wherein the state transition model indicates available processing resources corresponding to the computing device; and transmitting, by each of the plurality of computing devices, the estimate task execution time to the central server for allocation of the task to a computing device from amongst the plurality of computing devices based on the estimate task execution time corresponding to the computing device.
2. The method as claimed in claim 1, wherein the computing comprises: identifying a current state of the computing device, ...
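A sketch of the device-side estimate, assuming the state transition model reduces to a per-state fraction of free processing resources; the states and numbers are invented.

```python
# Hypothetical sketch: each device scales the server-provided WCET by its
# current availability and reports the estimate; the server picks the best.
state_model = {"idle": 1.0, "busy": 0.5, "overloaded": 0.2}

def estimate_execution_time(wcet_ms, state):
    return wcet_ms / state_model[state]

devices = {"devA": "idle", "devB": "overloaded"}
estimates = {d: estimate_execution_time(100, s) for d, s in devices.items()}
winner = min(estimates, key=estimates.get)  # central server's allocation
print(estimates, "->", winner)              # devA gets the task
```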

14-01-2021 publication date

MEMORY-AWARE PLACEMENT FOR VIRTUAL GPU ENABLED SYSTEMS

Number: US20210011773A1
Assignee:

Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. Virtual graphics processing unit (vGPU) data is identified for graphics processing units (GPUs). A configured GPU list and an unconfigured GPU list are generated using the GPU data. The configured GPU list specifies configured vGPU profiles for configured GPUs. The unconfigured GPU list specifies a total GPU memory for unconfigured GPUs. A vGPU request is assigned to a vGPU of a GPU. The GPU is a first fit, from the configured GPU list or the unconfigured GPU list, that satisfies a GPU memory requirement of the vGPU request.

1. A system comprising: at least one computing device comprising at least one processor and at least one data store; and machine readable instructions stored in the at least one data store, wherein the instructions, when executed by the at least one processor, cause the at least one computing device to at least: monitor a computing environment to identify graphics processing unit (GPU) data for a plurality of virtual GPU (vGPU) enabled GPUs of the computing environment; generate, based on the GPU data, a configured GPU list comprising a plurality of configured GPUs in increasing order of configured vGPU profile memory, the configured GPU list specifying a configured vGPU profile for a respective configured GPU; generate, based on the GPU data, an unconfigured GPU list of a plurality of unconfigured GPUs, the unconfigured GPU list specifying a total GPU memory for a respective unconfigured GPU; receive a vGPU request comprising a GPU memory requirement; and assign the vGPU request to a vGPU of a GPU identified based on a first fit that satisfies the GPU memory requirement from: the configured GPU list, or the unconfigured GPU list.
2. The system of claim 1, wherein the instructions, when executed by the at least one processor, cause the at least one computing device to at least: ...
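The placement rule is a two-phase first fit: configured GPUs (sorted by profile memory) are tried before unconfigured ones. The GPU records below are fabricated; the patent's profile model is richer.

```python
# Hypothetical sketch of memory-aware first-fit vGPU placement.
configured = [  # kept in increasing order of vGPU profile memory
    {"gpu": "g1", "profile_gb": 2, "free_slots": 0},
    {"gpu": "g2", "profile_gb": 4, "free_slots": 3},
]
unconfigured = [{"gpu": "g3", "total_gb": 16}]

def place(request_gb):
    for g in configured:
        if g["profile_gb"] >= request_gb and g["free_slots"] > 0:
            g["free_slots"] -= 1
            return g["gpu"]
    for g in unconfigured:
        if g["total_gb"] >= request_gb:
            return g["gpu"]  # would be configured with a matching profile
    return None

print(place(3))  # -> "g2", the first fit among configured GPUs
```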

14-01-2021 publication date

APPLICATION PROGRAM MANAGEMENT METHOD AND APPARATUS

Number: US20210011774A1
Author: MAO Minhua
Assignee:

This application provides an application program management method and apparatus. The method is performed in a database cluster system including at least two database nodes, at least one database object is stored in each database node, and the method includes: running an application program on a first database node in a first time period; determining a target database node based on at least one historical database object accessed by the application program in the first time period, where the target database node stores the historical database object; and running the application program on the target database node in a second time period. According to this application, a database node on which an application program runs can be dynamically adjusted, to avoid overload of the database node.

1. An application program management method comprising: running an application program on a first database node in a first time period, wherein the first database node is a database node in a database cluster system comprising at least two database nodes, and at least one database object is stored in each database node; determining a target database node based on at least one historical database object accessed by the application program in the first time period, wherein the target database node stores the historical database object; and running the application program on the target database node in a second time period.
2. The method according to claim 1, wherein the running an application program on a first database node in a first time period comprises: running a first application module on the first database node in the first time period, wherein the application program is any application program in the first application module.
3. The method according to claim 2, wherein the method further comprises: determining a first application program from the first application module; wherein the determining a target database node based on at least one historical database object ...

10-01-2019 publication date

METHODS AND SYSTEMS FOR COORDINATED TRANSACTIONS IN DISTRIBUTED AND PARALLEL ENVIRONMENTS

Number: US20190012205A1
Assignee:

Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about frequencies of compound requests received and individual requests comprising the compound requests. For a plurality of request types which frequently occur in a compound request, the plurality of request types is associated to a same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about an amount of communication between said applications, and using said information to place said applications on said nodes to minimize communication among said nodes.

1. In an electronic trading system, a method for handling multileg requests, the method comprising: receiving a multileg request for executing a trade transaction, wherein the multileg request comprises a plurality of legs, wherein each leg of the multileg request comprises a request to execute a different trade transaction; determining which leg among the plurality of legs of the multileg request is less likely to execute immediately; designating as a primary leg the leg which is determined less likely to execute immediately; designating all other legs of the multileg request as secondary legs; placing the primary leg and the secondary legs in associated request queues of execution venues which handle the trade transactions associated with the primary and secondary legs; in response to a secondary leg of the multileg request reaching a front of the associated request queue, setting aside the secondary leg and not placing the secondary leg in an order book for attempting to match the secondary leg to a pending trade transaction; in response to the primary leg reaching a front of the ...
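A minimal sketch of the leg-designation step, assuming "likelihood to execute immediately" is available as a fill probability per leg; the venues and numbers are invented.

```python
# Hypothetical sketch: the least-likely leg becomes the primary leg; all
# other legs are secondary and are set aside at their queue fronts.
legs = [
    {"venue": "NYSE", "fill_prob": 0.9},
    {"venue": "CME",  "fill_prob": 0.4},  # least likely -> primary
]

primary = min(legs, key=lambda leg: leg["fill_prob"])
secondaries = [leg for leg in legs if leg is not primary]

print("primary:", primary["venue"])
print("secondary:", [leg["venue"] for leg in secondaries])
```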

Publication date: 10-01-2019

VIRTUAL DEVICE MIGRATION OR CLONING BASED ON DEVICE PROFILES

Number: US20190012208A1
Assignee:

Techniques for placing virtual machines based on compliance of device profiles are disclosed. In one embodiment, a list of device profiles may be maintained, with each device profile including details of at least one virtual device and associated capabilities. Further, a device profile from the list of device profiles may be assigned to a virtual machine running on a first host computing system. A virtual device and associated configurations required by the virtual machine may be identified to comply with the device profile. A management operation may be performed to migrate or clone the virtual device and associated configurations from a second host computing system to the first host computing system to support the compliance of the device profile assigned to the virtual machine. 1. A method comprising:maintaining a list of device profiles, each device profile including details of at least one virtual device and associated capabilities;assigning a device profile from the list of device profiles to a virtual machine running on a first host computing system;identifying a virtual device and associated configurations required by the virtual machine based on compliance with the device profile; andperforming a management operation to migrate or clone the virtual device and associated configurations from a second host computing system to the first host computing system to support the compliance of the device profile assigned to the virtual machine.2. The method of claim 1 , wherein the management operation comprises one of a virtual device cloning and a virtual device migration.3. The method of claim 1 , comprising attaching the virtual device to the virtual machine to support the compliance of the device profile upon migrating or cloning the virtual device claim 1 , wherein the virtual device is attached to the virtual machine via virtual machine reconfigurations.4. The method of claim 1 , wherein migrating or cloning the virtual device and associated configurations from ...
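
A small sketch of the compliance check that drives the migrate-or-clone decision: find another host whose virtual device already satisfies every capability in the assigned device profile. The host and profile shapes are illustrative assumptions, not taken from the patent.

```python
def find_clone_source(profile, hosts):
    """Return (host, device) for a host that already has a virtual device
    satisfying every capability required by the device profile, or None."""
    required = set(profile["capabilities"])
    for host, devices in hosts.items():
        for dev in devices:
            if required <= set(dev["capabilities"]):
                return host, dev
    return None

profile = {"device": "vgpu", "capabilities": {"encode", "3d"}}
hosts = {"host-2": [{"name": "vgpu-a", "capabilities": {"encode", "3d", "decode"}}]}
print(find_clone_source(profile, hosts))  # ('host-2', {...}) -> clone from host-2
```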

Publication date: 10-01-2019

HANDLING TENANT REQUESTS IN A SYSTEM THAT USES HARDWARE ACCELERATION COMPONENTS

Number: US20190012209A1
Assignee:

A service mapping component (SMC) is described herein for processing requests by instances of tenant functionality that execute on software-driven host components (or some other components) in a data processing system. The SMC is configured to apply at least one rule to determine whether a service requested by an instance of tenant functionality is to be satisfied by at least one of: a local host component, a local hardware acceleration component which is locally coupled to the local host component, and/or at least one remote hardware acceleration component that is indirectly accessible to the local host component via the local hardware acceleration component. In performing its analysis, the SMC can take into account various factors, such as whether or not the service corresponds to a line-rate service, latency-related considerations, security-related considerations, and so on. 1-20. (canceled) 21. A data processing system, comprising: a first server unit that includes a first processing unit configured to execute tenant functionality, a first hardware acceleration component, and a first local link configured to couple the first processing unit with the first hardware acceleration component; a second server unit that includes a second processing unit, a second hardware acceleration component, and a second local link configured to couple the second processing unit with the second hardware acceleration component; and a service mapping component provided on the first server unit, the second server unit, or a third server unit in the data processing system, the service mapping component being configured to, in different instances, cause different services requested by the tenant functionality on the first processing unit to be satisfied by the first processing unit, the first hardware acceleration component, and the second hardware acceleration component. 22. The data processing system of claim 21, wherein the first processing unit and the second processing unit are ...

Publication date: 10-01-2019

Distributed Computing Mesh

Number: US20190012212A1
Author: Lewis Ronald A.
Assignee:

Novel tools and techniques are provided for implementing a distributed computing mesh, and, more particularly, for implementing a distributed computing mesh using a hierarchical framework to distribute workload across multiple computing nodes. In various embodiments, a hierarchical distributed computing mesh might be implemented using a plurality of network nodes. A first control node may assign at least one first network node as at least one second control node. The second control node might receive a computing task from the first control node. The second control node might designate additional network nodes to process one or more portions of the computing task. The second control node may then divide the computing task and send the one or more portions of the computing task to the additional network nodes for processing. The second control node may receive one or more processed portions of the computing task from the additional network nodes. 1. A method , comprising:assigning, with a first control node, at least one first network node as at least one second control node;receiving, with the at least one second control node, a computing task from the first control node, wherein the computing task is a portion of a computational problem;determining, with the at least one second control node, an amount of computing power necessary to process the computing task;designating, with the at least one second control node, one or more additional network nodes to process one or more portions of the computing task, based at least in part on the determined amount of computing power;sending, with the at least one second control node, the one or more portions of the computing task to the one or more additional network nodes for processing; andreceiving, with the at least one second control node and from the one or more additional network nodes, one or more processed portions of the computing task.2. The method of claim 1 , further comprising:combining, with the at least one ...
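
A toy sketch of the second control node's divide/distribute/collect loop, with plain functions standing in for the additional network nodes; chunking by fixed size is an assumption, since the abstract leaves the division policy open.

```python
import itertools

def run_on_mesh(task, workers, chunk_size):
    """Second-level control node behaviour: divide the task into portions,
    send each portion to an additional node for processing, and collect
    the processed portions."""
    portions = [task[i:i + chunk_size] for i in range(0, len(task), chunk_size)]
    return [worker(portion)
            for portion, worker in zip(portions, itertools.cycle(workers))]

double = lambda xs: [2 * x for x in xs]  # stand-in for a worker node
print(run_on_mesh(list(range(10)), [double, double], chunk_size=4))
# [[0, 2, 4, 6], [8, 10, 12, 14], [16, 18]]
```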

Publication date: 10-01-2019

DYNAMICALLY SHIFTING TASKS IN DISTRIBUTED COMPUTING DATA STORAGE

Number: US20190012234A1
Assignee:

A method for dynamically shifting data-related tasks in a dispersed storage network (DSN). In an embodiment, a first dispersed storage and task (DST) execution unit (or computing device) of the DSN determines an incremental partial task execution capacity level, which is compared to a threshold level. When the partial task execution capacity level is above the threshold, the first DST execution unit selects one or more locally-stored encoded data slices which are also stored in a second DST execution unit. The first DST execution unit further obtains, from the second DST execution unit, at least one partial task relating to the encoded data slices. The first DST execution unit subsequently performs the at least one partial task on the one or more encoded data slices to produce partial results for use by the second DST execution unit or a device associated with assignment of the at least one partial task. 1. A method for execution by a first dispersed storage and task (DST) execution unit of a set of DST execution units , the method comprises:determining an incremental partial task execution capacity level of the first DST execution unit;comparing the incremental partial task execution capacity level of the first DST execution unit to a threshold level;in response to determining that the incremental partial task execution capacity level of the first DST execution unit is above the threshold level, selecting one or more encoded data slices of a slice group stored by the first DST execution unit, wherein the one or more encoded data slices are additionally stored by a second DST execution unit of the set of DST execution units;obtaining, from the second DST execution unit, at least one partial task associated with the one or more encoded data slices of the slice group; andfacilitating execution of the at least one partial task on the one or more encoded data slices of the slice group stored by the first DST execution unit to produce partial results.2. The method of ...
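
A rough sketch of the capacity-threshold takeover logic: when spare capacity exceeds the threshold, pick locally stored slices that a peer unit also stores, pull that peer's pending partial tasks, and produce partial results. All data shapes here are invented for illustration.

```python
def maybe_take_over_tasks(capacity_level, threshold, local_slices, peer):
    """If this execution unit has spare capacity above the threshold, select
    locally stored slices also held by the peer, obtain the peer's pending
    partial tasks for them, and execute those tasks locally."""
    if capacity_level <= threshold:
        return []
    shared = [s for s in local_slices if s in peer["slices"]]
    results = []
    for slice_id in shared:
        for task in peer["pending_tasks"].get(slice_id, []):
            results.append((slice_id, task, f"partial-result({task}@{slice_id})"))
    return results

peer = {"slices": {"s1", "s2"}, "pending_tasks": {"s1": ["count-words"]}}
print(maybe_take_over_tasks(0.7, 0.5, ["s1", "s3"], peer))
```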

Publication date: 09-01-2020

Synchronization and exchange of data between processors

Number: US20200012536A1
Assignee: Graphcore Ltd

A system comprising: a first subsystem comprising one or more first processors, and a second subsystem comprising one or more second processors. The second subsystem is configured to process code over a series of steps delineated by barrier synchronizations, and in a current step, to send a descriptor to the first subsystem specifying a value of each of one or more parameters of each of one or more interactions that the second subsystem is programmed to perform with the first subsystem via an inter-processor interconnect in a subsequent step. The first subsystem is configured to execute a portion of code to perform one or more preparatory operations, based on the specified values of at least one of the one or more parameters of each interaction as specified by the descriptor, to prepare for said one or more interactions prior to the barrier synchronization leading into the subsequent phase.

Publication date: 10-01-2019

TECHNOLOGIES FOR SWITCHING NETWORK TRAFFIC IN A DATA CENTER

Number: US20190014396A1
Assignee:

Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed. 1. A network switch comprising: one or more processors; communication circuitry coupled to the one or more processors, wherein the communication circuitry is to assist the one or more processors to switch network traffic of multiple link layer protocols; and one or more memory devices having stored therein a plurality of instructions that, when executed, cause the network switch to: receive, with the communication circuitry through an optical connection, network traffic to be forwarded; determine a link layer protocol of the received network traffic, wherein the received network traffic is formatted according to one of the multiple link layer protocols; and forward the network traffic as a function of the determined link layer protocol to a destination network device. 2. The network switch of claim 1, wherein to receive the network traffic comprises to receive network traffic through an optical connection that provides one fourth of a total bandwidth of a link. 3. The network switch of claim 1, wherein to receive the network traffic comprises to receive the network traffic from a sled coupled to the optical connection. 4. The network switch of claim 3, wherein to receive the network ...

Publication date: 03-02-2022

HARDWARE RESOURCE CONFIGURATION FOR PROCESSING SYSTEM

Number: US20220035679A1
Assignee:

A method for controlling hardware resource configuration for a processing system comprises obtaining performance monitoring data indicative of processing performance associated with workloads to be executed on the processing system, providing a trained machine learning model with input data depending on the performance monitoring data; and based on an inference made from the input data by the trained machine learning model, setting control information for configuring the processing system to control an amount of hardware resource allocated for use by at least one processor core. A corresponding method of training the model is provided. This is particularly useful for controlling inter-core borrowing of resource between processor cores in a multi-core processing system, where resource is borrowed between respective cores, e.g. cores on different layers of a 3D integrated circuit. 1. A computer-implemented method for controlling hardware resource configuration for a processing system comprising at least one processor core; the method comprising:obtaining performance monitoring data indicative of processing performance associated with workloads to be executed on the processing system;providing input data to a trained machine learning model, the input data depending on the performance monitoring data; andbased on an inference made from the input data by the trained machine learning model, setting control information for configuring the processing system to control an amount of hardware resource allocated for use by the at least one processor core.2. The method of claim 1 , in which:the processing system comprises a plurality of processor cores and is configured to support a first processor core processing a workload using borrowed hardware resource of a second processor core; andthe control information is set based on the inference, to control an amount of inter-core borrowing of hardware resource between the plurality of processor cores.3. The method of claim 1 , in ...
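
A minimal sketch of the inference-driven control loop, with a hard-coded decision rule standing in for the trained machine learning model; the counter names and the borrowing control knob are assumptions of this sketch.

```python
def model(ipc, cache_miss_rate):
    """Toy stand-in for the trained model: infer from two performance
    counters whether a core would benefit from borrowed resource."""
    return cache_miss_rate > 0.3 and ipc < 1.0  # hypothetical decision rule

def configure_borrowing(perf_samples):
    """Map per-core performance-monitoring data through the model and emit
    control settings for inter-core resource borrowing."""
    controls = {}
    for core, sample in perf_samples.items():
        borrow = model(sample["ipc"], sample["cache_miss_rate"])
        controls[core] = {"borrow_extra_l2": borrow}
    return controls

samples = {0: {"ipc": 0.8, "cache_miss_rate": 0.4},
           1: {"ipc": 1.6, "cache_miss_rate": 0.1}}
print(configure_borrowing(samples))  # core 0 borrows, core 1 does not
```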

Publication date: 03-02-2022

Resource provisioning systems and methods

Number: US20220035834A1
Assignee: Snowflake Inc

A method and apparatus for managing a set of processors for a set of queries is described. In an exemplary embodiment, a device receives a set of queries for a data warehouse, the set of queries including one or more queries to be processed by the data warehouse. The device further provisions a set of processors from a first plurality of processors to process the set of queries, and a set of storage resources to store data for the set of queries. In addition, the device monitors a utilization of the set of processors as the set of processors processes the set of queries. The device additionally updates the number of processors in the set of processors provisioned based on the utilization. Furthermore, the device processes the set of queries using the updated set of processors.
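
A compact sketch of the utilization-driven resizing step; the doubling/halving policy and the thresholds are assumptions of this sketch, not taken from the patent.

```python
def rescale(current, utilization, low=0.3, high=0.8, minimum=1, maximum=64):
    """Grow the provisioned processor set when utilization is high and
    shrink it when utilization is low (illustrative thresholds)."""
    if utilization > high:
        return min(current * 2, maximum)
    if utilization < low:
        return max(current // 2, minimum)
    return current

processors = 8
for u in (0.9, 0.9, 0.2):           # utilization observed while queries run
    processors = rescale(processors, u)
    print(processors)                # 16, 32, 16
```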

Publication date: 03-02-2022

ACCESSING DATA OF CATALOG OBJECTS

Number: US20220035835A1
Assignee:

Example systems and methods for cloning catalog objects are described. In one implementation, a method includes creating a copy of a catalog object without copying the data associated with the catalog object, by copying only the metadata associated with the object. The method further includes modifying, by one or more processors, the data associated with the catalog object independently of the copy of the catalog object. 1. A system comprising: a memory to store an original catalog object associated with a dataset; and one or more processors, operatively coupled to the memory, the one or more processors to: create a duplicate catalog object of the original catalog object by copying metadata associated with the dataset without copying the dataset; determine, based on the metadata, whether the dataset needs to be accessed without accessing the dataset; and access, based on the duplicate catalog object, the dataset associated with the original catalog object responsive to determining that the dataset needs to be accessed. 2. The system of claim 1, wherein the dataset is stored in files, and wherein the one or more processors are further to: execute data access requests directed to the dataset stored in the files by reading the duplicate catalog object of the original catalog object when the dataset is being recreated; and add additional files to either of the original catalog object or the duplicate catalog object of the original catalog object independently of one another. 3. The system of claim 1, wherein the duplicate catalog object comprises a duplicate hierarchy of one or more generations of children. 4. The system of claim 3, wherein to copy the metadata the one or more processors are further to copy an inventory of the dataset. 5. The system of claim 3, wherein to copy the metadata the one or more processors are further to copy information regarding the dataset that enables identification of the dataset without requiring access to the dataset. 6. The system of ...
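
A toy illustration of the zero-copy clone: copying only metadata (here, a file inventory) lets the original and the clone diverge independently. The catalog layout is an invented stand-in for the patent's catalog objects.

```python
import copy

catalog = {"sales": {"files": ["f1.parquet", "f2.parquet"]}}  # metadata only

def clone_object(name, clone_name):
    """Clone a catalog object by copying its metadata (the file inventory),
    not the underlying data files."""
    catalog[clone_name] = copy.deepcopy(catalog[name])

clone_object("sales", "sales_clone")
catalog["sales"]["files"].append("f3.parquet")   # original diverges independently
print(catalog["sales_clone"]["files"])           # ['f1.parquet', 'f2.parquet']
```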

Publication date: 19-01-2017

Allocating field-programmable gate array (fpga) resources

Number: US20170017523A1
Author: Steven A. Guccione
Assignee: Bank of America Corp

A system for allocating field-programmable gate array (FPGA) resources comprises a plurality of FPGAs operable to implement one or more pipeline circuits, the plurality of FPGAs comprising FPGAs of different processing capacities, and one or more processors operable to access a set of data comprising a plurality of work items to be processed according to a pipeline circuit associated with each of the plurality of work items, determine processing requirements for each of the plurality of work items based at least in part on the pipeline circuit associated with each of the plurality of work items, sort the plurality of work items according to the determined processing requirements, and allocate each of the plurality of work items to one of the plurality of FPGAs, such that no FPGA is allocated a work item with processing requirements that exceed the processing capacity of the FPGA.
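
A short sketch of the sort-then-allocate idea: order work items by descending processing requirement and place each on the smallest FPGA that can hold it, so no FPGA receives an item exceeding its capacity. The best-fit tie-break is an assumption of this sketch.

```python
def allocate(work_items, fpgas):
    """Sort work items by descending requirement and give each one to the
    smallest FPGA whose capacity still covers it."""
    placements = {}
    for item in sorted(work_items, key=lambda w: w["req"], reverse=True):
        candidates = [f for f in fpgas if f["capacity"] >= item["req"]]
        if not candidates:
            raise ValueError(f"no FPGA fits {item['name']}")
        fpga = min(candidates, key=lambda f: f["capacity"])
        placements[item["name"]] = fpga["name"]
    return placements

fpgas = [{"name": "small", "capacity": 10}, {"name": "big", "capacity": 100}]
items = [{"name": "w1", "req": 60}, {"name": "w2", "req": 5}]
print(allocate(items, fpgas))  # {'w1': 'big', 'w2': 'small'}
```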

Publication date: 18-01-2018

Packet data traffic management apparatus

Number: US20180019955A1
Assignee: Innovasic Inc

A packet data network traffic management device comprises a plurality of ports comprising at least a first port, a second port, and a third port; and a plurality of multi-threaded deterministic micro-controllers, each of the micro-controllers associated with a corresponding one of the ports to control packet data through the corresponding port; and the plurality of multi-threaded deterministic micro-controllers cooperatively operate to selectively communicate data packets between the plurality of ports.

Publication date: 21-01-2021

DECENTRALIZED RESOURCE SCHEDULING

Number: US20210019176A1
Assignee:

Disclosed are various embodiments for distributed resource scheduling. An eviction request from a first host is received. The eviction request comprises data regarding a virtual machine to be migrated from the first host. The eviction request is then broadcast to a plurality of hosts. A plurality of responses are received from the plurality of hosts, each response comprising a score representing an ability of a respective one of the plurality of hosts to act as a new host for the virtual machine. A second host is selected from the plurality of hosts to act as the new host for the virtual machine based at least in part on the score in each of the plurality of responses. Then, a response is sent to the first host, the response containing an identifier of the second host. 1. A system, comprising: a computing device comprising a processor and a memory; and machine-readable instructions stored in the memory that, when executed by the processor, cause the computing device to at least: receive an eviction request from a first host, the eviction request comprising data regarding a virtual machine to be migrated from the first host; broadcast the eviction request to a plurality of hosts; receive a plurality of responses from the plurality of hosts, each response comprising a score representing an ability of a respective one of the plurality of hosts to act as a new host for the virtual machine; select a second host from the plurality of hosts to act as the new host for the virtual machine based at least in part on the score in each of the plurality of responses; and send a response to the first host, the response containing an identifier of the second host. 2. The system of claim 1, wherein the machine-readable instructions further cause the computing device to determine whether any of the plurality of hosts are available as the new host for the virtual machine based at least in part on the score in each of the plurality of responses. 3. The system of claim 1, ...
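
A minimal sketch of the broadcast-and-score selection; the free-memory score is a hypothetical choice, since the abstract does not define how hosts compute their scores.

```python
def choose_new_host(vm, hosts):
    """Broadcast an eviction request, gather per-host suitability scores,
    and pick the best-scoring host (or None if none can absorb the VM)."""
    responses = {}
    for host in hosts:
        free = host["mem_total"] - host["mem_used"]
        responses[host["name"]] = free - vm["mem"]  # headroom after placement
    best, score = max(responses.items(), key=lambda kv: kv[1])
    return best if score >= 0 else None

hosts = [{"name": "h2", "mem_total": 64, "mem_used": 60},
         {"name": "h3", "mem_total": 64, "mem_used": 16}]
print(choose_new_host({"mem": 8}, hosts))  # h3
```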

Publication date: 16-01-2020

Method for Allocating Memory Resources and Terminal Device

Number: US20200019441A1

A method for allocating memory resources and a terminal device are provided. The method is applied to the terminal device. The terminal device includes a rich execution environment (REE) and a fingerprint trust application (TA). The method includes the following. In response to a request for memory resources from the fingerprint TA, the REE obtains N values of memory resources requested at N time points within a preset period by the fingerprint TA, where each of the N values of memory resources is in one-to-one correspondence with one of the N time points and N represents an integer larger than 1. The REE determines a target value of memory resources according to the N values of memory resources, and allocates memory resources equal in value to the target value of memory resources for the fingerprint TA.
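
A tiny sketch of deriving the target value from the N sampled request values; the patent leaves the statistic open, so taking the maximum (which would have satisfied every observed request) is an assumption of this sketch.

```python
def target_allocation(samples):
    """Derive one allocation value from the N values requested during the
    preset period; the maximum guarantees every observed request would fit."""
    assert len(samples) > 1          # N is an integer larger than 1
    return max(samples)

requested_kib = [512, 768, 640]      # values sampled at N time points
print(target_allocation(requested_kib))  # 768
```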

Publication date: 21-01-2021

Workload/converged infrastructure asset allocation system

Number: US20210019187A1
Author: Alan Raymond White
Assignee: Dell Products LP

A workload/Converged Infrastructure (CI) asset allocation system includes a CI system having a plurality of CI assets that include compute devices and storage arrays. A CI/workload management system is coupled to the CI system and receives a workload that includes workload requirements. The CI/workload management system then determines a first storage array that is included in the CI assets and that satisfies at least one storage requirement included in the workload requirements, and a first subset of the compute devices included in the CI assets that each include a path to the first storage array and that satisfy at least one compute requirement included in the workload requirements. The CI/workload management system then identifies the first subset of the compute devices, and configures the first subset of the compute devices and the first storage array to provide the workload.

Publication date: 21-01-2021

FINITE STATE MACHINE DRIVEN WORKFLOWS

Number: US20210019192A1
Assignee: Capital One Services, LLC

Disclosed herein are embodiments for providing finite state machine driven workflows. In an embodiment, a workflow template is defined for a type of task. The workflow template may represent a finite state machine. The workflow template may be linked to an external party and an asset type, which may be stored in a workflow database. An asset may be received from the external party including an external party attribute identifying the external party, an asset type attribute, and an owner attribute. The owner attribute may be associated with an application end user. A determination may be made whether the external party attribute and the asset type attribute of the asset match the external party and the asset type linked to the workflow template. If a match is determined, instances of the task and the one or more actions of the workflow template may be created. 1. A system, comprising: a memory; and at least one processor coupled to the memory and configured to perform operations comprising: defining a workflow template for a type of task, the workflow template representing a finite state machine; linking the workflow template to an external party and an asset metadata attribute stored in a workflow database; receiving an asset from the external party, the asset including an external party attribute, an asset metadata attribute, and an owner attribute, wherein the owner attribute is associated with an application end user; determining whether the external party attribute and the asset metadata attribute of the asset match the stored external party and the stored asset metadata attribute linked to the workflow template; and instantiating the workflow template when the external party attribute and the asset metadata attribute of the asset match the stored external party and the stored asset metadata attribute of the workflow template. 2. The system of claim 1, wherein the workflow template includes a task, one or more actions, and one or ...
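
A small sketch of the attribute-matching step that gates instantiation of a workflow template; the attribute names mirror the abstract, but the data shapes are invented.

```python
templates = [{"external_party": "vendor-a", "asset_type": "invoice",
              "task": "review", "actions": ["validate", "approve"]}]

def instantiate(asset):
    """Create task/action instances when the asset's attributes match a
    stored workflow template; return None when nothing matches."""
    for t in templates:
        if (asset["external_party"] == t["external_party"]
                and asset["asset_type"] == t["asset_type"]):
            return {"task": t["task"], "actions": list(t["actions"]),
                    "owner": asset["owner"]}
    return None

asset = {"external_party": "vendor-a", "asset_type": "invoice", "owner": "user-7"}
print(instantiate(asset))  # instances created for the matching template
```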

Publication date: 21-01-2021

EXTENDING BERKELEY PACKET FILTER SEMANTICS FOR HARDWARE OFFLOADS

Number: US20210019197A1
Assignee: Intel Corporation

Examples include registering a device driver with an operating system, including registering available hardware offloads. The operating system receives a call to a hardware offload, inserts a binary filter representing the hardware offload into a hardware component and causes the execution of the binary filter by the hardware component when the hardware offload is available, and executes the binary filter in software when the hardware offload is not available. 1-25. (canceled) 26. At least one non-transient machine-readable medium storing instructions associated with a Linux operating system, the instructions being executable by at least one machine, the at least one machine being usable in association with a host computer and network interface controller circuitry, the host computer being to execute, when the host computer is in operation, the Linux operating system and a user space application, the Linux operating system, when executed, having a kernel space, the instructions when executed by the at least one machine resulting in the at least one machine being configured for performance of operations comprising: providing at least one call associated with the kernel space, the at least one call being usable to set, at least in part, packet filter rules, the packet filter rules corresponding, at least in part, to packet filter rule data, the packet filter rule data being for use in programming, at least in part, packet processing hardware offload circuitry of the network interface controller circuitry to determine, based upon header data of at least one incoming packet and the packet filter rule data, at least one action of the packet filter rules to apply to the at least one incoming packet; and wherein the at least one action is configurable to include: at least one network address translation-related operation associated with the header data; dropping the at least one incoming packet; and/or forwarding the at least one incoming packet; the packet filter ...

Publication date: 17-01-2019

Fpga-enabled compute instances

Number: US20190020538A1
Assignee: Amazon Technologies Inc

A resource manager of a virtualized computing service indicates to a client that FPGA-enabled compute instances are supported at the service. From a set of virtualization hosts of the service, a particular host from which an FPGA is accessible is selected for the client based on an indication of computation objectives of the client. Configuration operations are performed to prepare the host for the application, and an FPGA-enabled compute instance is launched at the host for the client.

Publication date: 17-01-2019

TECHNOLOGIES FOR OPTICAL COMMUNICATION IN RACK CLUSTERS

Number: US20190021182A1
Assignee:

Technologies for optical communication in a rack cluster in a data center are disclosed. In the illustrative embodiment, a network switch is connected to each of 1,024 sleds by an optical cable that enables communication at a rate of 200 gigabits per second. The optical cable has low loss, allowing for long cable lengths, which in turn allows for connecting to a large number of sleds. The optical cable also has a very high intrinsic bandwidth limit, allowing for the bandwidth to be upgraded without upgrading the optical infrastructure. 1. A system comprising: a plurality of network switches, each network switch of the plurality of network switches comprising a plurality of optical connectors; a plurality of sleds, each sled of the plurality of sleds comprising a circuit board, an optical connector mounted on the circuit board, and one or more physical resources mounted on the circuit board; and a plurality of passive optical cables, wherein each passive optical cable of the plurality of passive optical cables comprises: at least two optical fibers; a first connector at a first end of the passive optical cable connected to an optical connector of a sled of the plurality of sleds; and a second connector at a second end of the passive optical cable connected to an optical connector of the plurality of optical connectors of a corresponding network switch of the plurality of the network switches, wherein each of the plurality of sleds is connected to each of the plurality of network switches by at least one of the plurality of passive optical cables. The present application is a continuation application of U.S. application Ser. No. 15/396,035, entitled "TECHNOLOGIES FOR OPTICAL COMMUNICATION IN RACK CLUSTERS," which was filed on Dec. 30, 2016, is scheduled to issue as U.S. Pat. No. 10,070,207 on Sep. 4, 2018, and claims the benefit of U.S. Provisional Patent Application No. 62/365,969, filed Jul. 22, 2016, U.S. Provisional Patent Application No. 62/376,859, filed Aug. 18, ...

Publication date: 16-01-2020

METHOD AND SYSTEM FOR REALIZING FUNCTION BY CAUSING ELEMENTS OF HARDWARE TO PERFORM LINKAGE OPERATION

Number: US20200022051A1
Assignee: SONY CORPORATION

A system that stores functional information indicating a capability of each of a plurality of elements located remotely from the system; identifies a function capable of being performed by linking a plurality of the elements based on the stored functional information; and transmits information corresponding to the identified function capable of being performed by linking the plurality of elements to a first device remote from the system. 1. A system comprising: circuitry that: stores functional information indicating a capability of each of a plurality of elements located remotely from the system, the plurality of elements including at least one or more sensors, the at least one or more sensors being a motion sensor, a camera sensor, or a human detection sensor; identifies a function capable of being performed by linking a plurality of displayed information corresponding to the elements based on the stored functional information; generates executable application instructions; and transmits the executable application instructions corresponding to the identified function capable of being performed by linking the plurality of displayed information corresponding to the elements to a first device remote from the system. This application is a continuation of U.S. application Ser. No. 16/163,091, filed Oct. 17, 2018, which is a continuation of U.S. application Ser. No. 16/008,770, filed Jun. 14, 2018 (now U.S. Pat. No. 10,142,901), which is a continuation of U.S. application Ser. No. 15/123,171, filed Sep. 1, 2016 (now U.S. Pat. No. 10,021,612), which is based on PCT Application No. PCT/JP2015/002360, filed May 8, 2015, which claims the benefit of Japanese Priority Patent Application JP 2014-101507, filed May 15, 2014, the entire contents of each are incorporated herein by reference. The present disclosure relates to a method and system for realizing a function by causing elements of hardware or software to perform a linkage operation. In recent years, with the development of ...

Publication date: 26-01-2017

Method and Network Node for Selecting a Media Processing Unit

Number: US20170024259A1
Assignee:

The disclosure relates to a method for selecting a media processing unit, performed in a network node of a distributed cloud. The distributed cloud comprises two or more media processing units configurable to handle media processing required by a media service. The method comprises: receiving, from a communication device, a request for the media service; obtaining, for each media processing unit, at least one configurable parameter value of a parameter relating to handling of the media service; and selecting, based on the at least one parameter value, a media processing unit for processing the requested media service for the communication device. The disclosure also relates to a corresponding network node, computer program and computer program products. 1-26. (canceled) 27. A method for selecting a media processing unit performed in a network node of a distributed cloud, the distributed cloud comprising two or more media processing units configurable to handle media processing required by a media service, the method comprising: receiving, from a communication device, a request for the media service; obtaining, for each media processing unit, at least one configurable parameter value of a parameter relating to handling of the media service; and selecting, based on the at least one parameter value, a media processing unit for processing the requested media service for the communication device. 28. The method of claim 27, wherein the selecting comprises: determining, for each media processing unit, a score by summing all parameter values for a respective media processing unit, the respective sum constituting a respective score for each media processing unit; and selecting the media processing unit having the highest score. 29. The method of claim 28, wherein the determining for each media processing unit a score comprises normalizing each parameter value by dividing the ...
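
Following claims 28-29, a sketch of the score-based selection: normalize each parameter across units, sum per unit, and take the highest score. Claim 29 is truncated mid-sentence, so dividing by the per-parameter maximum is one plausible reading, not a confirmed detail.

```python
def select_unit(units):
    """Score each media processing unit by summing its normalized
    parameter values and return the unit with the highest score."""
    params = units[next(iter(units))].keys()
    maxima = {p: max(u[p] for u in units.values()) for p in params}
    scores = {name: sum(u[p] / maxima[p] for p in params)
              for name, u in units.items()}
    return max(scores, key=scores.get)

units = {"mpu-a": {"free_cpu": 0.2, "bandwidth": 100},
         "mpu-b": {"free_cpu": 0.9, "bandwidth": 80}}
print(select_unit(units))  # mpu-b (score 1.8 vs. 1.22)
```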

Publication date: 25-01-2018

TECHNOLOGIES FOR MANAGING ALLOCATION OF ACCELERATOR RESOURCES

Number: US20180024861A1
Assignee:

Technologies for dynamically managing the allocation of accelerator resources include an orchestrator server. The orchestrator server is to assign a workload to a managed node for execution, determine a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload, provision, prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs, and allocate the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs. Other embodiments are also described and claimed. 1. An orchestrator server to dynamically manage the allocation of accelerator resources, the orchestrator server comprising: one or more processors; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the orchestrator server to: assign a workload to a managed node for execution; determine a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload; provision, prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs; and allocate the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs. 2. The orchestrator server of claim 1, wherein to determine the predicted demand comprises to determine a demand for one or more field programmable gate arrays (FPGAs). 3. The orchestrator server of claim 2, wherein to provision the one or more accelerator resources comprises to provide, to the one or more FPGAs, a bit stream indicative of a configuration of each FPGA to accelerate execution of the one or more jobs. 4. The orchestrator server of claim 1, wherein to determine the predicted demand comprises to determine the number of accelerator resources to allocate to satisfy the predicted ...
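
A rough sketch of provisioning ahead of predicted demand, with a trivial moving-average forecast standing in for whatever prediction method an implementation would use (the abstract does not prescribe one).

```python
def provision_ahead(history, horizon=1):
    """Predict accelerator demand one step ahead with a short moving
    average and provision that many resources before the demand arrives."""
    window = history[-3:]
    predicted = round(sum(window) / len(window))
    return {"provision_now": predicted, "for_step": len(history) + horizon}

jobs_needing_fpgas = [2, 4, 6]       # accelerator demand per past interval
print(provision_ahead(jobs_needing_fpgas))  # provision 4 ahead of time
```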

Publication date: 25-01-2018

TEST SUPPORT DEVICE AND TEST SUPPORT METHOD

Number: US20180024904A1
Assignee: FUJITSU LIMITED

A test support device includes a processor configured to acquire event information on events which occur during an execution of a target program. The processor is configured to classify the events into event groups on basis of similarity of timings at which the respective events occur. The processor is configured to calculate, for each of the event groups, a summed value of evaluation values of the respective events classified into the relevant event group. The processor is configured to determine for each of the event groups, on basis of the summed value, whether a breakpoint of a target process executed by the target program is present at a timing corresponding to the relevant event group. The processor is configured to display timing information indicating a timing at which a breakpoint is determined to be present, in association with elapsed time after a start of executing the target program. 1. A non-transitory computer-readable recording medium having stored therein a test support program that causes a computer to execute a test support process , the test support process comprising:acquiring event information on events which occur during an execution of a target program to be tested;classifying the events into event groups on basis of similarity of timings at which the respective events occur;calculating, for each of the event groups, a summed value of evaluation values of the respective events classified into the relevant event group;determining for each of the event groups, on basis of the summed value, whether a breakpoint of a target process is present at a timing corresponding to the relevant event group, the target process being executed by the target program; anddisplaying timing information indicating a timing at which a breakpoint is determined to be present, in association with elapsed time after a start of executing the target program.2. The non-transitory computer-readable recording medium according to claim 1 , the test support process comprising: ...

Publication date: 10-02-2022

MANAGEMENT DEVICE, MANAGEMENT SYSTEM, MANAGEMENT METHOD, AND PROGRAM

Number: US20220043429A1
Author: KATO Kohei, NASU Osamu
Assignee: Mitsubishi Electric Corporation

A receiver receives an acquisition request to acquire a value of real resource information associated with a device connected to a network, or a value of virtual resource information associated with a calculation result of calculation performed using a value of the real resource information. A real resource information acquirer acquires the value of the real resource information by causing a collector to collect a value from the device associated with the real resource information. A virtual resource information acquirer acquires the value of the virtual resource information by causing a calculator to perform calculation using the value of the real resource information. A responder returns a response including the value of the real resource information or a response including the value of the virtual resource information based on the received acquisition request. 1. A management device for managing a data model including real resource information associated with a device connected to a network, and virtual resource information associated with a calculation result of calculation performed using a value of the real resource information, the management device comprising: a receiver to receive an acquisition request generated based on the data model to acquire the value of the real resource information or a value of the virtual resource information; a calculator to perform the calculation; a real resource information acquirer to acquire the value of the real resource information by causing a collector to collect a value from the device associated with the real resource information; a virtual resource information acquirer to acquire the value of the virtual resource information by causing the calculator to perform the calculation using the value of the real resource information; and a responder to return, in response to the receiver receiving the acquisition request for the real resource information, a response including the value of the real resource ...

Publication date: 25-01-2018

Automated data center maintenance

Number: US20180025299A1
Assignee: Intel Corp

Techniques for automated data center maintenance are described. In an example embodiment, an automated maintenance device may comprise processing circuitry and non-transitory computer-readable storage media comprising instructions for execution by the processing circuitry to cause the automated maintenance device to receive an automation command from an automation coordinator for a data center, identify an automated maintenance procedure based on the received automation command, and perform the identified automated maintenance procedure. Other embodiments are described and claimed.

Publication date: 10-02-2022

TECHNOLOGIES FOR PROVIDING PREDICTIVE THERMAL MANAGEMENT

Number: US20220043679A1
Author: Chiu ShuLing, Ma ChungWen
Assignee:

Technologies for providing predictive thermal management include a compute device. The compute device includes a compute engine and an execution assistant device to assist the compute engine in the execution of a workload. The compute engine is configured to obtain a profile that relates a utilization factor indicative of a present amount of activity of the execution assistant device to a predicted temperature of the execution assistant device, determine, as the execution assistant device assists in the execution of the workload, a value of the utilization factor of the execution assistant device, determine, as a function of the determined value of the utilization factor and the obtained profile, the predicted temperature of the execution assistant device, determine whether the predicted temperature satisfies a predefined threshold temperature, and adjust, in response to a determination that the predicted temperature satisfies the predefined threshold temperature, an operation of the compute device to reduce the predicted temperature. Other embodiments are also described and claimed. 1. A compute device comprising:a compute engine;an execution assistant device configured to assist the compute engine in the execution of a workload;wherein the compute engine is configured to:obtain a profile that relates a utilization factor indicative of a present amount of activity of the execution assistant device to a predicted temperature of the execution assistant device;determine, as the execution assistant device assists in the execution of the workload, a value of the utilization factor of the execution assistant device;determine, as a function of the determined value of the utilization factor and the obtained profile, the predicted temperature of the execution assistant device;determine whether the predicted temperature satisfies a predefined threshold temperature; andadjust, in response to a determination that the predicted temperature satisfies the predefined threshold ...
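
A small sketch of the profile lookup and threshold check; the utilization-to-temperature points and the linear interpolation are assumptions of this sketch.

```python
def predicted_temp(utilization, profile):
    """Look up the predicted temperature for the current utilization
    factor by linear interpolation over the profile points."""
    pts = sorted(profile.items())
    for (u0, t0), (u1, t1) in zip(pts, pts[1:]):
        if u0 <= utilization <= u1:
            return t0 + (t1 - t0) * (utilization - u0) / (u1 - u0)
    return pts[-1][1]

profile = {0.0: 35.0, 0.5: 55.0, 1.0: 85.0}   # utilization -> deg C (made up)
temp = predicted_temp(0.75, profile)
if temp >= 80.0:                               # predefined threshold
    print("throttle workload")                 # adjust operation of the device
print(round(temp, 1))                          # 70.0
```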

Publication date: 10-02-2022

COMPUTERIZED SYSTEMS AND METHODS FOR FAIL-SAFE LOADING OF INFORMATION ON A USER INTERFACE USING A CIRCUIT BREAKER

Number: US20220043689A1
Assignee: Coupang Corp.

Systems and methods are provided for fail-safe loading of information on a user interface, comprising receiving, via a modular platform, requests for access to a mobile application platform from a plurality of mobile devices, opening and directing the requests for access to the mobile application platform to a sequential processor of an application programming interface (API) gateway when a parallel processor of the API gateway is unresponsive to requests for access to the mobile application platform for a predetermined period of time, periodically checking a status of the parallel processor, and redirecting the requests for access to the mobile application platform to the parallel processor when the parallel processor is capable of processing requests for access to the mobile application platform. 1-20. (canceled) 21. A computer-implemented system for fail-safe loading of information on a user interface, the system comprising: a memory storing instructions; and at least one processor configured to execute the instructions to: receive requests for access to a mobile application platform from a plurality of mobile devices; direct the requests for access to the mobile application platform to a sequential processor of an application programming interface (API) gateway when a parallel processor of the API gateway is unresponsive to requests for access to the mobile application platform for a predetermined period of time; and redirect the requests for access to the mobile application platform to the parallel processor when the parallel processor is capable of processing requests for access to the mobile application platform, wherein at least one of the parallel processor or the sequential processor is configured to transmit, to one or more modular providers, a request for one or more modules to display on the mobile application platform, and wherein the one or more modules are stored in a database. 22. The system of claim 21, wherein the one or more modular ...
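
A toy circuit breaker in the spirit of the abstract: route to the parallel processor until it has been unresponsive for the predetermined period, then fall back to the sequential processor and return once the parallel path recovers. The timings and the probing model are invented for illustration.

```python
import time

class Breaker:
    """Route requests to 'parallel' until it has been unresponsive for
    `trip_after` seconds, then fall back to 'sequential'."""
    def __init__(self, trip_after=5.0):
        self.trip_after = trip_after
        self.unresponsive_since = None

    def route(self, parallel_ok, now=None):
        now = time.monotonic() if now is None else now
        if parallel_ok:
            self.unresponsive_since = None        # close the breaker again
            return "parallel"
        if self.unresponsive_since is None:
            self.unresponsive_since = now         # start the grace period
        if now - self.unresponsive_since >= self.trip_after:
            return "sequential"                   # breaker open: fall back
        return "parallel"                         # still within grace period

b = Breaker()
print(b.route(False, now=0.0), b.route(False, now=6.0), b.route(True, now=7.0))
# parallel sequential parallel
```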

Publication date: 24-01-2019

APPARATUSES, METHODS, AND SYSTEMS FOR BLOCKCHAIN TRANSACTION ACCELERATION

Number: US20190026146A1
Assignee:

Methods and apparatuses relating to accelerating blockchain transactions are described. In one embodiment, a processor includes a hardware accelerator to execute an operation of a blockchain transaction, and the hardware accelerator includes a dispatcher circuit to route the operation to a transaction processing circuit when the operation is a transaction operation and route the operation to a block processing circuit when the operation is a block operation. In another embodiment, a processor includes a hardware accelerator to execute an operation of a blockchain transaction; and a network interface controller including a dispatcher circuit to route the operation to a transaction processing circuit of the hardware accelerator when the operation is a transaction operation and route the operation to a block processing circuit of the hardware accelerator when the operation is a block operation. 1. An apparatus comprising:a core to execute a thread and offload an operation of a blockchain transaction;a transaction processing circuit of a hardware accelerator to execute an offloaded operation of a blockchain transaction;a block processing circuit of the hardware accelerator to execute an offloaded operation of a blockchain transaction; anda dispatcher circuit of the hardware accelerator to route an offloaded operation to the transaction processing circuit of the hardware accelerator when the offloaded operation is a transaction operation and route the offloaded operation to the block processing circuit of the hardware accelerator when the offloaded operation is a block operation.2. The apparatus of claim 1 , wherein the dispatcher circuit is to perform an initial processing for the offloaded operation before routing the offloaded operation claim 1 , and the initial processing comprises a digital signature operation.3. The apparatus of claim 1 , wherein the dispatcher circuit is to route the offloaded operation to a peer processing circuit when the offloaded operation is ...

Publication date: 24-01-2019

Accelerator control apparatus, accelerator control method, and program

Number: US20190026157A1
Assignee: NEC Corp

An accelerator control apparatus includes: a task storage part which holds an executable task(s); a data scheduler which selects a task needing a relatively small input/output data amount on a memory included in an accelerator when the task is executed by the accelerator from the executable task(s) and instructs the accelerator to prepare for data I/O on the memory for the selected task; and a task scheduler which instructs the accelerator to execute the selected task and adds a task that becomes executable upon completion of the selected task to the task storage part, wherein the data scheduler continues, depending on a use status of the memory, selection of a next task from the executable task(s) held in the task storage part and preparation of data I/O for the next task selected.

Publication date: 24-01-2019

VIRTUAL VECTOR PROCESSING

Number: US20190026158A1
Assignee:

Methods and apparatus to provide virtualized vector processing are described. In one embodiment, one or more operations corresponding to a virtual vector request are distributed to one or more processor cores for execution. 1. An apparatus comprising:a first logic to allocate a first portion of one or more operations corresponding to a virtual vector request to a first processor core; anda second logic to generate a first signal corresponding to a second portion of the one or more operations.2. The apparatus of claim 1 , further comprising a second processor core to receive the first signal claim 1 , wherein the second processor core comprises:a third logic to allocate a third portion of the one or more operations to the second processor core; anda fourth logic to generate a second signal corresponding to a fourth portion of the one or more operations.3. The apparatus of claim 2 , wherein the third logic allocates the third portion based on information corresponding to one or more available resources of the second processor core.4. The apparatus of claim 2 , wherein the third logic allocates the third portion based on information corresponding to the first signal.5. The apparatus of claim 2 , further comprising a fourth logic to maintain information corresponding to one or more available resources of the second processor core.6. The apparatus of claim 2 , wherein the third logic allocates the third portion based on overhead information corresponding to communication with one or more of the first processor core or a third processor core.7. The apparatus of claim 2 , further comprising a fifth logic to transmit an acknowledgment signal to the first processor core after the second processor core has retired one or more operations corresponding to the third portion.8. The apparatus of claim 1 , wherein the first processor core comprises one or more of the first logic or the second logic.9. The apparatus of claim 1 , further comprising a plurality of processor cores.10. ...

Publication date: 24-01-2019

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM

Number: US20190026159A1
Assignee: FUJITSU LIMITED

An information processing device includes a memory storing information indicating a virtual address space for data to be processed; and a processor that executes, via the virtual address space, a given process on the data to be processed, monitors access from the processor to multiple monitoring regions that are among a plurality of regions included in the virtual address space and have been set as targets to be monitored, and executes given control based on an accessed monitoring region among the multiple monitoring regions for which the access has been detected by the processor. 1. An information processing device comprising: a memory storing information indicating a virtual address space for data to be processed; and a processor that executes, via the virtual address space, a given process on the data to be processed, monitors access from the processor to multiple monitoring regions that are among a plurality of regions included in the virtual address space and have been set as targets to be monitored, and executes given control based on an accessed monitoring region among the multiple monitoring regions for which the access has been detected by the processor. 2. The information processing device according to claim 1, wherein the multiple monitoring regions are set so that the multiple monitoring regions are not adjacent to each other in the virtual address space, wherein the processor accesses the plurality of regions included in the virtual address space in given order in the given process, and wherein when the processor protects the multiple monitoring regions before the start of the given process and a control signal indicating access to a protected monitoring region is generated after the start of the given process, the processor identifies the accessed monitoring region based on the control signal and a virtual address of the identified monitoring region. 3. The information processing device according to claim 2, wherein the processor cancels the protection of the ...

Publication date: 25-01-2018

Technologies for allocating resources within a self-managed node

Number: US20180026904A1
Assignee: Intel Corp

Technologies for dynamically allocating resources within a self-managed node include a self-managed node to receive quality of service objective data indicative of a performance objective of one or more workloads assigned to the self-managed node. Each workload includes one or more tasks. The self-managed node is also to execute the one or more tasks to perform the one or more workloads, obtain telemetry data as the workloads are performed, determine, as a function of the telemetry data, an adjustment to the allocation of resources among the workloads to satisfy the performance objective, and apply the determined adjustment as the workloads are performed by the self-managed node. Other embodiments are also described and claimed.

Publication date: 25-01-2018

Technologies for determining and storing workload characteristics

Number: US20180027060A1
Assignee: Intel Corp

Technologies for determining and storing workload characteristics include an orchestrator server to identify a workload to be executed by a managed node, obtain a profile associated with the workload, wherein the profile includes a model that relates an input parameter set indicative of one of more characteristics of the workload with an output parameter set indicative of one or more aspects of resources to be allocated for execution of the workload, determine, as a function of the input parameter set and the model, resources to allocate to the managed node to execute the workload, and allocate the determined resources to the managed node to execute the workload. Other embodiments are also described and claimed.

Publication date: 25-01-2018

Techniques to determine and process metric data for physical resources

Number: US20180027063A1
Assignee: Intel Corp

Various embodiments are generally directed to an apparatus, method and other techniques for communicating metric data between a plurality of management controllers for sleds via an out-of-band (OOB) network, the sleds comprising physical resources and the metric data to indicate one or more metrics for the physical resources. Embodiments may also include determining a physical resource of the physical resources to perform a task based at least in part on the one or more metrics, and causing the task to be performed by the physical resources.

Publication date: 25-01-2018

Technologies for managing the efficiency of workload execution

Number: US20180027066A1
Assignee: Intel Corp

Technologies for managing the efficiency of workload execution in a managed node include a managed node that includes one or more processors that each include multiple cores. The managed node is to execute threads of workloads assigned to the managed node, generate telemetry data indicative of an efficiency of execution of the threads, determine, as a function of the telemetry data, an adjustment to a configuration of the threads among the cores to increase the efficiency of the execution of the threads, and apply the determined adjustment. Other embodiments are also described and claimed.

Publication date: 23-01-2020

Virtual network function management apparatus, virtual infrastructure management apparatus, and virtual network function configuration method

Number: US20200026542A1
Assignee: NEC Corp

A virtual network function management apparatus includes: a physical machine candidate query part configured to query, when creating a virtual network function, about a physical machine candidate in which a virtual machine configuring a virtual network function can be deployed, with respect to a virtual infrastructure management apparatus that manages a virtual infrastructure configured by using two or more types of physical machines; a physical machine selection part configured to select a physical machine that can satisfy performance required by the virtual network function, from among physical machine candidates received from the virtual infrastructure management apparatus; and a virtual machine creation instruction part configured to instruct the virtual infrastructure management apparatus to specify the selected physical machine and create a virtual machine configuring the virtual network function.

23-01-2020 publication date

QUANTUM HYBRID COMPUTATION

Number: US20200026551A1

Technologies are described herein to implement quantum hybrid computations. Embodiments include receiving a hybrid program, assigning respective functions corresponding to the hybrid program to either CPU processing or QPU processing, scheduling processing for the respective functions, initiating execution of the hybrid program, and collating results of the execution of the classical-quantum hybrid program.

1. A method (OS perspective), comprising:
assigning respective functions corresponding to a hybrid program to either of classical information processing or quantum information processing;
scheduling processing for the respective functions corresponding to the hybrid program;
initiating execution of the hybrid program;
transferring partial results of functions between classical processors and quantum processors; and
collating results of the execution of the hybrid program.
2. The method of claim 1, wherein the assigning is based on one or more features for the respective function that are inherently associated with either of classical information processing or quantum information processing.
3. The method of claim 1, wherein the classical information processing includes any form of digital processing.
4. The method of claim 1, wherein the assigning is based on heuristics and a characterized performance profile of the classical processors and the quantum processors.
5. The method of claim 1, wherein the scheduling includes prioritizing of the respective functions for both classical information processing and quantum information processing.
6. The method of claim 4, wherein the scheduling is based on optimization.
7. The method of claim 5, wherein the scheduling is based on data dependencies between functions.
8. The method of claim 5, wherein the initiating is based on results of the prioritizing.
9. An apparatus (OS perspective), comprising:
a receiver to receive a hybrid program;
an arbiter to assign respective functions to either of classical information ...

23-01-2020 publication date

AUTOMATIC LOCALIZATION OF ACCELERATION IN EDGE COMPUTING ENVIRONMENTS

Number: US20200026575A1

Methods, apparatus, systems and machine-readable storage media of an edge computing device which is enabled to access and select the use of local or remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.

1. An edge computing device in an edge computing system, comprising:
acceleration circuitry;
processing circuitry; and
a memory device comprising instructions stored thereon, wherein the instructions, when executed by the processing circuitry, configure the processing circuitry to perform operations to:
obtain first telemetry information that indicates availability of the acceleration circuitry to execute a function;
obtain second telemetry information that indicates availability of a remote acceleration resource to execute the function, the remote acceleration resource located at a remote location in the edge computing system that is remote from the edge computing device;
identify an estimated time to execute the function at the acceleration circuitry or the remote acceleration resource, based on evaluation of the first and second telemetry information; and
select use of the acceleration circuitry or the remote acceleration resource, to execute the function on a workload, based on identification of the estimated time to execute the function at the remote acceleration resource or the acceleration circuitry in relation to a service level agreement.
2. ...
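At its core, the selection step weighs a local execution estimate against a remote estimate plus transfer overhead, then checks the winner against the service level agreement. A hedged sketch with made-up numbers and an assumed additive cost model:

# Illustrative local-vs-remote selection; the SLA check and cost model are assumptions.
def choose_accelerator(local_est_s, remote_est_s, transfer_s, sla_deadline_s):
    """Compare end-to-end estimates against the service level agreement."""
    remote_total = remote_est_s + transfer_s      # remote pays a data-movement tax
    best = "local" if local_est_s <= remote_total else "remote"
    chosen = min(local_est_s, remote_total)
    if chosen > sla_deadline_s:
        raise RuntimeError("neither option meets the SLA")
    return best

# First/second telemetry distilled into execution-time estimates (made-up numbers):
print(choose_accelerator(local_est_s=0.040, remote_est_s=0.015,
                         transfer_s=0.010, sla_deadline_s=0.050))   # -> "remote"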

23-01-2020 publication date

DETECTION OF RESOURCE BOTTLENECKS IN EXECUTION OF WORKFLOW TASKS USING PROVENANCE DATA

Number: US20200026633A1

Techniques are provided for detecting resource bottlenecks in workflow task executions using provenance data. An exemplary method comprises: obtaining a state of multiple workflow executions of multiple concurrent workflows performed with different resource allocation configurations in a shared infrastructure environment; obtaining first and second signature execution traces of a task representing first and second resource allocation configurations, respectively; identifying first and second corresponding sequences of time intervals in the first and second signature execution traces for the task, respectively, based on a similarity metric; and identifying a given time interval as a resource bottleneck of a resource that differs between the first and second resource allocation configurations based on a change in execution time for the given time interval between the first and second signature execution traces. The first signature execution trace is optionally obtained by disaggregating data related to batches of workflow executions.

1. A method, comprising:
obtaining a state of multiple workflow executions of a plurality of concurrent workflows in a shared infrastructure environment, wherein said multiple workflow executions are performed with a plurality of different resource allocation configurations, wherein said state comprises provenance data of said multiple workflow executions and wherein each of said multiple workflow executions is comprised of one or more tasks;
obtaining a first signature execution trace of at least one task within the plurality of concurrent workflows representing a first resource allocation configuration, and a second signature execution trace of said at least one task within the plurality of concurrent workflows representing a second resource allocation configuration;
identifying, using at least one processing device, a first sequence of time intervals in said first signature execution trace for said at least one task that corresponds to a ...
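A rough intuition for the bottleneck test: align the two signature traces interval by interval and flag intervals whose duration shifts sharply when the resource allocation changes. The threshold and the alignment-by-position below are simplifying assumptions:

# Sketch: flag intervals whose duration changes most between two signature traces.
def bottleneck_intervals(trace_a, trace_b, threshold=0.5):
    """trace_*: lists of interval durations (seconds), aligned by position."""
    flagged = []
    for i, (a, b) in enumerate(zip(trace_a, trace_b)):
        change = abs(a - b) / max(a, b)
        if change >= threshold:        # big shift => sensitive to the changed resource
            flagged.append((i, a, b))
    return flagged

# Same task under two resource allocation configurations (illustrative numbers):
trace_low_mem  = [1.0, 4.2, 0.9, 2.0]
trace_high_mem = [1.0, 1.1, 0.9, 1.9]
print(bottleneck_intervals(trace_low_mem, trace_high_mem))
# -> [(1, 4.2, 1.1)]: interval 1 looks memory-bound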

28-01-2021 publication date

ELASTIC CONTAINER PLATFORM ARCHITECTURE

Number: US20210026680A1
Author: Toy Mehmet

A method, a device, and a non-transitory storage medium are described in which an elastic platform virtualization service is provided in relation to a virtual device. The elastic platform virtualization service includes logic that provides for the management of a virtualized device during its life cycle. The creation or reconfiguration of the virtualized device is based on a tertiary choice between using dedicated hardware and dedicated kernel; common hardware and common kernel; or a combination of the dedicated hardware, dedicated kernel, common hardware, and common kernel.

1. A method comprising:
determining, by a device, a first configuration for a virtualized device to be configured on the device, wherein the first configuration is based on three choices between using dedicated hardware and dedicated kernel; common hardware and common kernel; and a combination of the dedicated hardware, dedicated kernel, common hardware, and common kernel; and
creating, by the device based on the determining, the virtualized device.
2. The method of claim 1, wherein the virtualized device includes a container that provides an application service based on the first configuration.
3. The method of claim 2, wherein the device includes multiple virtualized devices and multiple containers, which include the virtualized device and the container.
4. The method of claim 1, further comprising:
monitoring, by the device, an operation of the virtualized device; and
determining, by the device based on the monitoring, whether the virtualized device is to be reconfigured.
5. The method of claim 1, wherein, when the first configuration includes using only the dedicated hardware and dedicated kernel, the method further comprises:
storing, by the device, a usage threshold value that indicates at least one of a maximum utilization value or an average utilization value for the dedicated hardware;
identifying, by the device and based on the storing, that the at least one of ...

28-01-2021 publication date

THREAD SERIALIZATION, DISTRIBUTED PARALLEL PROGRAMMING, AND RUNTIME EXTENSIONS OF PARALLEL COMPUTING PLATFORM

Number: US20210027416A1

Systems, apparatuses, and methods may provide for technology to process graphical data, and to modify a runtime environment in a parallel computing platform for a graphic environment.

1. (canceled)
2. A semiconductor package apparatus, comprising:
a substrate; and
logic coupled to the substrate, wherein the logic is at least partially implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic to:
detect, in a runtime environment, a graphic application for use in a parallel computing platform; and
modify, during runtime and in response to detecting the graphic application, the source code in connection with the parallel computing platform by adding at least one source code extension.
3. The semiconductor package apparatus of claim 2, wherein detecting the graphic application comprises detecting active execution lanes of the graphic application.
4. The semiconductor package apparatus of claim 3, wherein:
detecting active execution lanes comprises applying a predetermined value;
for each loop iteration, execution lanes whose value matches the predetermined value are active; and
for each loop iteration, execution lanes whose value does not match the predetermined value are inactive.
5. The semiconductor package apparatus of claim 4, wherein the at least one source code extension is to execute a loop body once for each predetermined value that different threads in a warp hold in that variable.
6. The semiconductor package apparatus of claim 4, wherein modifying the source code comprises executing a loop body once for each detected active execution lane.
7. A graphics processing system, comprising:
a memory; and
a semiconductor package with logic to:
detect, in a runtime environment, a graphic application for use in a parallel computing platform; and
modify, during runtime and in response to detecting the graphic application, the source code in connection with the parallel computing platform by adding at least one source code extension. ...

02-02-2017 publication date

Parallel runtime execution on multiple processors

Number: US20170031691A1
Assignee: Apple Inc

A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute devices different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in another CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables in a plurality of physical compute devices, including the existing executables and online compiled executables from the sources.

01-02-2018 publication date

Data Processing Method and Apparatus

Number: US20180032375A1
Author: Shao Gang, Tan Weiguo

A data processing method and apparatus are disclosed. The method includes: determining candidate computing frameworks for each sub-task in a sub-task set; predicting the operation time and resource consumption that correspond to each candidate computing framework when the candidate computing framework executes the sub-task; and selecting, from the candidate computing frameworks according to the predicted operation time and resource consumption, a target computing framework for executing the sub-task, and executing the sub-task. In this way, a resource management system selects a target computing framework from multiple computing frameworks according to operation time and resource consumption, to execute each sub-task, so as to improve the data processing efficiency and working performance of the system.

1. A data processing method, comprising:
receiving a task request, wherein the task request carries a task submitted by a user;
generating a sub-task set comprising at least one sub-task according to the task in the task request;
determining input data for executing each sub-task;
determining, in all computing frameworks configured in a system, computing frameworks capable of executing the sub-task as candidate computing frameworks, wherein a quantity of the candidate computing frameworks is greater than or equal to 2;
separately predicting, according to the input data of the sub-task and a prediction model that corresponds to each candidate computing framework, operation time and resource consumption that correspond to each candidate computing framework when the candidate computing framework executes the sub-task; and
selecting, in the candidate computing frameworks according to the predicted operation time and resource consumption that correspond to each candidate computing framework when the candidate computing framework executes the sub-task, a target ...
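The selection step can be pictured as scoring each candidate framework on its predicted operation time and resource consumption and taking the minimum. The weighted-sum policy and all numbers in this sketch are assumptions; the patent only requires that both predictions inform the choice.

# Sketch of target-framework selection; the weighted score is an assumed policy.
def pick_framework(predictions, time_weight=0.7, resource_weight=0.3):
    """predictions: framework -> (predicted_seconds, predicted_resource_units)."""
    def score(fw):
        seconds, resources = predictions[fw]
        return time_weight * seconds + resource_weight * resources
    return min(predictions, key=score)

predictions = {           # per-candidate predictions for one sub-task (made up)
    "mapreduce": (120.0, 40.0),
    "spark":     (45.0,  80.0),
    "flink":     (60.0,  50.0),
}
print(pick_framework(predictions))   # -> "spark" under this weighting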

17-02-2022 publication date

SYSTEMS AND METHODS FOR SCALING DATA WAREHOUSES

Number: US20220050857A1

Example resource provisioning systems and methods are described. In one implementation, multiple processing resources are provided within a data warehouse. The processing resources include at least one processor and at least one storage device. At least one query to process database data is received. At least some of the processing resources may process the database data. When a processing capacity of the processing resources has reached a threshold processing capacity, the processing capacity is automatically scaled by adding at least one additional processor to the data warehouse.

1. A method comprising:
provisioning a first data warehouse comprising a plurality of processing resources, the plurality of processing resources comprising at least one processor and at least one storage device;
receiving a processing request to process database data stored on a storage platform comprising a plurality of shared storage devices in association with the first data warehouse;
determining that a processing capacity of the plurality of processing resources of the first data warehouse would reach a threshold processing capacity when processing the processing request based on metadata including information regarding pre-cached data on the plurality of processing resources of the first data warehouse; and
provisioning a second data warehouse to process at least a portion of the processing request after determining that the processing capacity of the plurality of processing resources of the first data warehouse would reach the threshold processing capacity when processing the processing request.
2. The method of claim 1, further comprising:
scaling one or more of the processing capacity of the plurality of processing resources of the first data warehouse by adding additional processing resources to the first data warehouse and the storage capacity of the storage platform by adding at least one additional storage device.
3. The method of claim 2, further comprising:
determining that the ...
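The scaling rule reduces to: while the projected load on the provisioned warehouses exceeds the threshold capacity, provision another one. A sketch under an assumed uniform-capacity model; the numbers are invented:

# Sketch of threshold-driven scale-out (capacity model and numbers are assumptions).
def plan_warehouses(pending_units, per_warehouse_capacity, threshold=0.9):
    """Return how many warehouses are needed so none exceeds the threshold."""
    count = 1
    while pending_units / (count * per_warehouse_capacity) > threshold:
        count += 1                     # provision another warehouse
    return count

print(plan_warehouses(pending_units=250.0, per_warehouse_capacity=100.0))
# -> 3: a second and third warehouse absorb the overflow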

31-01-2019 publication date

Technologies for dynamically allocating data storage capacity for different data storage types

Number: US20190034102A1
Assignee: Intel Corp

Technologies for allocating data storage capacity on a data storage sled include a plurality of data storage devices communicatively coupled to a plurality of network switches through a plurality of physical network connections and a data storage controller connected to the plurality of data storage devices. The data storage controller is to determine a target storage resource allocation to be used by one or more applications to be executed by one or more sleds in a data center, determine data storage capacity available for each of a plurality of different data storage types on the data storage sled, wherein each data storage type is associated with a different level of data redundancy, determine an amount of data storage capacity for each data storage type to be allocated to satisfy the target storage resource allocation, and adjust the amount of data storage capacity allocated to each data storage type.

31-01-2019 publication date

Closed loop performance controller work interval instance propagation

Number: US20190034238A1
Assignee: Apple Inc

Systems and methods are disclosed for scheduling threads on an asymmetric multiprocessing system having multiple core types. Each core type can run at a plurality of selectable voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Metrics for workloads offloaded to co-processors can be tracked and integrated into metrics for the offloading thread group.
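The pipeline the abstract describes (execution metrics in, control effort out, control effort mapped to a core type and DVFS state) can be caricatured as a proportional controller feeding a lookup table. The breakpoints, gain, and target below are invented for illustration and are not Apple's tuning:

# Sketch: map a 0..1 control effort to a recommended core type and DVFS state.
RECOMMENDATIONS = [
    (0.25, ("efficiency", "dvfs_low")),
    (0.50, ("efficiency", "dvfs_high")),
    (0.75, ("performance", "dvfs_low")),
    (1.01, ("performance", "dvfs_high")),
]

def recommend(control_effort):
    """Map a control effort onto a (core type, DVFS state) recommendation."""
    for upper_bound, rec in RECOMMENDATIONS:
        if control_effort < upper_bound:
            return rec
    return RECOMMENDATIONS[-1][1]

def control_effort(utilization, target=0.8, gain=1.0):
    """Toy proportional controller over a thread group's execution metrics."""
    return max(0.0, min(1.0, gain * utilization / target))

print(recommend(control_effort(0.3)))   # light load -> ('efficiency', 'dvfs_high')
print(recommend(control_effort(0.7)))   # heavy load -> ('performance', 'dvfs_high')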

31-01-2019 publication date

OPTIMIZED RESOURCE METERING IN A MULTI TENANTED DISTRIBUTED FILE SYSTEM

Number: US20190034241A1

A method and system for automatically metering a distributed file system node is provided. The method includes receiving data associated with jobs for execution via a distributed file system. Characteristics of the jobs are uploaded and policy metrics data associated with hardware usage metering is retrieved. Resource requests associated with hardware resource usage are retrieved and attributes associated with the resource requests are uploaded. The policy metrics data is analyzed and a recommendation circuit is queried with respect to the resource requests. A set of metrics of the policy metrics data associated with the resource requests is determined and a machine learning circuit is updated. Utilized hardware resources are determined with respect to the hardware usage metering and said resource requests.

1. A distributed file system metering and hardware usage technology improvement method comprising:
receiving from a user, by a processor of a hardware device comprising specialized discrete non-generic analog, digital, and logic based plugin circuitry including a specially designed integrated circuit designed for only implementing said distributed file system metering and hardware usage technology improvement method, job data associated with jobs for execution via a distributed file system;
retrieving, by said processor enabling a policy engine circuit of said hardware device, policy and cost metrics data associated with hardware usage metering, wherein said policy and cost metrics data comprises policies implemented as pluggable components defined in advance by uploading xml files, via descriptor files or json files, or via a command line, for said jobs;
retrieving, by said processor enabling a hardware device cluster, resource requests describing hardware resource usage of said jobs and metered with respect to a fine grained level of the actual utilization of hardware resources;
querying, by said processor enabling said job descriptor engine of said hardware ...

30-01-2020 publication date

MIXED INSTANCE CATALOGS

Number: US20200034168A1
Author: Leo C. Singleton IV

Methods and systems for providing services using mixed instance catalogs are described herein. A catalog may comprise a plurality of first virtual machines and a plurality of second virtual machines. The capacity of a first virtual machine may be larger than the capacity of a second virtual machine. Connection requests to access a service associated with the catalog may be distributed among the plurality of first virtual machines and the plurality of second virtual machines.

1. A method comprising:
receiving, by a computing device and from a user device, a connection request to access a service session associated with a catalog;
based on determining that the catalog comprises a plurality of first virtual machines and a plurality of second virtual machines, determining whether a quantity of a plurality of service sessions hosted by the plurality of first virtual machines satisfies a session count threshold associated with the plurality of first virtual machines;
based on determining that the quantity of the plurality of service sessions hosted by the plurality of first virtual machines satisfies the session count threshold associated with the plurality of first virtual machines, determining a virtual machine, of the plurality of second virtual machines, to host the service session; and
sending, to the virtual machine, an instruction to host the service session.
2. The method of claim 1, wherein a session count threshold associated with a first virtual machine of the plurality of first virtual machines is larger than a session count threshold associated with a second virtual machine of the plurality of second virtual machines.
3. The method of claim 1, wherein a hardware capacity associated with a first virtual machine of the plurality of first virtual machines is larger than a hardware capacity associated with a second virtual machine of the plurality of second virtual machines.
4. The method of claim 1, further comprising:
receiving, by the computing device and from a ...
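The distribution policy amounts to filling the larger first virtual machines up to their session count threshold and then spilling new sessions onto the smaller second virtual machines. A sketch with assumed thresholds and sizes:

# Sketch of mixed-instance session routing; thresholds and sizes are assumptions.
def route_session(large_vms, small_vms):
    """Prefer large VMs until their session count threshold is met, then spill over."""
    for vm in large_vms:
        if vm["sessions"] < vm["threshold"]:
            return vm
    for vm in small_vms:
        if vm["sessions"] < vm["threshold"]:
            return vm
    return None    # catalog full

large_vms = [{"name": "L1", "sessions": 16, "threshold": 16}]   # large VM is full
small_vms = [{"name": "S1", "sessions": 2,  "threshold": 4}]
vm = route_session(large_vms, small_vms)
print(f"host session on {vm['name']}")   # -> S1, a smaller second virtual machine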

30-01-2020 publication date

COMMITTED PROCESSING RATES FOR SHARED RESOURCES

Number: US20200034204A1
Assignee: Amazon Technologies, Inc.

Customers of a shared-resource environment can provision resources in a fine-grained manner that meets specific performance requirements. A customer can provision a data volume with a committed rate of Input/Output Operations Per Second (IOPS) and pay only for that commitment (plus any overage), and the amount of storage requested. The customer will then at any time be able to complete at least the committed rate of IOPS. If the customer generates submissions at a rate that exceeds the committed rate, the resource can still process at the higher rate when the system is not under pressure. Even under pressure, the system will deliver at least the committed rate. Multiple customers can be provisioned on the same resource, and more than one customer can have a committed rate on that resource. Customers without committed or guaranteed rates can utilize the uncommitted portion, or committed portions that are not being used.

1-25. (canceled)
26. A system, comprising:
a plurality of storage servers that provide committed rates of input/output operations per second (IOPS) to a plurality of data volumes on behalf of different users of a network-available data storage service; and
a control plane configured to:
receive, via an interface for the control plane, a request that specifies a committed rate of IOPS for a first data volume;
evaluate the committed rates of IOPS of the storage servers to identify one of the storage servers that:
provides a committed rate of IOPS for a second data volume; and
has sufficient capacity to additionally provide at least a portion of the committed rate of IOPS for the first data volume based on a predicted usage of IOPS for the second data volume at the storage server that is less than the committed rate of IOPS for the second data volume; and
commit the storage server to provide the portion of the committed rate of IOPS for the first data volume in addition to the committed rate of IOPS for the second data volume, wherein the committed rates of IOPS for the first data ...
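A token bucket is one conventional way to realize "at least the committed rate, bursts when uncontended"; the patent does not say this is the mechanism used, so treat the sketch below as a stand-in model only.

import time

class CommittedRate:
    """Token-bucket stand-in for a volume's committed IOPS (not the patented design)."""
    def __init__(self, committed_iops, burst_iops):
        self.committed = committed_iops
        self.burst = burst_iops
        self.tokens = float(committed_iops)   # start with one second of commitment
        self.last = time.monotonic()

    def admit(self, system_under_pressure):
        now = time.monotonic()
        cap = self.committed if system_under_pressure else self.burst
        # Tokens refill at the committed rate; uncontended volumes may bank a burst.
        self.tokens = min(cap, self.tokens + (now - self.last) * self.committed)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True    # the committed rate is always honored
        return False

vol = CommittedRate(committed_iops=1000, burst_iops=3000)
print(vol.admit(system_under_pressure=True))   # -> True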

17-02-2022 publication date

COMPUTING SYSTEMS WITH OFF-LOAD PROCESSING FOR NETWORKING RELATED TASKS

Number: US20220052926A1

Computing systems with off-load processing for networking related tasks are disclosed. A first mobile electronic device includes first wireless communication circuitry to support cellular communication; and second wireless communication circuitry to support wireless communication. The first electronic device includes processor circuitry to: identify a first one of a first cellular network or a second cellular network based on availability of the first and second cellular networks; initiate establishment of a first communication link between a second mobile electronic device and the first one of the first cellular network or the second cellular network via the first wireless communication circuitry and the second wireless communication circuitry; and initiate establishment of a second communication link between the second mobile electronic device and a second one of the first cellular network or the second cellular network based on a change in the availability of the first and second cellular networks.

1. A first mobile electronic device comprising:
a battery;
a display;
first wireless communication circuitry to support cellular communication, to communicate with at least one of a first cellular network or a second cellular network;
second wireless communication circuitry to support wireless communication with a second mobile electronic device over at least one of a local area network or a personal area network, the second mobile electronic device separate from the first mobile electronic device;
at least one memory to store an identifier for the second mobile electronic device, the identifier to facilitate establishment of communication between the second mobile electronic device and at least one of the first cellular network or the second cellular network;
instructions in the first mobile electronic device; and
processor circuitry to:
identify a first one of the first cellular network or the second cellular network based on availability of the first and second cellular networks;
initiate ...

04-02-2021 publication date

Operation for Generating Workload Recommendations

Number: US20210034420A1
Assignee: Dell Products LP

A system, method, and computer-readable medium are disclosed for facilitating a sale of an asset used in a complex asset environment via a sales facilitation operation. In various embodiments, the sales facilitation operation includes: identifying a plurality of assets within a complex asset environment; collecting information regarding the plurality of assets within the complex asset environment, the information regarding each of the plurality of assets comprising information from a plurality of data sources; performing a workload-based asset recommendation operation, the workload-based asset recommendation operation analyzing the information regarding each of the plurality of assets to generate a workload-based asset recommendation; and performing the sales facilitation operation using the information regarding each of the plurality of assets within the complex asset environment and the workload-based asset recommendation.

04-02-2021 publication date

CONTAINER ORCHESTRATION IN DECENTRALIZED NETWORK COMPUTING ENVIRONMENTS

Number: US20210034423A1

A computer-implemented method for deploying containers in a decentralized network computing environment includes: registering a predetermined amount of computing resources reserved by a plurality of computing devices for utilization as a worker node for running containers; receiving a request from a consumer node to provide services for deployment of a container workload; selecting at least a first computing device from the plurality of computing devices to serve as the worker node for deployment of the container workload; obtaining unidirectional control over a portion of the predetermined amount of computing resources reserved by at least the first computing device; and deploying the container workload on at least the first computing device. A corresponding computer system and computer program product are also disclosed.

1. A computer-implemented method for deploying containers in a decentralized network computing environment, comprising:
registering, by one or more processors, a predetermined amount of computing resources reserved by a plurality of computing devices for utilization as a worker node for running containers;
receiving, by the one or more processors, a request from a consumer node to provide services for deployment of a container workload;
selecting, by the one or more processors, at least a first computing device from the plurality of computing devices to serve as the worker node for deployment of the container workload;
obtaining, by the one or more processors, unidirectional control over a portion of the predetermined amount of computing resources reserved by at least the first computing device; and
deploying, by the one or more processors, the container workload on at least the first computing device.
2. The computer-implemented method of claim 1, further comprising:
determining, by the one or more processors, that the container workload includes stateful containers;
linking, by the one or more processors, a volume to a stateful container; and
mounting, ...

04-02-2021 publication date

QUERY PLANS FOR ANALYTIC SQL CONSTRUCTS

Number: US20210034640A1

A system and method for managing data storage and data access that queries data in a distributed system without buffering the results of intermediate operations in disk storage.

1. A method comprising:
initiating, within a first execution node of an execution platform, a first operator in a query plan to process a set of data and generate an intermediate result of a query;
determining whether the first operator has produced at least the intermediate result; and
after determining that the first operator has produced at least the intermediate result:
pushing the intermediate result of the first operator to a plurality of secondary operators;
initiating, by one or more processors, each of the other secondary operators to process the intermediate result to generate a plurality of second results;
operating on the plurality of second results to generate a final result; and
storing the final result to disk storage.
2. The method of claim 1, wherein the final result is generated without storing the intermediate result to the disk storage.
3. The method of claim 1, wherein each of the plurality of secondary operators process the intermediate result with a different operation.
4. The method of claim 3, further comprising delaying operation of at least one of the plurality of secondary operators.
5. The method of claim 4, wherein operation of at least one of the plurality of secondary operators is delayed to coordinate the generation of the plurality of second results.
6. The method of claim 1, wherein the intermediate result generated by the first operator is not recomputed by any of the plurality of secondary operators, and wherein the intermediate result is processed by the plurality of secondary operators to execute a plurality of different queries.
7. The method of claim 1, wherein each of the plurality of secondary operators are unique operators.
8. The method of claim 1, wherein the intermediate result is not materialized.
9. The method of claim 1, ...
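The key idea is that one producer operator's intermediate result is fanned out in memory to several secondary operators, and only the final result is written to disk. A simplified sketch with illustrative operators:

# Sketch: fan an in-memory intermediate result out to several secondary operators,
# materializing only the final result (operator names are illustrative).
def first_operator(rows):
    return [r for r in rows if r % 2 == 0]        # intermediate result, kept in memory

def op_sum(xs):   return ("sum", sum(xs))
def op_count(xs): return ("count", len(xs))
def op_max(xs):   return ("max", max(xs))

rows = range(10)
intermediate = first_operator(rows)               # computed once ...
final = [op(intermediate) for op in (op_sum, op_count, op_max)]   # ... consumed by all
print(final)   # [('sum', 20), ('count', 5), ('max', 8)] -- only this would hit disk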

31-01-2019 publication date

COMPUTING SYSTEMS WITH OFF-LOAD PROCESSING FOR NETWORKING RELATED TASKS

Number: US20190036792A1

Example computing systems with off-load processing for networking related tasks are disclosed. Example consumer electronic devices disclosed herein include first wireless interface circuitry to support cellular communication and second wireless interface circuitry to support wireless local area network communication. Disclosed example consumer electronic devices also include processor circuitry to monitor a communication environment, select one of the first wireless interface circuitry or the second wireless interface circuitry to provide a user device in communication with the consumer electronic device with access to a network, and connect the user device with the network via the selected one of the first wireless interface circuitry or the second wireless interface circuitry. Disclosed example consumer electronic devices further include a housing dimensioned to be positioned in a consumer residence.

1. A consumer electronic device comprising:
first wireless interface circuitry to support cellular communication;
second wireless interface circuitry to support wireless local area network communication;
processor circuitry to:
monitor a communication environment;
select one of the first wireless interface circuitry or the second wireless interface circuitry to provide a user device in communication with the consumer electronic device with access to a network; and
connect the user device with the network via the selected one of the first wireless interface circuitry or the second wireless interface circuitry; and
a housing dimensioned to be positioned in a consumer residence.
2. The consumer electronic device of claim 1, further including wired interface circuitry in communication with the network.
3. The consumer electronic device of claim 2, wherein the wired interface circuitry is an Ethernet interface.
4. The consumer electronic device of claim 1, wherein the selected one of the first wireless interface circuitry or the second wireless interface circuitry is ...

09-02-2017 publication date

DATA PARALLEL COMPUTING ON MULTIPLE PROCESSORS

Number: US20170039092A1

A method and an apparatus that allocate one or more physical compute devices such as CPUs or GPUs attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices concurrently.

1. A computer implemented method comprising:
receiving, from a host application executing on a host processor, a request to identify any compute device that matches a processing requirement for a task corresponding to source code in the host application;
sending, to the host application, a compute identifier for each compute device that matches the processing requirement;
receiving, from the host application, a request specifying a compute identifier selected by the host application; and
generating a context for the compute device that corresponds to the selected compute identifier.

This application is a continuation of co-pending U.S. application Ser. No. 14/163,710, filed Jan. 24, 2014, which is a continuation of U.S. application Ser. No. 13/614,975, filed Sep. 13, 2012, now issued as U.S. Pat. No. 9,207,971, which is a continuation of U.S. application Ser. No. 11/800,185, filed on May 3, 2007, now U.S. Pat. No. 8,276,164, issued Sep. 25, 2012, which is related to, and claims the benefits of, U.S. Provisional Patent Application No. 60/923,030, filed on Apr. 11, 2007, and U.S. Provisional Patent Application No. 60/925,616, filed on Apr. 20, 2007, which are hereby incorporated herein by reference.

The present invention relates generally to data parallel computing. More particularly, this invention relates to data parallel computing across both CPUs (Central ...

08-02-2018 publication date

SYSTEMS AND METHODS FOR MANAGING PROCESSING LOAD

Number: US20180039519A1

A method for managing processing load by an electronic device is described. The method includes determining to offload a task being executed on the electronic device. The method also includes communicating with a peer device. The method further includes determining that the peer device is capable of executing the task based on the communication. The method additionally includes offloading the task to the peer device. The method also includes receiving an output of the task. The output is generated while the peer device is executing the task.

1. A method for managing processing load by an electronic device, comprising:
determining to offload a task being executed on the electronic device;
communicating with a peer device;
determining that the peer device is capable of executing the task based on the communication;
offloading the task to the peer device; and
receiving an output of the task, wherein the output is generated while the peer device is executing the task.
2. The method of claim 1, wherein offloading the task comprises stopping execution of the task on the electronic device.
3. The method of claim 1, wherein offloading the task comprises sending an instruction to the peer device to cause the peer device to execute the task.
4. The method of claim 1, wherein determining to offload the task comprises determining that the processing load of the electronic device has exceeded a threshold.
5. The method of claim 4, wherein determining to offload the task comprises determining that the processing load in Million Instructions Per Second (MIPS) has exceeded a MIPS threshold.
6. The method of claim 1, wherein determining to offload the task is based on a thermal condition on the electronic device.
7. The method of claim 1, wherein determining that the peer device is capable of executing the task comprises determining that the peer device has available processing capacity to execute the task.
8. The method of claim 1, further comprising:
determining, after the task has ...
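The decision logic splits into two parts: deciding to offload (a load or thermal limit is exceeded) and picking a capable peer. Both rules in this sketch, and all numbers, are assumptions consistent with the claims rather than quoted from them:

# Sketch of the offload decision; the thresholds and capability probe are assumptions.
def should_offload(current_mips, mips_threshold, device_temp_c, temp_limit_c):
    """Offload when processing load or a thermal condition crosses a limit."""
    return current_mips > mips_threshold or device_temp_c > temp_limit_c

def pick_peer(peers, task_mips):
    """Choose a peer with enough spare capacity to execute the task."""
    capable = [p for p in peers if p["spare_mips"] >= task_mips]
    return max(capable, key=lambda p: p["spare_mips"]) if capable else None

peers = [{"name": "tablet", "spare_mips": 300}, {"name": "laptop", "spare_mips": 900}]
if should_offload(current_mips=1200, mips_threshold=1000,
                  device_temp_c=38, temp_limit_c=45):
    peer = pick_peer(peers, task_mips=400)
    print(f"offload task to {peer['name']}")   # -> laptop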

08-02-2018 publication date

INFORMATION PROCESSING SYSTEM THAT DETERMINES A MEMORY TO STORE PROGRAM DATA FOR A TASK CARRIED OUT BY A PROCESSING CORE

Number: US20180039523A1

An information processing system includes a first core, a second core having a processing speed that is slower than the first core, a first memory, a second memory having a slower response time than the first memory, and a management processor. The management processor is configured to determine a core for executing a task, cause program data for executing the task to be copied to the first memory and then cause the first core to execute the task using the program data in the first memory, when the first core is determined as the core for executing the task, and cause the program data for executing the task to be copied to the second memory and then cause the second core to execute the task using the program data in the second memory, when the second core is determined as the core for executing the task.

1. An information processing system comprising:
a first core;
a second core having a processing speed that is slower than the first core;
a first memory;
a second memory having a slower response time than the first memory; and
a management processor configured to:
determine a core for executing a task,
cause program data for executing the task to be copied to the first memory and then cause the first core to execute the task using the program data in the first memory, when the first core is determined as the core for executing the task, and
cause the program data for executing the task to be copied to the second memory and then cause the second core to execute the task using the program data in the second memory, when the second core is determined as the core for executing the task.
2. The information processing system according to claim 1, wherein the management processor determines the core for executing the task based on metadata of the task.
3. The information processing system according to claim 1, wherein the management processor determines the core for executing the task based on use states of at least one of the first core, the second core, the first ...

24-02-2022 publication date

System resource allocation for code execution

Number: US20220058062A1
Assignee: Intel Corp

Examples described herein relate to an apparatus including at least one processor and a system agent communicatively coupled to the at least one processor. In some examples, at least one of the at least one processor, when operational, is configured to execute an operating system (OS) to: receive a call to perform a kernel-level operation and adjust settings of system resources assigned to perform the kernel-level operation based on a class of service associated with the call.

24-02-2022 publication date

Methods and apparatus for dynamic shader selection for machine learning

Number: US20220058476A1
Assignee: Qualcomm Inc

The present disclosure relates to methods and apparatus for selecting a sequence of shaders for performing a machine-learning operation on a graphics processing unit (GPU). The apparatus can receive a request to perform a machine-learning operation. The apparatus can determine a plurality of sequences of shaders that are capable of performing the machine-learning operation. The apparatus can determine a cost for each sequence of the plurality of sequences of shaders based on a cost function associated with each shader. The apparatus can execute a selected sequence of shaders of the plurality of sequences of shaders having a lowest cost.
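Selection reduces to evaluating a per-shader cost function over each candidate sequence and keeping the cheapest. The cost table and the additive total below are assumptions for illustration:

# Sketch of lowest-cost sequence selection; the cost function is an assumption.
def sequence_cost(sequence, cost_fn):
    return sum(cost_fn(shader) for shader in sequence)

def select_sequence(sequences, cost_fn):
    """Return the candidate shader sequence with the lowest total cost."""
    return min(sequences, key=lambda seq: sequence_cost(seq, cost_fn))

# Candidate shader sequences for one ML operation, with made-up per-shader costs:
COSTS = {"im2col": 3.0, "gemm": 5.0, "winograd": 6.5, "relu": 0.5}
candidates = [
    ["im2col", "gemm", "relu"],    # total cost 8.5
    ["winograd", "relu"],          # total cost 7.0 <- selected
]
print(select_sequence(candidates, COSTS.get))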

24-02-2022 publication date

Job based bidding

Number: US20220058727A1
Author: Jesse Barnes, Max ALT
Assignee: Core Scientific Operating Co

A system and method for job-based bidding for application processing work is disclosed. The system may include interfaces for submitting requests for bids (RFBs) and corresponding offers. The system may allow pricing based on the type of application and the quantity and type of work primitive to be processed, and it may use prior captured performance data to calculate estimated per unit of work costs that can be translated to different system types based on their capabilities. This per unit of work cost may assist providers in making offers on the RFBs. Recommended job requirements may also be generated. Once an offer is accepted, the system may configure and dispatch the job to the appropriate provider computing queue(s).

07-02-2019 publication date

Technologies for providing streamlined provisioning of accelerated functions in a disaggregated architecture

Number: US20190042234A1
Assignee: Intel Corp

Technologies for providing streamlined provisioning of accelerated functions in a disaggregated architecture include a compute sled. The compute sled includes a network interface controller and circuitry to determine whether to accelerate a function of a workload executed by the compute sled, and send, to a memory sled and in response to a determination to accelerate the function, a data set on which the function is to operate. The circuitry is also to receive, from the memory sled, a service identifier indicative of a memory location independent handle for data associated with the function, send, to a compute device, a request to schedule acceleration of the function on the data set, receive a notification of completion of the acceleration of the function, and obtain, in response to receipt of the notification and using the service identifier, a resultant data set from the memory sled. The resultant data set was produced by an accelerator device during acceleration of the function on the data set. Other embodiments are also described and claimed.

07-02-2019 publication date

JOB DISTRIBUTION WITHIN A GRID ENVIRONMENT

Number: US20190042309A1

According to one aspect of the present disclosure, a technique for job distribution within a grid environment includes receiving a job at a submission cluster for distribution of the job to at least one of a plurality of execution clusters where each execution cluster includes one or more execution hosts. Resource attributes are determined corresponding to each execution host of the execution clusters. For each execution cluster, execution hosts are grouped based on the resource attributes of the respective execution hosts. For each grouping of execution hosts, a mega-host is defined for the respective execution cluster where the mega-host for a respective execution cluster defines resource attributes based on the resource attributes of the respective grouped execution hosts. An optimum execution cluster is selected for receiving the job based on a weighting factor applied to select resources of the respective execution clusters.

1. A method for job distribution within a grid environment, comprising:
receiving a job at a submission cluster for distribution of the job to at least one of a plurality of execution clusters, each execution cluster comprising one or more execution hosts;
determining resource attributes corresponding to each execution host of the execution clusters;
grouping, for each execution cluster, execution hosts based on the resource attributes of the respective execution hosts;
defining, for each grouping of execution hosts, a mega-host for the respective execution cluster, the mega-host for a respective execution cluster based on combining select resource attributes of the respective grouped execution hosts;
determining resource requirements for the job;
selecting an optimum execution cluster for receiving the job based on a weighting factor applied to select resources of the respective execution clusters;
identifying candidate mega-hosts within the optimum execution cluster for the job based on the resource attributes of the respective mega-hosts and ...
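Two of the steps lend themselves to a compact sketch: grouping execution hosts into mega-hosts by identical resource attributes, and scoring clusters with a weighting factor over selected resources. Both the grouping key and the linear score are assumptions:

# Sketch: group hosts into mega-hosts by resource attributes, then score clusters
# with a weighting factor (grouping key and score are illustrative assumptions).
from collections import defaultdict

def mega_hosts(hosts):
    """Group execution hosts that share identical resource attributes."""
    groups = defaultdict(list)
    for h in hosts:
        key = (h["cores"], h["mem_gb"])    # group by resource attributes
        groups[key].append(h["name"])
    return dict(groups)

def cluster_score(cluster_hosts, weights):
    """Weighted sum over selected resources of every host in the cluster."""
    return sum(weights["cores"] * h["cores"] + weights["mem_gb"] * h["mem_gb"]
               for h in cluster_hosts)

clusters = {
    "east": [{"name": "e1", "cores": 16, "mem_gb": 64},
             {"name": "e2", "cores": 16, "mem_gb": 64}],
    "west": [{"name": "w1", "cores": 8, "mem_gb": 32}],
}
weights = {"cores": 1.0, "mem_gb": 0.25}
best = max(clusters, key=lambda c: cluster_score(clusters[c], weights))
print(best, mega_hosts(clusters[best]))    # -> east {(16, 64): ['e1', 'e2']}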

07-02-2019 publication date

USER EQUIPMENT SELECTION FOR MOBILE EDGE COMPUTING

Number: US20190042318A1

An access node of a mobile communication network controls access of a group of user equipments to the mobile communication network. The access node selects one or more of the user equipments of the group as candidate user equipment for supporting mobile edge computing. The access node then sends an indication of the one or more candidate user equipments to a mobile edge computing server. The mobile edge computing server receives the indication and selects at least one target user equipment for execution of computational tasks from the candidate user equipments. The mobile edge computing server then distributes a computational task to the selected at least one target user equipment.

1. A method of supporting mobile edge computing, the method comprising:
controlling, by an access node of a mobile communication network, access of a group of user equipments to the mobile communication network;
selecting, by the access node, one or more of the user equipments of the group as candidate user equipment for supporting mobile edge computing; and
sending, by the access node, an indication of said one or more candidate user equipments to a mobile edge computing server.
2. The method according to claim 1, further comprising:
receiving, by the access node, from at least one user equipment of the group, an indication of one or more capabilities of the user equipment; and
performing, by the access node, said selecting based on the received indication.
3. The method according to claim 2, further comprising:
receiving, by the access node, the indication during establishment of a radio connection of the user equipment to the access node.
4. The method according to claim 1, further comprising:
receiving, by the access node, from at least one user equipment of the group, information on local conditions at the user equipment; and
performing, by the access node, said selecting based on the received information on local conditions at the user equipment.
5. The method according ...
