Total found: 10568. Displayed: 200.
27-02-2016 дата публикации

SUSPENSION AND/OR THROTTLING OF PROCESSES FOR CONNECTED STANDBY

Номер: RU2576045C2

The invention relates to at least one system and/or method for assigning a power management classification to at least one process associated with a computing environment, transitioning the computing environment into connected standby based on the power management classifications assigned to the processes, and transitioning the computing environment from connected standby back into an execution mode. The method comprises the steps of: identifying a process to which a power management classification is to be assigned; and assigning the power management classification, for example privileged, throttled and/or suspendable, which may be assigned to processes based on various factors, such as whether the process provides required functions and/or whether the process provides functions used for the basic operating mode of the computing environment. The computing environment can thus be transitioned into a low-power connected standby mode in which it is possible to continue ...
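
As an illustration of the kind of logic this abstract describes, here is a minimal Python sketch that assigns the named classifications (privileged, throttled, suspendable) from two hypothetical per-process attributes and builds a connected-standby plan; the attribute names and selection rules are assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass
from enum import Enum

class PowerClass(Enum):
    PRIVILEGED = "privileged"    # keeps running during connected standby
    THROTTLED = "throttled"      # runs only in short, budgeted bursts
    SUSPENDABLE = "suspendable"  # frozen while in connected standby

@dataclass
class Process:
    name: str
    provides_required_functions: bool   # hypothetical attribute
    used_in_basic_mode: bool            # hypothetical attribute

def classify(proc: Process) -> PowerClass:
    """Assign a power-management classification from the factors named in
    the abstract (required functions, basic operating mode)."""
    if proc.provides_required_functions:
        return PowerClass.PRIVILEGED
    if proc.used_in_basic_mode:
        return PowerClass.THROTTLED
    return PowerClass.SUSPENDABLE

def enter_connected_standby(procs):
    """Build the per-process plan applied when the environment drops into
    the low-power connected-standby mode."""
    plan = {p.name: classify(p) for p in procs}
    # A real system would now freeze SUSPENDABLE processes and budget CPU
    # time for THROTTLED ones; here we only report the plan.
    return plan

if __name__ == "__main__":
    procs = [
        Process("radio_stack", True, True),
        Process("mail_sync", False, True),
        Process("photo_editor", False, False),
    ]
    print(enter_connected_standby(procs))
```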

Подробнее
16-08-2019 дата публикации

FAIR SHARING OF SYSTEM RESOURCES IN WORKFLOW EXECUTION

Номер: RU2697700C2

The method may be practiced in a distributed computing environment that provides computing resources to a plurality of tenants. The technical result is preventing workloads that exceed a certain duration from running. The method includes acts for allocating a limited set of system resources to tenants. The method includes identifying a portion of a resource. The method further includes identifying a running tenant workload. Checkpoint characteristics are identified for the running tenant workload. Based on the checkpoint characteristics and the portion of the resource, a task preemption event is identified. 3 independent and 17 dependent claims, 6 drawings.
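
A rough Python sketch of the preemption decision outlined above: given a hypothetical resource slice granted to a tenant and a checkpoint-cost characteristic of its running workload, decide when a preemption event occurs. All field names and the specific rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    tenant: str
    cpu_seconds_used: float      # consumed so far
    last_checkpoint_cost: float  # seconds needed to checkpoint and yield (hypothetical)

def preemption_event(workload: Workload, resource_slice: float) -> bool:
    """Return True when the tenant workload should be preempted.

    resource_slice is the share of the limited system resource granted to
    the tenant (here: CPU seconds). The rule combines the slice with a
    checkpoint characteristic: preempt early enough that the workload can
    still reach a checkpoint inside its slice.
    """
    return workload.cpu_seconds_used + workload.last_checkpoint_cost >= resource_slice

if __name__ == "__main__":
    w = Workload("tenant-a", cpu_seconds_used=27.0, last_checkpoint_cost=4.0)
    print(preemption_event(w, resource_slice=30.0))  # True: time to checkpoint and yield
```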

Подробнее
26-02-2019 дата публикации

MANAGING THREAD EXECUTION IN A MULTITHREADED PROCESSOR

Номер: RU2680737C2

The invention relates to means for managing thread execution in a multithreaded processor. The technical result is enabling threads of different priorities to share a container. The method is performed by a thread running on a processor and includes the operations of: stopping execution of other threads on a processor core in response to execution of a critical sequence, or of another sequence that needs the resources of the processor core or control over those resources, the stopping including: determining whether the other thread prohibits its own stopping, stopping instruction fetch and execution on the other thread, and determining that execution of the other thread on the processor has ceased; if execution of the other thread on the processor has ceased, obtaining state information for the other thread, performing the thread's operations on the processor, and allowing the other thread to execute on the processor again. A system implements the method. 3 independent and 4 dependent claims, 12 drawings.
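
The stop/resume sequence in the abstract could be modelled roughly as below; the Python classes stand in for hardware thread contexts and the state snapshot is a placeholder, so this is an illustrative sketch rather than the claimed mechanism.

```python
class HardwareThread:
    def __init__(self, tid, no_stop=False):
        self.tid = tid
        self.no_stop = no_stop      # the thread may prohibit its own stopping
        self.running = True
        self.state = None

    def stop_fetch_and_execute(self):
        self.running = False
        self.state = {"tid": self.tid, "pc": 0x1000}  # placeholder state snapshot

def run_critical_sequence(other_threads, critical_work):
    """Stop the other threads on the core, run the critical sequence,
    then let the stopped threads resume (per the abstract)."""
    stopped = []
    for t in other_threads:
        if t.no_stop:               # the other thread forbids being stopped
            continue
        t.stop_fetch_and_execute()
        if not t.running:           # confirm execution has actually ceased
            stopped.append((t, t.state))
    result = critical_work()        # the sequence needing the core's resources
    for t, _state in stopped:
        t.running = True            # allow the other threads to execute again
    return result

if __name__ == "__main__":
    others = [HardwareThread(1), HardwareThread(2, no_stop=True)]
    print(run_critical_sequence(others, lambda: "critical work done"))
```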

Подробнее
29-11-2018 дата публикации

Номер: RU2017103676A3
Автор:
Принадлежит:

Подробнее
10-06-2016 дата публикации

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Номер: RU2014147323A
Принадлежит:

... 1. An information processing apparatus, comprising: a plurality of application frameworks on which applications are executed; and a decision unit configured to control switching of operating states of the plurality of application frameworks. 2. The information processing apparatus according to claim 1, wherein the decision unit controls the switching of the operating states of the plurality of application frameworks based on a command from an application executed on a selected one of the plurality of application frameworks. 3. The information processing apparatus according to claim 2, further comprising a process switching unit configured to switch the activated application to the application executed on the selected one of the plurality of application frameworks. 4. The information processing apparatus according to claim 3, wherein the process switching unit is further configured to stop operation of the currently activated application and to start operation of the application executed on the selected one of the plurality of application frameworks. 5. The apparatus ...

Подробнее
20-08-2015 дата публикации

SUSPENSION AND/OR THROTTLING OF PROCESSES FOR CONNECTED STANDBY

Номер: RU2014104496A
Принадлежит:

... 1. A method of assigning a power management classification to a process, comprising the steps of: identifying a process to which a power management classification is to be assigned; and assigning the power management classification to the process, the assigning comprising the steps of: determining whether a lifecycle of the process is managed by at least one of the process and a lifecycle management component, and if so assigning the process a privileged classification; determining whether the process can be suspended without failure of the computer system and whether limited dynamic functions associated with the process are not required, and if so assigning the process a suspendable classification; determining whether the process can be throttled without failure of the computer system and whether limited dynamic functions associated with the process are required, and if so assigning the process a throttled classification; and determining whether the process cannot be suspended or throttled without failure of the computer ...

Подробнее
10-11-2016 дата публикации

METHOD AND APPARATUS FOR MANAGING A BACKGROUND APPLICATION, AND TERMINAL DEVICE

Номер: RU2015113734A
Принадлежит:

... 1. A method for managing a background application, comprising: creating an application list according to the applications running in an operating system, the application list including at least identifiers of the applications running in the operating system; traversing the identifiers in the application list; determining whether the application corresponding to the currently traversed identifier is a background application; if the application corresponding to the currently traversed identifier is a background application, determining whether a preset whitelist contains the currently traversed identifier and whether the number of identifiers corresponding to background applications in the application list exceeds a preset threshold, the preset whitelist including identifiers of background applications designated by the user; and selecting an identifier corresponding to a background application from the application list and closing the background application corresponding to the selected identifier ...
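
A compact Python sketch of the list-walking logic described above, assuming a callable that tells background apps apart and a user whitelist; which background application gets selected for closing is an assumption made only for illustration.

```python
def close_excess_background_apps(running, is_background, whitelist, limit):
    """Walk the application list and close background applications once the
    count of background identifiers exceeds the limit, skipping whitelisted
    (user-designated) ones. All names here are illustrative."""
    app_list = list(running)                         # identifiers of running apps
    background = [a for a in app_list if is_background(a)]
    closed = []
    if len(background) > limit:
        for app_id in background:
            if app_id in whitelist:                  # user-designated, keep alive
                continue
            closed.append(app_id)                    # stand-in for actually killing it
            if len(background) - len(closed) <= limit:
                break
    return closed

if __name__ == "__main__":
    running = ["music", "maps", "camera", "chat", "game"]
    bg = {"music", "maps", "chat", "game"}
    print(close_excess_background_apps(running, bg.__contains__,
                                       whitelist={"music"}, limit=2))
```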

Подробнее
16-08-2012 дата публикации

Motor car has two control devices for providing predetermined software functions, where one of the control devices is turned OFF during execution of software function provided by other control device

Номер: DE102011104259A1
Автор: GUT GEORG, GUT, GEORG
Принадлежит:

The motor car (10) has two control devices (12,14) that are arranged for providing predetermined software functions under specific operating conditions. The control device (12) is turned OFF during execution of predetermined software function under operating condition provided by the control device (14). The operating conditions are determined based on a route scheduler and/or setting of a vehicle key (20). An independent claim is included for method for operating motor car.

Подробнее
14-06-2018 дата публикации

METHOD FOR DATA EXCHANGE BETWEEN A PRIMARY CORE AND A SECONDARY CORE IN A REAL-TIME OPERATING SYSTEM

Номер: DE102017128650A1
Принадлежит:

A method for exchanging data in a real-time operating system between a primary core and a secondary core in a multi-core processor includes executing a primary path on the primary core and executing a secondary path on the secondary core. The primary path is configured to be a relatively faster processing task, and the secondary path is configured to be a relatively slower processing task. The method includes developing a freeze-method flag that has a corresponding flag status set and cleared by the primary path. The method includes developing a data-freeze flag that has a corresponding flag status set and cleared by the primary and secondary paths, respectively. A component operatively connected to the multi-core processor can, at least in part, on the basis of a difference between primary and secondary sets of ...

Подробнее
20-11-2014 дата публикации

Interruption of tasks for managing chip components

Номер: DE102014101633A1
Принадлежит:

The invention provides an IC chip (102) comprising a service module (104) configured such that one or more components (114, 116, 118, 120) of the chip are managed via one or more tasks (T1 to T5), the service module comprising: a processing module (106); a data memory (112) on which a current state (CS) of a currently executed one (T4) of the tasks is stored; an interface (122) for receiving a request (R) to execute a further one (T1) of the tasks, the currently executed task (T4) having a first priority (P4) and the other task (T1) having a second priority (P1); a clock (108) configured to measure a time interval that has elapsed between receipt of the request and the current time; and a control module configured such that the currently executed task is interrupted and execution of the requested task is triggered if ...

Подробнее
11-06-2014 дата публикации

Temporarily freeing up resources in a computer system

Номер: GB0002467844B
Принадлежит: GZERO LTD, GZERO LIMITED

Подробнее
31-12-2014 дата публикации

An almost fair busy lock

Номер: GB0201420412D0
Автор:
Принадлежит:

Подробнее
15-11-2006 дата публикации

Method for reducing energy consumption of buffered applications using simultaneous multi-threaded processor

Номер: GB0002426096A
Принадлежит:

Management of resources in a computer system can reduce energy consumption of the system. This can be accomplished by monitoring application states for the software applications and/or monitoring thread states in a multi-threading system, and then making resource adjustments in the system. Application state monitoring may be performed by monitoring the data buffers set up for temporary data used by the software applications. Depending on the buffer levels, resources may be increased or decreased. Resource adjustments may take the form of changing the voltage and frequency of the processors in the system, among other means. Decreasing the resources may help reduce energy consumption. Management of resources may also be performed by monitoring the threads associated with one or multiple software applications in the system and controlling the dispatch of threads. A ready thread may be delayed to increase the opportunity for concurrent running of multiple threads. Concurrent running of ...
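
One way to picture the buffer-driven resource adjustment described here is a small step-up/step-down controller over frequency steps; the thresholds, step table, and direction of adjustment below are illustrative assumptions, not values from the patent.

```python
# Hypothetical processor frequency steps in MHz; thresholds are illustrative.
FREQ_STEPS = [800, 1200, 1600, 2000]

def adjust_resources(buffer_fill: float, current_step: int) -> int:
    """Pick a frequency step from the fill level (0.0-1.0) of an application's
    temporary data buffer: a draining buffer suggests the producer cannot keep
    up, so raise resources; a full buffer suggests it is ahead, so lower them
    and save energy."""
    if buffer_fill < 0.25 and current_step < len(FREQ_STEPS) - 1:
        return current_step + 1     # buffer nearly empty: speed up
    if buffer_fill > 0.75 and current_step > 0:
        return current_step - 1     # buffer nearly full: slow down, save power
    return current_step             # within band: leave voltage/frequency alone

if __name__ == "__main__":
    step = 2
    for fill in (0.9, 0.5, 0.1):
        step = adjust_resources(fill, step)
        print(fill, "->", FREQ_STEPS[step], "MHz")
```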

Подробнее
22-07-2009 дата публикации

A method of maintaining applications in a computing device

Номер: GB0002421323B
Принадлежит: SYMBIAN SOFTWARE LTD

Подробнее
08-06-2016 дата публикации

A method and system for scalable job processing

Номер: GB0002533017A
Принадлежит:

The present invention relates to a method for processing jobs within a cluster architecture. The method comprises the transmission of messages relating to the ongoing processing of jobs back to a client via a persistent messaging channel. A system for processing jobs within a cluster architecture is also disclosed. A method step (801 see fig 8) identifies a job 300 via a universally unique identifier UUID. Task parameters and a message identifier are also provided. The method comprises the step (803 see fig 8) of creating a messaging channel. A message registry is also provided. The invention provides for example a way of mapping child and parent jobs in a network.

Подробнее
12-06-2013 дата публикации

Computer system

Номер: GB0201308167D0
Автор:
Принадлежит:

Подробнее
08-07-2020 дата публикации

Context switch by changing memory pointers

Номер: GB0202007823D0
Автор:
Принадлежит:

Подробнее
05-04-2023 дата публикации

Control method, control apparatus, and electronic device

Номер: GB0002611392A
Принадлежит:

A control method includes obtaining a first task content and, in response to detecting that a first display area outputs a second task content, mapping the first task content to a second display area for output. The first display area and the second display area are different display areas in a same display screen, or the first display area and the second display area are on different display screens and each include at least a partial display area of one of the different display screens.

Подробнее
16-03-2022 дата публикации

Asynchronous data movement pipeline

Номер: GB0002598809A
Принадлежит:

Apparatuses, systems, and techniques to parallelize operations in one or more programs with data copies from global memory to shared memory in each of the one or more programs. In at least one embodiment, a program performs operations on shared data and then asynchronously copies shared data to shared memory, and continues performing additional operations in parallel while the shared data is copied to shared memory until an indicator provided by an application programming interface to facilitate parallel computing, such as CUDA, informs said program that shared data has been copied to shared memory.

Подробнее
15-06-2011 дата публикации

FLEXRAY SYSTEM WITH EFFICIENT STORAGE OF INSTRUCTIONS

Номер: AT0000511138T
Принадлежит:

Подробнее
15-06-2011 дата публикации

LOGIC OPERATION PROCEDURE AND MOBILE TERMINAL

Номер: AT0000510255T
Принадлежит:

Подробнее
28-05-2020 дата публикации

Memory management for application loading

Номер: AU2017277697B2
Принадлежит: FPA Patent Attorneys Pty Ltd

Some embodiments can load one or more applications into working memory from persistent storage when permitted by a memory pressure level of a mobile device. Loading the applications into working memory enables the applications to be launched into the foreground quickly when the user indicates the desire to launch. Some embodiments may identify a set of applications that are designated for providing snapshots to be displayed when the mobile device is in a dock mode. Certain embodiments may determine a current memory pressure level. Some embodiments may load an application in the set of applications into working memory from a persistent storage responsive to determining that the memory pressure level is below a threshold. Certain embodiments may continue to load additional applications responsive to determining that the memory pressure level is below the threshold. After determining that the memory pressure level is above the threshold, some embodiments may reclaim memory.
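
A minimal sketch of the pressure-gated preloading loop the abstract describes, assuming hypothetical callbacks for reading the memory pressure level, loading an application into working memory, and reclaiming memory.

```python
def preload_dock_apps(dock_apps, memory_pressure, threshold, load, reclaim):
    """Load dock-snapshot applications into working memory while the memory
    pressure level stays below the threshold; stop (and reclaim) once it
    rises above it. The callback arguments are hypothetical stand-ins."""
    loaded = []
    for app in dock_apps:
        if memory_pressure() >= threshold:
            reclaim()               # pressure too high: give memory back instead
            break
        load(app)                   # move the app from persistent storage to RAM
        loaded.append(app)
    return loaded

if __name__ == "__main__":
    pressure = iter([0.3, 0.4, 0.8])             # simulated pressure readings
    print(preload_dock_apps(
        ["clock", "photos", "weather"],
        memory_pressure=lambda: next(pressure),
        threshold=0.7,
        load=lambda app: None,
        reclaim=lambda: None,
    ))
```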

Подробнее
25-07-2019 дата публикации

Techniques for behavioral pairing in a task assignment system

Номер: AU2018383093A1
Принадлежит: AJ PARK

Techniques for behavioral pairing in a task assignment system are disclosed. The techniques may be realized as a method for behavioral pairing in a task assignment system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, a priority for each of a plurality of tasks; determining, by the at least one computer processor, an agent available for assignment to any of the plurality of tasks; and assigning, by the at least one computer processor, a first task of the plurality of tasks to the agent using a task assignment strategy, wherein the first task has a lower priority than a second task of the plurality of tasks.

Подробнее
02-05-2013 дата публикации

Application lifetime management

Номер: AU2011323985A1
Принадлежит:

In a computing device running multiple applications, a check is made as to whether a threshold value of multiple threshold values has been met. Each of the multiple threshold values is associated with a characteristic of one of the multiple applications or a characteristic of a resource of the computing device. If the threshold value has not been met, then the multiple applications are allowed to continue running on the computing device. However, if the threshold value has been met, then one or more of the multiple applications to shut down is selected based at least in part on the characteristic associated with the threshold value that has been met, and the selected application is shut down.
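
The threshold check and victim selection could look roughly like the following Python sketch; the particular metric, threshold table, and selection policy are assumptions chosen only to make the flow concrete.

```python
def check_thresholds(apps, device_metrics, thresholds, pick_victim):
    """Evaluate the per-application / per-resource thresholds described in the
    abstract. Returns the application selected for shutdown, or None if no
    threshold has been met. All metric names are illustrative."""
    for name, (metric_fn, limit) in thresholds.items():
        value = metric_fn(apps, device_metrics)
        if value >= limit:                      # a threshold has been met
            return pick_victim(apps, name)      # pick an app based on that characteristic
    return None                                 # keep everything running

if __name__ == "__main__":
    apps = {"browser": {"mem_mb": 900}, "editor": {"mem_mb": 250}}
    thresholds = {
        "total_memory_mb": (lambda a, d: sum(x["mem_mb"] for x in a.values()), 1000),
    }
    victim = check_thresholds(
        apps, device_metrics={}, thresholds=thresholds,
        pick_victim=lambda a, reason: max(a, key=lambda k: a[k]["mem_mb"]),
    )
    print(victim)   # "browser": the heaviest app once the memory threshold is hit
```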

Подробнее
13-02-2014 дата публикации

On-demand tab rehydration

Номер: AU2012287345A1
Принадлежит:

Various embodiments proactively monitor and efficiently manage resource usage of individual tabs. In at least some embodiments, one or more tabs can be dehydrated in accordance with various operational parameters, and rehydrated when a user actually activates a particular tab. In at least some embodiments, rehydration can occur on a tab-by-tab basis, while at least some tabs remain dehydrated. Dehydrated tabs can, in some embodiments, be visually presented to a user in a manner in which normal, active tabs are presented.

Подробнее
20-10-2003 дата публикации

Time-multiplexed speculative multi-threading to support single-threaded applications

Номер: AU2003222244A8
Принадлежит:

Подробнее
13-08-2015 дата публикации

Method and device for managing memory of user device

Номер: AU2014258068A1
Принадлежит:

A method and a device dynamically managing background processes according to a memory status so as to efficiently use the memory in a user device supporting a multitasking operating system. The method includes determining reference information for adjustment of the number of background processes; identifying a memory status based on the reference information; and adjusting the number of the background processes in correspondence to the memory status.

Подробнее
31-03-2016 дата публикации

Display object pre-generation

Номер: AU2016201606A1
Принадлежит:

In one embodiment, a computing device identifies a portion of a display object to pre-generate. The device may monitor a thread to identify the next upcoming window of idle time (i.e., the next opportunity when the thread will be idle for a minimum period of time). The device may add one or more selected pre-generation tasks to a message queue for execution by the thread during the window. The device may execute the one or more selected pre-generation tasks in the message queue by pre-generating at least one selected element of a display object with content for a portion of the content layout, and then return the display object.

Подробнее
12-09-2002 дата публикации

METHOD AND SYSTEM FOR DISTRIBUTED PROCESSING MANAGEMENT

Номер: CA0002439007A1
Принадлежит:

A system for distributed process management comprising a plurality of units of software for installation on a computing platform, and further software for controlling the operation of the plurality of units in use of the system, wherein each unit of software is provided with means for communicating with other units of software, and at least some of the units of software are further provided with means for providing one or more elements of a software process, said further software being capable of defining at least one set of software units and controlling communications by the units in the set to be limited to communication only with other units of the set.

Подробнее
26-02-2019 дата публикации

PREVENTING DISRUPTIVE COMPUTER EVENTS DURING MEDICAL PROCEDURES

Номер: CA0002710733C
Принадлежит: BIOSENSE WEBSTER INC, BIOSENSE WEBSTER, INC.

A computer-implemented system for process control has two operating modes: normal mode and active procedure mode, with automatic transition between them. In normal mode, the operating system, firewall and anti-virus are fully operational. When entering a time-critical phase of a process, a process control application signals the operating system and utilities, whereupon transition to active procedure mode automatically occurs, in which access by the system services and by other applications to the resources of the computer is selectively limited in favor of the process control application. Upon completion of the procedure, the system automatically returns to normal mode.

Подробнее
01-11-2007 дата публикации

METHOD, SYSTEM, AND MEMORY FOR SCHEDULING AND CANCELLING TASKS

Номер: CA0002545428A1
Автор: DENIS, MARTIN
Принадлежит:

A memory, system and method for task scheduling and execution, the memory containing a data structure including a scheduling file containing tasks scheduled for execution, and a cancelling file containing references to tasks whose execution is cancelled. A scheduler module reads the scheduled tasks of the scheduling file and the cancelled tasks of the cancelling file, and triggers execution of the scheduled tasks not referenced in the cancelling file. The data structure may comprise a plurality of pairs of scheduling and cancelling files, each pair being associated with a time interval. When the scheduler module receives from an application module a task scheduling request for scheduling a task, it writes the task in the task scheduling file. When the scheduler module further receives a task cancelling request for cancelling the task, it writes the task in the task cancelling file.
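
As a sketch of the paired scheduling/cancelling files, the snippet below stores each file as a JSON list of task names and triggers only the scheduled tasks that are not referenced in the cancelling file; the file format is an assumption made for illustration.

```python
import json
import tempfile
from pathlib import Path

def run_due_tasks(schedule_file: Path, cancel_file: Path, execute):
    """Trigger every task listed in the scheduling file that is not referenced
    in the cancelling file (the pairing described in the abstract)."""
    scheduled = json.loads(schedule_file.read_text())
    cancelled = set(json.loads(cancel_file.read_text()))
    return [execute(task) for task in scheduled if task not in cancelled]

if __name__ == "__main__":
    d = Path(tempfile.mkdtemp())
    (d / "sched.json").write_text(json.dumps(["backup", "report", "cleanup"]))
    (d / "cancel.json").write_text(json.dumps(["report"]))
    print(run_due_tasks(d / "sched.json", d / "cancel.json",
                        execute=lambda t: f"ran {t}"))
```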

Подробнее
16-07-2005 дата публикации

REMOTE SYSTEM ADMINISTRATION USING COMMAND LINE ENVIRONMENT

Номер: CA0002502682A1
Принадлежит:

A command line environment is configured to receive a command line that implicates a plurality of remote nodes. The command line environment is configured to establish a session, which may be persistent, to each implicated remote node, and to initiate execution of the remote command on those nodes. The session may be assigned to a variable, and the remote execution may be performed concurrently. Results of the remote execution are received and may be aggregated into an array. The command line environment may distribute the task of establishing sessions to other systems to improve performance.

Подробнее
27-07-2010 дата публикации

METHOD AND APPARATUS FOR HANDLING THREADS IN A DATA PROCESSING SYSTEM

Номер: CA0002498050C

A method, apparatus, and computer instructions for managing threads. A kernel thread associated with a user thread is detected as being unneeded by the user thread. The kernel thread is semi-detached in which data for the thread does not change stacks in response to the kernel thread being unneeded.

Подробнее
21-03-2009 дата публикации

CENTRALIZED POLLING SERVICE

Номер: CA0002629694A1
Принадлежит:

A centralized polling system is set forth for providing constant time select call functionality to a plurality of polling tasks in an operating system kernel. In one aspect, the CPS registers for and thereby captures events of interest on a continual basis. Polling tasks are supplied with active events, thereby eliminating the need to repetitively poll large numbers of inactive sockets. An exemplary embodiment of the CPS includes a system interface to the operating system kernel, a data structure for maintaining a profile for each of the polling tasks, and an application programming interface for registering the polling tasks, receiving the active sockets and corresponding read/write event types via the system interface, updating the profile within the data structure for each of the polling tasks, and returning the current read and write ready sockets to respective ones of the polling tasks.

Подробнее
07-05-2019 дата публикации

OPPORTUNISTIC MULTITASKING

Номер: CA0002988269C
Принадлежит: APPLE INC, APPLE INC.

Services for a personal electronic device are provided through which a form of background processing or multitasking is supported. The disclosed services permit user applications to take advantage of background processing without significant negative consequences to a user's experience of the foreground process or the personal electronic device's power resources. To effect the disclosed multitasking, one or more of a number of operational restrictions may be enforced. A consequence of such restrictions may be that a process will not be able to do in the background state, what it may be able to do if it were in the foreground state. Implementation of the disclosed services may be substantially transparent to the executing user applications and, in some cases, may be performed without the user application's explicit cooperation.

Подробнее
25-06-2015 дата публикации

METHOD FOR COMPOSING AND EXECUTING A REAL-TIME TASK-SEQUENCING PLAN

Номер: CA0002932690A1
Принадлежит:

The invention relates to a method for executing two tasks in time-sharing, comprising the steps of: decomposing each task offline into a repetitive sequence of successive frames, where each frame is associated with an atomic operation having an execution need and defines a start date from which the operation may begin and a deadline by which the operation must be completed, whereby each frame defines a time margin within which the operation may start; verifying, for each frame of a first one of the repetitive sequences, that the corresponding operation can be executed between any two successive operations of a group of frames of the second repetitive sequence overlapping the frame (Fai) of the first repetitive sequence, while respecting the start dates and the deadlines of the operations; and, if the verification is satisfied, authorizing execution of the two tasks. The operations of the two tasks are then ...

Подробнее
04-06-2019 дата публикации

RESTARTING PROCESSES

Номер: CA0002826282C
Принадлежит: AB INITIO TECHNOLOGY LLC

Techniques are disclosed that include a computer-implemented method, including storing information related to an initial state (402) of a process upon being initialized, wherein execution of the process includes executing at least one execution phase and upon completion of the executing of the execution phase storing information representative of an end state (404) of the execution phase; aborting execution (506) of the process in response to a predetermined event; and resuming execution of the process from one of the saved initial and end states (512) without needing to shut down the process.
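
The save-and-resume behaviour might be pictured with a toy checkpointing class like the one below, where the saved state is a deep copy and the "phases" are plain callables; this is an illustrative sketch under those assumptions, not the disclosed system.

```python
import copy

class Checkpointed:
    """Toy process that saves its initial state and an end state after every
    execution phase, so it can resume from the latest saved state after an
    abort, roughly as the abstract describes."""

    def __init__(self, state):
        self.state = state
        self.saved = copy.deepcopy(state)        # initial state stored at init

    def run_phase(self, phase):
        phase(self.state)                        # execute one phase
        self.saved = copy.deepcopy(self.state)   # store the phase's end state

    def abort_and_resume(self):
        self.state = copy.deepcopy(self.saved)   # resume without restarting the process
        return self.state

if __name__ == "__main__":
    p = Checkpointed({"rows_loaded": 0})
    p.run_phase(lambda s: s.update(rows_loaded=1000))   # phase 1 completes
    p.state["rows_loaded"] = 1234                        # phase 2 partially done...
    print(p.abort_and_resume())                          # ...aborted: back to rows_loaded=1000
```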

Подробнее
10-05-2012 дата публикации

APPLICATION LIFETIME MANAGEMENT

Номер: CA0002814604A1
Принадлежит:

In a computing device running multiple applications, a check is made as to whether a threshold value of multiple threshold values has been met. Each of the multiple threshold values is associated with a characteristic of one of the multiple applications or a characteristic of a resource of the computing device. If the threshold value has not been met, then the multiple applications are allowed to continue running on the computing device. However, if the threshold value has been met, then one or more of the multiple applications to shut down is selected based at least in part on the characteristic associated with the threshold value that has been met, and the selected application is shut down.

Подробнее
13-10-2011 дата публикации

OPPORTUNISTIC MULTITASKING

Номер: CA0002795489A1
Принадлежит:

Services for a personal electronic device are provided through which a form of background processing or multitasking is supported. The disclosed services permit user applications to take advantage of background processing without significant negative consequences to a user's experience of the foreground process or the personal electronic device's power resources. To effect the disclosed multitasking, one or more of a number of operational restrictions may be enforced. A consequence of such restrictions may be that a process will not be able to do in the background state, what it may be able to do if it were in the foreground state. Implementation of the disclosed services may be substantially transparent to the executing user applications and, in some cases, may be performed without the user application's explicit cooperation ...

Подробнее
20-07-1999 дата публикации

CONTROL SYSTEM FOR PARALLEL EXECUTION OF JOB STEPS IN COMPUTER SYSTEM

Номер: CA0002112509C
Принадлежит:

A job step parallel execution control system in a computer system has a job control statement which can designate, upon commanding execution of a job step, whether the job step is to be executed by a specifically designated host computer or by an arbitrarily selected host computer, whether the job step may be executed in parallel while another job step in the same job is in execution, and/or whether the job is to be continued, forcibly terminated, or terminated after termination of the other job step of the job currently in execution. Upon execution of the job, the job control statement is sequentially decoded to efficiently control parallel execution and termination of execution of the job steps.

Подробнее
15-02-2023 дата публикации

System and method for an AI meta-constellation.

Номер: CH0000718872A2
Принадлежит:

A system and method for a device constellation include, for example, a method for a device constellation, the method comprising the following steps: receiving a request, the request containing a plurality of request parameters; decomposing the request into one or more tasks; selecting one or more edge devices based at least in part on the plurality of request parameters; assigning the one or more tasks to the one or more selected edge devices to cause the one or more selected edge devices to execute the one or more tasks; and receiving one or more task results from the one or more selected edge devices.
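
A small Python sketch of the request flow in the abstract: decompose a request into tasks, select capable edge devices from the request parameters, assign the tasks, and collect results. The round-robin assignment and every helper name are assumptions for illustration.

```python
def handle_request(params, decompose, edge_devices, capable):
    """Decompose a request into tasks, pick edge devices that match the
    request parameters, assign the tasks, and collect results."""
    tasks = decompose(params)
    selected = [d for d in edge_devices if capable(d, params)]
    if not selected:
        raise RuntimeError("no edge device matches the request parameters")
    assignments = {d["name"]: [] for d in selected}
    for i, task in enumerate(tasks):
        device = selected[i % len(selected)]          # simple round-robin assignment
        assignments[device["name"]].append(task)
    # Each device "executes" its tasks and returns placeholder results.
    return {name: [f"result of {t}" for t in ts] for name, ts in assignments.items()}

if __name__ == "__main__":
    devices = [{"name": "cam-1", "has_gpu": True}, {"name": "cam-2", "has_gpu": False}]
    print(handle_request(
        {"needs_gpu": True, "frames": 4},
        decompose=lambda p: [f"analyse frame {i}" for i in range(p["frames"])],
        edge_devices=devices,
        capable=lambda d, p: d["has_gpu"] or not p["needs_gpu"],
    ))
```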

Подробнее
30-03-2018 дата публикации

Application system-level operation method and apparatus

Номер: CN0107861798A
Автор: ZHANG JINGMIN, LI CHONG, MA RAN
Принадлежит:

Подробнее
01-02-2019 дата публикации

Annotation monitoring method, device and electronic device based on expression recognition

Номер: CN0109298783A
Принадлежит:

Подробнее
03-09-2019 дата публикации

Information device

Номер: CN0106998482B
Автор:
Принадлежит:

Подробнее
17-04-2013 дата публикации

Resuming applications and/or exempting applications from suspension

Номер: CN103049339A
Принадлежит:

Only a particular number of applications on a computing device are active at any given time, with applications that are not active being suspended. A policy is applied to determine when an application is to be suspended. However, an operating system component can have a particular application be exempted from being suspended (e.g., due to an operation being performed by the application). Additionally, an operating system component can have an application that has been suspended resumed (e.g., due to a desire of another application to communicate with the suspended application).

Подробнее
07-08-2018 дата публикации

Operational system of completion port model applied to Windows

Номер: CN0108376098A
Принадлежит:

Подробнее
03-08-2018 дата публикации

Service procedure control method, server and computer readable storage medium

Номер: CN0108363619A
Принадлежит:

Подробнее
28-09-2016 дата публикации

Memory cleaning method and device, and electronic equipment

Номер: CN0105975301A
Принадлежит:

Подробнее
04-09-2018 дата публикации

Memory cleaning method and device, the electronic device

Номер: CN0105975301B
Автор:
Принадлежит:

Подробнее
03-07-2020 дата публикации

Method for manufacturing a secure and modular hardware-based application and operating system therefor

Номер: FR0003091368A1
Принадлежит:

Подробнее
12-11-2012 дата публикации

PROCESSOR CORE STACK EXTENSION

Номер: KR0101200477B1
Автор:
Принадлежит:

Подробнее
05-03-2019 дата публикации

Номер: KR0101954310B1
Автор:
Принадлежит:

Подробнее
24-08-2020 дата публикации

Electronic device and method for managing life cycle of a plurality of applications executed in the electronic device

Номер: KR1020200099306A
Автор:
Принадлежит:

Подробнее
21-05-2014 дата публикации

SUSPENSION AND/OR THROTTLING OF PROCESSES FOR CONNECTED STANDBY

Номер: KR1020140061393A
Автор:
Принадлежит:

Подробнее
10-11-2004 дата публикации

TIME-MULTIPLEXED SPECULATIVE MULTI-THREADING TO SUPPORT SINGLE-THREADED APPLICATIONS

Номер: KR20040094888A
Принадлежит:

One embodiment of the present invention provides a system that facilitates interleaved execution of a head thread and a speculative thread within a single processor pipeline. The system operates by executing program instructions using the head thread, and by speculatively executing program instructions in advance of the head thread using the speculative thread, wherein the head thread and the speculative thread execute concurrently through time-multiplexed interleaving in the single processor pipeline. © KIPO & WIPO 2007 ...

Подробнее
18-01-2019 дата публикации

Application program data processing method and device

Номер: KR1020190006516A
Автор: 종 슈나
Принадлежит:

... The present application provides an application program data processing method and device. The application program data processing method includes: triggering a hold state of an application program upon detecting a first predetermined user action on an interface bearing an icon of at least one application program; and, in the hold state, processing data of a target application program upon detecting a second predetermined user action on the target application program. According to the application program data processing method and device of the present application, the data of an application program can be cleared quickly and efficiently without launching the application program, thereby improving the user experience.

Подробнее
03-07-2018 дата публикации

Terminal-based wakelock (WAKELOCK) control method, apparatus, and terminal

Номер: KR1020180074762A
Автор: 양 칭화
Принадлежит:

... The present invention discloses a terminal-based wakelock control method, apparatus, and terminal. The method includes: obtaining a first application program running in the background; determining whether the first application program meets a preset filter criterion; selecting a first application program that does not meet the preset filter criterion to obtain a second application program; and forcibly releasing the wakelock held by the second application program and the service invoked by the second application program. According to this solution, the wakelock held by a first application program running in the background and the service invoked by the first application program can be controlled efficiently. This prevents a background-running first application program that does not meet the preset filter criterion from improperly holding a wakelock for a long time, thereby reducing the terminal's energy consumption and conserving system resources.
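
The filter-and-release flow could be sketched as below, with hypothetical callbacks standing in for the wakelock release and service stop, and a placeholder filter criterion (whether the app is currently playing audio).

```python
def release_unneeded_wakelocks(background_apps, meets_filter,
                               release_wakelock, stop_service):
    """Select background applications that do not meet the preset filter
    criterion and forcibly release the wakelocks they hold and the services
    they invoked, as the abstract outlines. Callbacks are hypothetical."""
    released = []
    for app in background_apps:
        if meets_filter(app):        # e.g. an app playing audio: leave it alone
            continue
        release_wakelock(app)
        stop_service(app)
        released.append(app["name"])
    return released

if __name__ == "__main__":
    apps = [
        {"name": "music", "playing_audio": True},
        {"name": "flashlight_widget", "playing_audio": False},
    ]
    print(release_unneeded_wakelocks(
        apps,
        meets_filter=lambda a: a["playing_audio"],
        release_wakelock=lambda a: None,
        stop_service=lambda a: None,
    ))
```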

Подробнее
05-07-2016 дата публикации

Method for avoiding task termination in a multitasking communication program

Номер: BRPI0917076A2
Автор: LI BING, BING LI
Принадлежит:

Подробнее
11-09-2019 дата публикации

Номер: TWI671640B

Подробнее
06-12-2012 дата публикации

CONTROL METHOD, CONTROL DEVICE AND COMPUTER SYSTEM

Номер: WO2012163275A1
Принадлежит:

Provided are a control method and a control device applied in a computer system, and a computer system. The control method according to the embodiments of the present invention is applied in a computer system, wherein the computer system includes a system memory containing two divided storage areas with the two storage areas being respectively a first storage area and a second storage area. The control method includes: loading a first operating system into the first storage area; running the first operating system; and starting up a system memory access drive by the first operating system, so as to load into the second storage area the pre-stored memory mapping data of the second operating system by the system memory access drive.

Подробнее
21-07-2005 дата публикации

SLEEP STATE MECHANISM FOR VIRTUAL MULTITHREADING

Номер: WO2005066781A2
Автор: SAMRA, Nicholas
Принадлежит:

Method, apparatus and system embodiments provide support for multiple SoEMT software threads on multiple SMT logical thread contexts. A sleep state mechanism maintains a current value of an element of architecture state for each physical thread. The current value corresponds to an active virtual thread currently running on the physical thread. The sleep state mechanism also maintains sleep values of the architecture state element for each inactive thread. The active and inactive values may be maintained in a cross-bar configuration. Upon a read of the architecture state element, simplified mux logic selects among the current values to provide the current value for the appropriate active thread. Upon a thread switch, control logic associated with the sleep state mechanism swaps the active state value for the current thread with the inactive state value for the new thread.

Подробнее
01-04-2004 дата публикации

METHOD AND APPARATUS FOR HANDLING THREADS IN A DATA PROCESSING SYSTEM

Номер: WO2004027599A3
Принадлежит:

A method, apparatus, and computer instructions for managing threads. A kernel thread associated with a user thread is detected as being unneeded by the user thread. The kernel thread is semi-detached in which data for the thread does not change stacks in response to the kernel thread being unneeded.

Подробнее
09-02-2006 дата публикации

Method, apparatus, and computer program product for dynamically tuning a data processing system by identifying and boosting holders of contentious locks

Номер: US2006031658A1
Принадлежит:

A method, apparatus, and computer program product are disclosed for a simultaneous multithreading (SMT) data processing system for modifying the processing of software threads that acquire a contentious software lock. The system includes a processor that is capable of concurrently executing multiple different threads on the processor. The processor is also capable of utilizing hardware thread priorities assigned to each thread the processor is processing by granting a greater, disparate amount of resources to the highest priority thread. A hardware priority is assigned to each one of the SMT threads. A contentious lock is identified. Ones of the multiple threads are identified that attempt to acquire the contentious lock. These threads are dynamically redirected to special code for handling contentious locks. The hardware priority of a thread acquiring a contentious lock is then boosted. According to the preferred embodiment, the present invention redirects callers of a locking function ...

Подробнее
11-03-2021 дата публикации

SYNCHRONIZING SCHEDULING TASKS WITH ATOMIC ALU

Номер: US20210073029A1
Принадлежит:

A method of synchronizing a group of scheduled tasks within a parallel processing unit into a known state is described. The method uses a synchronization instruction in a scheduled task which triggers, in response to decoding of the instruction, an instruction decoder to place the scheduled task into a non-active state and forward the decoded synchronization instruction to an atomic ALU for execution. When the atomic ALU executes the decoded synchronization instruction, the atomic ALU performs an operation and check on data assigned to the group ID of the scheduled task and if the check is passed, all scheduled tasks having the particular group ID are removed from the non-active state.

Подробнее
06-05-2014 дата публикации

Operating system shutdown reversal and remote web monitoring

Номер: US0008719820B2

A method is disclosed for reversing operating system shutdown, including: detecting, by a monitoring program, an attempt by a user to log off, shut down, or restart a computer containing an operating system capable of running a plurality of program windows; determining if any program window is still open in the operating system; automatically cancelling, by the monitoring program, the logoff, shutdown, or restart request if it is determined that a program window is still open; and attempting to close any open program window by the monitoring program.

Подробнее
22-06-2017 дата публикации

METHOD FOR SELECTING AND CONTROLLING SECOND WORK PROCESS DURING FIRST WORK PROCESS IN MULTITASKING MOBILE TERMINAL

Номер: US20170177193A1
Принадлежит:

Provided is a method for controlling a plurality of work processes in a multitasking mobile terminal, and more particularly, a method for selecting a second work process during a first work process and controlling a predetermined function of the selected second work process. In the controlling method, icons corresponding to the respective work processes are displayed in response to a user command, and a desired work process is selected through the displayed icons. A predetermined function of the selected work process is controlled through a pop-up menu activated in response to the user command.

Подробнее
12-07-2007 дата публикации

Multifunction device, control device, multifunction device control system, method of controlling multifunction device, program, and storage medium

Номер: US20070159663A1
Автор: Kunihiko Tsujimoto
Принадлежит: Sharp Kabushiki Kaisha

A multifunction device achieves a device function by appropriately combining plural elemental functions including a scanning function, a printing function, and a communication function. The multifunction device includes: a service layer for executing the plural elemental functions; an API table storage section which stores an API table in which a first API for executing the device function is associated with a second API that the service layer can receive; and an Open I/F layer which receives the first API, specifies, in the API table, the second API corresponding to the first API, and outputs the specified second API to the service layer. With this arrangement, it is possible to provide a multifunction device which allows new control from a control device to the multifunction device to be easily developed.

Подробнее
06-10-2005 дата публикации

System for building interactive calculations on web pages

Номер: US20050223351A1
Принадлежит: Loma Linda University

A system for building interactive calculations on web pages, comprising a central processor for executing program instructions stored on computer readable media, interfaces in communication with the central processor, one or more computer readable media in communication with the central processor containing program instructions for executing a manager object for controlling the interaction of objects, one or more than one calculation objects (26, 28) in communication with the manager object, one or more than one input objects (14, 16, 18) controlling the model parameters, one or more than one output objects (20, 22, 24) displaying values of calculated variables, wherein the manager object, each of the input objects, and each of the output objects are coded with no model specific information, and model specific information is communicated through the manager object by parameter or variable name.

Подробнее
21-06-2007 дата публикации

Interface apparatus, interface method in information processing apparatus, and interface program

Номер: US20070143713A1
Принадлежит: Sony Corporation

An interface apparatus controls at least activation and termination of one or more registered application programs in accordance with user operations. The apparatus includes a first holding section configured to hold information of the application programs; a user operation acceptance section; a second holding section configured to receive and hold status information of each application program; a list presentation section configured to present an application program list window and to indicate at least whether a status of each application program is a running status or a terminated status based on the status information; a selection section configured to select an application program to be controlled from among the application programs in accordance with a predetermined first user operation; and a control section configured to control the status of the selected application program based on the status information when the user operation acceptance section has accepted a predetermined second ...

Подробнее
26-07-2007 дата публикации

METHODS AND SERVERS FOR ESTABLISHING A CONNECTION BETWEEN A CLIENT SYSTEM AND A VIRTUAL MACHINE HOSTING A REQUESTED COMPUTING ENVIRONMENT

Номер: US20070174429A1
Принадлежит: Citrix Systems, Inc.

A method for providing access to a computing environment includes the step of receiving a request from a client system for an enumeration of available computing environments. Collected data regarding available computing environments are accessed. Accessed data are transmitted to a client system, the accessed data indicating to the client system each computing environment available to a user of the client system. A request is received from the client system to access one of the computing environments. A connection is established between the client system and a virtual machine hosting the requested computing environment.

Подробнее
10-10-2019 дата публикации

TESTING AND REPRODUCTION OF CONCURRENCY ISSUES

Номер: US20190310930A1
Принадлежит:

A method and system for testing a server code in a server concurrently handling multiple client requests create a job-specific breakpoint in the server code using a library application programming interface (API) that allows the job-specific breakpoint in the server code being enabled or disabled based on a job identifier. The library API controls the job-specific breakpoint in the server code via a plurality of readymade functions that execute, in a desired sequence, various synchronous and asynchronous program paths associated with the multiple client requests. By using the library API, the method and system are capable of establishing a new server connection with the server and retrieving the job identifier from the server associated with the established new server connection, pausing execution of a client job based on enabling the job-specific breakpoint, and resuming execution of the client job based on disabling the job-specific breakpoint.

Подробнее
29-09-2020 дата публикации

Forward killing of threads corresponding to graphics fragments obscured by later graphics fragments

Номер: US0010789768B2
Принадлежит: ARM Limited, ADVANCED RISC MACH LTD

A graphics processing apparatus comprises fragment generating circuitry to generate graphics fragments corresponding to graphics primitives, thread processing circuitry to perform threads of processing corresponding to the fragments, and forward kill circuitry to trigger a forward kill operation to prevent further processing of a target thread of processing corresponding to an earlier graphics fragment when the forward kill operation is enabled for the target thread and the earlier graphics fragment is determined to be obscured by one or more later graphics fragments. The thread processing circuitry supports enabling of the forward kill operation for a thread including at least one forward kill blocking instruction having a property indicative that the forward kill operation should be disabled for the given thread, when the thread processing circuitry has not yet reached a portion of the thread including the at least one forward kill blocking instruction.

Подробнее
31-10-2019 дата публикации

HANDLING ZERO FAULT TOLERANCE EVENTS IN MACHINES WHERE FAILURE LIKELY RESULTS IN UNACCEPTABLE LOSS

Номер: US2019332457A1
Принадлежит:

Provided are a computer program product, system, and method for managing I/O requests to a storage array of storage devices in a machine having a processor node and device adaptor. In response to initiating a rebuild of data in the storage array, the device adaptor determines whether a remaining fault tolerance at the storage array comprises a non-zero fault tolerance that permits at least one further storage device to fail and still allow recovery of data stored in the storage array. In response to determining that the remaining fault tolerance is a zero fault tolerance that does not permit at least one storage device to fail and allow recovery of data, the device adaptor sends a message to the processor node to cause the processor node to initiate an emergency protocol to terminate a mission critical operation when the processor node is performing the mission critical operation.

Подробнее
26-10-2021 дата публикации

Method for process management and electronic device

Номер: US0011157315B2

A method for process management in an electronic device is disclosed. The method includes: acquiring a set of association processes corresponding to an application in the electronic device and priority levels of association processes, wherein the set of association processes includes a primary process of the application and at least one secondary process bound to the primary process; acquiring an operation state of the primary process and an operation state of each of the at least one secondary process, respectively; and adjusting a binding state between the primary process and each of the at least one secondary process and performing a priority adjustment for the primary process and each of the at least one secondary process between which are in the adjusted binding state according to the operation state of the primary process and the operation state of each of the at least one secondary process.

Подробнее
20-09-2011 дата публикации

Method and apparatus for ensuring fairness and forward progress when executing multiple threads of execution

Номер: US0008024735B2
Принадлежит: Intel Corporation, INTEL CORP, INTEL CORPORATION

A system and method for determine which threads to execute at a given time in a multi-threaded computer system. A thread prioritizer determines execution fairness between pairs of potentially executing threads. A switch enabler determines forward progress of each executing thread. The resulting indicators from the thread prioritizer and switch enabler may aid in the determination of whether or not to switch a particular potentially executing thread into execution resources.

Подробнее
24-11-2020 дата публикации

Managing the graceful termination of a virtualized network function instance

Номер: US0010846128B2

The present invention provides apparatuses, methods, computer programs, computer program products and computer-readable media regarding managing the graceful termination of a virtualized network function (VNF) instance. The method comprises receiving a request for a graceful termination of a virtual network function instance, transmitting the request for the graceful termination of the virtual network function instance to an element manager, checking, whether a confirmation that the virtual network function instance to be terminated has been taken out of service, is received, and if the confirmation is received, terminating the virtual network function instance.

Подробнее
18-07-2017 дата публикации

Dynamic timeout period adjustment of service requests

Номер: US0009710302B2

According to one exemplary embodiment, a method for dynamically timing out a first process within a plurality of suspended processes is provided. The method may include determining that a second process is attempting to suspend. The method may also include determining if a number of suspended processes plus one is less than a threshold value. The method may then include selecting the first process within the plurality of suspended processes to prematurely time out based on determining that the number of suspended processes plus one is not less than the threshold value. The method may further include timing out the selected first process. The method may also include suspending the second process.
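
A minimal sketch of the timeout decision described above: before suspending one more process, prematurely time out an already suspended one when the suspended count plus one reaches the threshold. Choosing the longest-suspended process is an assumption; the abstract does not fix the selection rule.

```python
def on_suspend_request(suspended, threshold):
    """Decide whether an already suspended process must be timed out before
    a new process may suspend. `suspended` maps process ids to the time at
    which they were suspended (illustrative structure)."""
    timed_out = None
    if len(suspended) + 1 >= threshold:       # count plus one is not below threshold
        # Pick the process that has been suspended the longest and time it out.
        timed_out = min(suspended, key=suspended.get)
        del suspended[timed_out]
    return timed_out

if __name__ == "__main__":
    suspended = {"req-17": 100.0, "req-23": 180.0, "req-31": 140.0}
    print(on_suspend_request(suspended, threshold=4))   # req-17 is timed out
```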

Подробнее
22-09-2016 дата публикации

INFORMATION PROCESSING APPARATUS AND RECORDING MEDIUM

Номер: US20160275023A1
Принадлежит:

In accordance with one embodiment, an information processing apparatus comprises an operation section, a signal generation section and a control section. The operation section outputs an operation signal indicating that it is operated by a user. The signal generation section generates a first control signal and a second control signal based on the operation signal from the operation section. The control section starts a pre-determined program based on the first control signal and executes an interruption processing of the pre-determined program based on the second control signal. The interruption processing means temporarily stopping the pre-determined program being executed or releasing the temporary stop of the pre-determined program.

Подробнее
07-11-2017 дата публикации

Partial resume for operating system modules

Номер: US0009811374B1
Принадлежит: Google Inc., GOOGLE INC

A computing device may receive a data packet. The computing device may be operating a plurality of kernel-space software modules that are in a suspended state, and the computing device may also be operating a plurality of user-space software modules that are in the suspended state. It may be determined that the data packet is of a particular packet type. Data packets of the particular packet type may be consumed by any of a particular subset of the kernel-space software modules. While the user-space software modules remain in the suspended state, the computing device may further (i) transition at least some kernel-space software modules to a non-suspended state, (ii) consume, by a particular one of the non-suspended kernel-space software modules, the data packet, and (iii) transition the non-suspended kernel-space software modules to the suspended state.

Подробнее
05-03-2019 дата публикации

Cancellable command application programming interface (API) framework

Номер: US10223155B2

Embodiments are provided that include the use of a cancellable command application programming interface (API) framework that provides cooperative multitasking for synchronous and asynchronous operations based in part on a command timing sequence and a cancellable command API definition. A method of an embodiment enables a user or programmer to use a cancellable command API definition as part of implementing a responsive application interface using a command timing sequence to control execution of active tasks. A cancellable command API framework of an embodiment includes a command block including a command function, a task engine to monitor the command function, and a timer component to control execution of asynchronous and synchronous tasks based in part on first and second control timing intervals associated with a command timing sequence. Other embodiments are also disclosed.

Подробнее
21-11-2017 дата публикации

Managing distributed execution of programs

Номер: US0009826031B2

Techniques are described for managing distributed execution of programs. In some situations, the techniques include determining configuration information to be used for executing a particular program in a distributed manner on multiple computing nodes and/or include providing information and associated controls to a user regarding ongoing distributed execution of one or more programs to enable the user to modify the ongoing distributed execution in various manners. Determined configuration information may include, for example, configuration parameters such as a quantity of computing nodes and/or other measures of computing resources to be used for the executing, and may be determined in various manners, including by interactively gathering values for at least some types of configuration information from an associated user (e.g., via a GUI that is displayed to the user) and/or by automatically determining values for at least some types of configuration information (e.g., for use as recommendations ...

Подробнее
14-03-2019 дата публикации

SYSTEM, METHOD, AND APPARATUS FOR RENDERING INTERFACE ELEMENTS

Номер: US20190079781A1
Принадлежит:

A method for rendering interface elements, including: obtaining a first set of one or more interface elements associated with a target user interface (UI) to be rendered, the first set of one or more interface elements comprising one or more interface elements that meet a pre-configured priority condition; rendering the first set of one or more interface elements at a higher priority than other interface elements associated with the target UI; and outputting a rendering result of the first set of one or more interface elements.

Подробнее
21-03-2019 дата публикации

FORWARD KILLING OF THREADS CORRESPONDING TO GRAPHICS FRAGMENTS OBSCURED BY LATER GRAPHICS FRAGMENTS

Номер: US20190088009A1
Принадлежит:

A graphics processing apparatus comprises fragment generating circuitry to generate graphics fragments corresponding to graphics primitives, thread processing circuitry to perform threads of processing corresponding to the fragments, and forward kill circuitry to trigger a forward kill operation to prevent further processing of a target thread of processing corresponding to an earlier graphics fragment when the forward kill operation is enabled for the target thread and the earlier graphics fragment is determined to be obscured by one or more later graphics fragments. The thread processing circuitry supports enabling the forward kill operation for a thread that includes at least one forward kill blocking instruction, an instruction whose property indicates that the forward kill operation should be disabled for the given thread, provided that the thread processing circuitry has not yet reached the portion of the thread that includes the at least one forward kill blocking instruction.

Подробнее
11-10-2016 дата публикации

Method for managing threads using executing time scheduling technique and electronic device using the same method

Номер: US0009465655B2
Принадлежит: HTC Corporation, HTC CORP

A method for managing threads and an electronic device using the method are provided. In the method, a current time is obtained. A time interval from now until the time at which the processor is to wake up next is calculated. The processor is released until the end of the time interval. When the end of the time interval is reached or a first notice signal of the processor is received, a first newest time is obtained to update the current time, and the current time is logged as a basis time. For each of a plurality of registered threads among the threads, it is checked whether the current time satisfies that thread's predetermined time condition. When the current time satisfies the predetermined time condition of a first registered thread among the registered threads, the first registered thread is woken up.
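
For illustration, a small Python sketch of the wake-up scheduling loop described above, with hypothetical registered "threads" represented as (period, callback) pairs; a real implementation would use OS timers and actual threads.

```python
import time

def manage_registered_threads(registered, runtime=0.3):
    """registered: mapping of thread name -> (period_seconds, callback).
    The manager sleeps until the earliest next wake time, refreshes the
    current time, and wakes every thread whose time condition is met."""
    now = time.monotonic()
    next_due = {name: now + period for name, (period, _) in registered.items()}
    deadline = now + runtime
    while time.monotonic() < deadline:
        now = time.monotonic()                       # basis time for this pass
        interval = min(next_due.values()) - now      # time until next wake-up
        if interval > 0:
            time.sleep(interval)                     # release the processor
        now = time.monotonic()                       # first newest time
        for name, (period, callback) in registered.items():
            if now >= next_due[name]:                # time condition satisfied
                callback(name)
                next_due[name] = now + period

manage_registered_threads({
    "heartbeat": (0.05, lambda n: print(f"{n} woken")),
    "flush":     (0.12, lambda n: print(f"{n} woken")),
})
```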

Подробнее
25-08-2020 дата публикации

Systems and methods of a production environment tool

Номер: US0010754688B2
Автор: James Richard Powell
Принадлежит: WiseTech Global Limited

Disclosed are risk visualisation methods and systems of a production environment tool. Tasks are delivered to a task board in an arrangement that indicates how long the tasks have been on the board, that is, the defined time period. For each task, a penetration value is calculated, derived from the time elapsed since the initiation time of the task as a percentage of the defined time period. A criterion of a percentile value of the tasks establishes a subset of the tasks that have lower penetration values. A specific task in that subset that is at the high end of the percentile value is then determined, and a visual indication of that specific task on the board notifies an observer of the board of whether there is a risk that the resource is falling behind in task completion.
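
A minimal Python sketch of the penetration-value arithmetic and the percentile-based subset described above; the task fields and the 25th-percentile criterion are illustrative assumptions.

```python
def penetration(task, now):
    """Time elapsed since the task's initiation as a percentage of its
    defined time period (how far the task has 'penetrated' its window)."""
    return 100.0 * (now - task["initiated"]) / task["period"]

def at_risk_task(tasks, now, percentile=25):
    """Take the subset of tasks with the lowest penetration values (the given
    percentile) and flag the one at the high end of that subset."""
    ranked = sorted(tasks, key=lambda t: penetration(t, now))
    cutoff = max(1, round(len(ranked) * percentile / 100))
    subset = ranked[:cutoff]
    return subset[-1]                      # highest penetration inside the subset

now = 100.0
tasks = [{"name": "T1", "initiated": 10, "period": 200},   # 45 %
         {"name": "T2", "initiated": 80, "period": 100},   # 20 %
         {"name": "T3", "initiated": 95, "period": 50},    # 10 %
         {"name": "T4", "initiated": 40, "period": 80}]    # 75 %
print(at_risk_task(tasks, now)["name"])    # task to mark visually on the board
```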

Подробнее
08-03-2018 дата публикации

SCHEDULING COMPUTER PROGRAM JOBS

Номер: US20180067767A1
Принадлежит:

A method and computer system for scheduling, for periodic execution, a program requiring a computer hardware resource for execution. A processor of the computer system receives a request to schedule the program for execution on a day at a specified time and periodically thereafter at the specified time, and in response, the processor determines if there was historical availability of the resource exceeding a predetermined availability threshold on the day at approximately the specified time to execute the program, and if so, schedule the program for execution on the day at the specified time and periodically thereafter, and if not, not schedule the program for execution on the day at the specified time periodically. In response to a determination of no historical availability of the resource at approximately the specified time, the processor automatically determines another time on the day during which there was historical availability of the resource.
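
An illustrative Python sketch of the historical-availability check, assuming a hypothetical history map keyed by (day, hour) and an 80 % availability threshold; the real system would draw on recorded resource usage.

```python
def historically_available(history, day, hour, threshold=0.8, window=1):
    """history[(day, hour)] -> observed availability of the resource (0..1).
    The requested hour qualifies if availability around it exceeds threshold."""
    hours = range(max(0, hour - window), min(24, hour + window + 1))
    samples = [history.get((day, h), 0.0) for h in hours]
    return min(samples) > threshold

def schedule(history, day, hour):
    """Accept the requested slot if the resource was historically free;
    otherwise fall back to another hour that day that was."""
    if historically_available(history, day, hour):
        return day, hour
    for alt in range(24):
        if historically_available(history, day, alt):
            return day, alt
    return None

history = {("Mon", h): 0.95 if h < 6 else 0.4 for h in range(24)}
print(schedule(history, "Mon", 14))   # -> ('Mon', 0): 14:00 was busy, 00:00 was free
```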

Подробнее
17-09-2020 дата публикации

CONTAINER DOCKERFILE AND CONTAINER MIRROR IMAGE QUICK GENERATION METHODS AND SYSTEMS

Номер: US20200293354A1
Принадлежит: GENETALKS BIO-TECH (CHANGSHA) CO., LTD.

The invention discloses quick generation methods and systems for a container Dockerfile and a container mirror image. The container Dockerfile quick generation method includes the steps of: for a to-be-packaged target application, running the target application under tracking execution and recording the operating system dependencies of the target application during the run; organizing and constructing the file list required for packaging the target application into a container mirror image; and, according to that file list, generating a Dockerfile and a container mirror image file creation directory used for packaging the target application into the container mirror image. Any target application can be automatically packaged by the invention into a container; the construction of an executable minimal environmental closure of the target application is finished; the packaged container is smaller than a manually made ...

Подробнее
17-09-2020 дата публикации

MEMORY SYSTEM FOR MEMORY SHARING AND DATA PROCESSING SYSTEM INCLUDING THE SAME

Номер: US20200293451A1
Принадлежит: SK hynix Inc.

A data processing system includes a host processor; a processor suitable for processing a task instructed by the host processor; a memory, shared by the host processor and the processor, suitable for storing data processed by each of them; and a memory controller suitable for checking whether the stored data processed by the host processor and the processor is reused, and for sorting and managing the stored data as first data and second data based on the check result.

Подробнее
23-07-2020 дата публикации

SYSTEM AND METHOD TO IMPLEMENT AUTOMATED APPLICATION CONSISTENT VIRTUAL MACHINE IMAGE BACKUP

Номер: US20200233750A1
Принадлежит:

A method for performing backup operations includes selecting an application executing on a virtual machine (VM) to quiesce, generating, using a pre-snapshot template for the application, a pre-snapshot script for the application, generating a snapshot of the virtual machine after the pre-snapshot script has executed on the VM, and initiating a backup operation for the VM using the snapshot.

Подробнее
06-02-2018 дата публикации

Graphical user interface for managing virtual machines

Номер: US0009886323B2

A graphical user interface (GUI) for managing virtual machines (VMs) that are running in one or more hosts provides a search interface that is intuitive and presents search results in a tree structure that lists or marks items that meet user-designated search criteria. User-designated search criteria include favorite VMs, powered-on VMs, VMs running in a specified host, and text-based search criteria. Both VMs that are running locally in a local host and VMs that are running remotely in a remote host are listed so long as they meet the user-designated search criteria and thus can be managed using the GUI.

Подробнее
01-10-2020 дата публикации

REAL-TIME REPLICATING GARBAGE COLLECTION

Номер: US20200310963A1
Принадлежит:

A method and a system for garbage collection on a system. The method includes initiating a garbage collection process on a system by a garbage collector. The garbage collector includes one or more garbage collector threads. The method also includes marking a plurality of referenced objects using the garbage collector threads and one or more application threads during a preemption point. The method includes replicating the referenced objects using the garbage collector threads and marking for replication any newly discovered referenced objects found by scanning the application thread stack from a low-water mark. The method also includes replicating the newly discovered referenced objects and overwriting any reference to the old memory location.

Подробнее
12-04-2012 дата публикации

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM

Номер: US20120086964A1

Disclosed is an image processing apparatus, comprising: a control section including a plurality of operation units; and a nonvolatile storage section to store a program to execute analysis processing and rendering processing under control of the control section, the analysis processing analyzing page description language format data to generate intermediate language format data band by band, and the rendering processing generating rendering data based on the generated intermediate language format data, wherein the control section assigns the operation units the analysis and rendering processing to be executed, based on the stored program, wherein at least one of the operation units is capable of executing both the analysis and rendering processing, and wherein, when such an operation unit is executing one of the analysis and rendering processing and an event occurs which suspends the processing being executed, the operation unit executes the other processing.

Подробнее
12-06-2012 дата публикации

Detecting the starting and ending of a task when thread pooling is employed

Номер: US0008201176B2

Starting and ending of a task is detected, where thread pooling is employed. Threads performing a wait operation on a given object are monitored, and threads performing a notify/notify-all operation on the given object are monitored. A labeled directed graph is constructed. Each node of the graph corresponds to one of the threads. Each edge of the graph has a label and corresponds to performance of the wait or notify/notify-all operation. An identifier of the given object is a label of a number of the edges. A set of nodes is selected, each of which has an edge with the same label. The threads of these nodes are worker threads of a thread pool. The threads of the nodes that are connected to the set of nodes are master threads. An object having an identifier serving as the label of the edges to the set of nodes is a monitoring mechanism.
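
As a rough illustration of the wait/notify graph idea, the Python sketch below records labelled edges per synchronization object and classifies worker and master threads; the event-tuple format is an assumption.

```python
from collections import defaultdict

def classify_threads(events):
    """events: (thread, op, object_id) tuples where op is 'wait' or 'notify'.
    Threads that wait on a common object (several edges sharing that object's
    label) are worker threads of a pool; threads notifying the same object are
    master threads, and the object itself is the pool's monitoring mechanism."""
    waiters = defaultdict(set)    # object_id -> threads waiting on it
    notifiers = defaultdict(set)  # object_id -> threads notifying it
    for thread, op, obj in events:
        (waiters if op == "wait" else notifiers)[obj].add(thread)
    pools = {}
    for obj, ws in waiters.items():
        if len(ws) > 1:                      # several edges share the label obj
            pools[obj] = {"workers": ws, "masters": notifiers.get(obj, set())}
    return pools

events = [("w1", "wait", "queue#7"), ("w2", "wait", "queue#7"),
          ("w3", "wait", "queue#7"), ("main", "notify", "queue#7"),
          ("logger", "wait", "log#1"), ("main", "notify", "log#1")]
print(classify_threads(events))
```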

Подробнее
12-01-2012 дата публикации

Methods for supporting users with task continuity and completion across devices and time

Номер: US20120011511A1
Принадлежит: Microsoft Corp

Concepts and technologies are described herein for providing task continuity and supporting task completion across devices and time. A task management application is configured to monitor one or more interactions between a user and a device. The interactions can include the use of the device, the use of one or more applications, and/or other tasks, subtasks, or other operations. Predictive models constructed from data or logical models can be used to predict the attention resources available or allocated to a task or subtask as well as the attention and affordances available within a context for addressing the task and these inferences can be used to mark or route the task for later reminding and display. In some embodiments, the task management application is configured to remind or execute a follow-up action when a session is resumed. Embodiments include providing users with easy to use gestures and mechanisms for providing input about desired follow up on the same or other devices.

Подробнее
26-01-2012 дата публикации

Rejuvenation processing device, rejuvenation processing system, computer program, and data processing method

Номер: US20120023495A1
Автор: Fumio Machida
Принадлежит: NEC Corp

In a rejuvenation processing device (1), at least one host machine (3) is selected as an object to be rejuvenated from among each of the host machines (3). At least one virtual machine (302) is selected from among the virtual machines (302) operating in the host machine (3) which is not selected as an object to be rejuvenated. The operation of the selected virtual machine (302) is stopped, and the virtual machine (302) operating in the host machine (3) selected as the object to be rejuvenated is migrated to the host machine (3) in which the virtual machine (302) operates. The host machine (3) selected as the object to be rejuvenated is rejuvenated. Thereby, it is possible to provide a rejuvenation processing device capable of simultaneously rejuvenating the host machines and the virtual machines necessary to be rejuvenated, while continuously operating the host machines and the virtual machines which are not necessary to be rejuvenated.

Подробнее
26-01-2012 дата публикации

Maintaining Data States Upon Forced Exit

Номер: US20120023506A1
Принадлежит: Apple Inc

Methods, program products, and systems of maintaining data states upon forced exit are disclosed. In one aspect, an application program executing on the mobile device can maintain a connection to a remote data store and retrieve and cache data from the data store. When the mobile device receives an event that forces the application program to terminate, the mobile device can provide a time window in which the mobile device can perform various state preservation actions. During the time window, the mobile device can store data states, including states of the connection and states of the cached data. When the application program is re-launched, the mobile device can use the stored data states to restore a connection and a displayed view.

Подробнее
15-03-2012 дата публикации

On demand virtual machine image streaming

Номер: US20120066677A1
Автор: Chunqiang Tang
Принадлежит: International Business Machines Corp

On demand image streaming (ODS), in one aspect, may perform both copy-on-write and copy-on-read to gradually bring data on remote storage server to a host's local disk. Prefetching may be performed during the time the resources are otherwise idle to bring in data from the remote storage server to the host's local disk. A new image format and the corresponding block device driver for a hypervisor or the like may be also provided. ODS' image format may include a header and a bitmap that indicates whether the data sectors are on local disk or remote storage server, and an image content, for instance, stored in raw format.

Подробнее
22-03-2012 дата публикации

Scaleable Status Tracking Of Multiple Assist Hardware Threads

Номер: US20120072707A1
Принадлежит: International Business Machines Corp

A processor includes an initiating hardware thread, which initiates a first assist hardware thread to execute a first code segment. Next, the initiating hardware thread sets an assist thread executing indicator in response to initiating the first assist hardware thread. The set assist thread executing indicator indicates whether assist hardware threads are executing. A second assist hardware thread initiates and begins executing a second code segment. In turn, the initiating hardware thread detects a change in the assist thread executing indicator, which signifies that both the first assist hardware thread and the second assist hardware thread terminated. As such, the initiating hardware thread evaluates assist hardware thread results in response to both of the assist hardware threads terminating.

Подробнее
29-03-2012 дата публикации

Application migration and power consumption optimization in partitioned computer system

Номер: US20120079227A1
Автор: Tomohiko Suzuki
Принадлежит: Individual

A storage device including a migration source logical volume of an application copies data stored in the logical volume into a migration destination logical volume of the application. After the copy process is started, the storage device stores data written into the migration source logical volume as differential data without storing the data into the migration source logical volume. When the copy process is completed for the data stored in the migration source logical volume, a management computer starts copying the differential data. In the time interval after the copy of the data stored in the migration source logical volume has completed but before the copy of the differential data completes, the computer that is the migration destination of the application is turned on, thereby reducing power consumption at the time of application migration.

Подробнее
03-05-2012 дата публикации

Dynamic parallel looping in process runtime

Номер: US20120110583A1
Принадлежит: Individual

Systems and methods for dynamic parallel looping in process runtime environment are described herein. A currently processed process-flow instance of a business process reaches a dynamic loop activity including a repetitive task to be executed with each loop cycle. A predefined expression is evaluated on top of the current data context of the process-flow instance to discover a number of loop cycles for execution within the dynamic loop activity. A number of parallel activities corresponding to the repetitive task recurrences are instantiated and executed in parallel. The results of the parallel activities are coordinated to confirm that the dynamic loop activity is completed.
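
A hedged Python sketch of a dynamic loop activity: an expression evaluated on the current data context yields the cycle count, and that many copies of the repetitive task run in parallel; the process-instance fields are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def dynamic_loop_activity(data_context, cycle_count_expr, repetitive_task):
    """Evaluate the configured expression against the current data context to
    discover the number of loop cycles, run that many copies of the repetitive
    task in parallel, and coordinate (collect) their results."""
    cycles = cycle_count_expr(data_context)          # e.g. number of line items
    with ThreadPoolExecutor(max_workers=cycles) as pool:
        futures = [pool.submit(repetitive_task, data_context, i)
                   for i in range(cycles)]
        results = [f.result() for f in futures]      # dynamic loop is complete
    return results

process_instance = {"order_id": 42, "line_items": ["cpu", "ram", "ssd"]}
approve = lambda ctx, i: f"approved {ctx['line_items'][i]} of order {ctx['order_id']}"
print(dynamic_loop_activity(process_instance,
                            lambda ctx: len(ctx["line_items"]),
                            approve))
```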

Подробнее
31-05-2012 дата публикации

Systems and methods for reclassifying virtual machines to target virtual machines or appliances based on code analysis in a cloud environment

Номер: US20120136989A1
Принадлежит: Red Hat Inc

Embodiments relate to systems and methods for reclassifying a set of virtual machines in a cloud-based network. The systems and methods can analyze virtual machine data to determine performance metrics associated with the set of virtual machines, as well as target data to determine a set of target machines to which the set of virtual machines can be reassigned or reclassified. In embodiments, benefits of reassigning any of the set of virtual machines to any of the set of target virtual machines can be determined. Based on the benefits, the systems and methods can reassign or reclassify appropriate virtual machines to appropriate target virtual machines.

Подробнее
07-06-2012 дата публикации

Controlling runtime execution from a host to conserve resources

Номер: US20120139929A1
Принадлежит: Microsoft Corp

A runtime management system is described herein that allows a hosting layer to dynamically control an underlying runtime to selectively turn on and off various subsystems of the runtime to save power and extend battery life of devices on which the system operates. The hosting layer has information about usage of the runtime that is not available within the runtime, and can do a more effective job of disabling parts of the runtime that will not be needed without negatively affecting application performance or device responsiveness. The runtime management system includes a protocol of communication between arbitrary hosts and underlying platforms to expose a set of options to allow the host to selectively turn parts of a runtime on and off depending on varying environmental pressures. Thus, the runtime management system provides more effective use of potentially scarce power resources available on mobile platforms.

Подробнее
14-06-2012 дата публикации

Addressing system degradation by application disabling

Номер: US20120150785A1
Автор: Keshava Subramanya
Принадлежит: Microsoft Corp

In some embodiments, a routine is executed that identifies one or more applications that consume resources of the computer without a benefit that justifies such consumption, with the routine comprising evaluating at least some data from a source external to the computer, and an ability of the one or more identified applications to start on the computer absent user input requesting or authorizing use of such an application is disabled. In some embodiments, the external source of data may, for example, comprise a remote database identifying potentially undesirable applications and/or a remote software reputation service. In some embodiments, a routine is executed that identifies one or more applications that consume resources of the computer without a benefit that justifies such consumption and that also identifies one or more resources or utilities that utilize the one or more applications. An ability of the one or more identified applications to start on the computer absent user input requesting or authorizing use of such an application is disabled, and an ability of the one or more resources or utilities to attempt to utilize the one or more applications absent user input requesting or authorizing such an attempt is also disabled.

Подробнее
28-06-2012 дата публикации

Method and manager physical machine for virtual machine consolidation

Номер: US20120166644A1

A method and a manager physical machine (PM) for virtual machine (VM) consolidation are provided. The method is performed by the manager PM. A network connects the manager PM and a plurality of server PMs. A plurality of VMs is running on the server PMs. The method includes the following steps. The manager PM classifies the server PMs into redundant PMs and surviving PMs. The manager PM determines migration paths of the VMs running on the redundant PMs to the surviving PMs. The manager PM determines a parallel migration sequence of the VMs running on the redundant PMs based on the migration paths. The manager PM migrates the VMs running on the redundant PMs to the surviving PMs in parallel according to the parallel migration sequence.
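
For illustration only, a compact Python sketch of the consolidation steps described above (classify PMs, build migration paths, group them into a parallel sequence); the greedy packing and the capacity figures are assumptions, not the patented algorithm.

```python
def plan_consolidation(load_per_pm, capacity=1.0):
    """Classify physical machines (PMs) into surviving and redundant PMs,
    assign every VM on a redundant PM a surviving destination (migration
    paths), and group the moves into a parallel migration sequence in which
    no PM appears as source or destination twice within one round."""
    total_load = sum(sum(vms.values()) for vms in load_per_pm.values())
    survivors, packed = [], 0.0
    for pm, vms in sorted(load_per_pm.items(), key=lambda kv: -sum(kv[1].values())):
        if survivors and packed >= total_load:
            break
        survivors.append(pm)
        packed += capacity
    redundant = [pm for pm in load_per_pm if pm not in survivors]
    free = {pm: capacity - sum(load_per_pm[pm].values()) for pm in survivors}
    paths = []                                     # (vm, source, destination)
    for src in redundant:
        for vm, load in load_per_pm[src].items():
            dst = max(free, key=free.get)          # survivor with most headroom
            free[dst] -= load
            paths.append((vm, src, dst))
    rounds = []                                    # parallel migration sequence
    busy = set()
    for vm, src, dst in paths:
        if not rounds or src in busy or dst in busy:
            rounds.append([])
            busy = set()
        rounds[-1].append((vm, src, dst))
        busy.update((src, dst))
    return survivors, redundant, rounds

load = {"pm1": {"vmA": 0.3, "vmB": 0.2}, "pm2": {"vmC": 0.2}, "pm3": {"vmD": 0.1}}
print(plan_consolidation(load))
```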

Подробнее
19-07-2012 дата публикации

Computer system and migration method of virtual machine

Номер: US20120185856A1
Принадлежит: NEC Corp

A computer system of the present invention is provided with an open flow controller 3 and a switch 4i. The switch 4i notifies the open flow controller 3 of a MAC address contained in packet data when packet data from a virtual machine whose migration has completed does not match any rule of the flows set in the switch itself. The open flow controller 3 generates a communication flow for the migration destination VM according to the notified MAC address and sets it to the switch 4i. The switch 4i then transfers packet data for that virtual machine which matches the rule 444 of the communication flow for the migration destination VM to the migration destination virtual machine, based on the action 445 of that communication flow.

Подробнее
30-08-2012 дата публикации

Mechanism for Virtual Machine Resource Reduction for Live Migration Optimization

Номер: US20120221710A1
Автор: Michael S. Tsirkin
Принадлежит: Red Hat Israel Ltd

A mechanism for virtual machine resource reduction for live migration optimization is disclosed. A method of the invention includes monitoring a rate of state change of a virtual machine (VM) undergoing a live migration, determining that the rate of state change of the VM exceeds a rate of state transfer of the VM during the live migration process, and adjusting one or more resources of the VM to decrease the rate of state change of the VM to be less than the rate of state transfer of the VM.
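
A minimal Python sketch of the throttling idea described above, under the simplifying assumption that a VM's dirty-page rate scales with its CPU share; the field names and step sizes are illustrative.

```python
def throttle_for_migration(vm, transfer_rate_mbps, step=0.1, floor=0.2):
    """While the VM dirties memory faster than the migration can copy it,
    progressively reduce its CPU share (and hence its rate of state change)
    until the dirty rate drops below the transfer rate."""
    while vm["dirty_rate_mbps"] > transfer_rate_mbps and vm["cpu_share"] > floor:
        vm["cpu_share"] = max(floor, vm["cpu_share"] - step)
        # Crude model: dirty rate scales roughly with the CPU share granted.
        vm["dirty_rate_mbps"] = vm["baseline_dirty_mbps"] * vm["cpu_share"]
    return vm

vm = {"cpu_share": 1.0, "baseline_dirty_mbps": 400.0, "dirty_rate_mbps": 400.0}
print(throttle_for_migration(vm, transfer_rate_mbps=150.0))
```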

Подробнее
04-10-2012 дата публикации

Thread folding tool

Номер: US20120254880A1
Автор: Kirk J. Krauss
Принадлежит: International Business Machines Corp

A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. Under control of a supervisor thread, a plurality of the identified threads can be folded together to be executed as a folded thread. The execution of the folded thread can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the identified threads can be presented in a user interface that is presented on a display.

Подробнее
01-11-2012 дата публикации

Lock based moving of threads in a shared processor partitioning environment

Номер: US20120278809A1
Принадлежит: International Business Machines Corp

The present invention provides a computer implemented method and apparatus to assign software threads to a common virtual processor of a data processing system having multiple virtual processors. A data processing system detects cooperation between a first thread and a second thread with respect to a lock associated with a resource of the data processing system. Responsive to detecting cooperation, the data processing system assigns the first thread to the common virtual processor. The data processing system moves the second thread to the common virtual processor, whereby a sleep time associated with the lock experienced by the first thread and the second thread is reduced below a sleep time experienced prior to the detecting cooperation step.

Подробнее
22-11-2012 дата публикации

Cloud computing roaming services

Номер: US20120297071A1
Принадлежит: International Business Machines Corp

The present invention provides approaches for Cloud roaming services. It allows Cloud services to be offered to requestors while abstracting them from the underlying Cloud provider used to fulfill those services. The present invention provides the ability for Cloud providers to be dynamically associated with currently available Cloud services for requestors. The system and method describe two scenarios, a pull model and a push model, to align requestor-required services with currently available services from Cloud providers. The requestor has a profile determining a prioritized list of Cloud providers that may be used, or the primary Cloud provider may outsource services to a partner cloud. The algorithm shows how the provider decides which requestors have access to which services, based on current utilization and forecast. Also, location, roaming, network bandwidth and requestor processing capabilities may be sent to the provider of Cloud services to determine whether a change is needed.

Подробнее
13-12-2012 дата публикации

Multi-core processor system, computer product, and interrupt method

Номер: US20120317403A1
Принадлежит: Fujitsu Ltd

A multi-core processor system has a first core executing an OS and multiple applications, and a second core to which a first thread of the applications is assigned. The multi-core processor system includes a processor configured to: receive from the first core an interrupt signal specifying an event that has occurred with an application among the applications; determine whether the event specified by the received interrupt signal is either a start event for exclusion or a start event for synchronization for the first thread currently under execution by the second core; save the first thread currently under execution from the second core upon determining the specified event to be such a start event; and assign a second thread, different from the saved first thread and taken from a group of execution-awaiting threads of the applications, as the thread to be executed by the second core.

Подробнее
27-12-2012 дата публикации

Migrating business process instances

Номер: US20120330703A1
Принадлежит: International Business Machines Corp

Migration of a business process instance derived from a business process model having compensation logic is provided. A new business process version of the business process model is modeled. The business process model is statically analyzed to create a static process control flow. A potential compensation control flow is derived based on the business process instance. Changes between the new business process version and a previous business process version of the business process model are identified. The identified changes are walked to separate and group changes related to the compensation logic and changes related to a normal control flow of the business process model into change groups. The business process instance is migrated based on migration conditions which are determined based on the change groups.

Подробнее
03-01-2013 дата публикации

Controlling network utilization

Номер: US20130007254A1
Автор: Robert Fries
Принадлежит: Microsoft Corp

Network utilization of an arbitrary application is controlled by tracking network usage statistics of virtual machines (VMs), including at least VMs hosting parts of the application. For network utilization control, VMs serve as network proxies for elements of the application. A specification for a network requirement of the application is evaluated against the network usage statistics. When a network capacity requirement to/from/through an element of the application is not satisfied, one or more VMs are adapted to satisfy the requirement. For example, a VM may be migrated from a host or network location that has excess network bandwidth. Or, for example, network bandwidth availability for an under-requirement VM may be increased and bandwidth availability for a VM at an appropriate host or network location may be decreased. Thus, application-level communication path requirements may be satisfied using VM adaptations.

Подробнее
24-01-2013 дата публикации

Apparatus and Method for Handling Tasks Within a Computing Device

Номер: US20130024818A1
Принадлежит: Nokia Oyj

A task manager for a computing device which provides a user interface to the currently running tasks on the computing device. The user interface comprises a representation for each task which is a reduced size version of the display which would be visible to the user if that task were in the foreground. Preferably, the task manager sets out the representations so that no more than a maximum number of representations is visible at one time.

Подробнее
31-01-2013 дата публикации

Virtual Machines for Aircraft Network Data Processing Systems

Номер: US20130031543A1
Автор: Ian Gareth Angus
Принадлежит: Boeing Co

A method and apparatus are provided for operating a network data processing system on an aircraft. A number of operations are performed in a virtual machine on the aircraft. The virtual machine runs on a processor unit in the network data processing system on the aircraft to create a simulated computer environment. The virtual machine accesses resources of the processor unit for performing the number of operations using a host operating system on the processor unit. A current state of the aircraft is identified by the network data processing system. Running of the virtual machine is managed based on the current state of the aircraft and a policy for managing the virtual machine for different states of the aircraft.

Подробнее
28-02-2013 дата публикации

Communication terminal and application control method

Номер: US20130053015A1
Автор: Ryo Nakajima
Принадлежит: NTT DOCOMO INC

A communication terminal includes: an application control unit that controls execution of an application; a communication control unit that controls a communication unit to establish communication with a communication network; and a suspension control unit that, after detecting a terminal operation that becomes a factor in suspending the application, transmits to the application control unit a suspend command to suspend the running application. In response to the detected terminal operation, the suspension control unit can select a keep-alive state in which the suspend command is transmitted to the application control unit without the communication unit disconnecting the connection with the communication network. The communication terminal thus prevents unnecessary disconnection of communication caused by a suspend operation, improving the convenience and comfort of application operation.

Подробнее
28-02-2013 дата публикации

Method for live migration of virtual machines

Номер: US20130054813A1
Принадлежит: Radware Ltd

A method for an assisted live migration of virtual machines is disclosed. The method comprises receiving an assist request for assisting in a migration of a virtual machine, wherein the assist request includes at least a comfort load level; determining a current load of the virtual machine to be migrated; comparing the current load to the comfort load level; reducing a load on the virtual machine to be migrated until the current load is lower than the comfort load level; and initiating a live migration of the virtual machine to be migrated when the current load is lower than the comfort load level.
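
An illustrative Python sketch of the comfort-load handshake described above; shed_connections and migrate are hypothetical callbacks standing in for whatever load-reduction and migration mechanisms the real system uses.

```python
import time

def assisted_live_migration(vm, comfort_load, shed_connections, migrate):
    """Handle an assist request: keep reducing the VM's load (e.g. by draining
    connections to other instances) until the current load falls below the
    requested comfort load level, then start the live migration."""
    while vm["current_load"] > comfort_load:
        shed_connections(vm)              # reduce load on the VM to be migrated
        time.sleep(0.01)                  # let the load measurement settle
    return migrate(vm)

vm = {"name": "web-3", "current_load": 0.92}
shed = lambda v: v.update(current_load=round(v["current_load"] - 0.1, 2))
migrate = lambda v: f"live migration of {v['name']} started at load {v['current_load']}"
print(assisted_live_migration(vm, comfort_load=0.6,
                              shed_connections=shed, migrate=migrate))
```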

Подробнее
28-02-2013 дата публикации

Cancellable Command Application Programming Interface (API) Framework

Номер: US20130055266A1
Принадлежит: Microsoft Corp

Embodiments are provided that include the use of a cancelable command application programming interface (API) framework that provides cooperative multitasking for synchronous and asynchronous operations based in part on a command timing sequence and a cancelable command API definition. A method of an embodiment enables a user or programmer to use a cancelable command API definition as part of implementing a responsive application interface using a command timing sequence to control execution of active tasks. A cancelable command API framework of an embodiment includes a command block including a command function, a task engine to monitor the command function, and a timer component to control execution of asynchronous and synchronous tasks based in part on first and second control timing intervals associated with a command timing sequence. Other embodiments are also disclosed.

Подробнее
14-03-2013 дата публикации

Managing thread execution in a non-stop debugging environment

Номер: US20130067438A1
Автор: Cary L. Bates
Принадлежит: International Business Machines Corp

Managing thread execution in a non-stop debugging environment that includes a debugger configured to debug a multi-threaded debuggee, where encountering an event by one of the threads stops execution of only the one thread without concurrently stopping execution of other threads, and managing thread execution includes: setting, by the debugger responsive to one or more user requests, one or more threads of the debuggee for auto-resumption; encountering, by a thread of the debuggee, an event stopping execution of the thread; determining whether the thread is set for auto-resumption; if the thread is set for auto-resumption, resuming, by the debugger, execution of the thread automatically without user interaction; and if the thread is not set for auto-resumption, processing, by the debugger, the event stopping execution of the thread.

Подробнее
21-03-2013 дата публикации

Image forming apparatus and method of upgrading firmware

Номер: US20130074060A1
Принадлежит: SAMSUNG ELECTRONICS CO LTD

An image forming apparatus is provided. The image forming apparatus includes a first storage unit to store firmware, a control unit to perform a job by loading the stored firmware, a communication interface unit to receive new firmware, and an upgrade unit to install the received new firmware in the first storage unit, where the control unit, upon receiving a new job request while the new firmware is being installed, controls the upgrade unit to suspend installation of the new firmware and perform the requested new job.

Подробнее
09-05-2013 дата публикации

Information processing apparatus, information processing method, and computer product

Номер: US20130117762A1
Принадлежит: Fujitsu Ltd

An information processing apparatus includes a processor that is configured to detect a changeover request in a first mode in which a first OS executes a process that includes a second OS different from the first OS; and change the first mode over to a second mode in which the second OS executes a process that includes the first OS, upon detecting the changeover request.

Подробнее
30-05-2013 дата публикации

Scaleable Status Tracking Of Multiple Assist Hardware Threads

Номер: US20130139168A1
Принадлежит: International Business Machines Corp

A processor includes an initiating hardware thread, which initiates a first assist hardware thread to execute a first code segment. Next, the initiating hardware thread sets an assist thread executing indicator in response to initiating the first assist hardware thread. The set assist thread executing indicator indicates whether assist hardware threads are executing. A second assist hardware thread initiates and begins executing a second code segment. In turn, the initiating hardware thread detects a change in the assist thread executing indicator, which signifies that both the first assist hardware thread and the second assist hardware thread terminated. As such, the initiating hardware thread evaluates assist hardware thread results in response to both of the assist hardware threads terminating.

Подробнее
13-06-2013 дата публикации

Virtual computer system and control method of migrating virtual computer

Номер: US20130152083A1
Принадлежит: HITACHI LTD

A live migration in a virtual computer system. On a source physical computer, the control information area of the source logical FC-HBA (managed by an OS) is copied to the control information area of a dummy logical FC-HBA managed by a hypervisor. After an FC login to the dummy FC-HBA, an address conversion table is rewritten so that a host physical address for referring to the control information area of a logical HBA 1′ can be referred to using a guest logical address for referring to the control information area of the source FC-HBA. After the FC logout of the source FC-HBA, using a WWN of the FC used for the FC logout, a login to the destination logical FC-HBA is performed. Next, the OS on the source computer is taken over by the destination computer. Therefore, the disk accessed on the source computer can be accessed from the destination FC-HBA.

Подробнее
20-06-2013 дата публикации

Estimating migration costs for migrating logical partitions within a virtualized computing environment based on a migration cost history

Номер: US20130160007A1
Принадлежит: International Business Machines Corp

Responsive to a hypervisor determining that insufficient local resources are available for reservation to meet a performance parameter for at least one resource specified in a reservation request for a particular logical partition managed by the hypervisor in a host system, the hypervisor identifies another logical partition, managed by the hypervisor in the host system, that is assigned the at least one resource meeting the performance parameter specified in the reservation request. The hypervisor estimates a first cost of migrating the particular logical partition and a second cost of migrating the other logical partition to at least one other host system communicatively connected in a peer-to-peer network, based on at least one previously recorded cost, stored by the host system, of migrating a previous logical partition to the at least one other host system.

Подробнее
27-06-2013 дата публикации

Controlling multiple external device coupled to user equipment

Номер: US20130166785A1
Принадлежит: KT Corp

Described embodiments provide a method and user equipment for controlling a plurality of coupled external devices. The method may include determining whether one of applications installed in user equipment is activated upon receipt of a user input when the user equipment is coupled to a plurality of external devices, selecting one of the coupled external devices as a target external device to be mapped, when the application is determined as being activated, and mapping the selected coupled external device with the activated application and establishing a signal route between the user equipment and the selected coupled external device in association with the activated application.

Подробнее
15-08-2013 дата публикации

Virtual Machine Splitting Method and System

Номер: US20130212281A1
Принадлежит: Telefonaktiebolaget LM Ericsson AB

A system, computer readable medium and method for splitting a virtual machine that runs on a first physical machine that includes at least a processor and a memory. The method includes receiving instructions for splitting in two or more groups plural processes running on the virtual machine; grouping the plural processes in the two or more groups in the virtual machine; splitting the virtual machine into two or more new virtual machines based on an underlying virtualization engine running on the first physical machine; and maintaining active in each new virtual machine those processes that belong to a corresponding group of the two or more groups.

Подробнее
12-09-2013 дата публикации

COMPUTER SYSTEM, MIGRATION METHOD, AND MANAGEMENT SERVER

Номер: US20130238804A1
Принадлежит: Hitachi, Ltd.

A computer system, comprising: a plurality of physical computers; and a management server for managing the plurality of physical computers, wherein at least one virtual computer operates on each of the plurality of physical computers, wherein the at least one virtual computer executes at least one piece of service processing including at least one piece of sub processing, wherein the management server is configured to calculate a required resource amount which is a resource amount of a computer resource required for the virtual computer subject to the migration based on used resource amount for the each of the plurality of the pieces of sub processing; search for a physical computer of a migration destination; and migrate the virtual computer subject to the migration to the physical computer of the migration destination.

1. A computer system, comprising: a plurality of physical computers; and a management server for managing the plurality of physical computers, wherein at least one virtual computer operates on each of the plurality of physical computers, which is assigned an assigned resource generated by dividing a computer resource included in the each of the plurality of physical computers into a plurality of parts, wherein the at least one virtual computer executes at least one piece of service processing including at least one piece of sub processing, wherein the each of the plurality of physical computers includes: a first processor; a first main storage medium coupled to the first processor; a sub storage medium coupled to the first processor; a first network interface coupled to the first processor; a virtual management module for managing the at least one virtual computer; and a used resource amount obtaining module for obtaining a used resource amount which is information on a used amount of the assigned resource used by executing the at least one piece of service processing, wherein the management server includes: a second processor; a second storage medium ...

Подробнее
26-09-2013 дата публикации

SYSTEM AND METHOD FOR SUPPORTING LIVE MIGRATION OF VIRTUAL MACHINES BASED ON AN EXTENDED HOST CHANNEL ADAPTOR (HCA) MODEL

Номер: US20130254404A1
Принадлежит: ORACLE INTERNATIONAL CORPORATION

A system and method can support virtual machine live migration in a network. A fabric adaptor can be associated with a plurality of virtual host channel adapters (vHCAs), and wherein each said virtual host channel adapter (vHCA) is associated with a separate queue pair (QP) space. At least one virtual machine operates to perform a live migration from a first host to a second host, wherein said at least one virtual machine is attached with a said virtual host channel adapter (vHCA) that is associated with a queue pair (QP) in a said queue pair (QP) space, and wherein said queue pair (QP) operates to signal a peer QP about the live migration and provide said peer QP with address information after migration.

1. A system for supporting virtual machine live migration in a network, comprising: one or more microprocessors; a fabric adaptor associated with on the one or more microprocessors, wherein the fabric adaptor is associated with a plurality of virtual host channel adapters (HCAs), and wherein each said virtual host channel adapter (vHCA) is associated with a separate queue pair (QP) space; at least one virtual machine operates to perform a live migration from a first host to a second host, wherein said at least one virtual machine is attached with a said virtual host channel adapter (vHCA) that is associated with a queue pair (QP) in a said queue pair (QP) space, and wherein said queue pair (QP) operates to signal a peer QP about the live migration and provide said peer QP with address information after migration.

2. The system according to claim 1, further comprising: at least one virtual machine monitor that manages the one or more virtual machines, wherein each said virtual machine is associated with a private virtual address space.

3. The system according to claim 1, wherein: the fabric adaptor implements at least one of a virtual switch model and an extended host channel adaptor (HCA) model.

4. The system according to claim 1, wherein: a suspended state can be ...

Подробнее
03-10-2013 дата публикации

Emulating a data center network on a single physical host with support for virtual machine mobility

Номер: US20130263118A1
Принадлежит: International Business Machines Corp

Methods and arrangements for emulating a data center network. A first end host and a second end host are provided. A base hypervisor is associated with each of the first and second end hosts, and the first and second end hosts are interconnected. A virtual hypervisor is associated with at least one virtual machine running on at least one of the base hypervisors, and virtual hypervisors are interconnected within one of the first and second end hosts. A virtual machine is nested within the virtual hypervisor, and the virtual machine is migrated from one virtual hypervisor to a destination virtual hypervisor to further be nested within the destination virtual hypervisor.

Подробнее
03-10-2013 дата публикации

Method and system for tracking data correspondences

Номер: US20130263132A1
Принадлежит: VMware LLC

One embodiment is a method for tracking data correspondences in a computer system including a host hardware platform, virtualization software running on the host hardware platform, and a virtual machine running on the virtualization software, the method including: (a) monitoring one or more data movement operations of the computer system; and (b) storing information regarding the one or more data movement operations in a data correspondence structure, which information provides a correspondence between data before one of the one or more data movement operations and data after the one of the one or more data movement operations. The “monitoring” may comprise monitoring data movement at one or more of an interface between the host hardware platform and the virtualization software, and an interface between the virtual machine and the virtualization software

Подробнее
10-10-2013 дата публикации

System and method for migrating application virtual machines in a network environment

Номер: US20130268643A1
Принадлежит: Cisco Technology Inc

A method includes managing a virtual machine (VM) in a cloud extension, where the VM is part of a distributed virtual switch (DVS) of an enterprise network, abstracting an interface that is transparent to a cloud infrastructure of the cloud extension, and intercepting network traffic from the VM, where the VM can communicate securely with the enterprise network. The cloud extension comprises a nested VM container (NVC) that includes an emulator configured to enable abstracting the interface, and dual transmission control protocol/Internet Protocol stacks for supporting a first routing domain for communication with the cloud extension, and a second routing domain for communication with the enterprise network. The NVC may be agnostic with respect to operating systems running on the VM. The method further includes migrating the VM from the enterprise network to the cloud extension through suitable methods.

Подробнее
10-10-2013 дата публикации

Method for managing services on a network

Номер: US20130268648A1
Принадлежит: Thales SA

The invention relates to a method for managing services on a network, comprising: at least two interconnected computer sites, each of which is capable of implementing at least one service that can be accessed from the network; at least one service implemented on a network site; a means for transferring a service from an initial site to a separate destination site. Each is associated with security attributes and the method includes transferring at least one service from an initial site to a destination site of the network following a predetermined transfer sequence which depends on the security attributes.

Подробнее
24-10-2013 дата публикации

Remediating Resource Overload

Номер: US20130283266A1
Принадлежит: International Business Machines Corp

A method, an apparatus and an article of manufacture for remediating overload in an over-committed computing environment. The method includes measuring resource usage of each of multiple virtual machines on each of at least one hypervisor in a computing environment, upon detection of a resource overload on one of the at least one hypervisor, determining at least one operation that is to be taken for at least one of the multiple virtual machines on the hypervisor to remediate resource overload while increasing values of running virtual machines, and sending a command to the hypervisor to issue the at least one operation.

Подробнее
31-10-2013 дата публикации

Methods and Apparatus to Migrate Virtual Machines Between Distributive Computing Networks Across a Wide Area Network

Номер: US20130290468A1

Methods and apparatus to migrate virtual machines between distributive computing networks across a network are disclosed. A disclosed example method includes establishing a data link across a network between a first distributive computing network and a second distributive computing network, the first distributive computing network including a virtual machine operated by a first host communicatively coupled to a virtual private network via a first virtual local area network, communicatively coupling a second host included within the second distributive computing network to the virtual private network via a second virtual local area network, and migrating the virtual machine via the data link by transmitting a memory state of at least one application on the first host to the second host while the at least one application is operating.

Подробнее
21-11-2013 дата публикации

VIRTUAL MACHINE MIGRATION METHOD, SWITCH, AND VIRTUAL MACHINE SYSTEM

Номер: US20130311991A1
Принадлежит:

The present invention provides a virtual machine migration method, a switch, a virtual machine system. A switch receives a message sent by a server, where the message is used to enable the switch to discover a connected virtual machine interface; obtains, from the message, an identifier for indicating whether a virtual machine is migrated; and determines whether the virtual machine is a virtual machine migrated to the server according to the identifier indicating whether the virtual machine is migrated. According to the embodiments of the present invention, it may be determined whether an added virtual machine on a server is a newly created one or a migrated one.

1. A virtual machine migration method implemented by a switch, comprising: receiving a message sent by a server, wherein the message is used to enable the switch to discover a connected virtual machine interface; obtaining, from the message, an identifier for indicating whether a virtual machine is migrated; determining, according to the identifier, that the virtual machine is migrated to the server; and migrating a service corresponding to the virtual machine bound to a port of the switch.

2. The virtual machine migration method according to claim 1, wherein the message is a Virtual Station Interface Discovery and Configuration Protocol (VDP) message; a type length value (TLV) information string of the message comprises at least an identifier field indicating whether the virtual machine is migrated; or, a Reason field of the TLV information string of the message comprises at least one flag bit indicating whether the virtual machine is migrated.

3. The virtual machine migration method according to claim 2, wherein the step of migrating comprises: obtaining information from the message; wherein the information comprises at least one of media access control (MAC) information, virtual local area network (VLAN) information and virtual station interface (VSI) instance identifier information; generating a Dynamic Host ...

Подробнее
26-12-2013 дата публикации

Scheduling a processor to support efficient migration of a virtual machine

Номер: US20130346613A1
Принадлежит: VMware LLC

A virtualized computer system implements a process to migrate a virtual machine (VM) from a source host to a destination host. During this process, a processing unit at the source host, which is executing instructions of the VM, is scheduled so that the rate of modification of guest physical memory pages is reduced. The determination of when to schedule the processing unit in this manner may be made based on a current rate of modification of the pages, a transmission rate of guest physical memory pages from the source host to the destination host, or a prior VM migration performance.

Подробнее
26-12-2013 дата публикации

MANAGEMENT SERVER, AND VIRTUAL MACHINE MOVE CONTROL METHOD

Номер: US20130346973A1
Принадлежит: FUJITSU LIMITED

A program that performs virtual machine move control predicts a resource shortage, predicted to occur for a predetermined time period, of a physical server that includes multiple virtual machines and that is included in a management server; specifies a virtual machine, among the virtual machines included in the physical server for which the resource shortage is predicted, whose move to another physical server eliminates the resource shortage at the time point at which the predicted shortage occurs; and moves the specified virtual machine to the other physical server on the basis of the resource usage of the specified virtual machine for the predetermined time period and on the basis of a time point that is associated with the resource usage.

1. A computer readable storage medium having stored therein a program causing a computer to execute a process comprising: predicting a resource shortage, predicted to occur for a predetermined time period, of a physical machine that includes multiple virtual machines; specifying a virtual machine that eliminates the resource shortage by moving, at a time point at which the predicted resource shortage occurs, among the virtual machines included in the physical machine for which the resource shortage is predicted, to another physical machine; and moving the specified virtual machine to the other physical machine on the basis of the resource usage of the specified virtual machine for the predetermined time period and on the basis of a time point that is associated with the resource usage.

2. The program according to claim 1, wherein, when the resource shortage is eliminated by moving the multiple virtual machines, the specifying includes specifying a combination of the virtual machines, the specified combination being such that the number of virtual machines is the minimum.

3. The program according to claim 2, wherein, when multiple combinations are present, the ...

Подробнее
02-01-2014 дата публикации

Automatic transfer of workload configuration

Номер: US20140007092A1
Принадлежит: Microsoft Corp

The present invention extends to methods, systems, and computer program products for automatically transferring configuration of a virtual machine from one cluster to another cluster. The invention enables an administrator to transfer configuration of a virtual machine by simply specifying a virtual machine to be transferred. The invention then inspects the configuration of the virtual machine on the old cluster as well as the configuration of the old cluster, including the storage (e.g. virtual hard disk) used by the cluster, and then configures a new virtual machine on a new cluster accordingly to match the configuration of the old virtual machine. Similar techniques can also be applied to transfer configuration of an SMB file server.

Подробнее
02-01-2014 дата публикации

Method and Apparatus for Migrating Virtual Machine Parameters and Virtual Machine Server

Номер: US20140007100A1
Принадлежит: Huawei Technologies Co., Ltd

The present invention provides a method and an apparatus for migrating virtual machine parameters and a virtual machine server. The method includes receiving a virtual machine parameter migration message at a not-running stage of a virtual machine, where the virtual machine parameter migration message is used to cause an upstream network device of the virtual machine to migrate virtual machine parameters of the virtual machine at the not-running stage of the virtual machine, and the virtual machine parameter migration message includes an identifier of the virtual machine. The present invention may implement migration of the virtual machine parameters correctly and solve a service interruption problem in virtual machine migration.

1. A method for migrating virtual machine parameters, comprising: receiving a virtual machine parameter migration message at a not-running stage of a virtual machine, wherein the virtual machine parameter migration message is used to cause an upstream network device of the virtual machine to migrate virtual machine parameters of the virtual machine at the not-running stage of the virtual machine, and the virtual machine parameter migration message comprises an identifier of the virtual machine.

2. The method according to claim 1, wherein the receiving the virtual machine parameter migration message comprises: receiving, by a migration management apparatus, the virtual machine parameter migration message; and initiating, by the migration management apparatus, migration of virtual machine parameters at the not-running stage of the virtual machine.

3. The method according to claim 2, wherein the initiating, by the migration management apparatus, migration of virtual machine parameters at the not-running stage of the virtual machine comprises one of the following: after receiving the virtual machine parameter migration message, acquiring, by the migration management apparatus, virtual machine parameters of an upstream network ...

Подробнее
02-01-2014 дата публикации

METHOD, APPARATUS AND SYSTEM FOR RESOURCE MIGRATION

Номер: US20140007129A1
Принадлежит: Huawei Technologies Co., Ltd.

Embodiments of the present invention disclose a method and an apparatus for resource migration, related to the field of processors, to avoid cross-node memory access and cross-node cache access, save inter-node high-speed bandwidth, and improve system performance. The method of the present invention includes: when a node is removed, obtaining a process in the node; determining a destination node to which the process and the memory corresponding to the process are migrated, according to mapping between the process and the memory corresponding to the process, and affinity of the process; and migrate the process in the node and the memory corresponding to the process to the destination node. The embodiments of the present invention are mainly applied to a resource migration procedure. 1. A method for resource migration , the method comprising:when a node is removed, obtaining process information in the node, wherein the process information comprises mapping between a process and a memory corresponding to the process and affinity of the process;determining a destination node to which the process and the memory corresponding to the process are migrated, according to the mapping between the process and the memory corresponding to the process, and the affinity of the process; andmigrating the process in the node and the memory corresponding to the process to the destination node.2. The method for resource migration according to claim 1 , wherein the destination node is a to-be-immigrated node claim 1 , the node is a to-be-emigrated node claim 1 , and the to-be-immigrated node and the to-be-emigrated node are not the same node.3. The method for resource migration according to claim 1 , wherein determining a destination node to which the process and the memory corresponding to the process are migrated claim 1 , according to the mapping between the process and the memory corresponding to the process claim 1 , and the affinity of the process claim 1 , comprises:analyzing the ...
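
A minimal sketch of the destination-selection step, assuming simplified data structures: each process carries an ordered affinity list and a memory footprint, and the chosen destination is the first preferred surviving node with enough free memory.

    # Sketch under assumed data shapes; not the actual kernel-level mechanism from the patent.
    def choose_destination(process, surviving_nodes):
        """process: {"affinity": [node names, best first], "memory_mb": int}
        surviving_nodes: {node name: free memory in MB}
        """
        for node in process["affinity"]:
            if surviving_nodes.get(node, 0) >= process["memory_mb"]:
                return node
        # no preferred node fits: fall back to the node with the most free memory
        return max(surviving_nodes, key=surviving_nodes.get)

    def migrate_from(removed_node_processes, surviving_nodes):
        plan = {}
        for name, proc in removed_node_processes.items():
            dest = choose_destination(proc, surviving_nodes)
            surviving_nodes[dest] -= proc["memory_mb"]     # reserve memory on the destination
            plan[name] = dest
        return plan

    procs = {"db": {"affinity": ["node2", "node3"], "memory_mb": 4096},
             "web": {"affinity": ["node3"], "memory_mb": 1024}}
    print(migrate_from(procs, {"node2": 8192, "node3": 2048}))   # {'db': 'node2', 'web': 'node3'}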

Подробнее
09-01-2014 дата публикации

Thread folding tool

Номер: US20140013329A1
Автор: Kirk J. Krauss
Принадлежит: International Business Machines Corp

A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. Under control of a supervisor thread, a plurality of the identified threads can be folded together to be executed as a folded thread. The execution of the folded thread can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the identified threads can be presented in a user interface that is presented on a display.

Подробнее
16-01-2014 дата публикации

Cooling appliance rating aware data placement

Номер: US20140019784A1
Принадлежит: International Business Machines Corp

A dataset is identified as a heat-intensive dataset based, at least in part, on the dataset being related to heat generation at a source storage device exceeding a heat rise limit. The source storage device hosts the heat-intensive dataset and the heat-intensive dataset comprises non-executable data. A first cooling area of a plurality of cooling areas is selected to accommodate the heat generation based, at least in part, on cooling characteristics of a plurality of cooling appliances of the plurality of cooling areas. The source storage device is associated with a second cooling area. A target storage device associated with the first cooling area is determined. The heat-intensive dataset is moved from the source storage device to the target storage device.
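
The placement decision can be pictured with the toy routine below; the field names and the simple spare-cooling-capacity test are assumptions, not the patented method.

    # Illustrative only; the names and the capacity test are assumptions.
    def place_hot_dataset(heat_watts, cooling_areas):
        """cooling_areas: {area: {"spare_cooling_watts": int, "devices": [...]}}
        Return (area, device) able to absorb the dataset's heat generation."""
        for area, info in sorted(cooling_areas.items(),
                                 key=lambda kv: kv[1]["spare_cooling_watts"], reverse=True):
            if info["spare_cooling_watts"] >= heat_watts and info["devices"]:
                return area, info["devices"][0]
        raise RuntimeError("no cooling area can accommodate the dataset")

    areas = {"row-a": {"spare_cooling_watts": 40, "devices": ["disk-7"]},
             "row-b": {"spare_cooling_watts": 250, "devices": ["disk-12", "disk-13"]}}
    print(place_hot_dataset(120, areas))   # ('row-b', 'disk-12')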

Подробнее
16-01-2014 дата публикации

MIGRATION MANAGEMENT APPARATUS AND MIGRATION MANAGEMENT METHOD

Номер: US20140019974A1
Принадлежит: FUJITSU LIMITED

A migration management apparatus includes a first decision unit, a second decision unit, and a migration processing unit. The first decision unit simulates the migration of each virtual machine being a migration target to decide a migration destination. The second decision unit decides a migration mode of the virtual machine whose migration destination has been decided by the first decision unit based on the power status of the virtual machine. The migration processing unit, upon the migration destinations and migration modes of the virtual machines being the migration targets having been decided, migrates the virtual machines to the respective migration destinations decided by the first decision unit in the respective migration modes decided by the second decision unit. 1. A migration management apparatus comprising:a first decision unit that simulates the migration of each virtual machine being a migration target to decide a migration destination;a second decision unit that decides a migration mode of the virtual machine whose migration destination has been decided by the first decision unit based on the power status of the virtual machine; anda migration processing unit that, upon the migration destinations and migration modes of the virtual machines being the migration targets having been decided, migrates the virtual machines to the respective migration destinations decided by the first decision unit in the respective migration modes decided by the second decision unit.2. The migration management apparatus according to claim 1 , further comprising:an identification unit that identifies migration destination candidates of the virtual machines based on operating environments where the virtual machines run; anda determination unit that simulates the migration of the virtual machine for the migration destination candidate identified by the identification unit and determines whether or not the virtual machine is migratable,wherein the first decision unit decides the ...

Подробнее
30-01-2014 дата публикации

Transferring a state of an application from a first computing device to a second computing device

Номер: US20140032706A1
Принадлежит: Google LLC

The disclosed subject matter relates to computer implemented methods for transferring a state of an application from a first computing device to a second computing device. In one aspect, a method includes receiving a first request from a first computing device to transfer a state of a first application from the first computing device to the second computing device. The method further includes sending to the second computing device, a second request for an approval to initiate the transfer. The method further includes receiving from the second computing device an approval to initiate the transfer. The method further includes receiving from the first computing device, based on the received approval, the state of the first application. The method further includes sending the received state of the first application to the second device.

Подробнее
30-01-2014 дата публикации

Media response to social actions

Номер: US20140033202A1
Автор: Jon Lorenz, Marcos Weskamp
Принадлежит: Adobe Systems Inc

A method includes enabling accessing of content via a first device. The access of the content may be suspended in response to receiving a suspending signal associated with a second device coupled to the first device in a communication session. The access of the content may be resumed via at least one of the first device or a third device coupled to the first device in the communication session.

Подробнее
06-02-2014 дата публикации

Virtual machine migration into the cloud

Номер: US20140040888A1
Принадлежит: V3 Systems Inc

The migration of virtual machines internal to a cloud computing environment. The cloud maintains the replicas for virtual machines that could be migrated. The cloud also is aware of location of user data for each of the virtual machines. The replica together with the user data, represents the virtual machine state. If migration to the cloud computing environment is to occur for any given virtual machine, the cloud computing environment correlates the replica with the user data for that virtual machine, and then uses the correlation to instantiate the virtual machine in the cloud.

Подробнее
27-02-2014 дата публикации

Client placement in a computer network system using dynamic weight assignments on resource utilization metrics

Номер: US20140059207A1
Принадлежит: VMware LLC

A system and method for placing a client in a computer network system uses continuously variable weights to resource utilization metrics for each candidate device, e.g., a host computer. The weighted resource utilization metrics are used to compute selection scores for various candidate devices to select a target candidate device for placement of the client.
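
A hedged sketch of score-based placement with utilization-dependent ("continuously variable") weights; the specific weighting function and metric names are invented for illustration.

    # A metric close to saturation gets an increasingly large weight.
    def weight(utilization):
        return 1.0 / max(1e-6, 1.0 - utilization)

    def selection_score(host):
        """host: {"cpu": u, "mem": u, "net": u} with utilizations in [0, 1).
        Lower score means more headroom, i.e. a better placement target."""
        return sum(weight(u) * u for u in host.values())

    hosts = {"esx-1": {"cpu": 0.30, "mem": 0.55, "net": 0.20},
             "esx-2": {"cpu": 0.80, "mem": 0.40, "net": 0.10}}
    target = min(hosts, key=lambda h: selection_score(hosts[h]))
    print(target)   # 'esx-1'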

Подробнее
13-03-2014 дата публикации

SYSTEMS AND METHODS FOR PERFORMING DATA MANAGEMENT OPERATIONS USING SNAPSHOTS

Номер: US20140075440A1
Принадлежит: COMMVAULT SYSTEMS, INC.

A system stores a snapshot and an associated data structure or index to storage media to create a secondary copy of a volume of data. In some cases, the associated index includes application specific data about a file system or other application that created the data to identify the location of the data. The associated index may include three entries, and may be used to facilitate the recovery of data via the snapshot. The snapshot may be used by ancillary applications to perform various functions, such as content indexing, data classification, deduplication, e-discovery, and other functions. 1. A method for creating snapshots of virtual machines , wherein the method is performed by a computing system having a processor and memory , the method comprising: 'wherein the one or more virtual machines are hosted by at least one first virtual machine host;', 'receiving an indication of one or more virtual machines,'}creating snapshots of the one or more virtual machines; andutilizing the snapshots of the one or more virtual machines, hosting the one or more virtual machines on at least one second virtual machine host that is distinct from the at least one first virtual machine host.2. The method of claim 1 , further comprising:exposing the snapshots of the one or more virtual machines to the at least one second virtual machine host;registering the one or more virtual machines on the at least one second virtual machine host; and 'wherein powering on of the one or more virtual machines on the at least one second virtual machine host verifies that the snapshots of the one or more virtual machines were properly created.', 'powering on the or more virtual machines on the at least one second virtual machine host,'}3. The method of wherein the snapshots of the one or more virtual machines reference multiple data objects claim 1 , and wherein the method further comprises creating at least one index associated with the snapshots claim 1 , wherein the index includes context ...

Подробнее
27-03-2014 дата публикации

Workload transitioning in an in-memory data grid

Номер: US20140089260A1
Принадлежит: International Business Machines Corp

Embodiments of the present invention disclose a method, system, and computer program product for transitioning a workload of a grid client from a first grid server to a second grid server. A replication process is commenced transferring application state from the first grid server to the second grid server. Prior to completion of the replication process: the grid client is rerouted to communicate with the second grid server. The second grid server receives a request from the grid client. The second grid server determines whether one or more resources necessary to handle the request have been received from the first grid server. Responsive to determining that the one or more resources have not been received from the first grid server, the second grid server queries the first grid server for the one or more resources. The second grid server responds to the request from the grid client.
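
The pull-on-demand behaviour during the transition can be sketched as below; the class and method names are placeholders, not the actual grid API.

    # Toy illustration of the "query the first server for what is missing" behaviour.
    class GridServer:
        def __init__(self, name, store=None):
            self.name, self.store = name, dict(store or {})

        def fetch(self, key):                      # answer a peer's query for one resource
            return self.store.get(key)

        def handle(self, key, old_server):
            if key not in self.store:              # replication has not copied it yet
                value = old_server.fetch(key)      # query the first grid server on demand
                if value is not None:
                    self.store[key] = value
            return self.store.get(key, "<not found>")

    old = GridServer("grid-1", {"session:42": {"cart": ["book"]}})
    new = GridServer("grid-2")                     # replication still in progress, store empty
    print(new.handle("session:42", old))           # {'cart': ['book']}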

Подробнее
27-03-2014 дата публикации

Virtual Machine Merging Method and System

Номер: US20140089919A1
Принадлежит: Telefonaktiebolaget LM Ericsson AB

A system, computer readable medium and method for merging a first virtual machine and a second virtual machine that runs on a same or different physical machine. The method includes a step of receiving instructions for merging processes of the first virtual machine with processes of the second virtual machine; a step of merging the first virtual machine with the second virtual machine onto a first physical machine; a step of merging an operating system of the first virtual machine with an operating system of the second virtual machine onto the first physical machine; and a step of maintaining active in the merged virtual machine each process that was active prior to merging the first and second virtual machines.

Подробнее
10-04-2014 дата публикации

Method and apparatus implemented in processors for real-time scheduling and task organization based on response time order of magnitude

Номер: US20140101663A1
Автор: Lawrence J. Dickson
Принадлежит: Individual

A task scheduling method is disclosed, where each processor core is programmed with a short list of priorities, each associated with a minimum response time. The minimum response times for adjacent priorities differ by at least one order of magnitude. Each process is assigned a priority based on how its expected response time compares with the minimum response times of the priorities. Lower priorities may be assigned a timeslice period that is a fraction of the minimum response time. Also disclosed is a task division method for dividing a complex task into multiple tasks; one of the tasks is an input-gathering authority task having a higher priority, and it provides inputs to the other tasks, which have a lower priority. A method that permits orderly shutdown or scaling back of task activities in case of resource emergencies is also described.
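
A small sketch of the priority-assignment rule, assuming four illustrative priority levels spaced an order of magnitude apart and a timeslice of one tenth of a level's minimum response time; the concrete levels and the 1/10 fraction are assumptions.

    PRIORITIES = [                     # minimum response times one order of magnitude apart
        ("hard-rt",     0.001),        # 1 ms
        ("soft-rt",     0.010),        # 10 ms
        ("interactive", 0.100),        # 100 ms
        ("batch",       1.000),        # 1 s
    ]

    def assign_priority(expected_response_s):
        for name, min_response in reversed(PRIORITIES):    # try the slowest class first
            if expected_response_s >= min_response:
                return name, min_response / 10.0           # timeslice as a fraction
        return PRIORITIES[0][0], PRIORITIES[0][1] / 10.0

    print(assign_priority(0.05))    # ('soft-rt', 0.001)
    print(assign_priority(2.0))     # ('batch', 0.1)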

Подробнее
06-01-2022 дата публикации

Memory pool data placement technologies

Номер: US20220004330A1
Принадлежит: Intel Corp

Examples described herein relate to a network interface device, when operational, configured to: select data of a region of addressable memory addresses to migrate from a first memory pool to a second memory pool to lower a transit time of the data of the region of addressable memory addresses to a computing platform. In some examples, selecting data of a region of addressable memory addresses to migrate from a first memory pool to a second memory pool is based at least, in part, on one or more of: (a) memory bandwidth used to access the data; (b) latency to access the data from the first memory pool by the computing platform; (c) number of accesses to the data over a window of time by the computing platform; (d) number of accesses to the data over a window of time by other computing platforms over a window of time; (e) historic congestion to and/or from one or more memory pools accessible to the computing platform; and/or (f) number of different computing platforms that access the data.
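
One way to picture the selection criteria (a)-(f) is a single weighted score per region, as in the sketch below; the weights and field names are illustrative assumptions, not the described hardware.

    # Purely illustrative scoring; weights and fields are invented.
    def region_score(r):
        return (2.0 * r["bandwidth_gbps"]          # (a) bandwidth used to access the data
                + 1.5 * r["latency_us"]            # (b) observed access latency
                + 1.0 * r["local_accesses"] / 1e6  # (c) accesses by this platform
                + 0.5 * r["remote_accesses"] / 1e6 # (d) accesses by other platforms
                - 0.5 * r["sharers"])              # (f) widely shared data is less attractive

    regions = {
        0x1000: {"bandwidth_gbps": 4, "latency_us": 3, "local_accesses": 9e6,
                 "remote_accesses": 1e6, "sharers": 1},
        0x2000: {"bandwidth_gbps": 1, "latency_us": 1, "local_accesses": 1e6,
                 "remote_accesses": 4e6, "sharers": 6},
    }
    to_migrate = max(regions, key=lambda a: region_score(regions[a]))
    print(hex(to_migrate))    # 0x1000 is the better candidate for the nearer pool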

Подробнее
06-01-2022 дата публикации

Workload assessment and configuration simulator

Номер: US20220004427A1
Принадлежит:

An application may be migrated from a first to a second computing system. Configuration parameter values associated with executing the migrated application on the second computing system may be determined by computational optimization based on configuration parameter values and/or monitored performance metrics associated with the application on the first computing system. Configuration parameter values associated with executing the migrated application on the second computing system may be determined by performing simulations of the migrated application configured for execution on the second computing system based on multiple sets of configuration parameter values, monitoring performance metrics associated with the simulations, and performing computational optimization based on the multiple sets of configuration parameter values and monitored performance metrics associated with the simulations. Configuration parameter values associated with executing the migrated application on the second computing system may be updated based on monitored performance metrics associated with executing the migrated application. 1. A first computing system , comprising:at least one first processor;a first communication interface communicatively coupled to the at least one first processor; and generate, based on a target code base, based on at least a first portion of a tracking code base, and based on a second computing system, a target application configured for execution on the second computing system;', 'cause execution, based on a first set of configuration parameter values, of the target application on the second computing system;', 'monitor, based on the first portion of the tracking code base, one or more performance metrics of the target application;', 'determine, based on the first set of configuration parameter values and based on the monitored one or more performance metrics, one or more second sets of configuration parameter values;', 'generate, based on a migrated code ...

Подробнее
06-01-2022 дата публикации

TECHNIQUES FOR CONTAINER SCHEDULING IN A VIRTUAL ENVIRONMENT

Номер: US20220004431A1
Принадлежит: VMWARE, INC.

The present disclosure relates generally to virtualization, and more particularly to techniques for deploying containers in a virtual environment. The container scheduling can be based on information determined by a virtual machine scheduler. For example, a container scheduler can receive a request to deploy a container. The container scheduler can send container information to the virtual machine scheduler. The virtual machine scheduler can use the container information along with resource utilization of one or more virtual machines to determine an optimal virtual machine for the container. The virtual machine scheduler can send an identification of the optimal virtual machine back to the container scheduler so that the container scheduler can deploy the container on the optimal virtual machine. 1. A method , comprising:transmitting, to a first scheduling process from a second scheduling process, first information identifying a plurality of virtual machines executing on a plurality of physical hosts, wherein the first scheduling process has access to resource utilization data for only virtual machines of the plurality of virtual machines that are executing at least one container;receiving, from the first scheduling process by the second scheduling process, second information identifying one or more virtual machine of the plurality of virtual machines as a virtual machine candidate on which to deploy a first container; anddeploying the first container on a virtual machine of the one or more virtual machines of the plurality of virtual machines, wherein the second scheduling process has access to the resource utilization data for each physical host of the plurality of physical hosts.2. The method of claim 1 , wherein the second scheduling process is a virtual machine scheduling process claim 1 , and wherein the first scheduling process is a container scheduling process.3. The method of claim 1 , wherein the first information includes a resource requirement of a ...
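
The two-scheduler exchange can be sketched as follows; the data shapes and the "most free CPU wins" rule are assumptions for illustration.

    # Toy two-scheduler exchange, not the actual product APIs.
    class VMScheduler:                           # sees utilization of every host and VM
        def __init__(self, vms):
            self.vms = vms                       # {vm: {"host": h, "free_cpu": c, "free_mem": m}}

        def best_vm_for(self, container_req):
            fits = {vm: info for vm, info in self.vms.items()
                    if info["free_cpu"] >= container_req["cpu"]
                    and info["free_mem"] >= container_req["mem"]}
            return max(fits, key=lambda vm: fits[vm]["free_cpu"], default=None)

    class ContainerScheduler:                    # only knows about container-running VMs
        def __init__(self, vm_scheduler):
            self.vm_scheduler = vm_scheduler

        def deploy(self, container_req):
            vm = self.vm_scheduler.best_vm_for(container_req)   # ask the VM scheduler
            return f"deploy on {vm}" if vm else "no candidate VM"

    vms = {"vm-1": {"host": "h1", "free_cpu": 2.0, "free_mem": 4096},
           "vm-2": {"host": "h2", "free_cpu": 6.0, "free_mem": 2048}}
    print(ContainerScheduler(VMScheduler(vms)).deploy({"cpu": 1.0, "mem": 1024}))   # deploy on vm-2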

Подробнее
05-01-2017 дата публикации

METHOD FOR COMPOSING AND EXECUTING A REAL-TIME TASK SEQUENCE PLAN

Номер: US20170004011A1
Принадлежит: KRONO-SAFE

A method for executing two tasks in timesharing, includes: decomposing offline each task in a repetitive sequence of consecutive frames, and defining a start date and deadline by which an associated atomic operation must respectively start and end; verifying for each frame of a first of the repetitive sequences the corresponding operation can be performed between any two successive operations of a group of frames of the second repetitive sequence, overlapping the first repetitive sequence frame; and if the verification is satisfied, allowing the execution of the two tasks. Scheduling the operations of the two tasks, if two operations can start, executing the operation having the shorter deadline; and if a single operation can start, executing it if its execution need is less than the time remaining until the next frame start date of the other sequence, plus the time margin associated with the next frame of the other sequence. 1. A method for executing two tasks in timesharing , comprising the steps of:decomposing offline each task in a repetitive sequence of consecutive frames in a time base associated with the task, wherein each frame is associated with an atomic operation having an execution need, and defines a start date from which the operation may start and a deadline by which the operation must end, whereby each frame defines a time margin in which the operation may start;verifying for each frame of a first of the repetitive sequences that the corresponding operation can be performed between any two successive operations of a group of frames of the second repetitive sequence, overlapping the frame of the first repetitive sequence, while respecting the start dates and deadlines of the operations; and if two operations can start, executing the operation having the shorter deadline; and', 'if a single operation can start, executing it only if its execution need is less than the time remaining until the start date of the next frame of the other sequence, plus the ...
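
The run-time dispatch rule in the last paragraph can be sketched as below (the offline verification step is omitted); the frame fields mirror the description but are otherwise assumptions.

    def pick_operation(now, frame_a, frame_b):
        """Each frame: {"start": t, "deadline": t, "need": execution time, "margin": t}.
        Returns "A", "B" or None (idle)."""
        a_ready = frame_a["start"] <= now
        b_ready = frame_b["start"] <= now
        if a_ready and b_ready:                                # both can start: earliest deadline first
            return "A" if frame_a["deadline"] <= frame_b["deadline"] else "B"
        if a_ready:
            budget = (frame_b["start"] - now) + frame_b["margin"]
            return "A" if frame_a["need"] <= budget else None  # only run if it fits before B must start
        if b_ready:
            budget = (frame_a["start"] - now) + frame_a["margin"]
            return "B" if frame_b["need"] <= budget else None
        return None

    a = {"start": 0, "deadline": 12, "need": 5, "margin": 2}
    b = {"start": 4, "deadline": 9,  "need": 3, "margin": 1}
    print(pick_operation(0, a, b))   # 'A' fits: need 5 <= (4 - 0) + 1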

Подробнее
07-01-2016 дата публикации

METHOD AND APPARATUS FOR ACCELERATING SYSTEM RUNNING

Номер: US20160004574A1
Принадлежит:

The invention discloses a method and apparatus for accelerating. It comprises a method and apparatus for accelerating. The method comprises: an acceleration enabling step of constructing and displaying an acceleration panel containing a one-key acceleration control when a preset enabling condition is triggered; and an acceleration execution step of detecting the one-key acceleration control within the acceleration panel in real time, and swapping memory occupied by all currently running processes to virtual memory to assist the system in running acceleration when the one-key acceleration control is triggered. The method and the apparatus of the invention can organize the system running condition for a user at a fastest speed, free redundant resources, increase the real-time system running speed of the user, and well solve the problem in the prior art that the system running speed can not be increased effectively. 1. An apparatus for accelerating system running , comprising:at least one processor; andone non-transitory computer readable medium coupled to the processor, the medium storing instructions that when executed by the processor cause the processor to perform operations for accelerating system running, which comprise:an acceleration enabling step of constructing and displaying an acceleration panel containing a one-key acceleration control when a preset enabling condition is triggered; andan acceleration execution step of detecting the one-key acceleration control within the acceleration panel in real time, and swapping memory occupied by all currently running processes to virtual memory to assist the system in running acceleration when the one-key acceleration control is triggered.2. The apparatus as claimed in claim 1 , whereinthe acceleration enabling step further comprises: scanning the system running environment to obtain currently running closeable processes and software according to a set obtaining reference when the preset enabling condition is ...

Подробнее
07-01-2021 дата публикации

STORAGE DEVICE WITH REDUCED COMMUNICATION OVERHEAD USING HARDWARE LOGIC

Номер: US20210004179A1
Принадлежит:

A storage device includes an input stage receiving a first command, a queue manager allocating a first queue entry for the first command, a pre-processor storing the first command in the first queue entry and updating a task list with the first command and a core executing the first command in accordance with an order specified in the updated task list. At least one of the queue manager and the pre-processor is implemented in a customized logic circuit. 1. A storage device comprising:an input stage configured to receive a first command;a queue manager configured to allocate a first queue entry for the first command;a pre-processor configured to store the first command in the first queue entry and update a task list with the first command; anda core configured to execute the first command in accordance with an order specified in the updated task list,wherein at least one of the queue manager and the pre-processor is implemented in a customized logic circuit.2. The storage device of claim 1 , configured such that:during a time when the core executes the first command,the queue manager allocates a second queue entry for a second command other than the first command, andthe pre-processor stores the second command in the second queue entry and updates the task list with the second command stored in the second queue entry.3. The storage device of claim 1 ,wherein the core is configured to generate a first value according to the execution of the first command,the storage device further includes a post-processor configured to generate and output a second command in accordance with a preset format on the basis of the first value, andthe post-processor is implemented in a customized logic circuit.4. The storage device of claim 3 ,wherein a packet structure of the first value is different from a packet structure of the second command.5. The storage device of claim 3 , further comprising:a memory configured to store a head task of the updated task list and a tail task thereof, ...

Подробнее
04-01-2018 дата публикации

FAULT-TOLERANT VARIABLE REGION REPAVING DURING FIRMWARE OVER THE AIR UPDATE

Номер: US20180004505A1
Принадлежит:

Variables utilized in device firmware that provides various boot and runtime services are repaved in a fault-tolerant manner within a secure store in a durable, non-volatile device memory during an FOTA update process. A spare region in the secure store is utilized to temporarily hold a back-up of a primary region in which the firmware variables are written. Using a transaction-based fault-tolerant write (FTW) process, the variables in the primary region can be repaved with variables contained in a firmware update payload that is delivered from a remote service. In the event of a fault in the variable region repaving process, either the primary or spare region will remain valid so that firmware in a known good state can be utilized to enable the device to boot successfully and the variable region repaving in the FOTA update process may be restarted. 1. A method for updating firmware on a device , comprising:exposing a secure non-volatile memory store on the device, comprising a primary region and a spare region, each of the primary region and spare region including a working store configured to store transaction records and a variable store configured to store variable records;copying variable records in the primary region and writing the variable records to the spare region;erasing content in the working store within the primary region;erasing variable records in the primary region;copying variable records from a firmware update payload received at the device and writing the copied variable records into the primary region; anderasing variable records in the spare region.2. The method of in which the variable records represent UEFI (Unified Extensible Firmware Interface) variables.3. The method of in which the memory store is one of SPI (serial peripheral interface) claim 1 , Flash memory claim 1 , or eMMC (embedded multimedia card) memory.4. The method of in which each of the writing steps comprises a fault-tolerant writing (FTW) protocol using the transaction ...
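
The ordering of the repaving steps can be simulated in a few lines; real firmware would operate on SPI/flash regions with transaction records, which this sketch omits, so that at every point either the primary or the spare region holds a complete copy.

    # Simulation of the repaving order only.
    def repave(primary, spare, update_payload):
        spare["variables"] = dict(primary["variables"])   # 1. back up primary into the spare region
        primary["working"] = []                           # 2. erase the primary working store
        primary["variables"] = {}                         # 3. erase primary variable records
        primary["variables"] = dict(update_payload)       # 4. write the new variables from the payload
        spare["variables"] = {}                           # 5. erase the spare only after primary is valid
        return primary

    primary = {"working": ["txn-7"], "variables": {"BootOrder": "0001", "Lang": "en-US"}}
    spare = {"working": [], "variables": {}}
    print(repave(primary, spare, {"BootOrder": "0002", "Lang": "en-US"})["variables"])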

Подробнее
04-01-2018 дата публикации

NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, LIVE MIGRATION METHOD, AND LIVE MIGRATION APPARATUS

Номер: US20180004564A1
Автор: KONISHI Yotaro
Принадлежит: FUJITSU LIMITED

A non-transitory computer-readable storage medium that stores a live migration program that causes a computer to execute a process, the process including starting live migration of a virtual machine to another computer in response to reception of an instruction of the live migration of the virtual machine, an emulator which becomes a substitute for hardware capable of dynamically defining a function is implemented in the virtual machine, causing the emulator to execute a requested process requested from the virtual machine to the hardware when a degree of progress taken to transmit data of the live migration of the virtual machine satisfies a first condition, and switching the computer executing the virtual machine to the other computer when the degree of progress satisfies a second condition to determine whether the computer executing the virtual machine is switched and when the hardware is not executing a process of the virtual machine. 1. A non-transitory computer-readable storage medium that stores a live migration program that causes a computer to execute a process , the process comprising:starting live migration of a virtual machine to another computer in response to reception of an instruction of the live migration of the virtual machine, an emulator which becomes a substitute for hardware capable of dynamically defining a function is implemented in the virtual machine;causing the emulator to execute a requested process requested from the virtual machine to the hardware when a degree of progress taken to transmit data of the live migration of the virtual machine satisfies a first condition; andswitching the computer executing the virtual machine to the other computer when the degree of progress satisfies a second condition to determine whether the computer executing the virtual machine is switched and when the hardware is not executing a process of the virtual machine.2. The non-transitory computer-readable storage medium according to claim 1 , wherein the ...

Подробнее
07-01-2021 дата публикации

ARCHIVING VIRTUAL MACHINES IN A DATA STORAGE SYSTEM

Номер: US20210004259A1
Принадлежит:

The data storage system according to certain aspects can manage the archiving of virtual machines to (and restoring of virtual machines from) secondary storage. The system can determine whether to archive virtual machines based on usage data or information. The usage information may include storage usage, CPU usage, memory usage, network usage, events defined by a virtual machine software or application, etc. The system may archive virtual machines that are determined to have a low level of utilization. For example, a virtual machine can be archived when its usage level falls below a threshold level. The system may create a virtual machine placeholder for an archived virtual machine, which may be a “light” or minimal version of the virtual machine that acts as if it is the actual virtual machine. By using a virtual machine placeholder, a virtual machine may appear to be active and selectable by the user. 1. A non-transitory computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for archiving virtual machines , the method comprising:archiving a virtual machine (VM), wherein the VM is executed by a first hypervisor, onto one or more secondary storage devices such that the VM is no longer active, wherein the archiving the VM is initiated according to a storage policy associated with the VM, wherein the storage policy comprises a collection of settings or preferences for performing data management operations in an information management system;creating a VM placeholder configured to consume fewer computing resources than the VM, wherein the VM placeholder is executed by a second hypervisor;subsequent to archiving the VM, displaying a representation of the VM on a client computing device;detecting user selection of the representation of the VM; andin response to detecting the user selection of the representation of the VM, reactivating the VM.2. The non-transitory computer-readable storage medium of ...

Подробнее
02-01-2020 дата публикации

APPARATUS AND METHOD FOR CONFIGURING SETS OF INTERRUPTS

Номер: US20200004537A1
Принадлежит:

An apparatus and method are described for efficiently processing and reassigning interrupts. For example, one embodiment of an apparatus comprises: a plurality of cores; and an interrupt controller to group interrupts into a plurality of interrupt domains, each interrupt domain to have a set of one or more interrupts assigned thereto and to map the interrupts in the set to one or more of the plurality of cores. 1a plurality of cores; andan interrupt controller to group interrupts into a plurality of interrupt domains, each interrupt domain to have a set of one or more interrupts assigned thereto and to map the interrupts in the set to one or more of the plurality of cores.. An apparatus comprising: This application is a continuation of U.S. application Ser. No. 14/861,618, filed Sep. 22, 2015, which claims priority under 35 U.S.C. § 119 to Indian Patent Application No. 4721/CHE/2014, filed Jan. 6, 2015, entitled, “Apparatus And Method For Configuring Sets Of Interrupts” which claims the benefit of Indian Provisional Patent Application No. 4721/CHE/2014, filed Sep. 26, 2014, entitled, “Apparatus And Method For Configuring Sets Of Interrupts” all of which is herein incorporated by reference.Embodiments of the invention relate generally to the field of computer systems. More particularly, the embodiments of the invention relate to an apparatus and method for programming sets of interrupts.In computing systems, an interrupt is a signal generated by hardware or software indicating an event that needs immediate attention from the processor (i.e., requiring an interruption of the current thread the processor is executing). The processor responds by suspending its current execution thread, saving the state (so that it can re-start execution where it left off), and executing a function referred to as an interrupt handler to service the event. The interruption is temporary; after the interrupt handler completes, the processor resumes execution of the thread.Hardware ...
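
A software model of the domain-based routing idea is sketched below; the register layout and API are assumptions rather than the described hardware interrupt controller.

    from itertools import cycle

    class InterruptDomains:
        def __init__(self):
            self.domains = {}                    # domain -> {"irqs": set, "cores": cycle}

        def define(self, domain, irqs, cores):
            self.domains[domain] = {"irqs": set(irqs), "cores": cycle(cores)}

        def route(self, irq):
            for name, d in self.domains.items():
                if irq in d["irqs"]:
                    return name, next(d["cores"])   # spread the domain's interrupts over its cores
            return None, None

    table = InterruptDomains()
    table.define("nic", irqs=[32, 33, 34, 35], cores=[0, 1])
    table.define("nvme", irqs=[48, 49], cores=[2])
    print(table.route(33))   # ('nic', 0)
    print(table.route(34))   # ('nic', 1)
    print(table.route(48))   # ('nvme', 2)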

Подробнее
07-01-2021 дата публикации

MANAGED ORCHESTRATION OF VIRTUAL MACHINE INSTANCE MIGRATION

Номер: US20210004262A1
Принадлежит:

Techniques for managing the migration of virtual machine instances are described herein. A migration of a virtual machine from a source host to a destination host is determined to be predicted to fail. The migration is cancelled by stopping the virtual machine at the destination host as a result of said determination. 1. A computer-implemented method , comprising:determining that a migration of a virtual machine instance from a source host to a destination host is predicted to fail; andas a result of said determining, cancelling the migration by stopping the virtual machine instance at the destination host.2. The computer-implemented method of claim 1 , wherein determining that the migration is predicted to fail is based at least in part on a historical system state of a previous migration.3. The computer-implemented method of claim 1 , wherein cancelling the migration further includes removing a connection between the virtual machine instance at the destination host and a block storage device.4. The computer-implemented method of claim 1 , further comprising initiating the migration by starting to copy claim 1 , while the virtual machine instance is running on the source host claim 1 , a set of state information of the virtual machine instance from the source host to the destination host.5. The computer-implemented method of claim 4 , wherein:initiating the migration further includes locking a virtual machine abstraction associated with the virtual machine instance by preventing the virtual machine instance from processing requests that would change the virtual machine abstraction; andcancelling the migration further includes unlocking the virtual machine abstraction.6. The computer-implemented method of claim 1 , further comprising causing packets that are received from an external entity by the virtual machine instance running on the source host to be forwarded to the destination host.7. The computer-implemented method of claim 6 , further comprising causing ...

Подробнее
02-01-2020 дата публикации

SERVER COMPUTER EXECUTION OF CLIENT EXECUTABLE CODE

Номер: US20200004567A1
Принадлежит:

Techniques for improving server-side execution of script code include in one embodiment: receiving, at a server computer, a request from a client computer to provide a service of an application that the server computer hosts; acquiring a particular runtime from among a plurality of pre-computed runtimes in a runtime pool, each of the pre-computed runtimes in the runtime pool comprising an executable combination of computer program script code and context data that is programmed to create and use one or more data items having global scope; using the server computer, providing the request to the particular runtime and executing the particular runtime to cause generating a response to the request; transmitting the response to the client computer; cleaning up the one or more data items having global scope and returning the particular runtime to the runtime pool after completing the cleaning up. 1. A data processing method comprising:receiving, at a server computer, a request from a client computer to provide a service of an application that the server computer hosts;acquiring a pre-computed runtime instance from a plurality of pre-computed runtime instances provided in a runtime instance pool at the server computer, each pre-computed runtime instance comprising an executable combination of computer program script code and context data, the context data comprising a set of variables, data items or memory locations having global scope that the respective computer program script code accesses during execution;using the server computer, providing the request to the pre-computed runtime instance and executing the pre-computed runtime instance to cause generating a response to the request, wherein executing the pre-computed runtime instance includes writing data values to the set of variables, data items or memory locations having global scope included in the context data of the pre-computed runtime instance;after generating the response, deleting the data values written to ...
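
The pool lifecycle (acquire, execute, clean up global-scope data, return) can be sketched as follows, with an invented Runtime class standing in for a pre-computed script runtime.

    from queue import Queue

    class Runtime:
        def __init__(self):
            self.globals = {}                      # context data with global scope

        def execute(self, request):
            self.globals["user"] = request["user"] # handler writes global-scope data
            return f"hello {self.globals['user']}"

        def cleanup(self):
            self.globals.clear()                   # remove per-request global data

    pool = Queue()
    for _ in range(2):
        pool.put(Runtime())                        # pre-computed runtimes

    def handle(request):
        rt = pool.get()                            # acquire a runtime from the pool
        try:
            return rt.execute(request)
        finally:
            rt.cleanup()                           # clean up globals, then return to the pool
            pool.put(rt)

    print(handle({"user": "alice"}))               # hello alice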

Подробнее
02-01-2020 дата публикации

METHOD FOR FINGERPRINT RECOGNITION AND RELATED PRODUCTS

Номер: US20200004578A1
Автор: WANG Jian
Принадлежит:

Implementations of the present disclosure provide a method for fingerprint recognition and related products. The method is applied to a terminal having an Android operating system. The terminal includes a fingerprint application (FingerprintService) and a fingerprint communication process (fingerprintd). The method includes the following. The FingerprintService detects whether a lag appears during calling a process (binder) via the fingerprintd. In response to the lag appearing during calling the binder via the fingerprintd, the FingerprintService restarts the fingerprintd with a first PID, where the first PID is stored in a local memory in advance. The fingerprintd obtains a second PID and transmits the second PID to the FingerprintService, where the second PID is obtained during restarting the fingerprintd with the first PID. 1. A method for fingerprint recognition , the method being applied to a terminal comprising a fingerprint application (FingerprintService) and a fingerprint communication process (fingerprintd) , the method comprising:detecting, via the FingerprintService, whether a lag appears during calling a process (binder) via the fingerprintd;restarting, in response to the lag appearing during calling the binder via the fingerprintd, the fingerprintd with a first process identifier (PID) via the FingerprintService, the first PID being stored in the terminal in advance; andobtaining a second PID and transmitting the second PID to the FingerprintService via the fingerprintd, the second PID being obtained during restarting the fingerprintd with the first PID.2. The method of claim 1 , wherein detecting claim 1 , via the FingerprintService claim 1 , whether the lag appears during calling the binder via the fingerprintd comprises:detecting, via the FingerprintService, whether the fingerprintd succeeds in calling the binder within a preset period; anddetermining, in response to the fingerprintd failing to call the binder within the preset period, that the lag ...

Подробнее
02-01-2020 дата публикации

Method for Controlling Process and Related Device

Номер: US20200004579A1
Автор: Li Hui, Zeng Yuanqing
Принадлежит:

A method for controlling process is provided. The method for controlling process includes the follows. When it is determined that a duration that each of N processes in a kernel space of a terminal device is in an uninterruptible sleep state reaches or exceeds a preset period, whether the N processes have undergone a searched and killed operation within the preset period is detected. N is an integer greater than or equal to 1. When the N processes have undergone the searched and killed operation within the preset period, states of the N processes are changed, and an operating system is controlled to run the N processes according to the changed states of the N processes. Related terminal devices are also provided. 1. A method for controlling process , comprising:detecting whether N processes in a kernel space have undergone a searched and killed operation within a preset period based on that duration that each of the N processes is in an uninterruptible sleep state exceeds the preset period, wherein the N processes being in the uninterruptible sleep state indicates that the N processes are in a sleep state and unable to be interrupted, and N is an integer greater than or equal to 1; andchanging states of the N processes by a terminal device and controlling an operating system to run the N processes according to the changed states of the N processes, based on a determination that within the preset period the N processes have undergone the searched and killed operation performed by a watchdog program, wherein within the preset period the N processes having undergone the searched and killed operation performed by the watchdog program indicates that the watchdog program is unable to immediately kill the N processes in the uninterruptible sleep state and the N processes in the uninterruptible sleep state are unable to be interrupted, and the N processes in the uninterruptible sleep state being unable to be interrupted indicates that the N processes are unable to respond ...

Подробнее
02-01-2020 дата публикации

Cognitive cloud migration optimizer

Номер: US20200004582A1
Принадлежит: International Business Machines Corp

Methods, computer program products, and systems are presented. The methods include, for instance: input data from the source environment, including application hosting data of each server in the source environment and one or more cloud type of the source environment. Candidate cloud types for target platform are listed and servers of the source environment are screened for eligibility for the migration. The target platform is selected by applying preconfigured selection rules on the application hosting data of each eligible server in the source environment. Migration recommendations for each eligible server in the source environment, including selected cloud type corresponding to the target platform, are produced.

Подробнее
02-01-2020 дата публикации

MULTITHREADED PROCESSOR CORE WITH HARDWARE-ASSISTED TASK SCHEDULING

Номер: US20200004587A1
Принадлежит:

Embodiments of apparatuses, methods, and systems for a multithreaded processor core with hardware-assisted task scheduling are described. In an embodiment, a processor includes a first hardware thread, a second hardware thread, and a task manager. The task manager is to issue a task to the first hardware thread. The task manager includes a hardware task queue in which to store a plurality of task descriptors. Each of the task descriptors is to represent one of a single task, a collection of iterative tasks, and a linked list of tasks. 1. A processor comprising:a first hardware thread;a second hardware thread; anda task manager to issue a task to the first hardware thread, the task manager including a hardware task queue in which to store a plurality of task descriptors, each of the task descriptors to represent one of a single task, a collection of iterative tasks, and a linked list of tasks.2. The processor of claim 1 , wherein the first hardware thread includes a load/store queue to request the task from the task manager by issuing a read request to the task manager.3. The processor of claim 1 , wherein a task descriptor representing a collection of iterative tasks is to include a count value to specify a number of iterations.4. The processor of claim 1 , wherein a task descriptor of a linked list of tasks is to include a pointer to a head of the linked list of tasks.5. The processor of claim 1 , wherein an instruction set architecture of the first hardware thread and an instruction set architecture of the second hardware thread are compatible.6. The processor of claim 5 , further comprising a thread engine to migrate a software thread from the first hardware thread to the second hardware thread.7. The processor of claim 6 , wherein the thread engine is to migrate the software thread to improve performance of a graph application.8. The processor of claim 6 , wherein the first hardware thread is in a first single-threaded pipeline and the second hardware thread is ...
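
The three descriptor kinds can be modelled in software as below; the real design is a hardware task queue, so this is only an illustration of the shapes involved.

    from dataclasses import dataclass

    @dataclass
    class Single:
        work: str

    @dataclass
    class Iterative:
        work: str
        count: int                           # number of iterations

    @dataclass
    class Node:
        work: str
        next: "Node | None" = None

    @dataclass
    class LinkedList:
        head: Node                           # pointer to the head of the task list

    def expand(descriptor):
        """Turn one task descriptor into the concrete tasks a hardware thread would receive."""
        if isinstance(descriptor, Single):
            yield descriptor.work
        elif isinstance(descriptor, Iterative):
            for i in range(descriptor.count):
                yield f"{descriptor.work}[{i}]"
        elif isinstance(descriptor, LinkedList):
            node = descriptor.head
            while node is not None:
                yield node.work
                node = node.next

    queue = [Single("init"), Iterative("dot-product", 3),
             LinkedList(Node("load", Node("compute", Node("store"))))]
    for d in queue:
        print(list(expand(d)))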

Подробнее
07-01-2021 дата публикации

Self-Debugging

Номер: US20210004315A1
Принадлежит: Nagravision SA

In overview, methods, computer programs products and devices for securing software are provided. In accordance with the disclosure, a method may comprise attaching a debugger process to a software process. During execution of the software process, operations relevant to the functionality of the code process are carried out within the debugger process. As a result, the debugger process cannot be replaced or subverted without impinging on the functionality of the software process. The software process can therefore be protected from inspection by modified or malicious debugging techniques. 1. A method of generating protected code , comprising:identifying one or more functions in code to be compiled for a first process to be migrated to a second process, wherein the one of the first and second processes is a debugger for the other of the first and second processes;migrating the identified function or functions to the second process;modifying the first process to allow transfer of state between the first and second processes; andcompiling the first and second processes to generate binary code.2. A method according to claim 1 , wherein the code to be compiled is source code or bitcode.3. A method according to either or claim 1 , further comprising injecting an initializer into one of the first and second processes to invoke execution of the other of the first and second processes.4. A method according to any one of the preceding claims claim 1 , further comprising injecting one or more initializers into the first or second processes to register functions present in the other of the first and second process.5. A method according to any one of the preceding claims claim 1 , wherein each of the first and second processes is a debugger for the other of the first and second processes.6. A method according to any one of the preceding claims claim 1 , further comprising providing a third process which is a debugger for one of the first and second processes.7. A computer ...

Подробнее
02-01-2020 дата публикации

CLOUD OVERSUBSCRIPTION SYSTEM

Номер: US20200004601A1
Принадлежит:

A cloud oversubscription system comprising an overload detector configured to model a time series of data of at least one virtual machine on a host as a vector-valued stochastic process including at least one model parameter, the overload detector communicating with an inventory database, the overload detector configured to obtain an availability requirement for each of the at least one virtual machine; a model parameter estimator communicating with the overload detector, the model parameter estimator communicating with a database containing resource measurement data for at least one virtual machine on a host at a selected time interval, the model parameter estimator is configured to estimate the at least one model parameter from the resource measurement data; a loading assessment module communicating with the model parameter module to obtain the at least one model parameter for each of the at least one host running at least one virtual machine and determine a probability of overload based on the at least one model parameter, wherein the loading assessment module communicates the probability of overload to the overload detector; wherein the overload detector compares the probability of overload to the availability requirement to identify a probable overload condition value; and wherein the overload detector communicates the probable overload condition value to a recommender, wherein the recommender generates an alert when the overload condition value exceeds the service level agreement requirements for any of the at least one virtual machine. 1. A cloud oversubscription system comprising:an overload detector configured to model a time series of data of at least one virtual machine on a host as a vector-valued stochastic process including at least one model parameter, the overload detector communicating with an inventory database, the overload detector configured to obtain an availability requirement for each of the at least one virtual machine;a model parameter ...
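
A back-of-the-envelope version of the overload test, using a plain normal approximation in place of the vector-valued stochastic model; the thresholds and sample data are invented.

    from statistics import mean, stdev
    from math import erf, sqrt

    def overload_probability(samples, capacity):
        """samples: measured total host demand per interval; returns P(demand > capacity)."""
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            return float(mu > capacity)
        z = (capacity - mu) / sigma
        return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

    samples = [62, 70, 68, 75, 71, 66, 73, 69]        # observed host utilization (%)
    availability = 0.999                              # each VM tolerates 0.1% overload time
    p = overload_probability(samples, capacity=80)
    print(round(p, 4), "alert" if p > 1.0 - availability else "ok")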

Подробнее
02-01-2020 дата публикации

PROACTIVE CLUSTER COMPUTE NODE MIGRATION AT NEXT CHECKPOINT OF CLUSTER UPON PREDICTED NODE FAILURE

Номер: US20200004648A1
Принадлежит:

While scheduled checkpoints are being taken of a cluster of active compute nodes distributively executing an application in parallel, a likelihood of failure of the active compute nodes is periodically and independently predicted. Responsive to the likelihood of failure of a given active compute node exceeding a threshold, the given active compute node is proactively migrated to a spare compute node of the cluster at a next scheduled checkpoint. Another spare compute node of the cluster can perform prediction and migration. Prediction can be based on both hardware events and software events regarding the active compute nodes. 1. A method comprising:while scheduled checkpoints are being taken of a cluster of active compute nodes distributively executing an application in parallel, periodically predicting independently of the scheduled checkpoints, by a processor, a likelihood of failure of each active computing node; andresponsive to the likelihood of failure of a given active compute node exceeding a threshold, proactively migrating, by the processor, the given active compute node to a spare compute node of the cluster at a next scheduled checkpoint.2. The method of claim 1 , wherein predicting the likelihood of failure of each active computing node does not affect when the checkpoints are taken.3. The method of claim 1 , wherein the spare compute node is a first spare compute node of the cluster claim 1 ,and wherein the processor predicting the likelihood of failure of each active computing node and proactively migrating the given active compute node to the first spare compute node is a part of a second spare compute node of the cluster.4. The method of claim 1 , wherein predicting the likelihood of failure of each active computing node is based on both hardware events regarding the active compute nodes and software events regarding the active compute nodes.5. The method of claim 4 , wherein the software events comprise:a last number of ranks of the application ...
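
The control flow, separated into periodic prediction and checkpoint-time migration, might look like the sketch below; failure_likelihood() is a placeholder for the real event-based predictor.

    def failure_likelihood(node, events):
        return events.get(node, 0.0)                 # placeholder predictor

    def monitor_and_migrate(active, events, threshold=0.7):
        flagged = {n for n in active if failure_likelihood(n, events) > threshold}
        return flagged                               # migrated later, at the next checkpoint

    def at_checkpoint(active, spares, flagged):
        for node in flagged:
            if spares:
                spare = spares.pop()
                active[active.index(node)] = spare   # proactive migration at the checkpoint
                print(f"migrated rank on {node} -> {spare}")

    active, spares = ["n1", "n2", "n3"], ["spare1"]
    flagged = monitor_and_migrate(active, {"n2": 0.9})
    at_checkpoint(active, spares, flagged)           # migrated rank on n2 -> spare1
    print(active)                                    # ['n1', 'spare1', 'n3']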

Подробнее
02-01-2020 дата публикации

CORE MAPPING

Номер: US20200004721A1
Принадлежит:

The disclosed technology is generally directed to peripheral access. In one example of the technology, stored configuration information is read. The stored configuration information is associated with mapping a plurality of independent execution environments to a plurality of peripherals such that the peripherals of the plurality of peripherals have corresponding independent execution environments of the plurality of independent execution environments. A configurable interrupt routing table is programmed based on the configuration information. An interrupt is received from a peripheral. The interrupt is routed to the corresponding independent execution environment based on the configurable interrupt routing table. 120-. (canceled)21. An apparatus , comprising:a plurality of processing cores;a plurality of peripherals; anda configurable interrupt routing table that selectively maps each of the plurality of peripherals to an individual processing core of the plurality of processing cores, wherein the mapping of each of the plurality of peripherals to the individual processing core is configurable while a lock bit of the apparatus is not set, wherein the mapping of each of the plurality of peripherals to the individual processing core is locked in response to the lock bit of the apparatus being set, and wherein, once locked, the mapping of each of the plurality of peripherals to the individual processing core remains locked until a reboot of the apparatus.22. The apparatus of claim 21 , wherein the configurable interrupt routing table includes a plurality of configuration registers.23. The apparatus of claim 21 , wherein a first processing core of the plurality of processing cores is associated with at least two independent execution environments.24. The apparatus of claim 23 , wherein a first independent execution environment associated with the first processing core is a Secure World operating environment of the first processing core claim 23 , and wherein a second ...

Подробнее
03-01-2019 дата публикации

STREAMING ENGINE WITH SHORT CUT START INSTRUCTIONS

Номер: US20190004853A1
Принадлежит:

A streaming engine employed in a digital data processor specifies a fixed read only data stream recalled memory. Streams are started by one of two types of stream start instructions. A stream start ordinary instruction specifies a register storing a stream start address and a register of storing a stream definition template which specifies stream parameters. A stream start short-cut instruction specifies a register storing a stream start address and an implied stream definition template. A functional unit is responsive to a stream operand instruction to receive at least one operand from a stream head register. The stream template supports plural nested loops with short-cut start instructions limited to a single loop. The stream template supports data element promotion to larger data element size with sign extension or zero extension. A set of allowed stream short-cut start instructions includes various data sizes and promotion factors. 1. A digital data processor comprising:an instruction memory storing instructions each specifying a data processing operation and at least one data operand field;an instruction decoder connected to said instruction memory for sequentially recalling instructions from said instruction memory and determining said specified data processing operation and said specified at least one operand;at least one functional unit connected to said data register file and said instruction decoder for performing data processing operations upon at least one operand corresponding to an instruction decoded by said instruction decoder and storing results; an address generator for generating stream memory addresses corresponding to said stream of an instruction specified sequence of a plurality of data elements,', 'a stream head register storing a data element of said stream next to be used by said at least one functional unit;, 'a streaming engine connected to said instruction decoder operable in response to a stream start instruction to recall from memory a ...

Подробнее
03-01-2019 дата публикации

Method and computing device for increasing throughputs of services processed by threads

Number: US20190004856A1
Assignee: TmaxData Co Ltd

A method for increasing the throughput of multiple services processed by multiple threads, where the services include at least a first, a second, and a third service and the threads include at least a first and a second thread, comprising the steps of: (a) if the first service being processed by the first thread calls the second service, supporting the second thread to process the second service; (b) while the second service is being processed, supporting the first thread to process the third service; and (c) if the processing of the second service is completed, supporting (i) the first thread or (ii) one or more threads other than the first thread to resume processing of the unprocessed part of the first service, using a result value acquired by the processing of the second service.
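A rough Python illustration of the scheme, assuming invented service names: while the called service runs on a second thread, the first thread processes another service, and the first service is finished using the called service's result.

    from concurrent.futures import ThreadPoolExecutor

    def service_b():
        return 42  # stand-in for the called service's result value

    def service_c():
        print("processing service C while B is in flight")

    def service_a(pool):
        # Part of A runs, then A calls B; B is handed to another thread.
        future_b = pool.submit(service_b)
        service_c()                    # the first thread is kept busy with service C
        result = future_b.result()     # resume the unprocessed part of A with B's result
        print("finishing service A with", result)

    with ThreadPoolExecutor(max_workers=2) as pool:
        service_a(pool)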

More
07-01-2021 publication date

CAMPAIGN MANAGEMENT SYSTEM - SUSPENSION

Number: US20210004874A1
Assignee: BSI Business Systems Integration AG

The invention relates to a computer implemented campaign management system (CAMS), the system including a graphical user interface (GUI), the management system (CAMS) processing a plurality of participant records (PREC) according to a user-configured process structure (PS) of a plurality of configured logic templates (CLT); wherein the participant records (PREC) comprise participant attributes (PA), and wherein the processing of participant records involves processing or modification of one or more of the participant attributes (PA) of the participant record (PREC), wherein the management system reads, processes and/or modifies participant attributes (PA) of participant records (PREC) by a sequence of one or more executable program fragments (EPF) according to the process structure (PS), wherein the execution of one or more executable program fragments (EPF) can be suspended in response to user action (UACT) and/or participant action (PACT), and wherein the execution of the process structure can be resumed in response to user action (UACT) and/or participant action (PACT).
Claims (excerpt):
1. A computer implemented campaign management system, the system including a graphical user interface, the management system processing a plurality of participant records according to a user-configured process structure of a plurality of configured logic templates; wherein the participant records comprise participant attributes, and wherein the processing of participant records involves processing or modification of one or more of the participant attributes of the participant record, wherein the management system reads, processes and/or modifies participant attributes of participant records by a sequence of one or more executable program fragments according to the process structure, and wherein the execution of one or more executable program fragments can be suspended in response to user action and/or participant action and wherein the execution of the process structure can be resumed in response …

More
01-01-2015 publication date

Virtual machines management apparatus, virtual machines management method, and computer readable storage medium

Number: US20150007178A1
Assignee: Toshiba Corp

A virtual machines management apparatus includes a virtual machine controller, a history storage, and a planning module. The virtual machine controller is configured to migrate virtual machines between plural physical servers. The history storage is configured to store, for each set of first virtual machines that were migrated to a same migration destination physical server parallel in time among the virtual machines migrated, history information. The planning module is configured to determine as to whether it is possible to start migrating a planning target virtual machine to a candidate migration destination physical server at a candidate migration start time based on a residual resource amount of the candidate migration destination physical server, a resource consumption of the planning target virtual machine, a sum of resource consumptions of migration-scheduled virtual machines, and the history information.
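A small Python sketch of the feasibility decision the planning module makes, under assumed inputs (a single scalar resource, a history-derived cap on parallel migrations to the same destination); field names such as residual and history_max_parallel are invented.

    def can_start_migration(residual, target_vm, scheduled_vms, history_max_parallel):
        # Demand = the planning-target VM plus all migrations already scheduled
        # to the same candidate destination at the candidate start time.
        demand = target_vm["resources"] + sum(vm["resources"] for vm in scheduled_vms)
        within_capacity = demand <= residual
        within_history = len(scheduled_vms) + 1 <= history_max_parallel
        return within_capacity and within_history

    print(can_start_migration(residual=16,
                              target_vm={"resources": 4},
                              scheduled_vms=[{"resources": 6}, {"resources": 4}],
                              history_max_parallel=3))   # True: 14 <= 16 and 3 <= 3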

More
04-01-2018 publication date

SECURE DOMAIN MANAGER

Number: US20180007023A1
Assignee: Intel Corporation

Particular embodiments described herein provide for an electronic device that can be configured to determine that a secure domain has been created on a device, where keys are required to access the secure domain, obtain the keys that are required to access the secure domain from a network element, and encrypt the keys and store the encrypted keys on the device. In an example, only the secure domain can decrypt the encrypted keys and the device is a virtual machine.
Claims (excerpt):
1. At least one machine readable medium comprising one or more instructions that when executed by at least one processor, cause the at least one processor to: determine that a secure domain has been created on a device in a cloud network, wherein keys are required to access the secure domain; obtain the keys that are required to access the secure domain from a network element; and encrypt the keys and store the encrypted keys on the device.
2. The at least one machine readable medium of claim 1, wherein only the secure domain can decrypt the encrypted keys.
3. The at least one machine readable medium of claim 1, wherein the device is a virtual machine.
4. The at least one machine readable medium of claim 3, further comprising one or more instructions that when executed by the at least one processor, cause the at least one processor to: migrate the secure domain to a target device.
5. The at least one machine readable medium of claim 4, wherein the target device is verified by a platform manager before it can receive the secure domain.
6. The at least one machine readable medium of claim 4, wherein a secure channel of communication is created between the device and the target device.
7. The at least one computer-readable medium of claim 4, wherein speculative copies of unmodified evolved packet core pages are sent to the target device before suspension of the secure domain on the secure device.
8. The at least one machine readable medium of claim 7, wherein the virtual machine that includes the …
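A hedged Python sketch of "obtain the keys, encrypt them, store only the encrypted form", using the cryptography package's Fernet cipher as a stand-in; the patent does not prescribe a cipher, and the wrapping-key handling and file name here are purely illustrative.

    from cryptography.fernet import Fernet

    def fetch_domain_keys_from_network_element():
        return b"example-secure-domain-key-material"   # stand-in for the real fetch

    wrapping_key = Fernet.generate_key()   # in practice held only by the secure domain
    domain_keys = fetch_domain_keys_from_network_element()

    encrypted = Fernet(wrapping_key).encrypt(domain_keys)
    with open("secure_domain_keys.bin", "wb") as f:     # store only the encrypted keys
        f.write(encrypted)

    # Only a holder of wrapping_key (here standing in for the secure domain)
    # can recover the key material.
    assert Fernet(wrapping_key).decrypt(encrypted) == domain_keys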

More
20-01-2022 publication date

PLATFORM HEALTH ENGINE IN INFRASTRUCTURE PROCESSING UNIT

Number: US20220019461A1
Assignee:

A platform health engine for autonomous self-healing in platforms served by an Infrastructure Processing Unit (IPU), including: an analysis processor configured to apply analytics to telemetry data received from a telemetry agent of a monitored platform managed by the IPU, and to generate relevant platform health data; a prediction processor configured to predict, based on the relevant platform health data, a future health status of the monitored platform; and a dispatch processor configured to dispatch a workload of the monitored platform to another platform if the predicted future health status of the monitored platform is failure.
Claims (excerpt):
1. A platform health engine for autonomous self-healing in platforms served by an Infrastructure Processing Unit (IPU), comprising: an analysis processor configured to apply analytics to telemetry data received from a telemetry agent of a monitored platform managed by the IPU, and to generate relevant platform health data; a prediction processor configured to predict, based on the relevant platform health data, a future health status of the monitored platform; and a dispatch processor configured to dispatch a workload of the monitored platform to another platform if the predicted future health status of the monitored platform is failure.
2. The platform health engine of claim 1, further comprising: a collection processor configured to collect, and provide to the analysis processor, the telemetry data from the telemetry agent of the monitored platform.
3. The platform health engine of claim 2, wherein if the predicted future health status of the monitored platform is unclear or failure, the prediction processor is configured to request the collection processor to provide the analysis processor with additional telemetry data of the monitored platform.
4. The platform health engine of claim 3, wherein the prediction processor is configured to predict, more accurately based on the additional telemetry data, …
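The analyse/predict/dispatch flow can be sketched in a few lines of Python; the thresholds, telemetry fields and failure heuristic below are invented placeholders.

    def analyse(telemetry):
        # Keep only signals relevant to platform health (here: temperature, error count).
        return {"temp": telemetry["temp_c"], "errors": telemetry["corrected_errors"]}

    def predict(health):
        if health["temp"] > 95 or health["errors"] > 1000:
            return "failure"
        if health["temp"] > 85:
            return "unclear"          # would trigger collection of additional telemetry
        return "healthy"

    def dispatch_if_needed(platform, status, platforms):
        if status == "failure":
            target = next(p for p in platforms if p != platform)
            print(f"dispatching workload of {platform} to {target}")

    telemetry = {"temp_c": 97, "corrected_errors": 12}
    status = predict(analyse(telemetry))
    dispatch_if_needed("platform-a", status, ["platform-a", "platform-b"])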

More
20-01-2022 publication date

GRAPHICS PROCESSORS

Number: US20220020108A1
Author: Uhrenholt Olof Henrik
Assignee: ARM LIMITED

To suspend the processing for a group of one or more execution threads currently executing a shader program for an output being generated by a graphics processor, the issuing of shader program instructions for execution by the group of one or more execution threads is stopped, and any outstanding register-content affecting transactions for the group of one or more execution threads are allowed to complete. Once all outstanding register-content affecting transactions for the group of one or more execution threads have completed, the content of the registers associated with the threads of the group of one or more execution threads, and a set of state information for the group of one or more execution threads, including at least an indication of the last instruction in the shader program that was executed for the threads of the group of one or more execution threads, are stored to memory.
Claims (excerpt):
1. A method of operating a data processor that includes a programmable execution unit operable to execute programs, and in which, when executing a program, the programmable execution unit executes the program for respective groups of one or more execution threads, each execution thread in a group of execution threads corresponding to a respective work item of an output being generated, and each execution thread having an associated set of registers for storing data for the execution thread, the method comprising: in response to a command to suspend the processing of an output being generated by the data processor: stopping the issuing of program instructions for execution by the group of one or more execution threads; waiting for any outstanding transactions that affect the content of the registers associated with the threads of the group of one or more execution threads to complete; and storing to memory: the content of the registers associated with the threads of the group of one or more execution threads; and a set of …
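A Python sketch of the suspend sequence (stop issue, drain outstanding register-affecting transactions, save registers and the last executed instruction); the ThreadGroup class and its fields are invented stand-ins for the hardware state.

    class ThreadGroup:
        def __init__(self, registers):
            self.issue_enabled = True
            self.outstanding_transactions = 2   # e.g. loads still in flight
            self.registers = registers
            self.last_executed_instruction = 17

        def drain(self):
            while self.outstanding_transactions:
                self.outstanding_transactions -= 1   # stand-in for waiting on hardware

    def suspend(group, memory):
        group.issue_enabled = False   # stop issuing shader program instructions
        group.drain()                 # let register-content affecting transactions finish
        memory["registers"] = list(group.registers)
        memory["state"] = {"last_instruction": group.last_executed_instruction}

    saved = {}
    suspend(ThreadGroup(registers=[1, 2, 3, 4]), saved)
    print(saved)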

More
12-01-2017 publication date

COOPERATIVE THREAD ARRAY GRANULARITY CONTEXT SWITCH DURING TRAP HANDLING

Number: US20170010914A1
Assignee:

Techniques are provided for restoring threads within a processing core. The techniques include, for a first thread group included in a plurality of thread groups, executing a context restore routine to restore from a memory a first portion of a context associated with the first thread group, determining whether the first thread group completed an assigned function, and, if the first thread group completed the assigned function, then exiting the context restore routine, or if the first thread group did not complete the assigned function, then executing one or more operations associated with a trap handler routine.
Claims (excerpt):
1. A method for restoring threads within a processing core, the method comprising: for a first thread group included in a plurality of thread groups, executing a context restore routine to restore from a memory a first portion of a context associated with the first thread group; determining whether the first thread group completed an assigned function; and if the first thread group completed the assigned function, then exiting the context restore routine, or if the first thread group did not complete the assigned function, then executing one or more operations associated with a trap handler routine.
2. The method of claim 1, further comprising: locating a first entry associated with the first thread group in a data structure that includes trap information; and updating an identifier within the first entry to identify the current operation as a context restore operation.
3. The method of claim 1, wherein executing one or more operations associated with a trap handler routine comprises: determining that a context restore operation is in process, and restoring from the memory a second portion of the context associated with the first thread group.
4. The method of claim 3, wherein determining that a context restore operation is in process comprises: retrieving an entry associated with the first thread group from a data structure that includes trap information; …
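A Python sketch of the restore-or-trap decision, assuming an invented layout for the saved context and the trap-information table.

    def context_restore(group_id, saved_contexts, trap_table):
        context = saved_contexts[group_id]             # restore first portion of the context
        trap_table[group_id]["operation"] = "context_restore"
        if context["completed"]:
            return "exit restore routine"              # assigned function already finished
        return trap_handler(group_id, context)

    def trap_handler(group_id, context):
        # A restore is in progress, so bring back the remaining portion of the context.
        context["registers"] = context.pop("spilled_registers", [])
        return f"resumed thread group {group_id} in trap handler"

    saved = {0: {"completed": False, "spilled_registers": [7, 8, 9]}}
    print(context_restore(0, saved, trap_table={0: {}}))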

More
14-01-2016 publication date

INTELLIGENT APPLICATION BACK STACK MANAGEMENT

Number: US20160011904A1
Assignee:

Intelligent application back stack management may include generating a first back stack for activities of an application that have been executed by a device that executes the application. The first back stack may include a back stack size limit. A further back stack may be generated for selected ones of the activities of the application if a total number of the activities of the application and further activities of the application exceeds the back stack size limit. The first back stack may be an in-memory back stack for the device that executes the application, and the further back stack may include an external on-device back stack for the device that executes the application and/or a Cloud storage based back stack. Intelligent application back stack management may further include regenerating an activity of the selected ones of the activities that is pulled from the further back stack.
Claims (excerpt):
1. An intelligent application back stack management system comprising: at least one processor; and an application back stack generator, executed by the at least one processor, to generate: a first back stack for activities of an application that have been executed by a device that executes the application, wherein the first back stack includes a back stack size limit, and at least one further back stack for selected ones of the activities of the application if a total number of the activities of the application and further activities of the application that have been executed by the device that executes the application exceeds the back stack size limit, wherein the first back stack is an in-memory back stack for the device that executes the application, and the at least one further back stack is at least one of an external on-device back stack for the device that executes the application and a Cloud storage based back stack.
2. The intelligent application back stack management system according to claim 1, further comprising: an application back stack controller, executed by the …
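A Python sketch of a bounded in-memory back stack that spills older activities to a further store and regenerates an activity pulled from it; the class and method names are invented.

    class BackStackManager:
        def __init__(self, size_limit):
            self.size_limit = size_limit
            self.in_memory = []      # first back stack
            self.overflow = []       # further back stack (on-device file / cloud in practice)

        def push(self, activity):
            self.in_memory.append(activity)
            if len(self.in_memory) > self.size_limit:
                self.overflow.append(self.in_memory.pop(0))   # spill the oldest activity

        def back(self):
            if self.in_memory:
                return self.in_memory.pop()
            if self.overflow:
                activity = self.overflow.pop()
                return f"regenerated:{activity}"   # activity is rebuilt, not just resumed
            return None

    stack = BackStackManager(size_limit=2)
    for name in ["home", "list", "detail", "edit"]:
        stack.push(name)
    print(stack.back(), stack.back(), stack.back())   # edit detail regenerated:list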

More
14-01-2016 publication date

Computing session workload scheduling and management of parent-child tasks

Number: US20160011906A1
Assignee: International Business Machines Corp

A single workload scheduler schedules sessions and tasks having a tree structure to resources, wherein the single workload scheduler has scheduling control of the resources and the tasks of the parent-child workload sessions and tasks. The single workload scheduler receives a request to schedule a child session created by a scheduled parent task that when executed results in a child task; the scheduled parent task is dependent on a result of the child task. The single workload scheduler receives a message from the scheduled parent task yielding a resource based on the resource not being used by the scheduled parent task, schedules tasks to backfill the resource, and returns the resource yielded by the scheduled parent task to the scheduled parent task based on receiving a resume request from the scheduled parent task or determining dependencies of the scheduled parent task have been met.
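A Python sketch of the yield/backfill/resume interaction, with an invented Scheduler class; the patent's single workload scheduler for parent-child sessions is only approximated here.

    class Scheduler:
        def __init__(self):
            self.backlog = ["backfill-task"]

        def yield_slot(self, parent):
            # The parent is blocked on its child session, so its resource can be reused.
            print(f"{parent} yields its resource")
            if self.backlog:
                print(f"backfilling resource with {self.backlog.pop(0)}")

        def resume(self, parent, child_result):
            # Called when the parent asks to resume or its dependencies have been met.
            print(f"returning resource to {parent}; child result = {child_result}")

    sched = Scheduler()
    sched.yield_slot("parent-task-1")     # parent not using the resource right now
    sched.resume("parent-task-1", 3.14)   # child session finished; dependency met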

More
14-01-2016 publication date

Safe consolidation and migration

Number: US20160011913A1
Assignee:

A method, apparatus and computer program product for program migration, the method comprising: receiving a target host and an application to be migrated to a target host; estimating a target load of the application to be migrated; generating a synthetic application which simulates a simulated load, the simulated load being smaller than the target load; loading the synthetic application to the target host; monitoring behavior of the target host, the synthetic application, or a second application executed thereon; subject to the behavior being satisfactory: if the simulated load is smaller than the target load, then repeating said generating, said loading and said monitoring, wherein said loading is repeated with increased load; and otherwise migrating the application to the target.
Claims (excerpt):
1. A computer-implemented method performed by a computerized device, comprising: receiving a target host and an application to be migrated to a target host; estimating a target load of the application to be migrated; generating a synthetic application which simulates a simulated load, the simulated load being smaller than the target load; loading the synthetic application to the target host; monitoring behavior of the target host, the synthetic application, or a second application executed thereon; subject to the behavior being satisfactory: if the simulated load is smaller than the target load, then repeating said generating, said loading and said monitoring, wherein said loading is repeated with increased load; and otherwise migrating the application to the target.
2. The computer-implemented method of claim 1, wherein said monitoring comprises determining whether applications executed by the host prior to the synthetic applications receive sufficient resources or their performance is not degraded.
3. The computer-implemented method of claim 1, wherein said monitoring comprises determining a percentage of the time at which the target host is in idle state, or a percentage of the …
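A Python sketch of the staged ramp-up, assuming an invented behaviour check and step size: the synthetic load is increased toward the estimated target load and the real migration happens only once the full load behaved satisfactorily.

    def behaviour_satisfactory(host, load):
        return host["idle_fraction_at"](load) > 0.2   # e.g. host keeps >20% idle time

    def safe_migrate(app, host, target_load, step=0.25):
        load = step * target_load
        while True:
            print(f"running synthetic app at load {load:.2f}")
            if not behaviour_satisfactory(host, load):
                return False              # abort: co-located applications would suffer
            if load >= target_load:
                print(f"migrating {app} to target host")
                return True
            load = min(target_load, load + step * target_load)

    host = {"idle_fraction_at": lambda load: 1.0 - load / 2.0}
    safe_migrate("billing-service", host, target_load=1.0)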

More
11-01-2018 publication date

Unmanned Ground and Aerial Vehicle Attachment System

Number: US20180011751A1
Author: Klein Matias
Assignee:

Techniques are disclosed for hot swapping one or more module devices on a single host device. A module device can perform module-specific tasks that are defined in its module software driver. Using one or more application programming interfaces, the host device communicates with the module device's module software driver to allow the module device to perform module-specific tasks while removably connected to the host device.
Claims (excerpt):
1. One or more non-transitory computer-readable media storing computer-executable instructions that upon execution cause one or more processors to perform acts comprising: connecting a module device to a host device to establish a module-host connection; detecting said module-host connection via a host bus connection polling; identifying a module device identifier associated with said module device; retrieving a module software driver correlating to said module device identifier, wherein said module software driver defines a module-specific task corresponding to said module device; and generating a message comprising an API call from said module device to said host device to enable said module device to perform said module-specific task while connected to said host device.
2. The one or more non-transitory computer-readable media of claim 1, wherein the acts further comprise: generating a message comprising a second API call from said module device to cloud services to access one or more databases hosted on said cloud services.
3. The one or more non-transitory computer-readable media of claim 1, wherein said module device identifier is stored in a module database hosted on said cloud services.
4. The one or more non-transitory computer-readable media of claim 1, wherein the acts further comprise: registering said module device in a module database hosted on cloud services.
5. The one or more non-transitory computer-readable media of claim 1, wherein the acts further comprise: suspending said module device from performing said module-specific task until …

More
14-01-2021 publication date

METHOD AND SYSTEM FOR CONSTRUCTING LIGHTWEIGHT CONTAINER-BASED USER ENVIRONMENT (CUE), AND MEDIUM

Number: US20210011740A1

A method and system for constructing a lightweight container-based user environment (CUE), and a medium, the method including: preparing, by a main process, for communication, cloning a child process, and then becoming a parent process; elevating, by the child process, permission, executing namespace isolation, and cloning a grandchild process, and setting, by the parent process, cgroups for the grandchild process; and setting, by the grandchild process, permission of the grandchild process to execute a command and a file, preparing an overlay file system, setting a hostname, restricting permission, and executing an initialization script to start the container. Multiple users are allowed to customize their own environments, enabling the users to customize their environments more flexibly, achieving privacy isolation, and making it easier and more secure to update a system. Therefore, it is particularly applicable to a high-performance computing cluster.
Claims (excerpt):
1. A method for constructing a lightweight container-based user environment (CUE), wherein the method comprises the following steps: (1) preparing, by a main process used to execute user environment construction, a socket pair for interprocess communication, calling a clone function clone( ) to obtain a child process, and serving the main process as a parent process of the child process; (2) elevating permission of the child process to root, executing namespace isolation, calling the clone function clone( ) to obtain a grandchild process, and sending a process identification PID of the grandchild process to the parent process, and setting, by the parent process, cgroups for the grandchild process according to the process identification PID; and (3) setting, by the grandchild process, permission of the grandchild process to execute a command and a file, then as an independent process, sequentially preparing an overlay file system of the grandchild process, setting a hostname, and limiting permission by using a …

More
10-01-2019 publication date

Virtualization of Multiple Coprocessors

Number: US20190012197A1
Assignee: Bitfusion.io, Inc.

In a data processing system running at least one application on a hardware platform that includes at least one processor and a plurality of coprocessors, at least one kernel dispatched by an application is intercepted by an intermediate software layer running logically between the application and the system software. Compute functions are determined within kernel(s), and data dependencies are determined among the compute functions. The compute functions are dispatched to selected ones of the coprocessors based at least in part on the determined data dependencies and kernel results are returned to the application that dispatched the respective kernel.
Claims (excerpt):
1. A data processing method comprising: running at least one application, via system software, on a hardware platform that includes at least one processor and a plurality of coprocessors; intercepting, by an intermediate software layer running logically between the application and the system software, at least one kernel, comprising a plurality of kernel tasks, dispatched within a data and command stream issued by the application, each said kernel corresponding to instructions to an intended one of the coprocessors for execution on that intended coprocessor; determining compute functions within the at least one kernel; determining data dependencies among the compute functions; dispatching the compute functions to selected ones of the coprocessors based at least in part on the determined data dependencies; and returning kernel results to the at least one application that dispatched the respective kernel.
2. The method of claim 1, in which at least two kernels are intercepted, each of the plurality of kernel tasks being defined by a respective one of the at least two kernels, whereby determining data dependencies among the compute functions is performed at kernel level granularity.
3. The method of claim 1, in which the plurality of kernel tasks comprises at least two sub-tasks defined within a single one of …
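A Python sketch of dependency-aware dispatch: compute functions are released to coprocessors only once their inputs are complete; the dependency graph and the round-robin device choice are illustrative, not the intermediate layer's actual policy.

    from collections import deque

    def dispatch(functions, deps, coprocessors):
        """functions: list of names; deps: {fn: set of fns it depends on}."""
        done, order = set(), []
        ready = deque(f for f in functions if not deps.get(f))
        while ready:
            fn = ready.popleft()
            device = coprocessors[len(order) % len(coprocessors)]   # round-robin choice
            order.append((fn, device))
            done.add(fn)
            for g in functions:
                if g not in done and g not in ready and deps.get(g, set()) <= done:
                    ready.append(g)                                 # all inputs now ready
        return order

    plan = dispatch(["load", "fft", "scale", "store"],
                    {"fft": {"load"}, "scale": {"fft"}, "store": {"scale"}},
                    ["gpu0", "gpu1"])
    print(plan)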

More
10-01-2019 publication date

VIRTUAL MACHINE MIGRATION WITHIN A HYBRID CLOUD SYSTEM

Number: US20190012199A1
Assignee:

An example method of migrating a virtualized computing instance between source and destination virtualized computing systems includes executing a first migration workflow in the source virtualized computing system, where a host computer executing the virtualized computing instance is a source host in the first migration workflow and a first mobility agent simulates a destination host in the first migration workflow. The method further includes executing a second migration workflow in the destination virtualized computing system, where a second mobility agent in the destination virtualized computing system simulates a source host in the second migration workflow and a host computer in the destination virtualized computing system is a destination host in the second migration workflow. The method further includes transferring, during execution of the first and second migration workflows, migration data including the virtualized computing instance between the first mobility agent and the second mobility agent over a network.
Claims (excerpt):
1. A method of migrating a virtualized computing instance between source and destination virtualized computing systems, comprising: determining a set of hardware features for the virtualized computing instance to generate migration compatibility data; selecting a destination host computer at the destination virtualized computing system based on the set of hardware features in the migration compatibility data; and masking at least one hardware feature of the destination host computer that is not present in the migration compatibility data from being visible to the virtualized computing instance.
2. The method of claim 1, further comprising: adding the migration compatibility data to a configuration of the virtualized computing instance; and sending the configuration of the virtualized computing instance from the source virtualized computing system to the destination virtualized computing system; wherein the destination host computer includes a set of …
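A Python sketch of the compatibility-masking idea in the claims, using feature sets: the destination is chosen so that it offers the instance's required features, and anything outside the compatibility data is hidden from the guest. Host names and feature strings are invented.

    def select_destination(hosts, compatibility):
        # A destination must offer at least the features in the compatibility data.
        return next(name for name, features in hosts.items() if compatibility <= features)

    def visible_features(host_features, compatibility):
        # Mask every host feature not present in the compatibility data.
        return host_features & compatibility

    hosts = {"hostA": {"sse4", "avx"}, "hostB": {"sse4", "avx", "avx512"}}
    compat = {"sse4", "avx"}
    dest = select_destination(hosts, compat)
    print(dest, visible_features(hosts[dest], compat))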

More
10-01-2019 publication date

Platform Auto-Configuration and Tuning

Number: US20190012200A1
Assignee: Intel Corporation

A computing platform, including: an execution unit to execute a program, the program including a first phase and a second phase; and a quick response module (QRM) to: receive a program phase signature for the first phase; store the program phase signature in a pattern match action (PMA) table; identify entry of the program into the first phase via the PMA; and apply an optimization to the computing platform.
Claims (excerpt):
1. A computing platform, comprising: an execution unit to execute a program, the program comprising a first phase and a second phase; and a quick response module (QRM) to: receive a program phase signature for the first phase; store the program phase signature in a pattern match action (PMA) table; identify entry of the program into the first phase via the PMA; and apply an optimization to the computing platform.
2. The computing platform of claim 1, wherein the QRM is further to: receive an optimization structure for the first phase; and store the optimization structure for the first phase, wherein applying the optimization comprises reading the optimization structure for the first phase.
3. The computing platform of claim 2, wherein the optimization structure for the first phase comprises a register having flags for turning features on or off.
4. The computing platform of claim 1, wherein the QRM is embodied at least partly in microcode.
5. The computing platform of claim 1, wherein the QRM is embodied at least partly in hardware.
6. The computing platform of claim 1, wherein the PMA table comprises an array of registers.
7. The computing platform of claim 6, wherein the array of registers includes one register to hold a signature for each of the first phase and the second phase.
8. The computing platform of claim 7, wherein the signature comprises a program counter.
9. The computing platform of claim 1, wherein the PMA table is configured to be stored in cache.
10. The computing platform of claim 1, wherein the PMA table is configured to be stored in …
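A Python sketch of a pattern-match-action lookup keyed by a phase signature (here simply a program counter value) that applies a stored optimization structure of on/off feature flags; all signatures and flags are invented.

    pma_table = {
        0x4010: {"prefetcher": True,  "turbo": True},    # signature of phase 1 -> actions
        0x7fa0: {"prefetcher": False, "turbo": False},   # signature of phase 2 -> actions
    }

    def on_program_counter(pc, platform_flags):
        optimization = pma_table.get(pc)
        if optimization is not None:                 # program just entered a known phase
            platform_flags.update(optimization)      # apply the stored optimization structure
        return platform_flags

    flags = {"prefetcher": False, "turbo": False}
    print(on_program_counter(0x4010, flags))   # {'prefetcher': True, 'turbo': True}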

More
14-01-2021 publication date

DEVICE SUCH AS A CONNECTED OBJECT PROVIDED WITH MEANS FOR CHECKING THE EXECUTION OF A PROGRAM EXECUTED BY THE DEVICE

Number: US20210011756A1
Assignee:

The present invention relates to a device () such as a connected object comprising a first electronic circuit () comprising: …
Claims (excerpt):
2. A device () according to claim 1, wherein the steps implemented automatically and autonomously by the second processing unit further comprise a program suspension command, the integrity check and/or compliance step being implemented while the program is suspended.
3. A device () according to claim 2, wherein the suspension command comprises the placement of a stop point at a predetermined location in the program, so as to suspend the program at the predetermined location, or the placement of an observation point on a variable of the program, so as to suspend the program when the variable is modified.
4. A device () according to claim 3, wherein the steps implemented automatically and autonomously by the second processing unit () comprise a step consisting of verifying whether a condition independent of the way in which the program is being executed has been met, such as verifying whether a predetermined period of time has elapsed since a previous start of the program, a previous resumption of the program, or a previous powering-on of the device (), the suspension command step being implemented when the condition is met.
5. A device () according to claim 1, wherein the steps implemented automatically and autonomously by the second processing unit () comprise the command for the first processing unit () to resume the program when the program or the data manipulated by the program is revealed not to have been compromised during the check step, and where this command is not implemented when the program or the data manipulated by the program is revealed to have been compromised during the check step.
6. A device () according to claim 1, implemented automatically and autonomously by the second processing unit (), comprising a command for a definitive program …
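Loosely sketching, in Python with hashlib, the kind of check performed while the program is suspended (compare the program's current fingerprint against a reference taken at a trusted point, then resume or halt); the breakpoint/watchpoint mechanics are only indicated as comments and all values are invented.

    import hashlib

    def fingerprint(program_bytes):
        return hashlib.sha256(program_bytes).hexdigest()

    reference = fingerprint(b"\x90\x90\xC3")   # reference taken when the program was trusted

    def periodic_check(current_program_bytes):
        # ...the program is suspended at a stop point / observation point here...
        if fingerprint(current_program_bytes) == reference:
            return "resume program"            # not compromised: command a resume
        return "halt program definitively"     # compromised: do not resume

    print(periodic_check(b"\x90\x90\xC3"))
    print(periodic_check(b"\x90\x90\xCC"))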

More
14-01-2021 publication date

INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING APPARATUS

Number: US20210011758A1
Author: YAMAZAKI Masanori
Assignee: FUJITSU LIMITED

A non-transitory computer-readable recording medium has stored therein a program that causes a first apparatus to execute a process, the process including: when a load of a first resource existing in a first group is equal to or more than a first threshold value, searching the first group for a first destination resource that is a migration destination of a first task performed using the first resource, the first apparatus being included in the first group; when the first destination resource is not found in the first group, selecting a second group based on first information; transmitting a first request to search for the first destination resource to a second apparatus included in the second group; and when a second resource that is the first destination resource is found in the second group, updating the first information based on second information that is transmitted from the second apparatus.
Claims (excerpt):
1. A non-transitory computer-readable recording medium having stored therein a program that causes a first apparatus to execute a process, the process comprising: when a load of a first resource existing in a first group is equal to or more than a first threshold value, searching the first group for a first destination resource that is a migration destination of a first task performed using the first resource, the first group being included in a system that includes a plurality of groups communicably coupled to each other, each of the plurality of groups including a plurality of resources and an apparatus for managing the plurality of resources, the first apparatus being included in the first group; when the first destination resource is not found in the first group, selecting a second group based on first information that identifies a group including a resource having a load measured to be less than a second threshold value, by referring to a degree of separation from the first group; transmitting a first request to search for the first destination resource to a second …
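A Python sketch of the two-level search (local group first, then the nearest group believed to have spare capacity, by degree of separation); thresholds and data shapes are invented.

    def find_destination(local_resources, remote_groups, high=0.8, low=0.5):
        # 1) Local search: any resource in this group below the "low" threshold?
        for name, load in local_resources.items():
            if load < low:
                return ("local", name)
        # 2) Remote search: pick the nearest group recorded as having spare capacity.
        candidates = [g for g in remote_groups if g["has_spare"]]
        if not candidates:
            return None
        target = min(candidates, key=lambda g: g["separation"])
        return ("forward request to", target["name"])

    print(find_destination({"res-a": 0.92, "res-b": 0.85},
                           [{"name": "group-2", "separation": 1, "has_spare": True},
                            {"name": "group-3", "separation": 2, "has_spare": True}]))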

More
09-01-2020 publication date

MIGRATION MANAGEMENT METHOD, MIGRATION SYSTEM, AND STORAGE MEDIUM

Number: US20200012516A1
Assignee: FUJITSU LIMITED

A migration management method includes referring to a performance deterioration rate of a specific virtual server when utilization of a virtual server other than the specific virtual server in virtual servers that work on a physical server is changed in a stepwise manner, and calculating a first index value relating to a load state of the physical server before the performance deterioration rate exceeds a threshold based on a number of virtual CPUs allocated to each of the virtual servers and utilization of the virtual CPUs; calculating a second index value relating to the load state based on the number of virtual CPUs and the utilization while the specific virtual server is activated on the physical server; and conducting migration to another physical server for a virtual server other than the specific virtual server in the virtual servers when the calculated second index value exceeds the calculated first index value.
Claims (excerpt):
1. A migration management method executed by a computer, the migration management method comprising: referring to a performance deterioration rate of a specific virtual server when utilization of a virtual server other than the specific virtual server in virtual servers that work on a physical server is changed in a stepwise manner, and calculating a first index value relating to a load state of the physical server before the performance deterioration rate of the specific virtual server exceeds a threshold based on a number of virtual CPUs allocated to each of the virtual servers that work on the physical server and utilization of the virtual CPUs; calculating a second index value relating to the load state of the physical server based on the number of virtual CPUs allocated to each of the virtual servers that work on the physical server and the utilization of the virtual CPUs while the specific virtual server is activated on the physical server; and conducting migration to another physical server for a virtual server other than the specific virtual …
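A Python sketch of the index calculation, assuming the index is the sum over virtual servers of virtual-CPU count times utilization; the numbers are invented.

    def load_index(virtual_servers):
        # virtual_servers: list of (vcpu_count, utilization in 0..1)
        return sum(vcpus * util for vcpus, util in virtual_servers)

    first_index = load_index([(4, 0.50), (2, 0.60), (8, 0.30)])   # recorded before deterioration
    second_index = load_index([(4, 0.80), (2, 0.90), (8, 0.55)])  # current load state

    if second_index > first_index:
        print("migrate a virtual server other than the specific one; index",
              round(second_index, 2), ">", round(first_index, 2))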

More
09-01-2020 publication date

COMPUTER SYSTEM INFRASTRUCTURE AND METHOD OF HOSTING AN APPLICATION SOFTWARE

Number: US20200012517A1
Assignee:

A computer system infrastructure includes at least one edge computer system and at least one cloud computer system, wherein the edge computer system is connectable to the cloud computer system, both in the edge computer system and in the cloud computer system a virtual environment for hosting an application software is configured, respectively, the virtual environment of the edge computer system and the virtual environment of the cloud computer system are configured as unified host environments for the application software, respectively, the application software is provided within one of the virtual environments of the edge computer system and the cloud computer system, and the edge computer system and the cloud computer system are configured to transfer the application software between the two virtual environments of the edge computer system and the cloud computer system.
Claims (excerpt):
1. A computer system infrastructure comprising at least one edge computer system and at least one cloud computer system, wherein the edge computer system is connectable to the cloud computer system, both in the edge computer system and in the cloud computer system a virtual environment for hosting an application software is configured, respectively, the virtual environment of the edge computer system and the virtual environment of the cloud computer system are configured as unified host environments for the application software, respectively, the application software is provided within one of the virtual environments of the edge computer system and the cloud computer system, and the edge computer system and the cloud computer system are configured to transfer the application software between the two virtual environments of the edge computer system and the cloud computer system.
2. The computer system infrastructure according to claim 1, wherein, in the edge computer system, a first resource control component is configured to determine resource information of the edge computer system …

More
09-01-2020 publication date

UNIFIED EVENTS FRAMEWORK

Number: US20200012541A1
Assignee:

Methods, systems, and computer-readable storage media for detecting and managing events from data of an Internet-of-Things (IoT) network, and actions can include receiving a first call from a first application, the first call including timeseries data from one or more IoT devices in a first IoT network, retrieving a rule set for processing the timeseries data, and determining that an anomaly is represented in the timeseries data based on the rule set, and in response, generating an event, the event having a configuration that is customized by an enterprise associated with the first application, executing an event workflow to transition the event between states, and transmitting an event response to the first application.
Claims (excerpt):
1. A computer-implemented method for detecting and managing events from data of an Internet-of-Things (IoT) network, the method being executed by one or more processors and comprising: storing, by a unified events framework, one or more event configurations, each event configuration being customized by an enterprise associated with a first application, being specific to a respective event and comprising an event type, one or more statuses defining respective states of the respective event, a severity and a code associated with one of messaging and notification of the respective event; receiving, by the unified events framework, a first call from the first application, the first call comprising timeseries data from one or more IoT devices in a first IoT network; retrieving, by the unified events framework, a rule set for processing the timeseries data; and determining, by the unified events framework, that an anomaly is represented in the timeseries data based on the rule set, and in response: generating an event based on an event configuration that defines the event, executing an event workflow to transition the event between states, and transmitting an event response to the first application.
2. The method of claim 1, wherein the event is …
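A Python sketch of rule-based anomaly detection over a timeseries followed by event creation and a simple state transition; the rule, event fields and states are invented, not the framework's configuration schema.

    def detect_anomaly(timeseries, threshold):
        return any(value > threshold for _, value in timeseries)

    def create_event(config):
        return {"type": config["type"], "severity": config["severity"], "status": "OPEN"}

    def run_workflow(event):
        event["status"] = "IN_PROCESS"   # transition between configured states
        return {"eventId": 1, "status": event["status"]}

    readings = [("2021-01-01T10:00", 21.5), ("2021-01-01T10:05", 98.2)]
    if detect_anomaly(readings, threshold=90.0):
        event = create_event({"type": "OverTemperature", "severity": "HIGH"})
        print(run_workflow(event))       # event response returned to the calling application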

More
14-01-2021 publication date

METHOD FOR BOOKMARK FUNCTION APPLICABLE IN MESSENGER APPLICATION AND ELECTRONIC APPARATUS THEREOF

Number: US20210014180A1
Author: Lee Ki-Man
Assignee:

A method and apparatus for a bookmark function is applicable in a messenger application. The method includes checking whether another application is selected during the use of a messenger application, storing a predetermined dialog location in the messenger application when the other application is selected, backgrounding the messenger application and foregrounding the other application, and returning to the predetermined dialog location when the messenger application is foregrounded.
Claims (excerpt):
1. A method of a first electronic device, the method comprising: displaying, on a touchscreen, a chat history comprising at least one first chat message on a first user interface (UI) for a messenger application; displaying, on the touchscreen, in response to detecting an input for executing another application, a second UI for the other application in place of the first UI for the messenger application such that the messenger application is operated in a background state; receiving at least one second chat message from a second electronic device while the messenger application is operating in the background state; and in response to detecting an input for re-executing the messenger application operated in the background state, displaying, on the touchscreen, the first UI including the chat history that separately displays the at least one second chat message received by the messenger application in the background state and the at least one first chat message previously displayed in the chat history, wherein the first UI is displayed based on a pre-determined time-based location.
2. The method of claim 1, further comprising displaying a second object on the first UI when returning to the messenger application from the other application.
3. The method of claim 2, wherein the pre-determined location is identified before displaying the second UI for the other application.
4. The method of claim 1, further comprising displaying at least one of a message lastly received among the at least one …
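A Python sketch of the bookmark behaviour (save the dialog position on backgrounding, keep messages received meanwhile separate, restore on foregrounding); the class and field names are invented.

    class MessengerSession:
        def __init__(self, history):
            self.history = history
            self.bookmark = None
            self.unread = []

        def to_background(self, current_position):
            self.bookmark = current_position        # store the dialog location

        def receive(self, message):
            self.unread.append(message)             # arrived while in the background

        def to_foreground(self):
            return {"resume_at": self.bookmark, "new_messages": list(self.unread)}

    chat = MessengerSession(history=["hi", "are you there?"])
    chat.to_background(current_position=1)
    chat.receive("sent while you were away")
    print(chat.to_foreground())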

More
09-01-2020 publication date

SYSTEM AND METHOD FOR EFFICIENT VIRTUALIZATION IN LOSSLESS INTERCONNECTION NETWORKS

Number: US20200014749A1
Assignee:

Systems and methods for supporting efficient virtualization in a lossless interconnection network. An exemplary method can provide one or more switches, including at least a leaf switch, a plurality of host channel adapters, wherein each of the host channel adapters comprises at least one virtual function, at least one virtual switch, and at least one physical function, a plurality of hypervisors, and a plurality of virtual machines, wherein each of the plurality of virtual machines is associated with at least one virtual function. The method can arrange the plurality of host channel adapters with one or more of a virtual switch with prepopulated local identifiers (LIDs) architecture or a virtual switch with dynamic LID assignment architecture. The method can assign each virtual switch with a LID. The method can calculate one or more linear forwarding tables based at least upon the LIDs assigned to each of the virtual switches.
Claims (excerpt):
1. A system for supporting efficient virtualization in a lossless interconnection network, comprising: one or more microprocessors; a first subnet comprising a plurality of switches, wherein each of the plurality of switches is associated with a linear forwarding table (LFT) of a plurality of LFTs, and wherein each switch is assigned a switch tuple of a plurality of switch tuples; wherein a virtual machine performs a live migration within the first subnet, wherein during the live migration, a local identifier (LID) of the virtual machine is updated; and wherein, as a result of the migration of the virtual machine, a set of the plurality of LFTs are updated, the set of the plurality of LFTs being determined at least based upon a comparison of at least two switch tuples of the plurality of switch tuples.
2. The system of claim 1, wherein the live migration of the virtual machine comprises migrating the virtual machine from a first host channel adapter to a second host channel adapter.
3. The system of claim 2, wherein the first host channel …
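A Python sketch contrasting the two addressing schemes named in the abstract: with prepopulated LIDs every virtual function gets a LID up front, while dynamic assignment hands one out only when a virtual machine attaches; all LID values are invented.

    def prepopulated_lids(num_vfs, first_lid):
        # The vSwitch itself plus one LID per virtual function, assigned immediately.
        return {"vswitch": first_lid,
                **{f"vf{i}": first_lid + 1 + i for i in range(num_vfs)}}

    class DynamicLidSwitch:
        def __init__(self, vswitch_lid, free_lids):
            self.table = {"vswitch": vswitch_lid}
            self.free = list(free_lids)

        def attach_vm(self, vm):
            self.table[vm] = self.free.pop(0)   # LID assigned only when a VM needs it
            return self.table[vm]

    print(prepopulated_lids(num_vfs=2, first_lid=10))   # {'vswitch': 10, 'vf0': 11, 'vf1': 12}
    sw = DynamicLidSwitch(vswitch_lid=20, free_lids=[21, 22])
    print(sw.attach_vm("vm-a"))                         # 21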

More