Total found: 14951. Displayed: 200.

10-10-2014 publication date

SYSTEM AND METHOD FOR COORDINATING SIMULTANEOUS EDITS OF SHARED DIGITAL DATA

Number: RU2530249C2
Assignee: УОТЧИТУ, ИНК. (US)

The invention relates to the field of shared digital data systems. The technical result is the ability to coordinate simultaneous commands from a plurality of user computers in an electronic network in order to manage and edit shared data on a plurality of computers. A coordinating device may receive commands for editing the shared digital data from multiple independently operating user computers. The coordinating device may determine that two or more commands from respective user computers are mutually exclusive, redundant or otherwise conflicting. The coordinating device may commit one of the multiple commands to a global command queue and may cancel the other(s). The coordinating device may transmit the global commands to all user computers for local execution in order to realize on them the same shared digital ...
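
A minimal sketch (not the patented implementation) of the coordination idea described above: a coordinating process receives edit commands from independently operating clients, cancels a command that conflicts with one already accepted for the same element, and appends the winner to a global queue that every client replays in the same order. The class and field names (Coordinator, EditCommand, base_version) are illustrative assumptions.

import threading
from dataclasses import dataclass

@dataclass
class EditCommand:
    client_id: int
    element_id: str      # the piece of shared data being edited
    new_value: str
    base_version: int    # version of the element the client last saw

class Coordinator:
    def __init__(self):
        self._lock = threading.Lock()
        self._versions = {}        # element_id -> current version
        self.global_queue = []     # accepted commands, in global order

    def submit(self, cmd: EditCommand) -> bool:
        # Accept the command only if it is based on the current version;
        # otherwise treat it as conflicting/redundant and cancel it.
        with self._lock:
            current = self._versions.get(cmd.element_id, 0)
            if cmd.base_version != current:
                return False                   # cancelled: lost the race
            self._versions[cmd.element_id] = current + 1
            self.global_queue.append(cmd)      # replayed locally on every client
            return True

coord = Coordinator()
print(coord.submit(EditCommand(1, "cell-A1", "42", base_version=0)))   # True
print(coord.submit(EditCommand(2, "cell-A1", "43", base_version=0)))   # False, conflicting edit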

10-12-2013 publication date

LATE LOCK ACQUIRE MECHANISM FOR HARDWARE LOCK ELISION (HLE)

Number: RU2501071C2
Assignee: ИНТЕЛ КОРПОРЕЙШН (US)

The invention relates to computing technology. The technical result consists in ensuring data validity. A late lock acquire apparatus comprises decode logic configured to recognise a lock instruction at the start of a critical section, the lock instruction acquiring a lock for the critical section; execution logic configured to elide at least the part of the lock instruction that acquires the lock for the critical section, to store the lock address and lock value referenced by the lock instruction in a lock record, and to execute the critical section without the lock for the critical section; and late lock acquire logic coupled to the execution logic, the late lock acquire logic being configured to instruct the execution logic to attempt execution of at least the part of the lock instruction so as to acquire the lock for the critical ...
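
Hardware lock elision cannot be reproduced in plain software, but the control flow described above (run the critical section without taking the lock, then fall back to really acquiring it when that does not hold up) can be sketched with software conflict detection standing in for the hardware mechanism. A rough, hedged analogue; every name here is invented for illustration.

import threading

_lock = threading.Lock()     # the lock named by the "lock instruction"
_version = 0                 # stands in for the monitored lock value

def elided_update(update, state, retries=3):
    # Run `update` on a private snapshot without taking the lock; commit only
    # if no writer intervened, otherwise fall back to a late lock acquire.
    global _version
    for _ in range(retries):
        seen = _version                     # record the lock value (cf. the lock record)
        tentative = update(dict(state))     # speculative execution, lock not held
        with _lock:
            if _version == seen:            # no conflict: publish the speculative result
                state.clear()
                state.update(tentative)
                _version += 1
                return state
    with _lock:                             # late lock acquire: run under the real lock
        result = update(dict(state))
        state.clear()
        state.update(result)
        _version += 1
        return state

counter = {"n": 0}
elided_update(lambda s: {**s, "n": s["n"] + 1}, counter)
print(counter)   # {'n': 1}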

06-12-2021 publication date

METHOD AND SYSTEM FOR CYCLIC DISTRIBUTED ASYNCHRONOUS MESSAGE EXCHANGE WITH WEAK SYNCHRONIZATION FOR WORKING WITH LARGE GRAPHS

Number: RU2761136C1

The invention relates to the field of information technology. The technical result consists in increasing the efficiency and speed of message transfer between the computing nodes of a network. The technical result is achieved through steps in which: a) a set of network message-exchange points is defined, where each point is a computing node; b) a graph model of the message exchange between the exchange points is formed, in which the vertices are the exchange points and the edges are the fact that messages are sent; c) one exchange point is designated as the coordinator of the message exchange; d) messages are sent from the coordinator point to at least one exchange point; e) a response is received from the exchange point that received a message at step d), the response containing the identifiers of the adjacent exchange points to which messages will be sent from that exchange point, and that exchange point forms the routes ...
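
A small sketch of the exchange pattern in steps a) to e): the coordinator sends to its neighbours, each reply names the adjacent points that will be messaged next, and the wave continues until no new points are reported. Purely illustrative; the node names and the in-memory "network" are made up.

from collections import deque

# Toy graph: vertices are exchange points, edges record who sends to whom.
graph = {
    "coordinator": ["A", "B"],
    "A": ["C"],
    "B": ["C", "D"],
    "C": [],
    "D": [],
}

def deliver(point):
    # Stand-in for sending a message; the reply lists adjacent points (step e).
    return graph[point]

def run_exchange(start="coordinator"):
    visited = {start}
    frontier = deque(deliver(start))        # step d: coordinator sends the first messages
    while frontier:
        point = frontier.popleft()
        if point in visited:
            continue
        visited.add(point)
        frontier.extend(deliver(point))     # the reply tells us where messages go next
    return visited

print(sorted(run_exchange()))   # ['A', 'B', 'C', 'D', 'coordinator']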

20-05-2016 publication date

USE CASE VERSIONING

Number: RU2014140740A
Assignee:

... 1. A method comprising the steps of: a computer system recording a plurality of use cases, each use case being used with a software system, the version of the software system being the same for each use case, and each use case comprising: a set of input data; and an identification of a parent use case from which the input data in the use case were copied, unless the use case is an original use case, the identification of the parent use case creating a parent-child relationship between the use case and the parent use case; the computer system creating a hierarchy of parent-child relationships among the plurality of use cases; the computer system displaying a subset of the plurality of use cases in response to a search of the plurality of use cases; selecting a smaller subset of the plurality of use cases as a selected model for an environment in which some work is to be performed; planning the performance of the work using the selected model to create a work plan ...

23-04-1986 publication date

Multichannel priority device

Number: SU1226463A1
Assignee:

The invention relates to computing technology and can be used for priority control of access by the processors of a multiprocessor system to a main-memory unit. The aim of the invention is to extend the field of application of the device by enabling it to operate with groups of request sources of different priorities. This is achieved by introducing into each channel a NOR element and a delay element with the corresponding functional connections between them and the known units of the device. All channels of the device can operate in parallel, i.e. service requests simultaneously. 1 fig.

07-07-1986 publication date

Multichannel request-processing device

Number: SU1242954A2
Assignee:

The invention relates to computing technology, in particular to devices for controlling the order of service; it can be used in building various automation and information-measuring devices and is supplementary to USSR author's certificate No. 1075263. The aim of the invention is to extend the functional capabilities. The multichannel request-processing device contains a first counter, a first decoder, a delay element, a group of flip-flops, a first group of AND elements, and a second and a first AND element. What is new is the introduction of a second counter, a second decoder, an OR element and a second group of AND elements, the first and second inputs of each of which are connected to the corresponding outputs, other than the first ones, of the second and first decoders respectively, while the outputs of the AND elements of the second group are connected to the corresponding inputs of the OR element, whose output is connected to the reset input of the first counter, and the input of the second counter is connected to the first output of the first decoder. 1 fig.

23-05-1991 publication date

Priority device

Number: SU1651286A1
Assignee:

The invention relates to computing technology and can be used in digital computing devices and systems for serving several active subscribers in time-sharing mode. The aim of the invention is to extend the field of application of the device by providing a time-sharing mode. The priority device contains three groups of AND elements, a group of OR elements, a group of flip-flops, a group of delay elements, a request switch, five AND elements, a counter, a memory unit, a decoder, two comparison circuits, three registers, three OR elements, three flip-flops and a timing unit, which includes a register, a counter, a divider counter, three flip-flops, a comparison circuit, a delay element, an OR element and two AND elements. The device processes requests in time-sharing mode, and a request accepted for service can be interrupted only after a fixed time, the service time quantum. 2 figs.

23-06-1986 publication date

Multichannel device for priority connection of subscribers to a common bus

Number: SU1239717A1
Assignee:

The invention relates to the field of computing technology and can be applied in multi-machine and multiprocessor computing systems that use a common bus for data exchange. The aim of the invention is to extend the functional capabilities through dynamic determination of request priorities within each channel. The aim is achieved by changing the channel circuitry: two additional flip-flops, a pulse shaper, an OR element, an AND element and a delay unit are introduced into each channel and connected accordingly to the other units and elements of the device. In addition, the delay unit has a circuit implementation specific to this device. 1 dependent claim, 2 figs.

15-03-1987 publication date

Device for servicing group priority requests

Number: SU1297047A1
Assignee:

The invention relates to computing technology and can be used in multichannel systems with priority servicing of subscribers. The aim of the invention is to increase the speed of the device. The device contains two registers, two groups of AND elements, a group of OR elements, a channel-readiness register, a delay element, a group of OR elements and a group of request-selection units, each of which, except the first, contains two groups of AND elements and an encoder, while the first unit consists of a group of AND elements and an encoder. The device allocates free channels simultaneously. 1 fig.

15-12-1974 publication date

DEVICE FOR SYNCHRONIZING A COMPUTING SYSTEM

Number: SU453695A1
Author:
Assignee:

08-01-1981 publication date

TIME-INTERVAL MULTIPLEX COMMUNICATION ARRANGEMENT

Number: DE0003008437A1
Assignee:

21-01-2021 publication date

ELECTRONIC CONTROL UNIT

Number: DE102020208367A1
Assignee:

An ECU (1) contains a plurality of processor cores (11, 12, 13), a RAM (21) and an MPU (15). The RAM (21) stores update data (31) to which each of the processor cores (11, 12, 13) has read access. The MPU (15) controls the processor cores (11, 12, 13) so that, when a specific processor core (11) accesses the update data (31), the processor core(s) (12, 13) other than the specific processor core cannot access the update data (31).

19-07-1973 publication date

COMPUTING MACHINE DIVIDED INTO TWO PARTS

Number: DE0002300853A1
Assignee:

08-08-2002 publication date

CONTROL OF SHARED DISK DATA IN A DUPLEX COMPUTER UNIT

Number: DE0069617709T2
Assignee: NOKIA NETWORKS OY, NOKIA NETWORKS OY, ESPOO

15-02-2007 publication date

Method and device for establishing a start state in a computer system having at least two execution units by marking registers

Number: DE102005037226A1
Assignee:

Method for establishing a start state in a computer system having at least two execution units, wherein switching takes place between a performance mode and a comparison mode and, on switching from the performance mode to the comparison mode, a start state for the comparison mode is generated, characterized in that, for the start state, memories or memory areas that potentially have to be aligned are provided with a marker indicating whether or not the data and/or instructions in those memories or memory areas have to be changed for the start state.

05-12-2012 дата публикации

Load and store exclusive instructions for data access conflict resolution in multiprocessor systems with separate processor caches

Номер: GB0002491350A
Принадлежит:

A data processing system has several processors, each with its own cache. The processors have load exclusive and store exclusive instructions. When a load exclusive instruction is executed by a processor, the data is loaded into the processor's cache and a flag is set indicating that that processor has requested exclusive access. When a store exclusive instruction is executed by a first processor, the instruction fails if the data is no longer in the first processor's cache. If the data is in the first processor's cache and is in no other processor's cache, then the instruction succeeds. If the data is in the first processor's cache and in another processor's cache, then the flag is checked. If the flag is not set then the instruction fails. Otherwise, the flags for the data in the other processors' caches are cleared, the corresponding cache lines are invalidated and the instruction succeeds.
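
The load-exclusive/store-exclusive pairing above is essentially load-linked/store-conditional: the store succeeds only if nobody else has written the monitored location in between, and software retries on failure. A software analogue of that retry loop (not the cache protocol claimed by the patent); the class and method names are assumptions.

import threading

class ExclusiveMonitor:
    # Toy LL/SC cell: store_exclusive succeeds only if the location has not
    # been written by anyone since our load_exclusive.
    def __init__(self, value=0):
        self._value = value
        self._stamp = 0                  # bumped on every successful store
        self._guard = threading.Lock()   # models the coherence machinery, not a user lock

    def load_exclusive(self):
        with self._guard:
            return self._value, self._stamp    # value plus the "exclusive" tag

    def store_exclusive(self, new_value, stamp):
        with self._guard:
            if self._stamp != stamp:           # someone intervened: the store fails
                return False
            self._value, self._stamp = new_value, self._stamp + 1
            return True

def atomic_add(cell, delta):
    while True:                                # retry loop typical of LL/SC users
        value, tag = cell.load_exclusive()
        if cell.store_exclusive(value + delta, tag):
            return value + delta

cell = ExclusiveMonitor()
print(atomic_add(cell, 5))   # 5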

08-07-1992 дата публикации

COMMITMENT ORDERING FOR GUARANTEEING SERIALIZABILITY ACROSS DISTRIBUTED TRANSACTIONS

Номер: GB0009210876D0
Автор:
Принадлежит:

16-02-2000 дата публикации

Data processing apparatus

Номер: GB0002340265A
Принадлежит:

Data processing apparatus is disclosed in which a core program object interacts with and controls operation of a plurality of plug-in program objects operable to carry out data processing tasks, the apparatus providing for communication between the core program object and each such data processing task:(i) a synchronous interface to allow interaction between the core program object and a plug-in program object operable to carry out that task; and(ii) an asynchronous interface to allow interaction between the core program object and a hardware device operable to carry out that task.

30-11-2005 дата публикации

Control of access to a shared resource in a data processing apparatus

Номер: GB0002414573A
Принадлежит:

A data processing apparatus comprises a plurality of processors operable to perform respective data processing operations requiring access to the shared resource, and a path interconnecting the plurality of processors. An access control mechanism is operable to control access to the shared resource by the plurality of processors, each processor being operable to enter a power saving mode if access to the shared resource is locked. Further, each processor is operable, when that processor has access to the shared resource, to issue a notification on the path when access to the shared resource is no longer required by that processor. A processor in the power saving mode is arranged, upon receipt of that notification, to exit the power saving mode and to seek access to the shared resource.
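
The behaviour described here (sleep while the shared resource is locked, wake on the release notification issued on the interconnect path) maps naturally onto a condition variable. A hedged sketch in ordinary threads rather than hardware power states; the class name is invented.

import threading

class NotifyingResource:
    def __init__(self):
        self._cv = threading.Condition()
        self._busy = False

    def acquire(self):
        with self._cv:
            while self._busy:            # "enter power saving mode": block until notified
                self._cv.wait()
            self._busy = True

    def release(self):
        with self._cv:
            self._busy = False
            self._cv.notify_all()        # the "no longer required" notification on the path

resource = NotifyingResource()

def worker(name):
    resource.acquire()
    try:
        print(name, "has the shared resource")
    finally:
        resource.release()

threads = [threading.Thread(target=worker, args=(f"cpu{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()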

27-09-1989 дата публикации

Arbitration system

Номер: GB0002215874A
Принадлежит:

In a system having a shared resource, such as a bus or memory, with which various means may communicate upon a request (REQA to REQN) being granted (GRANTA to GRANTN) by an arbiter, in order to reduce the arbitration time a state machine 16 and latch 14 run asynchronously so that immediately one request has been granted and acted upon the state machine will commence arbitration in respect of any remaining requests.

25-07-1984 дата публикации

RAPID MESSAGE TRANSMISSION SYSTEM BETWEEN COMPUTERS

Номер: GB0002083668B
Автор:
Принадлежит: JEUMONT SCHNEIDER

25-09-2019 дата публикации

Data processing systems

Номер: GB0002539958B
Принадлежит: ADVANCED RISC MACH LTD, ARM Limited

13-01-2016 дата публикации

A data processing apparatus and method for performing lock-protected processing operations for multiple threads

Номер: GB0002528056A
Принадлежит:

Processing circuitry 10 performs processing operations required by a plurality of threads 17, 18, 19, the processing operations including a lock-protected processing operation with which a lock 55 is associated, where the lock needs to be acquired before the processing circuitry performs the lock-protected processing operation. Baton maintenance circuitry 35 is used to maintain a baton in association with the plurality of threads, the baton forming a proxy for the lock, and the baton maintenance circuitry being configured to allocate the baton between the threads. Via communication between the processing circuitry and the baton maintenance circuitry, once the lock has been acquired for one of the threads, the processing circuitry performs the lock protected processing operation for multiple threads before the lock is released, with the baton maintenance circuitry identifying a current thread amongst the multiple threads for which the lock-protected processing operation is to be performed ...

27-01-2021 дата публикации

Epoch-based determination of completion of barrier termination command

Номер: GB0002585914A
Принадлежит:

An apparatus comprises transaction handling circuitry to issue memory access transactions, each memory access transaction specifying an epoch identifier indicative of a current epoch in which the memory access transaction is issued; transaction tracking circuitry to track, for each of at least two epochs, a number of outstanding memory access transactions issued in that epoch; barrier termination circuitry to signal completion of a barrier termination command when the transaction tracking circuitry indicates that there are no outstanding memory access transactions remaining which were issued in one or more epochs preceding a barrier point, and epoch changing circuitry to change the current epoch to a next epoch, in response to a barrier point signal representing said barrier point. This helps to reduce the circuit area overhead for tracking completion of memory access transactions preceding a barrier point.
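
A compact model of the epoch mechanism above: transactions are counted per epoch, a barrier point rolls the current epoch forward, and the barrier termination command completes once every earlier epoch has drained to zero. Illustrative only; the class and method names are invented, not taken from the patent.

from collections import defaultdict

class EpochTracker:
    def __init__(self):
        self.current_epoch = 0
        self.outstanding = defaultdict(int)   # epoch -> transactions still in flight

    def issue(self):
        self.outstanding[self.current_epoch] += 1
        return self.current_epoch             # each transaction carries its epoch identifier

    def complete(self, epoch):
        self.outstanding[epoch] -= 1

    def barrier_point(self):
        self.current_epoch += 1               # later traffic lands in the next epoch

    def barrier_done(self):
        return all(self.outstanding[e] == 0 for e in range(self.current_epoch))

t = EpochTracker()
a = t.issue()
b = t.issue()          # two transactions issued in epoch 0
t.barrier_point()      # the barrier covers only epoch-0 transactions
c = t.issue()          # issued after the barrier point, does not block it
t.complete(a)
t.complete(b)          # both epoch-0 transactions retire
print(t.barrier_done())   # True, even though c is still outstanding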

13-10-2021 дата публикации

Sync group selection

Номер: GB0002593861A
Принадлежит:

Implicit sync group selection is performed by having dual interfaces to a gateway. A subsystem coupled to the gateway selects a sync group to be used for an upcoming exchange by selecting the interface to which a sync request is written to. The gateway propagates the sync requests and/or acknowledgments in dependence upon configuration settings for the sync group that is associated with the interface to which the sync request was written to.

13-02-1991 дата публикации

MEMORY ACCESS METHOD AND SYSTEM

Номер: GB0009027947D0
Автор:
Принадлежит:

27-05-2020 дата публикации

Direction indicator

Номер: GB0002569272B
Принадлежит: GRAPHCORE LTD, Graphcore Limited

12-12-1979 дата публикации

Multi-processor systems using microprocessors

Номер: GB2022299A
Автор: Boning, Werner
Принадлежит:

In a multi-processor system a plurality of microprocessors (1,2) are coupled to a common system bus (10). A master processor (2) has a hold input (33) (HOLD) for bus requests from other processor(s), and an output (34) (HOLDA) for the emission of an acknowledgement confirming that no further access to the system bus by the master processor will take place for the duration thereof. A second slave processor (1) has a transmitter (BUS REQ) for the emission of bus requests requesting access to the system bus to the hold request input of the master processor, and a receiver (BRPI) for receiving the acknowledgements from the acknowledgement output of the master processor which inform the slave processor of its entitlement to access to the system bus. The slave accesses the bus only during receipt of the acknowledgement. Where more than one slave processor is provided the acknowledgement signal from the master processor is fed to a first one of the slaves and is forwarded to successive slaves ...

17-05-1978 дата публикации

DEADLOCK DETECTION IN COMPUTER

Номер: GB0001511282A
Автор:
Принадлежит:

... 1511282 Data processing HONEYWELL INFORMATION SYSTEMS Inc 21 April 1975 [19 April 1974] 16318/75 Heading G4A A data processing system includes a deadlock detection system which prevents a first process 1 waiting for a resource A which is assigned to a second process 3 which is already waiting, directly or indirectly via a resource C and process 7, for a second resource F which is currently assigned to the first process 1. If the first process requests assignment of any resource of a specified class, each resource of that class is checked to see if it is waiting, indirectly, for the first process (Fig. 1I, not shown). The deadlock detection system may be included in a data processing system substantially the same as that described in Specification 1,511,281.
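
The check described in this abstract (refuse an assignment that would leave the requesting process waiting, directly or indirectly, on itself) is a cycle test on a wait-for relation. A small sketch under that reading, with the resource chain collapsed into a process-waits-for-process map; the function and variable names are illustrative.

def would_deadlock(waits_for, requester, holder):
    # waits_for maps a process to the process it currently waits on (if any).
    # Letting `requester` wait on `holder` deadlocks if following the chain
    # from `holder` eventually leads back to `requester`.
    seen = set()
    current = holder
    while current is not None and current not in seen:
        if current == requester:
            return True
        seen.add(current)
        current = waits_for.get(current)
    return False

# Process 1 asks for resource A held by process 3, which (via process 7) is
# already waiting for a resource held by process 1, as in the abstract.
waits_for = {3: 7, 7: 1}
print(would_deadlock(waits_for, requester=1, holder=3))   # True: the request must be refused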

24-06-2020 дата публикации

Gateway pull model

Номер: GB0002579412A
Автор: BRIAN MANULA, Brian Manula
Принадлежит:

A gateway for interfacing a host with a subsystem acting as a work accelerator to the host. A computer system comprising: (i) a computer subsystem configured to act as a work accelerator, and (ii) a gateway connected to the computer subsystem, the gateway enabling the transfer of data to the computer subsystem from external storage at pre-compiled data exchange synchronisation points attained by the computer subsystem, which act as a barrier between a compute phase and an exchange phase of the computer subsystem, wherein the computer subsystem is configured to pull data from a gateway transfer memory of the gateway in response to the pre-compiled data exchange synchronisation point attained by the subsystem, wherein the gateway comprises at least one processor configured to perform at least one operation to pre-load at least some of the data from a first memory of the gateway to the gateway transfer memory in advance of the pre-compiled data exchange synchronisation point attained by ...

14-06-2023 дата публикации

Data processing systems

Номер: GB0002604150B
Принадлежит: ADVANCED RISC MACH LTD [GB]

16-03-2022 дата публикации

Asynchronous data movement pipeline

Номер: GB0002598809A
Принадлежит:

Apparatuses, systems, and techniques to parallelize operations in one or more programs with data copies from global memory to shared memory in each of the one or more programs. In at least one embodiment, a program performs operations on shared data and then asynchronously copies shared data to shared memory, and continues performing additional operations in parallel while the shared data is copied to shared memory until an indicator provided by an application programming interface to facilitate parallel computing, such as CUDA, informs said program that shared data has been copied to shared memory.

19-04-2023 дата публикации

Synchronization barrier

Номер: GB0002611847A
Принадлежит:

A processor comprises one or more circuits to perform a memory barrier operation to cause accesses to memory by a plurality of groups of threads to occur in an order indicated by the memory barrier operation. The memory barrier operation may store synchronisation information for the plurality of groups of threads in a single addressable memory location, the synchronisation information being stored as a bit field with distinct groups of bits indicating synchronisation of individual groups of threads. Individual bits of the bit field may represent a subgroup of threads capable of being executed in parallel on a symmetric multiprocessor. The synchronisation information may be manipulated using an atomic logical operation. The memory barrier operation may cause the plurality of groups of threads to be executed in parallel. An individual group of the plurality of groups of threads may be a cooperative thread group spanning a plurality of warps.
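
A rough software rendering of the bit-field idea: a single word holds one bit per group of threads, arrival sets the group's bit with an atomic-style update, and the barrier opens once all expected bits are present. Python has no real atomics, so a condition variable stands in for the atomic logical operation; all names are invented.

import threading

class BitfieldBarrier:
    def __init__(self, num_groups):
        self._expected = (1 << num_groups) - 1   # one bit per group of threads
        self._arrived = 0                        # the single addressable sync word
        self._cv = threading.Condition()

    def arrive(self, group_index):
        with self._cv:                           # stands in for an atomic OR
            self._arrived |= 1 << group_index
            if self._arrived == self._expected:
                self._cv.notify_all()
            while self._arrived != self._expected:
                self._cv.wait()

barrier = BitfieldBarrier(num_groups=4)

def member(group_id):
    barrier.arrive(group_id)
    print("group", group_id, "released")

threads = [threading.Thread(target=member, args=(g,)) for g in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()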

29-11-2023 дата публикации

Apparatus and method in which control functions and synchronization events are performed

Номер: GB0002619126A
Принадлежит:

An apparatus comprises a plurality of processing elements and control circuitry to communicate with the plurality of processing elements by a data communication path. The control circuitry, in response to a request 1130, 1134 issued by a given processing element of the plurality of processing elements, initiates a hybrid operation by issuing a command 1140, 1142 defining the hybrid operation to a group of processing elements comprising at least a subset of the plurality of processing elements, the hybrid operation comprising performance of a control function selected from a predetermined set of one or more control functions, and a synchronization event. The synchronization event comprises each of the group of processing elements providing confirmation 1160 that any control functions pending at that processing element have reached at least a predetermined stage of execution. The given (initiating) processing element inhibits the issuance of any further requests to the control circuitry until ...

25-02-1988 дата публикации

SYSTEM FOR THE RAPID TRANSMISSION OF MESSAGES BETWEEN COMPUTERS

Номер: AT0000385142B
Автор:
Принадлежит:

15-05-2012 дата публикации

MULTIPROCESSOR SYSTEM AND PROCEDURE FOR ITS EXCLUSIVE CONTROL

Номер: AT0000555437T
Принадлежит:

15-07-2014 publication date

Method for protecting a defined instruction sequence of one process against interruption by another process in a data processing system

Number: AT0000513762A1
Assignee:

The invention relates to a method for protecting a defined instruction sequence of one process against interruption by another process in a data processing system, the processes running on at least one processor. In order to enable atomic execution of processes while preventing priority inversion, it is provided that a first process (EOS-Task1), at the beginning of its execution of the defined instruction sequence, requests from the operating system kernel (EOS-Kernel) a lock (EOS-Lock) of the processor against other processes, that this lock is granted by the operating system kernel provided that no other process has already engaged it, and that the lock is maintained by the operating system in favour of the first process (EOS-Task1) until the first process releases the lock again at the end of its execution of the defined instruction sequence, and that a second process (E0S-Task2) which, during a lock of the processor in favour of a first ...

15-04-2006 дата публикации

PROCEDURE FOR PREVENTING BUFFER BLOCKADE IN DATA FLOW COMPUTATIONS

Номер: AT0000321304T
Принадлежит:

15-01-1999 дата публикации

DATA PROCESSING SYSTEM AND METHOD WITH LOCKABLE MEMORY AREAS

Номер: AT0000175506T
Принадлежит:

19-12-1991 дата публикации

SYNCHRONIZING AND PROCESSING OF MEMORY ACCESS OPERATIONS IN MULTIPROCESSOR SYSTEM

Номер: AU0005395290A
Автор: NAME NOT GIVEN
Принадлежит:

08-10-1987 дата публикации

MULTIPROCESSOR SYSTEM

Номер: AU0007095087A
Принадлежит:

29-04-2002 дата публикации

Synchronized computing

Номер: AU2002213377A8
Автор: Narayan, Shankar
Принадлежит:

19-07-2007 дата публикации

System and method for managing storage resources in a clustered computing environment

Номер: AU2007202999A1
Принадлежит:

01-04-1997 дата публикации

Controlling shared disk data in a duplexed computer unit

Номер: AU0006932996A
Принадлежит:

04-02-2016 дата публикации

Synchronized processing of data by networked computing resources

Номер: AU2016200212A1
Принадлежит:

Systems 100, 1000, methods, and machine-interpretable programming or other instruction products for the management of data processing by multiple networked computing resources 106, 1106. In particular, the disclosure relates to the synchronization of related requests for processing of data using distributed network resources.

06-05-1993 дата публикации

SEMAPHORE MECHANISM FOR A DATA PROCESSING SYSTEM

Номер: AU0002744892A
Принадлежит:

30-11-1995 дата публикации

Data transfer system in information processing system

Номер: AU0002026795A
Принадлежит:

10-02-1987 дата публикации

DISTRIBUTED ARBITRATION FOR MULTIPLE PROCESSORS

Номер: CA0001217872A1
Принадлежит: NA

07-01-1975 дата публикации

HIGH SPEED BUFFER OPERATION IN A MULTI-PROCESSING SYSTEM

Номер: CA0000960782A1
Принадлежит:

30-12-1986 дата публикации

DEADLOCK DETECTION AND PREVENTION MECHANISM FOR A COMPUTER SYSTEM

Номер: CA1216072A

A method and apparatus for detecting a deadlock condition where two or more processes are waiting for events which cannot happen. Firmware is provided to examine the request of a first process of a group of processes for assignment of a first resource of a group of resources, and to determine whether said first resource is or is not currently assigned to a second process of said group of processes, which second process is already waiting, directly or indirectly, for a second resource of said group of resources, which second resource is currently assigned to said first process.

19-02-1980 дата публикации

MEMORY ACCESS CONTROL SYSTEM

Номер: CA1072216A
Принадлежит: FUJITSU LTD, FUJITSU LIMITED

A memory access control system which is provided between one or more accessing devices and a main memory composed of a plurality of independently accessible logic stores and receives a request from the accessing device based on the status of the main memory to permit access to one of the logic stores. The memory access control system comprises a shift register composed of stages corresponding to the cycle time of the main memory for storing address information sufficient for identifying a busy one of the logic stores and sequentially shifting the stored content in synchronism with a clock signal and a comparator circuit for comparing the content of each stage of the shift register with address information of the logic store designated based on the request from the accessing device, receiving the request based on the result of the comparison and generating a control signal for accessing the designated logic store. Using the shift register, one of the logic stores to be accessed can be ...

13-12-1983 дата публикации

MULTI-PROCESSOR SYSTEM

Номер: CA1158779A
Принадлежит: TIMEPLEX INC, TIMEPLEX, INC.

There is disclosed a multi-processor system having a master processor and a plurality of slaves. Each processor is provided with its own memory. Although each slave processor can access only its respective memory, the master processor can access either its own memory or any one of the slave memories. Maximum throughput (efficiency) is achieved by suspending operation of a single slave processor for only a single memory cycle, i.e., the time required for the master processor to access the respective slave memory. Each processor/memory is on a single card, with all of the cards being connected to a common bus. The cards are virtually identical, and master/slave distinctions are determined by a single slot bit on each card. A unique addressing scheme is implemented for access from the master to a selected slave.

30-12-1986 дата публикации

DEADLOCK DETECTION AND PREVENTION MECHANISM FOR A COMPUTER SYSTEM

Номер: CA0001216072A1
Принадлежит:

28-09-1982 дата публикации

COMMAND STACKING APPARATUS FOR USE IN A MEMORY CONTROLLER

Номер: CA0001132716A1
Автор: WEBSTER MARVIN K
Принадлежит:

30-08-1988 дата публикации

APPARATUS FOR THE MULTIPLE DETECTION OF INTERFERENCES

Номер: CA0001241450A1
Автор: TRINCHIERI MARIO G
Принадлежит:

10-02-2005 дата публикации

SYSTEM AND METHOD FOR SYNCHRONIZING OPERATIONS AMONG A PLURALITY OF INDEPENDENTLY CLOCKED DIGITAL DATA PROCESSING DEVICES

Номер: CA0002533852A1
Принадлежит:

A system is described for maintaining synchrony of operations among a plurality of devices that have independent clocking arrangements. The system includes a task distribution device that distributes tasks to a synchrony group comprising a plurality of devices that are to perform the tasks distributed by the task distribution device in synchrony. The task distribution device distributes each task to the members of the synchrony group over a network. Each task is associated with a time stamp that indicates a time, relative to a clock maintained by the task distribution device, at which the members of the synchrony group are to execute the task. Each member of the synchrony group periodically obtains from the task distribution device an indication of the current time indicated by its clock, determines a time differential between the task distribution device's clock and its respective clock and determines therefrom a time at which, according to its respective clock, the time stamp indicates ...
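
The arithmetic implied by this abstract is simple: a group member asks the task distribution device for its current time, estimates the offset between the two clocks, and converts each task's time stamp into its own clock before executing it. A sketch of just that conversion, with made-up numbers and without the network round trip:

def estimate_offset(master_time, local_time):
    # Offset to add to a master time stamp to get the equivalent local time.
    return local_time - master_time

def local_execution_time(task_timestamp, offset):
    return task_timestamp + offset

# Suppose the distribution device reported 1000.0 s at the moment our clock read 1003.2 s.
offset = estimate_offset(master_time=1000.0, local_time=1003.2)
task_timestamp = 1010.0                                # "execute this task at master time 1010.0"
print(local_execution_time(task_timestamp, offset))    # 1013.2 on the local clock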

25-12-1984 дата публикации

DUAL PORT MEMORY INTERLOCK

Номер: CA0001180125A1
Автор: ADCOCK RALPH L
Принадлежит:

20-02-2003 дата публикации

VICTIM SELECTION FOR DEADLOCK DETECTION

Номер: CA0002455917A1
Принадлежит:

A mechanism and system are described for either releasing held resources in the case of a deadlock or postponing requests for resources when a potential deadlock is detected. One technique involves a three-pass algorithm for selecting a candidate, where the candidate is either a possessory entity or a resource. The three passes are as follows: (1) determining the subset of candidates which have the CAN-BE-VICTIM flag set on; (2) if pass one results in a subset with more than one candidate in it, process that subset to determine a second subset of candidates based on resource priority associated with a resource type; (3) if the second pass results in a subset with more than one candidate in it, process that subset to select the candidate that has been running or held the shortest length of time.

05-12-2002 дата публикации

DEVICE AND METHOD FOR SYNCHRONISING A SYSTEM OF COUPLED DATA PROCESSING FACILITIES

Номер: CA0002411788A1
Принадлежит:

The invention relates to a system and method for synchronising coupled multi- computer systems, in particular those used in railway technology. Said system and method increase availability and reliability. Multi-computer systems that use the inventive system only require one hardware timing module, thus eliminating the risks caused by a synchronisation of hardware timing modules. In order for a coupled computer (R1, R2) to have a clock pulse (pulse), the latter is simulated by the time synchronisation method. As each computer (R1, R2) is usually equipped with a hardware timing module, the allocation of the active hardware timing module to a computer (R1, R2) can be altered if necessary. Subsystem steps (RD, PC1, PC2, OT) have been introduced into the inventive system to maintain an appropriate separation of the synchronisation process (SYN & CHK) from the applications (APP). Said subsystem steps (RD, PC1, PC2, OT) are independent of the operating system (OS-LAY) and the hardware (HW-LAY ...

20-04-2017 дата публикации

METHOD FOR EFFICIENT TASK SCHEDULING IN THE PRESENCE OF CONFLICTS

Номер: CA0002999976A1
Принадлежит:

Embodiments include computing devices, apparatus, and methods implemented by a computing device for task scheduling in the presence of task conflict edges on a computing device. The computing device may determine whether a first task and a second task are related by a task conflict edge. In response to determining that the first task and the second task are related by the task conflict edge, the computing device may determine whether the second task acquires a resource required for execution of the first task and the second task. In response to determining that the second task fails to acquire the resource, the computing device may assign a dynamic task dependency edge from the first task to the second task.

02-02-2021 дата публикации

SYNCHRONIZATION IN A MULTI-TILE, MULTI-CHIP PROCESSING ARRANGEMENT

Номер: CA3021409C
Принадлежит: GRAPHCORE LTD, GRAPHCORE LIMITED

A method of operating a system comprising multiple processor tiles divided into a plurality of domains wherein within each domain the tiles are connected to one another via a respective instance of a time-deterministic interconnect and between domains the tiles are connected to one another via a non-time-deterministic interconnect. The method comprises: performing a compute stage, then performing a respective internal barrier synchronization within each domain, then performing an internal exchange phase within each domain, then performing an external barrier synchronization to synchronize between different domains, then performing an external exchange phase between the domains.

22-06-2000 дата публикации

APPARATUS AND METHOD FOR GENERATING MUSIC DATA

Номер: CA0002320207A1
Автор: YAMANOUE, KAORU
Принадлежит:

It is an object of the present invention to, in the music data generating apparatus, reduce the complex processings in each of the computing units and access to the shared region of the main memory unit and to generate more efficiently the music data. According to the music data generating apparatus of the present, the computing units (31 to 33) of the operating device unit (2) perform specified operations on data necessary for generating music that is stored in a main memory (3) and store "1" as the respective flags indicating the completion of the operating process in a synchronization notification information designating part (9) of the main memory (3). A synchronization notification information processing part (5) has a synchronization notification information switching part (13) which, in response to a control signal from a CPU (42), determines whether or not all of the processes of the computing units (31 to 33) have been completed according to whether or not all of the stored flags ...

11-01-2011 дата публикации

SYNCHRONIZED PROCESSING OF DATA BY NETWORKED COMPUTING RESOURCES

Номер: CA0002927532A1
Принадлежит:

... Systems 100, 1000, methods, and machine-interpretable programming or other instruction products for the management of data processing by multiple networked computing resources 106, 1106. In particular, the disclosure relates to the synchronization of related requests for processing of data using distributed network resources. ...

18-09-2014 дата публикации

SYSTEMS AND METHODS FOR PARTITIONING COMPUTING APPLICATIONS TO OPTIMIZE DEPLOYMENT RESOURCES

Номер: CA0002939379A1
Принадлежит:

Systems and methods for dynamic partitioning of computing applications that involves partitioning a computing application based on processing requirements and available hardware resources to optimize resource usage and security across multiple platforms, and handle interprocess communications across the platforms.

09-05-1989 дата публикации

SYNCHRONIZATION SERVICE FOR A DISTRIBUTED OPERATING SYSTEM OR THE LIKE

Номер: CA0001253971A1
Принадлежит:

17-02-1998 дата публикации

SHARED MEMORY ACCESS AND DATA STRUCTURE ACCESS CONTROL

Номер: CA0002057446C

A method and circuit for granting access to a block of memory in a multiprocessor system having a shared memory is provided. When a memory request for exclusive access to a block of memory is granted, the starting address for that block of memory is placed in a register bank, thereby opening a semaphore. The starting address of a memory block of a subsequent memory access request is compared with the starting addresses corresponding to open semaphores within the register bank and access is denied to the requested block of memory if a match is found. The starting address associated with a request which is denied access is placed in a temporary buffer and the request is later granted access after the corresponding open semaphore becomes closed. A request which is granted memory access to a memory block which results in an open semaphore, has exclusive access to that block of memory until the semaphore is closed.
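
A behavioural sketch of the register-bank semaphore described above: granting exclusive access records the block's starting address in a set of open semaphores, later requests whose starting address matches an open entry are parked in a temporary buffer, and parked requests are retried when the matching semaphore closes. Names are illustrative and this is not the patented circuit.

from collections import deque

class BlockSemaphores:
    def __init__(self):
        self.open_semaphores = set()   # starting addresses of blocks under exclusive access
        self.waiting = deque()         # (requester, start_address) pairs denied so far

    def request(self, requester, start_address):
        if start_address in self.open_semaphores:
            self.waiting.append((requester, start_address))   # temporary buffer
            return False
        self.open_semaphores.add(start_address)               # open the semaphore
        return True

    def release(self, start_address):
        self.open_semaphores.discard(start_address)           # close the semaphore
        parked, self.waiting = self.waiting, deque()
        for requester, addr in parked:
            self.request(requester, addr)                     # re-parked automatically if still busy

bank = BlockSemaphores()
print(bank.request("cpu0", 0x1000))   # True: semaphore opened for this block
print(bank.request("cpu1", 0x1000))   # False: same block, request parked
bank.release(0x1000)                  # cpu1's parked request is granted here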

03-01-1992 дата публикации

ARRANGEMENT FOR RESERVING AND ALLOCATING A PLURALITY OF COMPETING DEMANDS FOR AN ORDERED BUS COMMUNICATIONS NETWORK

Номер: CA0002032617A1
Автор: MUEHRCKE, ERIC B.
Принадлежит:

08-12-1996 дата публикации

FAIL-FAST, FAIL-FUNCTIONAL, FAULT-TOLERANT MULTIPROCESSOR SYSTEM

Номер: CA0002178391A1
Принадлежит:

A multiprocessor system includes a number of subprocessor systems, each substantially identically constructed, and each comprising a central processing unit (CPU), and at least one I/O device, interconnected by routing apparatus that also interconnects the sub-processor systems. A CPU of any one of the sub-processor systems may communicate, through the routing elements, with any I/O device of the system, or with any CPU of the system. Communications between I/O devices and CPUs is by packetized messages. Interrupts from I/O devices are communicated from the I/O devices to the CPUs (or from one CPU to another CPU) as message packets. CPUs and I/O devices may write to, or read from, memory of a CPU of the system. Memory protection is provided by an access validation method maintained by each CPU in which CPUs and/or I/O devices are provided with a validation to read/write memory of that CPU, without which memory access is denied.

08-12-1996 дата публикации

FAIL-FAST, FAIL-FUNCTIONAL, FAULT-TOLERANT MULTIPROCESSOR SYSTEM

Номер: CA0002178392A1
Принадлежит:

A multiprocessor system includes a number of subprocessor systems, each substantially identically constructed, and each comprising a central processing unit (CPU), and at least one I/O device, interconnected by routing apparatus that also interconnects the sub-processor systems. A CPU of any one of the sub-processor systems may communicate, through the routing elements, with any I/O device of the system, or with any CPU of the system. Communications between I/O devices and CPUs is by packetized messages. Interrupts from I/O devices are communicated from the I/O devices to the CPUs (or from one CPU to another CPU) as message packets. CPUs and I/O devices may write to, or read from, memory of a CPU of the system. Memory protection is provided by an access validation method maintained by each CPU in which CPUs and/or I/O devices are provided with a validation to read/write memory of that CPU, without which memory access is denied.

08-12-1996 дата публикации

FAIL-FAST, FAIL-FUNCTIONAL, FAULT-TOLERANT MULTIPROCESSOR SYSTEM

Номер: CA0002178405A1
Принадлежит:

A multiprocessor system includes a number of subprocessor systems, each substantially identically constructed, and each comprising a central processing unit (CPU), and at least one I/O device, interconnected by routing apparatus that also interconnects the sub-processor systems. A CPU of any one of the sub-processor systems may communicate, through the routing elements, with any I/O device of the system, or with any CPU of the system. Communications between I/O devices and CPUs is by packetized messages. Interrupts from I/O devices are communicated from the I/O devices to the CPUs (or from one CPU to another CPU) as message packets. CPUs and I/O devices may write to, or read from, memory of a CPU of the system. Memory protection is provided by an access validation method maintained by each CPU in which CPUs and/or I/O devices are provided with a validation to read/write memory of that CPU, without which memory access is denied.

13-03-2001 дата публикации

COMMAND SET AND PROCEDURE FOR SYNCHRONIZATION OF FREQUENCY HOPPING CONTROL CLOCKS

Номер: CA0002142794C

A wireless Local Area Network (LAN), in which each wireless adapter includes a controller and a transceiver which are located a finite distance from each other. Each controller and transceiver has its own hop clock and hop table. A data interface which includes a command set is defined between the controller and the transceiver. The command set includes commands and procedures for synchronizing the hop clocks and hop tables between the controller and the transceiver.

05-01-2012 дата публикации

Managing Shared Resources In A Multi-Computer System With Failover Support

Номер: US20120005348A1
Принадлежит: International Business Machines Corp

Managing shared resources in a multi-computer system with failover support, including: reading priority detection signals from a computer inserted into the multiple-computer system, the priority detection signals representing a priority of the inserted computer; reading planar detection signals from the computer, the planar detection signals representing an insertion state of all computers currently inserted into the multiple-computer system; determining if the computer has the highest priority among all the computers inserted into the multiple-computer system in accordance with the priority detection signals and the planar detection signals; and, in response to determining that the computer has the highest priority, monitoring shared resources and outputting a specific output signal associated with the highest priority computer, the specific output signal providing an identification of the highest priority computer to other computers currently inserted into the multiple-computer system and representing control, by the highest priority computer, of the shared resources.

12-01-2012 дата публикации

Multithread processor and digital television system

Номер: US20120008674A1
Принадлежит: Panasonic Corp

A multithread processor including: an execution unit including a physical processor; and a translation lookaside buffer (TLB) which converts, to a physical address, a logical address output from the execution unit, and logical processors are implemented on the physical processor, a first logical processor that is a part of the logical processors constitutes a first subsystem having a first virtual space, a second logical processor that is a part of the logical processors and different from the first logical processor constitutes a second subsystem having a second virtual space, each of the first and the second subsystems has processes to be assigned to the logical processors, and the logical address includes: a first TLB access virtual identifier for identifying one of the first and the second subsystems; and a process identifier for identifying a corresponding one of the processes in each of the first and the second subsystems.

26-01-2012 дата публикации

Apparatus and method for thread scheduling and lock acquisition order control based on deterministic progress index

Номер: US20120023505A1
Принадлежит: SAMSUNG ELECTRONICS CO LTD

Provided is a method and apparatus for ensuring a deterministic execution characteristic of an application program to perform data processing and execute particular functions in a computing environment using a micro architecture. A lock controlling apparatus based on a deterministic progress index (DPI) may include a loading unit to load a DPI of a first core and a DPI of a second core among DPIs of a plurality of cores at a lock acquisition point in time of each thread, a comparison unit to compare the DPI of the first core and the DPI of the second core, and a controller to assign a lock to a thread of the first core when the DPI of the first core is less than the DPI of the second core and when the second core corresponds to a last core to be compared among the plurality of cores.

02-02-2012 дата публикации

Method and system for using a virtualization system to identify deadlock conditions in multi-threaded programs by controlling scheduling in replay

Номер: US20120030657A1
Автор: Min Xu, Qi Gao
Принадлежит: Individual

A method and system for determining potential deadlock conditions in a target multi-threaded software application. The target application is first run in a virtual machine and the events within the application are recorded. The recorded events are replayed and analyzed to identify potential lock acquisition conflicts occurring between threads of the application. The potential lock acquisition conflicts are identified by analyzing the order in which resource locks are obtained and pairs of resources that have respective locks obtained in different orders are analyzed. These analyzed pairs are used to define a different order of events in the target application that, when the target application is re-run with the second order of events, may trigger a deadlock condition. The target application is then re-run with the different order of events in an attempt to trigger and then identify potential deadlock situations.

02-02-2012 дата публикации

High performance locks

Номер: US20120030681A1
Автор: Kirk J. Krauss
Принадлежит: International Business Machines Corp

Systems and methods of enhancing computing performance may provide for detecting a request to acquire a lock associated with a shared resource in a multi-threaded execution environment. A determination may be made as to whether to grant the request based on a context-based lock condition. In one example, the context-based lock condition includes a lock redundancy component and an execution context component.

09-02-2012 дата публикации

Apparatus and methods to concurrently perform per-thread as well as per-tag memory access scheduling within a thread and across two or more threads

Номер: US20120036509A1
Принадлежит: Sonics Inc

A method, apparatus, and system in which an integrated circuit comprises an initiator Intellectual Property (IP) core, a target IP core, an interconnect, and a tag and thread logic. The target IP core may include a memory coupled to the initiator IP core. Additionally, the interconnect can allow the integrated circuit to communicate transactions between one or more initiator Intellectual Property (IP) cores and one or more target IP cores coupled to the interconnect. A tag and thread logic can be configured to concurrently perform per-thread and per-tag memory access scheduling within a thread and across multiple threads such that the tag and thread logic manages tags and threads to allow for per-tag and per-thread scheduling of memory accesses requests from the initiator IP core out of order from an initial issue order of the memory accesses requests from the initiator IP core.

01-03-2012 дата публикации

Data synchronization and disablement of dependent data fields

Номер: US20120054263A1
Принадлежит: SAP SE

Methods for synchronizing data in client-server architectures are described. A client stores data in first and second fields. The value stored in the second field depends on the value stored in the first. When the client writes a new value to the first field, it disables writing to the second field. The client sends a refresh request and receives a refresh response from a server. The refresh request and response contain differences in data stored in the client and server fields. If the refresh response includes a new value for the second field, the client writes the value to the second field. A round trip pending flag associated with the first field is set. A data invalid flag associated with the second field is set. New data can be written to a third field when the value of the third field does not depend on the value of the first field.

15-03-2012 дата публикации

Send-Side Matching Of Data Communications Messages

Номер: US20120066284A1
Принадлежит: International Business Machines Corp

Send-side matching of data communications messages in a distributed computing system comprising a plurality of compute nodes organized for collective operations, including: issuing by a receiving node to source nodes a receive message that specifies receipt of a single message to be sent from any source node, the receive message including message matching information, a specification of a hardware-level mutual exclusion device, and an identification of a receive buffer; matching by two or more of the source nodes the receive message with pending send messages in the two or more source nodes; operating by one of the source nodes having a matching send message the mutual exclusion device, excluding messages from other source nodes with matching send messages and identifying to the receiving node the source node operating the mutual exclusion device; and sending to the receiving node from the source node operating the mutual exclusion device a matched pending message.

22-03-2012 дата публикации

Scaleable Status Tracking Of Multiple Assist Hardware Threads

Номер: US20120072707A1
Принадлежит: International Business Machines Corp

A processor includes an initiating hardware thread, which initiates a first assist hardware thread to execute a first code segment. Next, the initiating hardware thread sets an assist thread executing indicator in response to initiating the first assist hardware thread. The set assist thread executing indicator indicates whether assist hardware threads are executing. A second assist hardware thread initiates and begins executing a second code segment. In turn, the initiating hardware thread detects a change in the assist thread executing indicator, which signifies that both the first assist hardware thread and the second assist hardware thread terminated. As such, the initiating hardware thread evaluates assist hardware thread results in response to both of the assist hardware threads terminating.

22-03-2012 дата публикации

Shared Request Grouping in a Computing System

Номер: US20120072915A1
Принадлежит: International Business Machines Corp

A queuing module is configured to determine the presence of at least one shared request in a request queue, and in the event at least one shared request is determined to be present in the queue; determine the presence of a waiting exclusive request located in the queue after the at least one shared request, and in the event a waiting exclusive request is determined to be located in the queue after the at least one shared request: determine whether grouping a new shared request with the at least one shared request violates a deferral limit of the waiting exclusive request; and, in the event grouping the new shared request with the at least one shared request does not violate the deferral limit of the waiting exclusive request, group the new shared request with the at least one shared request.

19-04-2012 дата публикации

Device, system, and method of distributing messages

Номер: US20120096105A1
Автор: Tzah Oved
Принадлежит: Voltaire Ltd

Device, system, and method of distributing messages. For example, a data publisher capable of communication with a plurality of subscribers via a network fabric, the data publisher comprising: a memory allocator to allocate a memory area of a local memory unit of the data publisher to be accessible for Remote Direct Memory Access (RDMA) read operations by one or more of the subscribers; and a publisher application to create a message log in said memory area, to send a message to one or more of the subscribers using a multicast transport protocol, and to store in said memory area a copy of said message. A subscriber device handles recovery of lost messages by directly reading the lost messages from the message log of the data publisher using RDMA read operation(s).
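
Real RDMA reads need verbs hardware, but the recovery scheme above reduces to this: the publisher multicasts numbered messages and also appends them to a log that subscribers may read directly, and a subscriber that notices a sequence gap fetches the missing messages straight from that log. A plain in-memory sketch of that logic; nothing here touches an actual RDMA stack, and all names are assumptions.

class Publisher:
    def __init__(self):
        self.log = {}           # seq -> message, the area subscribers may read directly
        self.next_seq = 0

    def publish(self, message):
        seq = self.next_seq
        self.log[seq] = message          # copy kept for recovery
        self.next_seq += 1
        return seq, message              # what would go out on the multicast

class Subscriber:
    def __init__(self, publisher):
        self.publisher = publisher
        self.expected = 0
        self.received = []

    def on_multicast(self, seq, message):
        while self.expected < seq:       # gap: some multicast messages were lost
            self.received.append(self.publisher.log[self.expected])   # "read" the log
            self.expected += 1
        self.received.append(message)
        self.expected = seq + 1

pub = Publisher()
sub = Subscriber(pub)
m0 = pub.publish("tick 0")
m1 = pub.publish("tick 1")      # pretend this multicast is dropped on the way
m2 = pub.publish("tick 2")
sub.on_multicast(*m0)
sub.on_multicast(*m2)           # gap detected, "tick 1" recovered from the log
print(sub.received)             # ['tick 0', 'tick 1', 'tick 2']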

17-05-2012 дата публикации

Event processing system, distribution controller, event processing method, distribution control method, and program storage medium

Номер: US20120124594A1
Автор: Sawako Mikami
Принадлежит: NEC Corp

The number of dispatch rules set to each dispatcher is reduced in an event processing system. An event processing system includes a distribution controller 100 and a plurality of dispatchers 300 . To each of the plurality of dispatchers 300 , dispatching target attribute information, that is attribute information included in an event 601 that is a dispatching target for the dispatchers 300 , has been assigned. The distribution controller 100 sets a dispatch rule 604 including a dispatch condition indicating one or more conditions of attribute information included in an event 601 and a destination of an event 601 that satisfies the dispatch condition to the dispatchers 300 , to which the dispatching target attribute information matching any one of the conditions of attribute information included in the dispatch condition of the dispatch rule 604 has been assigned, among the plurality of dispatchers 300.

24-05-2012 дата публикации

Advanced contention detection

Номер: US20120131127A1
Автор: John M. Holt
Принадлежит: Waratek Pty Ltd

A multiple computer system is disclosed in which n computers (M1, M2 . . . Mn) each run a different portion of a single application program written to execute only on a single computer. The local memory of each computer is maintained by updating all computers with every change made to an addressed memory location. Contention can arise when the same memory location is updated substantially simultaneously by two or more machines because of transmission delays and latency of the communication network interconnecting all the computers. Contention detection and resolution are disclosed, in which a count value indicative of the cumulative number of times each memory location has been updated is utilized. Methods of echo suppression and echo rejection are disclosed; incrementing the count value by two in the case of sequential transmission to the same memory location (D) is also disclosed.

Подробнее
24-05-2012 дата публикации

Complex event processing (CEP) adapters for CEP systems for receiving objects from a source and outputting objects to a sink

Номер: US20120131599A1
Принадлежит: Microsoft Corp

Methods, systems, and computer-readable media are disclosed for implementing adapters for event processing systems. A particular system includes an input adapter configured to store event objects received from a source at an input queue. The system also includes a query engine configured to remove event objects from the input queue, to perform a query with respect to the removed event objects to generate result objects, and to insert result objects into an output queue. The system also includes an output adapter configured to remove result objects from the output queue and to transmit the result objects to a sink.
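An illustrative sketch (not the vendor's actual adapter API) of the three-stage pipeline described above: an input adapter feeds an input queue, a query engine transforms events into results placed on an output queue, and an output adapter drains the output queue toward a sink. The end-of-stream markers are assumptions added to make the example terminate.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CepPipelineSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> inputQueue = new ArrayBlockingQueue<>(16);
        BlockingQueue<String> outputQueue = new ArrayBlockingQueue<>(16);

        Thread inputAdapter = new Thread(() -> {             // source -> input queue
            try {
                for (int i = 1; i <= 5; i++) inputQueue.put(i);
                inputQueue.put(-1);                          // end-of-stream marker (assumption)
            } catch (InterruptedException ignored) { }
        });

        Thread queryEngine = new Thread(() -> {              // input queue -> query -> output queue
            try {
                for (int e; (e = inputQueue.take()) != -1; )
                    if (e % 2 == 1) outputQueue.put("odd event " + e);   // the "query"
                outputQueue.put("done");
            } catch (InterruptedException ignored) { }
        });

        Thread outputAdapter = new Thread(() -> {            // output queue -> sink
            try {
                for (String r; !(r = outputQueue.take()).equals("done"); )
                    System.out.println("sink <- " + r);
            } catch (InterruptedException ignored) { }
        });

        inputAdapter.start(); queryEngine.start(); outputAdapter.start();
        inputAdapter.join(); queryEngine.join(); outputAdapter.join();
    }
}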

Подробнее
31-05-2012 дата публикации

Method for displaying CPU utilization in a multi-processing system

Номер: US20120137295A1
Принадлежит: Alcatel Lucent Canada Inc

Various exemplary embodiments relate to a method of measuring CPU utilization. The method may include: executing at least one task on a multi-processing system having at least two processors; determining that a task is blocked because a resource is unavailable; starting a first timer for the task that measures the time the task is blocked; determining that the resource is available; resuming processing the task; stopping the first timer for the task; and storing the time interval that the task was blocked. The method may determine that a task is blocked when the task requires access to a resource, and a semaphore indicates that the resource is in use. The method may also include measuring the utilization time of each task, an idle time for each processor, and an interrupt request time for each processor. Various exemplary embodiments also relate to the above method encoded as instructions on a machine-readable medium.
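A rough software analogue of the blocked-time measurement: a timer starts when a task finds the resource unavailable and stops once the semaphore is acquired, yielding the interval the task spent blocked. The sleep and the semaphore stand in for the system's real resource contention.

import java.util.concurrent.Semaphore;

public class BlockedTimeMeter {
    public static void main(String[] args) throws InterruptedException {
        Semaphore resource = new Semaphore(1);
        resource.acquire();                                  // resource is currently "in use"

        Thread task = new Thread(() -> {
            long blockedNanos = 0;
            if (!resource.tryAcquire()) {                    // task would block: start the timer
                long start = System.nanoTime();
                try {
                    resource.acquire();                      // wait for the resource
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                blockedNanos = System.nanoTime() - start;    // stop the timer
            }
            resource.release();
            System.out.printf("task was blocked for %.1f ms%n", blockedNanos / 1e6);
        });

        task.start();
        Thread.sleep(50);                                    // simulate the holder using the resource
        resource.release();                                  // resource becomes available
        task.join();
    }
}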

Подробнее
07-06-2012 дата публикации

Hierarchical software locking

Номер: US20120143838A1
Принадлежит: Microsoft Corp

A processor chip may have a built-in hardware lock and deterministic exclusive locking of the hardware lock by execution units executing in parallel on the chip. A set of software locks may be maintained, where the execution units set and release the software locks only by first acquiring a lock of the hardware lock. A first execution unit sets a software lock after acquiring a lock of the hardware lock, and other execution units, even if exclusively locking the hardware lock, are unable to lock the software lock until after the first execution unit has reacquired a lock of the hardware lock and possibly released the software lock while exclusively locking the hardware lock. An execution unit may release a software lock after and while holding a lock of the hardware lock. The hardware lock is released when a software lock has been set or released.
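A minimal sketch of the hierarchy described above, with a ReentrantLock standing in for the on-chip hardware lock: software locks may be set or released only while the "hardware" lock is held, so a short exclusive section mediates many software locks. Names and the boolean-array representation are assumptions.

import java.util.concurrent.locks.ReentrantLock;

public class HierarchicalLocking {
    private final ReentrantLock hardwareLock = new ReentrantLock();  // stand-in for the chip's lock
    private final boolean[] softwareLocks;

    HierarchicalLocking(int count) { softwareLocks = new boolean[count]; }

    /** Try to set software lock i; returns true on success. */
    boolean trySetSoftwareLock(int i) {
        hardwareLock.lock();                  // acquire the (hardware) lock first
        try {
            if (softwareLocks[i]) return false;   // already held by another execution unit
            softwareLocks[i] = true;
            return true;
        } finally {
            hardwareLock.unlock();            // hardware lock released once the software lock is set
        }
    }

    /** Release software lock i while holding the hardware lock. */
    void releaseSoftwareLock(int i) {
        hardwareLock.lock();
        try {
            softwareLocks[i] = false;
        } finally {
            hardwareLock.unlock();
        }
    }
}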

Подробнее
07-06-2012 дата публикации

High Performance Real-Time Read-Copy Update

Номер: US20120144129A1
Автор: Paul E. McKenney
Принадлежит: International Business Machines Corp

A technique for reducing reader overhead when referencing a shared data element while facilitating realtime-safe detection of a grace period for deferring destruction of the shared data element. The grace period is determined by a condition in which all readers that are capable of referencing the shared data element have reached a quiescent state subsequent to a request for a quiescent state. Common case local quiescent state tracking may be performed using only local per-reader state information for all readers that have not blocked while in a read-side critical section in which the data element is referenced. Uncommon case non-local quiescent state tracking may be performed using non-local multi-reader state information for all readers that have blocked while in their read-side critical section. The common case local quiescent state tracking requires less processing overhead than the uncommon case non-local quiescent state tracking.

Подробнее
21-06-2012 дата публикации

Fast and linearizable concurrent priority queue via dynamic aggregation of operations

Номер: US20120159498A1
Автор: Terry Wilmarth
Принадлежит: Intel Corp

Embodiments of the invention improve parallel performance in multi-threaded applications by serializing concurrent priority queue operations to improve throughput. An embodiment uses a synchronization protocol and aggregation technique that enables a single thread to handle multiple operations in a cache-friendly fashion while threads awaiting the completion of those operations spin-wait on a local stack variable, i.e., the thread continues to poll the stack variable until it has been set or cleared appropriately, rather than rely on an interrupt notification. A technique for an enqueue/dequeue (push/pop) optimization uses re-ordering of aggregated operations to enable the execution of two operations for the price of one in some cases. Other embodiments are described and claimed.
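A simplified aggregation sketch in the spirit of this description, not the patented protocol: callers publish push/pop operations, a single "combiner" thread applies a whole batch to the underlying priority queue, and waiting callers spin on a per-operation flag rather than contending on the queue itself. The Op structure and lock choice are assumptions.

import java.util.PriorityQueue;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

public class AggregatedPriorityQueue {
    private static final class Op {
        final boolean push; final int value;
        volatile boolean done; volatile Integer result;
        Op(boolean push, int value) { this.push = push; this.value = value; }
    }

    private final PriorityQueue<Integer> heap = new PriorityQueue<>();
    private final Queue<Op> pending = new ConcurrentLinkedQueue<>();
    private final ReentrantLock combinerLock = new ReentrantLock();

    public void push(int v) { submit(new Op(true, v)); }
    public Integer pop()    { return submit(new Op(false, 0)).result; }

    private Op submit(Op op) {
        pending.add(op);
        while (!op.done) {
            if (combinerLock.tryLock()) {            // become the combiner
                try {
                    for (Op p; (p = pending.poll()) != null; ) {
                        if (p.push) heap.add(p.value);
                        else p.result = heap.poll();
                        p.done = true;               // releases the spinning submitter
                    }
                } finally {
                    combinerLock.unlock();
                }
            } else {
                Thread.onSpinWait();                 // spin-wait on the operation's flag, no blocking
            }
        }
        return op;
    }
}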

Подробнее
28-06-2012 дата публикации

Determining the processing order of a plurality of events

Номер: US20120167105A1
Принадлежит: International Business Machines Corp

A method for operating a multi-threading computational system includes: identifying related events; allocating the related events to a first thread; allocating unrelated events to one or more second threads; wherein the events allocated to the first thread are executed in sequence and the events allocated to the one or more second threads are executed in parallel to execution of the first thread.
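A minimal sketch of the allocation policy: events that are related (here, sharing a key) go to one single-threaded executor and therefore run in sequence, while unrelated events are handed to a pool and may run in parallel with that sequence. The key-based notion of "related" is an assumption for illustration.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EventAllocationSketch {
    private final ExecutorService relatedThread = Executors.newSingleThreadExecutor();
    private final ExecutorService unrelatedPool = Executors.newFixedThreadPool(4);
    private final String relatedKey;

    EventAllocationSketch(String relatedKey) { this.relatedKey = relatedKey; }

    void dispatch(String key, Runnable event) {
        if (key.equals(relatedKey)) {
            relatedThread.execute(event);     // related events: strict submission order
        } else {
            unrelatedPool.execute(event);     // unrelated events: parallel execution
        }
    }

    void shutdown() {
        relatedThread.shutdown();
        unrelatedPool.shutdown();
    }

    public static void main(String[] args) {
        EventAllocationSketch sketch = new EventAllocationSketch("order-42");
        sketch.dispatch("order-42", () -> System.out.println("related event 1"));
        sketch.dispatch("other",    () -> System.out.println("unrelated event"));
        sketch.dispatch("order-42", () -> System.out.println("related event 2"));
        sketch.shutdown();
    }
}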

Подробнее
28-06-2012 дата публикации

Thread synchronization methods and apparatus for managed run-time environments

Номер: US20120167106A1
Принадлежит: Intel Corp

An example method disclosed herein comprises initiating a first optimistically balanced synchronization to acquire a lock of an object, the first optimistically balanced synchronization comprising a first optimistically balanced acquisition and a first optimistically balanced release to be performed on the lock by a same thread and at a same nesting level, releasing the lock after execution of program code covered by the lock if a stored state of the first optimistically balanced release indicates that the first optimistically balanced release is still valid, the stored state of the first optimistically balanced release being initialized prior to execution of the program code to indicate that the first optimistically balanced release is valid, and throwing an exception after execution of the program code covered by the lock if the stored state of the first optimistically balanced release indicates that the first optimistically balanced release is no longer valid.

Подробнее
05-07-2012 дата публикации

Method and system for providing remote control from a remote client computer

Номер: US20120169874A1
Принадлежит: Calgary Scientific Inc

A method and system for remotely controlling a device via a computer network is provided. A client computer generates a client difference program indicative of a change of a state of the device last received from a server computer and transmits the same to the server computer. Upon receipt, the server computer executes the client difference program and determines an updated state of the device, generates control data indicative of the updated state of the device, provides the control data to the device, and generates and transmits a server difference program having encoded a difference between the state of the device and a state of the device last transmitted to the client computer. The client computer executes the server difference program for updating the state of the device last received from the server computer and for displaying the same in a human comprehensible fashion.

Подробнее
05-07-2012 дата публикации

Processing user input events in a web browser

Номер: US20120174121A1
Принадлежит: Research in Motion Ltd

A method and computing device are provided for processing user events received via a user interface, such as a touchscreen, in multiple threads. When a user event is received for a target element in a webpage, the user event is dispatched to both a main browser thread and a secondary thread. The secondary thread processes user events in accordance with established default actions defined within the browser, while the main thread processes the user events in accordance with any event handlers defined for that target element. The main thread processing may be delayed by other interleaved tasks, and the secondary thread may be given priority over the main thread. When the secondary thread completes processing, an updated webpage is displayed. When the main thread subsequently completes processing, its updated rendering of the webpage is displayed. The secondary thread thus provides an early user interface response to the user event.

Подробнее
12-07-2012 дата публикации

Method and system for using temporary exclusive blocks for parallel accesses to operating means

Номер: US20120179821A1
Принадлежит: SIEMENS AG

In at least one example embodiment, the invention relates to a computer-implemented method, a computer-implemented system and a computer program product for controlling access to splittable resources in a distributed client-server system operating in parallel. The resource control system is designed for a plurality of clients connected to the system and is used to maintain consistency of the data. When a client makes a first attempt to access a resource of the server, an exclusive lock for the requested resource is allocated to the accessing client, which blocks access to the resource for other clients, said exclusive lock only being allocated for a pre-determinable period of time and then automatically discontinued.
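A hedged sketch of such a lease-style exclusive lock: the first client to touch a resource gets the exclusive lock for a fixed period, after which the lock is treated as released automatically. The lease length and the in-memory table are illustrative assumptions, not the patented server mechanism.

import java.util.HashMap;
import java.util.Map;

public class LeaseLockTable {
    private static final long LEASE_MILLIS = 5_000;      // pre-determinable lease period (assumed)

    private static final class Lease { String owner; long expiresAt; }
    private final Map<String, Lease> leases = new HashMap<>();

    /** Returns true if clientId now holds (or already held) the exclusive lock on resource. */
    synchronized boolean tryAcquire(String resource, String clientId) {
        long now = System.currentTimeMillis();
        Lease lease = leases.get(resource);
        if (lease == null || lease.expiresAt <= now || lease.owner.equals(clientId)) {
            if (lease == null) { lease = new Lease(); leases.put(resource, lease); }
            lease.owner = clientId;
            lease.expiresAt = now + LEASE_MILLIS;         // lock discontinues itself after the lease
            return true;
        }
        return false;                                      // another client holds an unexpired lease
    }
}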

Подробнее
12-07-2012 дата публикации

Using ephemeral stores for fine-grained conflict detection in a hardware accelerated stm

Номер: US20120179875A1
Принадлежит: Individual

A method and apparatus for fine-grained filtering in a hardware accelerated software transactional memory system is herein described. A data object, which may have an arbitrary size, is associated with a filter word. The filter word is in a first default state when no access, such as a read, from the data object has occurred during the pendency of a transaction. Upon encountering a first access, such as a first read, from the data object, access barrier operations including an ephemeral/private store operation to set the filter word to a second state are performed. Upon a subsequent/redundant access, such as a second read, the access barrier operations are elided to accelerate the subsequent access, based on the filter word being set to the second state to indicate that a previous access occurred.

Подробнее
12-07-2012 дата публикации

Methods and apparatus for detecting deadlock in multithreading programs

Номер: US20120180065A1
Автор: George B. Leeman, Jr.
Принадлежит: International Business Machines Corp

A method of detecting deadlock in a multithreading program is provided. An invocation graph is constructed having a single root and a plurality of nodes corresponding to one or more functions written in code of the multithreading program. A resource graph is computed in accordance with one or more resource sets in effect at each node of the invocation graph. It is determined whether cycles exist between two or more nodes of the resource graph. A cycle is an indication of deadlock in the multithreading program.
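A sketch of the final step described above: once a resource (lock-order) graph has been built, a cycle between its nodes indicates a potential deadlock. The edge list below is an illustrative input, not the patent's invocation-graph construction.

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class LockGraphCycleCheck {
    private final Map<String, List<String>> edges = new HashMap<>();

    void addEdge(String from, String to) {
        edges.computeIfAbsent(from, k -> new java.util.ArrayList<>()).add(to);
    }

    boolean hasCycle() {
        Set<String> visited = new HashSet<>(), onStack = new HashSet<>();
        for (String node : edges.keySet())
            if (dfs(node, visited, onStack)) return true;
        return false;
    }

    private boolean dfs(String node, Set<String> visited, Set<String> onStack) {
        if (onStack.contains(node)) return true;           // back edge => cycle => possible deadlock
        if (!visited.add(node)) return false;              // already explored without finding a cycle
        onStack.add(node);
        for (String next : edges.getOrDefault(node, List.of()))
            if (dfs(next, visited, onStack)) return true;
        onStack.remove(node);
        return false;
    }

    public static void main(String[] args) {
        LockGraphCycleCheck g = new LockGraphCycleCheck();
        g.addEdge("lockA", "lockB");                       // thread 1 holds A, wants B
        g.addEdge("lockB", "lockA");                       // thread 2 holds B, wants A
        System.out.println("cycle (potential deadlock): " + g.hasCycle());
    }
}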

Подробнее
12-07-2012 дата публикации

Optimizing Communication of System Call Requests

Номер: US20120180072A1
Принадлежит: Advanced Micro Devices Inc

Provided herein is a method for optimizing communication for system calls. The method includes storing a system call for each work item in a wavefront and transmitting said stored system calls to a processor for execution. The method also includes receiving a result to each work item in the wavefront responsive to said transmitting.

Подробнее
26-07-2012 дата публикации

Component-specific disclaimable locks

Номер: US20120191892A1
Автор: Kirk J. Krauss
Принадлежит: International Business Machines Corp

Systems and methods of protecting a shared resource in a multi-threaded execution environment in which threads are permitted to transfer control between different software components, for any of which a disclaimable lock having a plurality of orderable locks can be identified. Back out activity can be tracked among a plurality of threads with respect to the disclaimable lock and the shared resource, and reclamation activity among the plurality of threads may be ordered with respect to the disclaimable lock and the shared resource.

Подробнее
02-08-2012 дата публикации

Adaptive spinning of computer program threads acquiring locks on resource objects by selective sampling of the locks

Номер: US20120198454A1
Принадлежит: International Business Machines Corp

In the dynamic sampling or collection of data relative to locks for which threads attempting to acquire the lock may be spinning, so as to adaptively adjust the spinning of threads for a lock, an implementation is provided for monitoring a set of parameters relative to the sampling of data for particular locks and for selectively terminating the sampling when certain parameter values or conditions are met.

Подробнее
02-08-2012 дата публикации

Deadlock Detection Method and System for Parallel Programs

Номер: US20120198460A1
Автор: YAO Qi, Yong Zheng, Zhi Da Luo
Принадлежит: International Business Machines Corp

A deadlock detection method and computer system for parallel programs. A determination is made that a lock of the parallel programs is no longer used in a running procedure of the parallel programs. A node corresponding to the lock that is no longer used, and edges relating to the lock that is no longer used, are deleted from a lock graph corresponding to the running procedure of the parallel programs in order to acquire an updated lock graph. The lock graph is constructed according to a lock operation of the parallel programs. Deadlock detection is then performed on the updated lock graph.

Подробнее
02-08-2012 дата публикации

Fair scalable reader-writer mutual exclusion

Номер: US20120198471A1
Принадлежит: Individual

Implementing fair scalable reader writer mutual exclusion for access to a critical section by a plurality of processing threads is accomplished by creating a first queue node for a first thread, the first queue node representing a request by the first thread to access the critical section; setting at least one pointer within a queue to point to the first queue node, the queue representing at least one thread desiring access to the critical section; waiting until a condition is met, the condition comprising the first queue node having no preceding write requests as indicated by at least one predecessor queue node on the queue; permitting the first thread to enter the critical section in response to the condition being met; and causing the first thread to release a spin lock, the spin lock acquired by a second thread of the plurality of processing threads.

Подробнее
02-08-2012 дата публикации

Method and apparatus for operating system event notification mechanism using file system interface

Номер: US20120198479A1
Принадлежит: International Business Machines Corp

A method and structure for OS event notification, including a central processing unit (CPU) and a memory including instructions for an event notification mechanism for monitoring operating system events in an operating system (OS) being executed by the CPU. The OS includes a kernel having a plurality of kernel subcomponents that provide services to one or more applications executing in the OS in a user mode, using system calls to the kernel. The OS event notification mechanism is capable of monitoring events within the kernel, at a level below the user mode level. The OS event notification mechanism includes Application Program Interfaces (APIs) that are standard for the OS.

Подробнее
13-09-2012 дата публикации

Method and apparatus for synchronizing information between platforms in a portable terminal based on a multi-software platform

Номер: US20120233301A1
Автор: Kang-Ho HUR
Принадлежит: SAMSUNG ELECTRONICS CO LTD

A method of synchronizing information between platforms in a portable terminal based on a multi-software platform is provided. The method includes verifying that a first software platform is changed to a second software platform; and if the first software platform is changed to the second software platform, defining volume information of the changed second software platform with reference to volume information of the first software platform.

Подробнее
13-09-2012 дата публикации

Load balancing on heterogeneous processing clusters implementing parallel execution

Номер: US20120233486A1
Принадлежит: NEC Laboratories America Inc

Methods and systems for managing data loads on a cluster of processors that implement an iterative procedure through parallel processing of data for the procedure are disclosed. One method includes monitoring, for at least one iteration of the procedure, completion times of a plurality of different processing phases that are undergone by each of the processors in a given iteration. The method further includes determining whether a load imbalance factor threshold is exceeded in the given iteration based on the completion times for the given iteration. In addition, the data is repartitioned by reassigning the data to the processors based on predicted dependencies between assigned data units of the data and completion times of a plurality of the processors for at least two of the phases. Further, the parallel processing is implemented on the cluster of processors in accordance with the reassignment.

Подробнее
04-10-2012 дата публикации

System and Method for Resource Locking

Номер: US20120254302A1
Принадлежит: Michael Burrows, Redstone Joshua A, Sean Quinlan

A server system includes a processor and a data structure having an entry for a resource, the entry including a first sequence number. The server has communication procedures for receiving a request from a client to access the resource, where the request includes a second sequence number obtained from a service, and a resource request handling program. Upon receiving the request, the resource request handling program determines whether the server has any record of having previously received a request to access the resource. If not, the server returns a provisional rejection to the client, requiring the client to verify that it holds a lock on the specified resource. A provisional bit in the entry is initially set to indicate that the resource has not been accessed since the system was last initialized. The provisional bit is reset when a request to access the resource is granted.

Подробнее
04-10-2012 дата публикации

Thread folding tool

Номер: US20120254880A1
Автор: Kirk J. Krauss
Принадлежит: International Business Machines Corp

A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. Under control of a supervisor thread, a plurality of the identified threads can be folded together to be executed as a folded thread. The execution of the folded thread can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the identified threads can be presented in a user interface that is presented on a display.

Подробнее
04-10-2012 дата публикации

Electronic device, application determination method, and application determination program

Номер: US20120254894A1
Автор: Kenji Fukuda
Принадлежит: Kyocera Corp

The cellular telephone device includes: a control unit that executes any one of a plurality of applications; and a storage unit that stores an execution count of an application executed by the execution unit. The control unit determines an application to be executed after terminating or suspending a predetermined application, based on the execution count stored in the storage unit.

Подробнее
01-11-2012 дата публикации

Distributed shared memory

Номер: US20120278392A1
Автор: Lior Aronovich, Ron Asher
Принадлежит: International Business Machines Corp

Systems and methods for implementing a distributed shared memory (DSM) in a computer cluster in which an unreliable underlying message passing technology is used, such that the DSM efficiently maintains coherency and reliability. DSM agents residing on different nodes of the cluster process access permission requests of local and remote users on specified data segments via handling procedures, which provide for recovery of lost ownership of a data segment while ensuring exclusive ownership of the data segment among the DSM agents, detection and resolution of a no-owner messaging deadlock, pruning of obsolete messages, and recovery of the latest contents of a data segment whose ownership has been lost.

Подробнее
22-11-2012 дата публикации

Lock control in multiple processor systems

Номер: US20120297394A1
Принадлежит: International Business Machines Corp

A computer system comprising a plurality of processors and one or more storage devices. The system is arranged to execute a plurality of tasks, each task comprising threads and each task being assigned a priority from 1 to a whole number greater than 1, each thread of a task assigned the same priority as the task and each thread being executed by a processor. The system also provides lock and unlock functions arranged to lock and unlock data stored by a storage device responsive to such a request from a thread. A method of operating the system comprises maintaining a queue of threads that require access to locked data, maintaining an array comprising, for each priority, duration and/or throughput information for threads of that priority, and setting a wait flag for a priority in the array according to a predefined algorithm calculated from the duration and/or throughput information in the array.

Подробнее
29-11-2012 дата публикации

Information processing system, exclusive control method and exclusive control program

Номер: US20120304185A1
Автор: Takashi Horikawa
Принадлежит: NEC Corp

Features of an information processing system include a stand-by thread count information updating means that updates stand-by thread count information showing the number of threads which wait for release of a lock according to a spinlock method, according to the state transition of a thread which requests acquisition of a predetermined lock; and a stand-by method determining means that determines a stand-by method for a thread which requests acquisition of the lock, based on the stand-by thread count information updated by the stand-by thread count information updating means and an upper limit value of the number of threads which wait according to the predetermined spinlock method.
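A sketch of the stand-by decision under stated assumptions: a counter tracks how many threads are already spinning for the lock; a newcomer spins only while that count is below an upper limit, and otherwise falls back to blocking (here, on the object's monitor). The limit value and the monitor-based fallback are illustrative choices.

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedSpinLock {
    private static final int SPINNER_LIMIT = 2;            // upper limit on spinning waiters (assumed)

    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final AtomicInteger spinners = new AtomicInteger(0);

    void lock() throws InterruptedException {
        if (spinners.incrementAndGet() <= SPINNER_LIMIT) {
            try {
                while (!locked.compareAndSet(false, true))  // spinlock path
                    Thread.onSpinWait();
            } finally {
                spinners.decrementAndGet();
            }
        } else {
            spinners.decrementAndGet();
            synchronized (this) {                           // blocking path
                while (!locked.compareAndSet(false, true))
                    wait(100);                              // sleep, re-checking periodically
            }
        }
    }

    void unlock() {
        locked.set(false);
        synchronized (this) { notifyAll(); }                // wake any blocked waiters
    }
}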

Подробнее
06-12-2012 дата публикации

Notification barrier

Номер: US20120311582A1
Автор: John Samuel Bushell
Принадлежит: Apple Inc

The disclosed embodiments provide a system which implements a notification barrier. During operation, the system receives a call to the notification barrier installed on a sender object, wherein the call originates from a receiver object which receives notifications posted by the sender object. In response to the call, the system acquires a notification lock, wherein the notification lock is held whenever the sender is posting a notification. The system then releases the notification lock, wherein releasing the lock indicates to the receiver object that the sender object has no pending posted notifications.
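A minimal sketch of the barrier idea, not Apple's actual implementation: the sender posts notifications while holding a notification lock, so a receiver that simply acquires and releases that same lock knows that every notification posted before the barrier call has been fully delivered.

import java.util.concurrent.locks.ReentrantLock;

public class NotificationBarrier {
    private final ReentrantLock notificationLock = new ReentrantLock();

    /** Called by the sender around each notification it posts. */
    void post(Runnable deliver) {
        notificationLock.lock();
        try {
            deliver.run();                    // deliver the notification to receivers
        } finally {
            notificationLock.unlock();
        }
    }

    /** Called by a receiver; returns once no post is in flight. */
    void barrier() {
        notificationLock.lock();              // blocks while a notification is being posted
        notificationLock.unlock();            // releasing it signals "no pending posted notifications"
    }
}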

Подробнее
13-12-2012 дата публикации

Multi-core processor system, computer product, and interrupt method

Номер: US20120317403A1
Принадлежит: Fujitsu Ltd

A multi-core processor system has a first core executing an OS and multiple applications, and a second core to which a first thread of the applications is assigned. The multi-core processor system includes a processor configured to receive from the first core, an interrupt signal specifying an event that has occurred with an application among the applications, determine whether the event specified by the received interrupt signal is any one among a start event for exclusion and a start event for synchronization for the first thread currently under execution by the second core, save from the second core, the first thread currently under execution, upon determining the specified event to be a start event, and assign a second thread different from the saved first thread and among a group of execution-awaiting threads of the applications, as a thread to be executed by the second core.

Подробнее
03-01-2013 дата публикации

Hardware Enabled Lock Mediation

Номер: US20130007322A1
Принадлежит: International Business Machines Corp

A tangible storage medium and data processing system build a runtime environment of a system. A profile manager receives a service request containing a profile identifier. The profile identifier specifies a required version of at least one software component. The profile manager identifies a complete installation of the software component, and at least one delta file. The profile manager dynamically constructs a classpath for the required version by preferentially utilizing files from the at least one delta file followed by files from the complete installation. The runtime environment is then built utilizing the classpath.

Подробнее
10-01-2013 дата публикации

Determination of running status of logical processor

Номер: US20130014123A1
Принадлежит: International Business Machines Corp

An embodiment provides for operating an information processing system. An aspect of the invention includes allocating an execution interval to a first logical processor of a plurality of logical processors of the information processing system. The execution interval is allocated for use by the first logical processor in executing instructions on a physical processor of the information processing system. The first logical processor determines that a resource required for execution by the first logical processor is locked by another one of the other logical processors. An instruction is issued by the first logical processor to determine whether the lock-holding logical processor is currently running. The first logical processor waits for the lock-holding logical processor to release the lock if it is currently running. A command is issued by the first logical processor to a super-privileged process for relinquishing the allocated execution interval by the first logical processor if the lock-holding logical processor is not running.

Подробнее
10-01-2013 дата публикации

Reducing cross queue synchronization on systems with low memory latency across distributed processing nodes

Номер: US20130014124A1
Принадлежит: International Business Machines Corp

A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected specific ones of the processing nodes or the processing units to be able to retrieve work from the work element in the GCQ.
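A sketch of the allocated-processor-unit (APU) mask check under stated assumptions: a work element in the global command queue carries a bit mask, and a processing unit may retrieve work from it only if its own bit is set in that mask. Field and method names are illustrative.

public class ApuMaskedWorkElement {
    private final long apuMask;                // bit i set => processing unit i may retrieve work
    private int remainingItems;

    ApuMaskedWorkElement(long apuMask, int workItems) {
        this.apuMask = apuMask;
        this.remainingItems = workItems;
    }

    /** Processing unit 'unitId' asks for one work item; returns an item index or -1. */
    synchronized int retrieveWork(int unitId) {
        if ((apuMask & (1L << unitId)) == 0) return -1;   // this unit was not selected
        if (remainingItems == 0) return -1;               // work element exhausted
        return --remainingItems;                          // hand out the next independent work item
    }

    public static void main(String[] args) {
        ApuMaskedWorkElement element = new ApuMaskedWorkElement(0b0101L, 3); // units 0 and 2 selected
        System.out.println("unit 0 gets item " + element.retrieveWork(0));   // allowed
        System.out.println("unit 1 gets item " + element.retrieveWork(1));   // -1: masked out
    }
}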

Подробнее
24-01-2013 дата публикации

Terminating barriers in streams of access requests to a data store while maintaining data consistency

Номер: US20130024630A1
Принадлежит: ARM LTD

A memory controller for a slave memory that controls an order of data access requests is disclosed. There is a read and write channel having streams of requests with corresponding barrier transactions within the request streams indicating where reordering should not occur. The controller has barrier response generating circuitry located on the read and said write channels and being responsive to receipt of one of said barrier transactions: to issue a response to the received barrier transaction such that subsequent requests in said stream of requests are not blocked by the barrier transaction and can be received and to terminate the received barrier transaction and not transmit the received barrier transaction further; and to mark requests subsequent to the received barrier transaction in the stream of requests with a barrier context value identifying the received barrier transaction. The memory controller comprises a point of data consistency on the write channel prior to the memory; and the memory controller comprises comparison circuitry configured to compare the barrier context value of each write request to be issued to the memory with the barrier context values of at least some pending read requests, the pending read requests being requests received at the memory controller but not yet issued to the memory and: in response to detecting at least one of the pending read requests with an earlier barrier context value identifying a barrier transaction that has a corresponding barrier transaction in the stream of requests on the write channel that is earlier in the stream of requests than the write request, stalling the write request until the at least one pending read request has been performed; and in response to detecting no pending read requests with the earlier barrier context value, issuing the write request to the memory.

Подробнее
24-01-2013 дата публикации

Relaxation of synchronization for iterative convergent computations

Номер: US20130024662A1
Принадлежит: International Business Machines Corp

Systems and methods are disclosed that allow atomic updates to global data to be at least partially eliminated to reduce synchronization overhead in parallel computing. A compiler analyzes the data to be processed to selectively permit unsynchronized data transfer for at least one type of data. A programmer may provide a hint to expressly identify the type of data that are candidates for unsynchronized data transfer. In one embodiment, the synchronization overhead is reducible by generating an application program that selectively substitutes codes for unsynchronized data transfer for a subset of codes for synchronized data transfer. In another embodiment, the synchronization overhead is reducible by employing a combination of software and hardware by using relaxation data registers and decoders that collectively convert a subset of commands for synchronized data transfer into commands for unsynchronized data transfer.

Подробнее
31-01-2013 дата публикации

Scheduling Flows in a Multi-Platform Cluster Environment

Номер: US20130031561A1
Принадлежит: International Business Machines Corp

Techniques for scheduling multiple flows in a multi-platform cluster environment are provided. The techniques include partitioning a cluster into one or more platform containers associated with one or more platforms in the cluster, scheduling one or more flows in each of the one or more platform containers, wherein the one or more flows are created as one or more flow containers, scheduling one or more individual jobs into the one or more flow containers to create a moldable schedule of one or more jobs, flows and platforms, and automatically converting the moldable schedule into a malleable schedule.

Подробнее
28-02-2013 дата публикации

Compiler for x86-based many-core coprocessors

Номер: US20130055225A1
Принадлежит: NEC Laboratories America Inc

A system and method for compiling includes, for a parallelizable code portion of an application stored on a computer readable storage medium, determining one or more variables that are to be transferred to and/or from a coprocessor if the parallelizable code portion were to be offloaded. A start location and an end location are determined for at least one of the one or more variables as a size in memory. The parallelizable code portion is transformed by inserting an offload construct around the parallelizable code portion and passing the one or more variables and the size as arguments of the offload construct such that the parallelizable code portion is offloaded to a coprocessor at runtime.

Подробнее
28-02-2013 дата публикации

Managing shared computer resources

Номер: US20130055284A1
Автор: Simon L. Sabato
Принадлежит: Cisco Technology Inc

Various systems, processes, and products may be used to manage shared computer resources. In particular implementations, managing shared computer resources may include the ability to execute a first process on a first central processing unit and execute a second process on a second central processing unit, wherein the first process and the second process are operable to access a first resource, and to determine at a mutex controller which of the first process and the second process is allowed to access the first resource at a given time.

Подробнее
14-03-2013 дата публикации

Message communication of sensor and other data

Номер: US20130067486A1
Принадлежит: Microsoft Corp

A service may be provided that reads sensors, and that communicates information based on the sensor readings to applications. In one example, an operating system provides a sensor interface that allows programs that run on a machine to read the values of sensors (such as an accelerometer, light meter, etc.). A service may use the interface to read the value of sensors, and may receive subscriptions to sensor values from other programs. The service may then generate messages that contain the sensor value, and may provide these messages to programs that have subscribed to the messages. The messages may contain raw sensor data. Or, the messages may contain information that is derived from the sensor data and/or from other data.

Подробнее
14-03-2013 дата публикации

Runtime Optimization Of An Application Executing On A Parallel Computer

Номер: US20130067487A1
Принадлежит: International Business Machines Corp

Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.

Подробнее
21-03-2013 дата публикации

Send-side matching of data communications messages

Номер: US20130073603A1
Принадлежит: International Business Machines Corp

Send-side matching of data communications messages in a distributed computing system comprising a plurality of compute nodes, including: issuing by a receiving node to source nodes a receive message that specifies receipt of a single message to be sent from any source node, the receive message including message matching information, a specification of a hardware-level mutual exclusion device, and an identification of a receive buffer; matching by two or more of the source nodes the receive message with pending send messages in the two or more source nodes; operating by one of the source nodes having a matching send message the mutual exclusion device, excluding messages from other source nodes with matching send messages and identifying to the receiving node the source node operating the mutual exclusion device; and sending to the receiving node from the source node operating the mutual exclusion device a matched pending message.

Подробнее
21-03-2013 дата публикации

Information processing system, information processing apparatus, and information processing method

Номер: US20130073719A1
Автор: Mitsuo Ando
Принадлежит: Ricoh Co Ltd

A disclosed information processing system includes a first apparatus including a storage unit storing types of events which occur in the first apparatus so as to be reported to an information processing apparatus via a network, and a sending unit sending, when one of the events stored in the storage unit occurs, event information of the event to the information processing apparatus; and the information processing apparatus including a delivery destination storage unit storing identification information of a second apparatus existing at a delivery destination of the event in the first apparatus, and a delivery unit sending the event information of the event to the second apparatus of which identification information is stored in the delivery destination storage unit when the event information is received by the information processing apparatus.

Подробнее
28-03-2013 дата публикации

Programming in a Simultaneous Multi-Threaded Processor Environment

Номер: US20130080838A1
Принадлежит: International Business Machines Corp

A system, method, and product are disclosed for testing multiple threads simultaneously. The threads share a real memory space. A first portion of the real memory space is designated as exclusive memory such that the first portion appears to be reserved for use by only one of the threads. The threads are simultaneously executed. The threads access the first portion during execution. Apparent exclusive use of the first portion of the real memory space is permitted by a first one of the threads. Simultaneously with permitting apparent exclusive use of the first portion by the first one of the threads, apparent exclusive use of the first portion of the real memory space is also permitted by a second one of the threads. The threads simultaneously appear to have exclusive use of the first portion and may simultaneously access the first portion.

Подробнее
28-03-2013 дата публикации

Multi-Lane Concurrent Bag for Facilitating Inter-Thread Communication

Номер: US20130081061A1
Принадлежит: Oracle International Corp

A method, system, and medium are disclosed for facilitating communication between multiple concurrent threads of execution using a multi-lane concurrent bag. The bag comprises a plurality of independently-accessible concurrent intermediaries (lanes) that are each configured to store data elements. The bag provides an insert function executable to insert a given data element into the bag by selecting one of the intermediaries and inserting the data element into the selected intermediary. The bag also provides a consume function executable to consume a data element from the bag by choosing one of the intermediaries and consuming (removing and returning) a data element stored in the chosen intermediary. The bag guarantees that execution of the consume function consumes a data element if the bag is non-empty and permits multiple threads to execute the insert or consume functions concurrently.
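A simplified multi-lane bag sketch: each lane is an independently accessible concurrent deque; insert picks a lane from the calling thread's id, and consume tries that lane first and then scans the others. Unlike the bag described above, this sketch gives only a best-effort non-emptiness guarantee under races; lane selection by thread id is an assumption.

import java.util.concurrent.ConcurrentLinkedDeque;

public class MultiLaneBag<T> {
    private final ConcurrentLinkedDeque<T>[] lanes;

    @SuppressWarnings("unchecked")
    public MultiLaneBag(int laneCount) {
        lanes = new ConcurrentLinkedDeque[laneCount];
        for (int i = 0; i < laneCount; i++) lanes[i] = new ConcurrentLinkedDeque<>();
    }

    private int homeLane() {
        return (int) (Thread.currentThread().getId() % lanes.length);
    }

    /** Insert: choose an intermediary (lane) and add the element to it. */
    public void insert(T element) {
        lanes[homeLane()].addLast(element);
    }

    /** Consume: remove and return some element, or null if the bag appears empty. */
    public T consume() {
        int start = homeLane();
        for (int i = 0; i < lanes.length; i++) {
            T element = lanes[(start + i) % lanes.length].pollFirst();
            if (element != null) return element;
        }
        return null;                          // bag appeared empty during the scan
    }
}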

Подробнее
18-04-2013 дата публикации

Concurrent Execution of Critical Sections by Eliding Ownership of Locks

Номер: US20130097391A1
Принадлежит: WISCONSIN ALUMNI RESEARCH FOUNDATION

Critical sections of multi-threaded programs, normally protected by locks providing access by only one thread, are speculatively executed concurrently by multiple threads with elision of the lock acquisition and release. Upon a completion of the speculative execution without actual conflict, as may be identified using standard cache protocols, the speculative execution is committed; otherwise the speculative execution is squashed. Speculative execution with elision of the lock acquisition allows a greater degree of parallel execution in multi-threaded programs with aggressive lock usage.

1. A method of coordinating access to common memory by multiple program threads comprising the steps of: in each given program thread, (a) detecting the beginning of a critical section of the given program thread in which conflicts to access of the common memory could occur resulting from execution of other program threads; (b) speculatively executing the critical section; and (c) committing the speculative execution of the critical section if there has been no conflict and squashing the speculative execution of the critical section if there has been a conflict.
2. The method of claim 1, wherein the conflict is: (a) another thread writing data read by the given program thread in the critical section, or (b) another thread reading or writing data written by the given program thread.
3. The method of wherein the conflict is detected by an invalidation of a cache block holding data of the critical section.
4. The method of wherein the speculative execution is committed at the end of the critical section.
5. The method of wherein the end of the critical section is detected by a pattern of instructions typically associated with a lock release.
6. The method of wherein the pattern of instructions is a store instruction to a deduced lock variable address.
7. The method of wherein the speculative execution is committed at a resource boundary limiting further speculation.
8. The method of including the ...

Подробнее
18-04-2013 дата публикации

ADMINISTERING INCIDENT POOLS FOR EVENT AND ALERT ANALYSIS

Номер: US20130097620A1

Administering incident pools including assigning an incident received from one or more components of the distributed processing system to a pool of incidents; assigning to each incident a particular combined minimum time for inclusion of the incident in the pool; in response to the pool closing, determining for each incident in the pool whether the incident has met its combined minimum time for inclusion in the pool; if the incident has been in the pool for its combined minimum time, including the incident in the closed pool; if the incident has not been in the pool for its combined minimum time, moving the incident from the closed pool to a next pool; applying incident suppression rules using the incidents assigned to the next pool; and applying incident creation rules to the incidents that were assigned to the next pool, while omitting any duplicate incidents caused by the assignment. 1. A method of administering incident pools for event and alert analysis in a distributed processing system , the method comprising:assigning, by the incident analyzer, an incident received from one or more components of the distributed processing system to a pool of incidents;assigning, by the incident analyzer, to each incident a particular combined minimum time for inclusion of the incident in the pool;in response to the pool closing, determining, by the incident analyzer, for each incident in the pool whether the incident has met its combined minimum time for inclusion in the pool;if the incident has been in the pool for its combined minimum time, including, by the incident analyzer, the incident in the closed pool;if the incident has not been in the pool for its combined minimum time, moving, by the incident analyzer, the incident to a next pool;applying, by the incident analyzer, incident suppression rules using the incidents assigned to the next pool; andapplying, by the incident analyzer, incident creation rules to the incidents assigned to the next pool, while omitting any ...

Подробнее
25-04-2013 дата публикации

Data processing device and method, and processor unit of same

Номер: US20130103930A1
Автор: Takashi Horikawa
Принадлежит: NEC Corp

A processor unit (200) includes: cache memory (210); an instruction execution unit (220); a processing unit (230) that detects the fact that a thread enters an exclusive control section which is specified in advance as likely to become a bottleneck; a processing unit (240) that detects the fact that the thread exits the exclusive control section; and an execution flag (250) that indicates, based on the detection results, whether there is a thread that is executing a process in the exclusive control section. The cache memory (210) temporarily stores a priority flag in each cache entry, and the priority flag indicates whether the data is to be used during execution in the exclusive control section. When the execution flag (250) is set, the processor unit (200) sets the priority flag of the cache entry that is the access target. The processor unit (200) keeps data used in the exclusive control section in the cache memory by using the priority flag to determine the replacement target among cache entries when a cache miss occurs.

Подробнее
02-05-2013 дата публикации

Time Limited Lock Ownership

Номер: US20130111089A1
Принадлежит: ORACLE INTERNATIONAL CORPORATION

Described herein are techniques for time limited lock ownership. In one embodiment, in response to receiving a request for a lock on a shared resource, the lock is granted and a lock lease period associated with the lock is established. Then, in response to determining that the lock lease period has expired, one or more lock lease expiration procedures are performed. In many cases, the time limited lock ownership may prevent system hanging, timely detect system deadlocks, and/or improve overall performance of the database.

1. A computer-implemented method, comprising: receiving a request for a lock, from a plurality of locks, on a shared resource; in response to receiving said request: granting said lock on said shared resource to a lock holder, and establishing a lock lease period associated with said lock; determining that said lock lease period associated with said lock needs to be shortened; and in response to determining that said lock lease period associated with said lock needs to be shortened, shortening said lock lease period associated with said lock.
2. The computer-implemented method of claim 1, wherein the determining that said lock lease period associated with said lock needs to be shortened comprises determining that said lock lease period needs to be shortened to avoid system deadlocks.
3. The computer-implemented method of claim 1, wherein the determining that said lock lease period associated with said lock needs to be shortened comprises determining that said lock lease period needs to be shortened because a process having a higher priority than said lock holder requested said lock on said shared resource.
4. The computer-implemented method of claim 1, wherein the determining that said lock lease period associated with said lock needs to be shortened comprises determining that said lock lease period needs to be shortened because a notification of a blocking asynchronous system trap (BAST) function was received.
5. The computer-implemented method ...

Подробнее
09-05-2013 дата публикации

MULTICORE PROCESSOR SYSTEM, COMMUNICATION CONTROL METHOD, AND COMMUNICATION COMPUTER PRODUCT

Номер: US20130117765A1
Принадлежит: FUJITSU LIMITED

A multicore processor system is configured to cause among multiple cores, a second core to acquire from a first core that executes a first process, an execution request for a second process and a remaining period from a time of execution of the execution request until an estimated time of completion of the first process; and give notification of a result of the second process from the second core to the first core after an estimated completion time of the first process obtained by adding the remaining period to a start time of the second process. 1. A multicore processor system configured to:cause among multiple cores, a second core to acquire from a first core that executes a first process, an execution request for a second process and a remaining period from a time of execution of the execution request until an estimated time of completion of the first process; andgive notification of a result of the second process from the second core to the first core after an estimated completion time of the first process, obtained by adding the remaining period to a start time of the second process.2. The multicore processor system according to claim 1 , further configured to:calculate a waiting period by subtracting a period consumed for completing the second process from the remaining period, when the second core completes the second process before the estimated completion time of the first process, andcause the second core to detect that the waiting period has elapsed since calculating the waiting period, whereinthe multicore processor system gives notification of the result of the second process from the second core to the first core, when the second core detects that the waiting period has elapsed.3. The multicore processor system according to claim 1 , whereinthe multicore processor system causes a third core, which is executing a third process, to acquire an estimated period of a fourth process for completing the fourth process that is executed by the first core and ...

Подробнее
16-05-2013 дата публикации

SYSTEM AND METHOD FOR OPTIMIZING USER NOTIFICATIONS FOR SMALL COMPUTER DEVICES

Номер: US20130125142A1
Принадлежит: MICROSOFT CORPORATION

A system and method for notifying users in a manner that is appropriate for the event and the environment for the user. The method of the present invention relates to determining the desired properties of an event and assigning varying notification characteristics to that event. Profiles are created of the various events, wherein each profile relates to a different mode or situational environment, such as a meeting environment, an office or normal environment, a louder outside-type environment, etc. The invention further relates to placing the small computer device in a particular mode, either automatically or manually. Once in a particular mode the device provides notifications according to that mode.

1-20. (canceled)
21. A method for automatically notifying a user of notification events with predetermined notification types depending on the user's environment and stored notification profiles, the method comprising: operating in a first notification mode associated with a first set of notification types; detecting a calendar-related event; determining whether a second notification mode is associated with the calendar-related event; upon determining that the calendar-related event is associated with the second notification mode, notifying the user of the calendar-related event using the second notification mode; and upon determining that the calendar-related event is not associated with the second notification mode, notifying the user of the calendar-related event using the first notification mode.
22. The method of claim 21, wherein detecting the calendar-related event further comprises accessing a calendar-type application capable of storing calendar-related events.
23. The method of claim 22, wherein detecting the calendar-related event further comprises reminding the user of upcoming calendar-related events scheduled in the calendar-type application.
24. The method of claim 21, wherein the second notification mode is associated with the calendar-related event when ...

Подробнее
16-05-2013 дата публикации

METHOD AND SYSTEM FOR RECORDING OPERATIONS IN A WEB APPLICATION

Номер: US20130125143A1
Принадлежит:

Collecting log data efficiently by controlling the capturing event for an operation log on the basis of application layer information. A web server generates a response including an operation log capturing script and the information from an operation log capturing control definition table and a property capturing definition table, and sends the response to a client. In the client, the received information is forwarded from a web browser module to a script engine module. An operation log capturing module sets the information acquisition event handler on the basis of the forwarded information, captures a sequential operation log on the basis of the operations performed by a user in the web browser, and sends the captured sequential operation log to a log server. A log server module collects sequential operation log in an operation log table, and a log analysis module analyzes the collected logs. 1. A method of recording operation in a web application , which records an operation log on a web page by a computer having a processing unit , wherein a property acquisition definition rule for capturing information from the web page , and a log capturing control definition rule for controlling a recording range of the operation log on the basis of the captured information are provided , and the processing unit executes the steps of setting an information acquisition event handler of the web page on the basis of the property acquisition definition rule; capturing information from the web page by the information acquisition event handler; controlling the recording range of the operation log on the web page on the basis of the captured information and the log capturing control definition rule; and recording the operation log.2. The method of recording operation in the web application according to claim 1 , wherein the step of setting the information acquisition event handler includes a step of rewriting to call the information acquisition event handler claim 1 , after ...

Подробнее
23-05-2013 дата публикации

METHOD AND SYSTEM FOR TRANSFORMING INPUT DATA STREAMS

Номер: US20130132974A1
Принадлежит: Open Text S.A.

A system and method for processing an input data stream in a first data format of a plurality of first data formats to an output data stream in a second data format of a plurality of second data formats. A plurality of input connector modules receive respective input data streams and at least one input queue stores the received input data streams. A plurality of job threads is operatively connected to the at least one input queue, each job thread formatting a stored input data stream to produce an output data stream. At least one output queue stores the output data streams from the plurality of job threads. A plurality of output connector modules is operatively connected to the at least one output queue, the output connector modules supplying respective output data streams.

1. A system for transforming input data streams comprising: a physical input connector; a physical output connector; and a processing system coupled to the physical input connector and the physical output connector, the processing system configured to: read an electronic input data stream of file data received at the physical input connector, wherein input data in the input data stream is of a first document format; detect patterns in the input data stream to identify events; create a message for each event containing text from the event according to a generic data structure corresponding to the event; execute a process configured to create output data of a second format from the messages, the output data containing text from the messages, the output data in a different format from the first document format; and provide an output data stream to a destination via the physical output connector, the output data stream comprising the output data.
2. The system for transforming input data streams of claim 1, wherein the processing system is configured to create an output pipeline for the process.
3. The system for transforming input data streams of claim 2, wherein the output pipeline further ...

More
30-05-2013 publication date

Scaleable Status Tracking Of Multiple Assist Hardware Threads

Number: US20130139168A1
Assignee: International Business Machines Corp

A processor includes an initiating hardware thread, which initiates a first assist hardware thread to execute a first code segment. Next, the initiating hardware thread sets an assist thread executing indicator in response to initiating the first assist hardware thread. The set assist thread executing indicator indicates whether assist hardware threads are executing. A second assist hardware thread initiates and begins executing a second code segment. In turn, the initiating hardware thread detects a change in the assist thread executing indicator, which signifies that both the first assist hardware thread and the second assist hardware thread terminated. As such, the initiating hardware thread evaluates assist hardware thread results in response to both of the assist hardware threads terminating.
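
A rough software analogue of that indicator can be sketched in Python. The patent describes hardware threads and a hardware indicator; everything below (the assist_running counter, the condition variable, the thread names) is an illustrative assumption that only mirrors the control flow: the initiating thread sets the indicator when assist threads start, detects the change when all of them have terminated, and only then evaluates their results.

import threading

assist_running = 0                  # the "assist thread executing indicator"
indicator_lock = threading.Lock()
all_done = threading.Condition(indicator_lock)
results = {}

def assist_thread(name, code_segment):
    # Executes its code segment, then clears its share of the indicator.
    global assist_running
    results[name] = code_segment()
    with indicator_lock:
        assist_running -= 1
        if assist_running == 0:
            all_done.notify_all()   # signals the initiating thread

def initiating_thread():
    global assist_running
    segments = {"assist-1": lambda: sum(range(1000)),
                "assist-2": lambda: max(range(1000))}
    with indicator_lock:
        assist_running = len(segments)   # set the indicator when assists start
    for name, seg in segments.items():
        threading.Thread(target=assist_thread, args=(name, seg)).start()
    with all_done:
        # Detect the change in the indicator that signifies that both
        # assist threads terminated, then evaluate their results.
        all_done.wait_for(lambda: assist_running == 0)
    print("assist results:", results)

if __name__ == "__main__":
    initiating_thread()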

More
30-05-2013 publication date

Transactional Environments for Event and Data Binding Handlers

Number: US20130139181A1
Assignee:

Disclosed are apparatus and methods for executing software applications with an actual environment. Handlers for a software application are registered. Each handler can be executed upon receiving an indication of a triggering event. A determination is made that a triggering event for the software application has occurred. In response to the triggering event: a handler environment for the triggering event is determined based on the actual environment, triggered handlers of the registered handlers are determined to be executed, at least a respective portion of the handler environment is made available to each triggered handler, executing the triggered handlers, where at least one triggered handler updates its respective portion of the handler environment during execution, determining an updated-handler environment based on the handler environment and the portions of the handler environments after execution of the triggered handlers, and updating the actual environment based on the updated-handler environment. 1-20. (canceled) 21. A method, comprising: executing a software application on a computing device using an actual environment of the software application; determining, at the computing device, that a triggering event for the software application has occurred; and in response to the triggering event, the computing device: determining a handler environment for the triggering event, wherein the handler environment is based on the actual environment, determining a triggered handler to be executed, making available to the triggered handler at least a respective portion of the handler environment, executing the triggered handler, after executing the triggered handler, determining an updated-handler environment based on the handler environment and the portions of the handler environments made available to the one or more triggered handlers, and updating the actual environment based on the updated-handler environment. 22. The method of claim 21, further ...
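
The handler-environment idea can be mimicked in a few lines of Python: each triggered handler receives a copy of (part of) the actual environment, and only after the handlers run are their updates merged back. The dict-based environment, the registry layout, and the merge policy below are assumptions made for illustration, not the patented mechanism.

import copy

actual_env = {"counter": 0, "title": "untitled"}

# handler registry: event name -> list of (keys the handler may see, handler fn)
registry = {
    "click": [({"counter"}, lambda env: env.update(counter=env["counter"] + 1)),
              ({"title"},   lambda env: env.update(title="clicked"))],
}

def dispatch(event):
    # Build a handler environment from the actual environment.
    handler_env = copy.deepcopy(actual_env)
    updated = {}
    for keys, handler in registry.get(event, []):
        # Each triggered handler sees only its respective portion.
        portion = {k: handler_env[k] for k in keys}
        handler(portion)                 # the handler mutates its portion
        updated.update(portion)          # collect the updated portions
    # Merge the updated-handler environment back into the actual environment.
    actual_env.update(updated)

if __name__ == "__main__":
    dispatch("click")
    print(actual_env)   # {'counter': 1, 'title': 'clicked'}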

More
06-06-2013 publication date

Embedded systems and methods for threads and buffer management thereof

Number: US20130145372A1
Assignee: INSTITUTE FOR INFORMATION INDUSTRY

Embedded systems are provided, which includes a processing unit and a memory. The processing unit simultaneously executes first thread having a flag for performing a data acquisition operation and second thread for performing a data process and output operation for the acquired data in the data acquisition operation. The flag is used for indicating whether a state of the first thread is in an execution state or a sleep state. The memory which is coupled to the processing unit provides a shared buffer for the first and second threads. Before executing the second thread, the flag is checked to determine whether to execute the second thread, wherein the second thread is executed when the flag indicates the sleep state while execution of the second thread is suspended when the flag indicates the execution state.
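
A minimal Python sketch of that flag check between the acquisition thread and the processing thread might look like the following; the threading.Event stands in for the patent's execution/sleep flag, and every name is an assumption chosen for illustration only.

import threading
import time

shared_buffer = []                 # buffer shared by both threads
executing = threading.Event()      # set -> first thread is in its execution state

def acquisition_thread(samples):
    # First thread: acquires data while its flag marks the execution state.
    for s in samples:
        executing.set()            # flag = execution state
        shared_buffer.append(s)    # data acquisition into the shared buffer
        executing.clear()          # flag = sleep state
        time.sleep(0.01)

def processing_thread(n_expected, out):
    # Second thread: before running, check the flag; only process and
    # output data when the first thread reports the sleep state.
    while len(out) < n_expected:
        if executing.is_set():     # first thread busy -> suspend this pass
            time.sleep(0.001)
            continue
        while shared_buffer:
            out.append(shared_buffer.pop(0) * 2)   # process and output

if __name__ == "__main__":
    out = []
    t1 = threading.Thread(target=acquisition_thread, args=(range(5),))
    t2 = threading.Thread(target=processing_thread, args=(5, out))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(out)   # [0, 2, 4, 6, 8]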

More
06-06-2013 publication date

Information processing apparatus, information processing method, and storage medium

Number: US20130145373A1
Author: Hideo Noro
Assignee: Canon Inc

There is provided with an information processing apparatus for controlling execution of a plurality of threads which run on a plurality of calculation cores connected to a memory including a plurality of banks. A first selection unit is configured to select a thread as a continuing thread which receives data from other thread, out of threads which process a data group of interest, wherein the number of accesses for a bank associated with the selected thread is less than a predetermined count. A second selection unit is configured to select a thread as a transmitting thread which transmits data to the continuing thread, out of the threads which process the data group of interest.

More
06-06-2013 publication date

SYNCHRONIZING JAVA RESOURCE ACCESS

Number: US20130145374A1

A method and an apparatus for synchronizing Java resource access. The method includes configuring for a first access interface of a resource set, a first monitor, and configuring, for a second access interface of the resource set, a second monitor, configuring, for the first monitor, a first waiting queue, and the second monitor, a second waiting queue, in response to the first access interface receiving an access request for a resource from a thread, the first monitor querying whether the resource set has a resource satisfying the access request, in response to a positive querying result, the thread obtains the resource and notifies the second monitor to awake a thread in the second waiting queue, in response to a negative querying result, the first monitor puts the thread in the first waiting queue to queue up. 1. A method of synchronizing Java resource access , the method comprising:configuring a first monitor, for a first access interface of a resource set, and a second monitor, for a second access interface of the resource set;configuring a first waiting queue, for the first monitor, and a second waiting queue, for the second monitor;in response to the first access interface receiving an access request for a resource from a thread, the first monitor querying whether the resource set has a resource that satisfies the access request;in response to a positive query result, the thread obtaining the resource and notifying the second monitor to awake a thread in the second waiting queue; andin response to a negative query result, the first monitor putting the thread into the first waiting queue to queue up.2. The method of synchronizing according to claim 1 , wherein the first access interface is a producer-type access interface and the second access interface is a consumer -type access interface.3. The method of synchronizing according to claim 2 , wherein the first waiting queue contains a producer thread and the second waiting queue contains a consumer thread.4. ...
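
In Python terms the two monitors can be approximated with two Condition objects over a single lock, one for the producer-side access interface and one for the consumer-side interface, so that a successful put wakes a thread waiting in the consumer queue and a successful get wakes a thread waiting in the producer queue. This is only a sketch of the idea under those assumptions, not the patented Java mechanism.

import threading
from collections import deque

class TwoMonitorPool:
    """Resource set with separate producer/consumer access interfaces."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        lock = threading.Lock()
        self.not_full = threading.Condition(lock)    # "first monitor" (producer side)
        self.not_empty = threading.Condition(lock)   # "second monitor" (consumer side)

    def put(self, item):
        with self.not_full:
            # Producer waits in the first waiting queue while the set is full.
            while len(self.items) >= self.capacity:
                self.not_full.wait()
            self.items.append(item)
            # Notify the second monitor to awake a thread in its waiting queue.
            self.not_empty.notify()

    def get(self):
        with self.not_empty:
            # Consumer waits in the second waiting queue while the set is empty.
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            # Notify the first monitor that a slot became available.
            self.not_full.notify()
            return item

if __name__ == "__main__":
    pool = TwoMonitorPool(capacity=2)
    out = []
    consumer = threading.Thread(target=lambda: out.extend(pool.get() for _ in range(3)))
    consumer.start()
    for i in range(3):
        pool.put(i)
    consumer.join()
    print(out)   # [0, 1, 2]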

More
06-06-2013 publication date

Determining collective barrier operation skew in a parallel computer

Number: US20130145379A1
Author: Daniel A. Faraj
Assignee: International Business Machines Corp

Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
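
The measurement loop can be imitated with ordinary threads and a threading.Barrier: in each round one "node" is delayed before entering the barrier, the delayed node's barrier completion time is recorded, and the skew is the spread of those times. Thread-based nodes, the sleep delay, and wall-clock timing are assumptions for illustration; the patent targets compute nodes of a parallel computer.

import threading
import time

N = 4                       # number of "compute nodes" (threads stand in for nodes)
DELAY = 0.05                # artificial delay applied to the delayed node

def measure_round(delayed_rank):
    """One round: every node but one enters the barrier immediately; the
    delayed node enters after DELAY.  Returns the delayed node's barrier
    completion time (time spent inside the barrier call)."""
    barrier = threading.Barrier(N)
    completion = [0.0] * N

    def node(rank):
        if rank == delayed_rank:
            time.sleep(DELAY)                 # enter the barrier after a delay
        start = time.perf_counter()
        barrier.wait()                        # collective barrier operation
        completion[rank] = time.perf_counter() - start

    threads = [threading.Thread(target=node, args=(r,)) for r in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return completion[delayed_rank]

if __name__ == "__main__":
    # Select each node as the delayed node once and record its completion time.
    times = [measure_round(delayed) for delayed in range(N)]
    # Skew is the difference of the maximum and minimum completion times.
    print("barrier operation skew:", max(times) - min(times))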

More
13-06-2013 publication date

PREPARING PARALLEL TASKS TO USE A SYNCHRONIZATION REGISTER

Number: US20130152103A1

A job may be divided into multiple tasks that may execute in parallel on one or more compute nodes. The tasks executing on the same compute node may be coordinated using barrier synchronization. However, to perform barrier synchronization, the tasks use (or attach) to a barrier synchronization register which establishes a common checkpoint for each of the tasks. A leader task may use a shared memory region to publish to follower tasks the location of the barrier synchronization register—i.e., a barrier synchronization register ID. The follower tasks may then monitor the shared memory to determine the barrier synchronization register ID. The leader task may also use a count to ensure all the tasks attach to the BSR. This advantageously avoids any task-to-task communication which may reduce overhead and improve performance. 1. A method for synchronizing a plurality of tasks of a job , comprising:allocating, by one or more computer processors, a shared memory region for the plurality of tasks, wherein the plurality of tasks are executed in parallel on a given compute node;storing, in the shared memory region, an indicator for discovering a register;retrieving the indicator from the shared memory region;discovering the register using the retrieved indicator; andduring a synchronization process, accessing the register to ensure that each of the plurality of tasks have completed.2. The method of claim 1 , wherein storing the indicator for discovering the register in the shared memory region is performed by a leader task selected from the plurality of tasks claim 1 , wherein retrieving the indicator and discovering the register is performed by at least one follower task selected from the plurality of the tasks claim 1 , and wherein the shared memory region is accessible by both the leader and follower tasks.3. The method of claim 1 , further comprising:incrementing a count after at least one of the plurality of tasks attaches to the register; andafter determining the count ...

More
20-06-2013 publication date

RUNTIME OPTIMIZATION OF AN APPLICATION EXECUTING ON A PARALLEL COMPUTER

Number: US20130160025A1

Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session. 1. A method of runtime optimization of an application executing on a parallel computer , the parallel computer having a plurality of compute nodes organized into a communicator , the method comprising:determining, by each compute node, whether a collective operation is root-based;if the collective operation is not root-based, establishing a tuning session administered by a self tuning module for the collective operation in dependence upon an identifier of a call site of the collective operation and executing the collective operation in the tuning session;if the collective operation is root-based, determining, through use of a single other collective operation, whether all compute nodes executing the application identified the collective operation at the same call site;if all compute nodes executing the application identified the collective operation at the same call site, establishing a tuning session administered by the self tuning module for the collective operation in dependence upon the identifier of the call site of the collective operation and executing the collective operation in the tuning session; andif ...

More
20-06-2013 publication date

METHOD FOR CENTRALIZING EVENTS FOR A MULTILEVEL HIERARCHICAL COMPUTER MANAGEMENT SYSTEM

Number: US20130160030A1
Assignee: CASSIDIAN SAS

A method for centralizing events for a multilevel hierarchical computer management system, the system including a plurality of source equipments generating events and a plurality of event collectors per level, the method including selecting by an upper level collector a lower level collector according to operational parameters and/or a link quality of service of the lower level collector; receiving by the collector the events from the selected lower level collector; periodically verifying if the selected collector is available and if not repeating the selection step; and comparing by the upper level collector its events with those from the unselected lower level collectors and receiving from one of these unselected lower level collectors the events that are different. 1. A method for centralizing events for a multilevel hierarchical computer management system , said system comprising a plurality of source equipments generating events and a plurality of event collectors per level , said method comprising:selecting by a collector from an upper level a collector from a lower level according to operational parameters and/or a link quality of service of said lower level collector;receiving by said collector the events from said selected lower level collector;periodically verifying if the selected collector is available and if not repeating the selecting; andcomparing by said upper level collector its events with those from the unselected lower level collectors and receiving from one of said unselected lower level collectors the events that are different.2. The event centralizing method according to claim 1 , comprising recording all events generated by the source equipments in collectors of the same hierarchical level as the source equipments.3. The event centralizing method according to claim 1 , wherein said comparing of the events from the upper level collector with those from the unselected lower level collectors is carried out periodically.4. The event centralizing ...

More
04-07-2013 publication date

Information processing apparatus and method of controlling information processing apparatus

Number: US20130174161A1
Assignee: Fujitsu Ltd

A hardware thread causes a SleepID register of a WAKEUP signal generation unit to store a SleepID that identifies the hardware thread when suspending a process due to waiting for a process by another CPU. The WAKEUP signal generation unit causes the WAKEUP data register of the WAKEUP signal generation unit to store a SleepID notified by a node when a process that the hardware thread waits ends. The WAKEUP signal generation unit outputs a WAKEUP signal that cancels the stop of the hardware thread to the hardware thread when the SleepIDs of the SleepID register and the WAKEUP data register agree with each other.

More
04-07-2013 publication date

Fault tolerant distributed lock manager

Number: US20130174165A1
Author: Rajat Chopra
Assignee: Red Hat Inc

A lock manager running on a machine may write a first entry for a first process to a queue associated with a resource. If the first entry is not at a front of the queue, the lock manager identifies a second entry that is at the front of the queue, and determines whether a second process associated with the second entry is operational. If the second process is not operational, the lock manager removes the second entry from the queue. Additionally, if the queue becomes unavailable, the lock manager may initiate failover to a backup copy of the queue.
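
A stripped-down sketch of the queue walk: an entry is appended for the requesting process, and whenever the entry at the front of the queue belongs to a process that is no longer operational it is removed so the lock can move on. The in-memory deque and the is_alive callback are stand-ins chosen for illustration; the patent's lock manager works against a per-resource queue with failover to a backup copy, which is not modelled here.

from collections import deque

class LockManager:
    def __init__(self, is_alive):
        self.queues = {}          # resource name -> deque of process ids
        self.is_alive = is_alive  # callback: is this process still operational?

    def request(self, resource, pid):
        """Write an entry for pid; return True once pid holds the lock."""
        q = self.queues.setdefault(resource, deque())
        if pid not in q:
            q.append(pid)
        # If our entry is not at the front, examine the front entry and
        # drop it when its owner is no longer operational.
        while q[0] != pid and not self.is_alive(q[0]):
            q.popleft()
        return q[0] == pid        # at the front -> lock acquired

    def release(self, resource, pid):
        q = self.queues[resource]
        if q and q[0] == pid:
            q.popleft()

if __name__ == "__main__":
    alive = {"p1": False, "p2": True}          # p1 has crashed
    lm = LockManager(is_alive=lambda pid: alive.get(pid, False))
    lm.request("config-file", "p1")            # dead process holds the front entry
    print(lm.request("config-file", "p2"))     # True: the stale entry was removed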

More
11-07-2013 publication date

Administering Incident Pools For Event And Alert Analysis

Number: US20130179905A1

Administering incident pools including creating a pool of incidents, the pool having a predetermined initial period of time; assigning each received incident to the pool; assigning, by the incident analyzer, to each incident a predetermined minimum time for inclusion in a pool; extending for one or more of the incidents the predetermined initial period of time of the pool by a particular period of time assigned to the incident; determining whether conditions have been met to close the pool; and if conditions have been met to close the pool determining for each incident in the pool whether the incident has been in the pool for its predetermined minimum time for inclusion in a pool; and if the incident has not been in the pool for its predetermined minimum time, evicting the incident from the closed pool and including the incident in a next pool. 1. A method of administering incident pools for event and alert analysis in a distributed processing system , the method comprising:receiving, by an incident analyzer from an incident queue, a plurality of incidents from one or more components of the distributed processing system;creating, by the incident analyzer, a pool of incidents;assigning, by the incident analyzer, each received incident to the pool;assigning, by the incident analyzer, to each incident a predetermined minimum time for inclusion in a pool;determining, by the incident analyzer, whether conditions have been met to close the pool; andif conditions have been met to close the pool, determining for each incident in the pool whether the incident has been in the pool for its predetermined minimum time for inclusion in a pool; andif the incident has not been in the pool for its predetermined minimum time, evicting the incident from the closed pool and including the incident in a next pool.2. The method of wherein one or more of the incidents comprises events and where in the method further comprises identifying one or more alerts in dependence upon one or more ...
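
A toy version of the pool lifecycle is sketched below: incidents are assigned to an open pool, and at close time any incident that has not yet been in the pool for its minimum inclusion time is evicted into the next pool. The timing constants and data layout are assumptions, and the per-incident extension of the pool's initial period described above is left out for brevity.

import time

POOL_INITIAL_PERIOD = 2.0     # seconds a pool stays open
MIN_INCLUSION = 1.5           # minimum time an incident must spend in a pool

class Pool:
    def __init__(self, opened_at):
        self.opened_at = opened_at
        self.closes_at = opened_at + POOL_INITIAL_PERIOD
        self.incidents = []       # list of (incident, time it entered the pool)

    def assign(self, incident, now):
        self.incidents.append((incident, now))

    def close(self, now, next_pool):
        """Close the pool; incidents that have not yet been in the pool for
        their minimum inclusion time are evicted into the next pool."""
        kept = []
        for incident, entered in self.incidents:
            if now - entered < MIN_INCLUSION:
                next_pool.assign(incident, entered)   # evict to the next pool
            else:
                kept.append(incident)
        return kept                                   # incidents left for analysis

if __name__ == "__main__":
    t0 = time.monotonic()
    pool = Pool(t0)
    pool.assign("disk-error", t0)            # early incident, will stay
    pool.assign("link-down", t0 + 1.8)       # late incident, will be evicted
    next_pool = Pool(t0 + POOL_INITIAL_PERIOD)
    analyzed = pool.close(now=t0 + POOL_INITIAL_PERIOD, next_pool=next_pool)
    print(analyzed)                              # ['disk-error']
    print([i for i, _ in next_pool.incidents])   # ['link-down']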

More
18-07-2013 publication date

Method for Synchronous Execution of Programs in a Redundant Automation System

Number: US20130185726A1
Assignee: SIEMENS AKTIENGESELLSCHAFT

A method for synchronous execution of programs in a redundant automation system comprising at least two subsystems, wherein at least one request for execution of one of the programs is taken as a basis for starting a scheduling pass, and during this scheduling pass a decision is taken as to whether this one program is executed on each of the subsystems. Suitable measures are proposed which allow all programs a fair and deterministic share of the program execution based on their priorities. 1. A method for synchronous execution of computer programs in a redundant automation system comprising a plurality of subsystems , wherein at least one request for execution of one computer program of the computer programs forms a basis for starting a scheduling pass during which a decision is performed to determine whether the one computer program of the computer programs is executed in sync on each of the plurality of subsystems , the method comprising the steps of:recording, by each of the plurality of subsystems, for each of the computer programs, a local execution time which has already accrued, the local execution time indicating for how long a respective computer program of the computer programs has already been executed on a respective subsystem of the plurality of subsystems; andperforming the decision during the scheduling pass based on a respective recorded accrued local execution time and one of a respective prescribable and prescribed maximum execution time for the requested one computer program of the plurality of computer programs, the requested computer program being unexecuted on the plurality of subsystems if an accrued local execution time one of reaches a respective maximum execution time and exceeds the respective maximum execution time.2. The method as claimed in claim 1 , further comprising the steps of:a) interchanging by the plurality of subsystems, during the scheduling pass, information which comprises the respective accrued local execution time for the ...

More
18-07-2013 publication date

PROVIDING BY ONE PROGRAM TO ANOTHER PROGRAM ACCESS TO A WARNING TRACK FACILITY

Number: US20130185732A1

A program (e.g., an operating system) is provided a warning that it has a grace period in which to perform a function, such as cleanup (e.g., complete, stop and/or move a dispatchable unit). The program is being warned, in one example, that it is losing access to its shared resources. For instance, in a virtual environment, a guest program is warned that it is about to lose its central processing unit resources, and therefore, it is to perform a function, such as cleanup. 1. A method of facilitating processing in a computing environment , said method comprising:providing by a first program to a second program an indication of a warning track facility installed within the computing environment, the warning track facility to provide to the second program a grace period to perform a first function;notifying the second program, by the first program, that the grace period has begun; andperforming by the first program a second function subsequent to the grace period.2. The method of claim 1 , wherein the performing the second function comprises providing by the first program to the second program a status indication relating to completion of the first function within the grace period.3. The method of claim 2 , wherein the providing the status indication comprises providing by the first program to the second program claim 2 , a next time the second program executes claim 2 , an indication that the first function completed within the grace period.4. The method of claim 2 , wherein the providing the status indication comprises providing by the first program to the second program claim 2 , a next time the second program executes claim 2 , an indication that the first function did not complete within the grace period.5. The method of claim 1 , further comprising receiving by the first program a warning track registration request from the second program claim 1 , and enabling the second program to participate in the warning track facility.6. The method of claim 1 , wherein the ...

More
18-07-2013 publication date

USE OF A WARNING TRACK INTERRUPTION FACILITY BY A PROGRAM

Number: US20130185738A1

A program (e.g., an operating system) is provided a warning that it has a grace period in which to perform a function, such as cleanup (e.g., complete, stop and/or move a dispatchable unit). The program is being warned, in one example, that it is losing access to its shared resources. For instance, in a virtual environment, a guest program is warned that it is about to lose its central processing unit resources, and therefore, it is to perform a function, such as cleanup. 1. A method of facilitating processing in a computing environment , said method comprising:obtaining by a program an indication of a warning track facility installed within the computing environment, the warning track facility to provide to the program a warning track grace period to perform a function;receiving by the program a warning track notification indicating the warning track grace period has begun; andbased on the warning track notification, at least initiating by the program the function within the warning track grace period.2. The method of claim 1 , further comprising registering by the program for the warning track facility.3. The method of claim 2 , wherein based on the registering the program is enabled for the warning track facility.4. The method of claim 1 , wherein the warning track notification comprises an interrupt in which shared resources assigned to the program are released subsequent to termination of the warning track grace period.5. The method of claim 1 , wherein the function comprises one of:completing a dispatchable unit executing on a processor in which the program executes; ormaking the dispatchable unit re-dispatchable on another processor of the computing environment.6. The method of claim 1 , wherein the program is a guest program having access to shared resources of the computing environment during a timeslice provided to a guest central processing unit on which the guest program executes claim 1 , and wherein the warning track grace period is distinguishable ...
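
At a much higher level than the described CPU facility, the register/notify/cleanup handshake can be pictured with a small Python sketch in which the "program" registers a cleanup routine, receives the warning notification, and the other side records whether the cleanup finished within the grace period. Every element here (threads, an Event as the notification, a boolean completion flag, the 0.5 s grace period) is an illustrative assumption, not the facility itself.

import threading
import time

GRACE_PERIOD = 0.5      # seconds the guest gets after the warning

class WarningTrack:
    def __init__(self):
        self.cleanup = None
        self.completed_in_grace = None

    def register(self, cleanup):
        # The program registers for the warning track facility.
        self.cleanup = cleanup

    def notify(self):
        # Host side: warn the guest, give it the grace period, then record
        # whether the cleanup function completed within that period.
        done = threading.Event()
        threading.Thread(target=lambda: (self.cleanup(), done.set())).start()
        self.completed_in_grace = done.wait(timeout=GRACE_PERIOD)

def guest_cleanup():
    # The guest's function: e.g. finish or re-queue its dispatchable unit.
    time.sleep(0.1)

if __name__ == "__main__":
    track = WarningTrack()
    track.register(guest_cleanup)
    track.notify()
    print("completed within grace period:", track.completed_in_grace)  # True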

More
18-07-2013 publication date

WARNING TRACK INTERRUPTION FACILITY

Number: US20130185739A1

A program (e.g., an operating system) is provided a warning that it has a grace period in which to perform a function, such as cleanup (e.g., complete, stop and/or move a dispatchable unit). The program is being warned, in one example, that it is losing access to its shared resources. For instance, in a virtual environment, a guest program is warned that it is about to lose its central processing unit resources, and therefore, it is to perform a function, such as cleanup. 1. A method of facilitating processing in a computing environment , said method comprising:providing by a first program to a second program a warning track facility installed indication indicating installation of a warning track facility within the computing environment, the warning track facility to provide to the second program a grace period to perform a first function;providing by the first program to the second program a warning track notification;based on the warning track notification, initiating by the second program the first function within the grace period; andperforming by the first program a second function subsequent to the grace period.2. The method of claim 1 , wherein the warning track notification comprises an interruption in which shared resources assigned to the second program are released subsequent to termination of the grace period.3. The method of claim 2 , wherein the first program gains access to the released shared resources in order to perform the second function.4. The method of claim 1 , wherein the first program is a host program and the second program is a guest program claim 1 , the guest program having access to shared resources of the computing environment during a timeslice provided to a guest central processing unit on which the guest program executes claim 1 , the grace period being distinguishable from the timeslice.5. The method of claim 4 , wherein the grace period prematurely terminates the timeslice.6. The method of claim 4 , wherein the grace period ...

More
25-07-2013 publication date

DATA PROCESSING METHOD, DATA PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING DATA PROCESSING PROGRAM

Number: US20130191846A1
Author: Kumura Takahiro
Assignee: NEC Corporation

A data processing method according to the present invention includes executing a third thread for performing a series of procedures (reception, operation, storage, and transmission), in which the series of procedures includes receiving a control signal transmitted from a first thread that supplies input data, then executing an operation using the input data, storing a result of the operation to a data region specified by the control signal, and transmitting the control signal to a second thread that uses the result. This guarantees exclusive data access without locking/unlocking data at the time of executing threads with data dependency and also reduces data transfer cost. 1. A data processing method comprising executing a third thread for performing a series of procedures (reception , operation , storage , and transmission) , the series of procedures including receiving a control signal transmitted from a first thread that supplies input data , then executing an operation using the input data , storing a result of the operation to a data region specified by the control signal , and transmitting the control signal to a second thread that uses the result.2. The data processing method according to claim 1 , whereinthe first thread and the second threads are different threads from each other, andeither one of series of procedures is performed, the one series of procedures (reception, operation, storage, and transmission) including receiving a control signal transmitted from a thread that supplies the first thread with input data, then executing an operation using the input data, storing a result of the operation to a data region specified by the control signal as input data to be supplied to the third thread, and transmitting the control signal to the third thread, and the other series of procedures (reception, operation, storage, and transmission) including the second thread receiving a control signal transmitted from the third thread, then executing an operation ...
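
The receive-operate-store-transmit cycle can be pictured with three Python threads passing a small control message (naming the data region to use) through queues, so that only the thread currently holding the control signal touches that region and no explicit lock around the data is needed. The queue-based control signal, one region per value, and all names are assumptions made for illustration.

import queue
import threading

regions = []                           # data regions, addressed by the control signal
to_third = queue.Queue()               # first thread  -> third thread
to_second = queue.Queue()              # third thread  -> second thread
DONE = {"region": None}                # sentinel control signal

def first_thread(values):
    # Supplies input data: store each value in its own data region and
    # transmit a control signal naming that region downstream.
    for v in values:
        regions.append(v)
        to_third.put({"region": len(regions) - 1})
    to_third.put(DONE)

def third_thread():
    # Reception -> operation -> storage -> transmission.
    while True:
        ctrl = to_third.get()              # reception of the control signal
        if ctrl is DONE:
            to_second.put(DONE)
            break
        r = ctrl["region"]
        regions[r] = regions[r] * 10       # operation + storage into the region
        to_second.put({"region": r})       # transmit the control signal onward

def second_thread(results):
    # Uses the result stored in the region named by the control signal.
    while True:
        ctrl = to_second.get()
        if ctrl is DONE:
            break
        results.append(regions[ctrl["region"]])

if __name__ == "__main__":
    results = []
    t3 = threading.Thread(target=third_thread)
    t2 = threading.Thread(target=second_thread, args=(results,))
    t3.start(); t2.start()
    first_thread([1, 2, 3])
    t3.join(); t2.join()
    print(results)     # [10, 20, 30]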

More
01-08-2013 publication date

Major branch instructions

Number: US20130198492A1
Assignee: International Business Machines Corp

Major branch instructions are provided that enable execution of a computer program to branch from one segment of code to another segment of code. These instructions also create a new stream of processing at the other segment of code enabling execution of the other segment of code to be performed in parallel with the segment of code from which the branch was taken. In one example, the other stream of processing starts a transaction for processing instructions of the other stream of processing.

More
01-08-2013 publication date

Major branch instructions

Number: US20130198496A1
Assignee: International Business Machines Corp

Major branch instructions are provided that enable execution of a computer program to branch from one segment of code to another segment of code. These instructions also create a new stream of processing at the other segment of code enabling execution of the other segment of code to be performed in parallel with the segment of code from which the branch was taken. In one example, the other stream of processing starts a transaction for processing instructions of the other stream of processing.

More
08-08-2013 publication date

EXCLUSIVE CONTROL METHOD OF RESOURCE AND EXCLUSIVE CONTROLLER OF RESOURCE

Number: US20130205057A1
Author: SASAOKA Toshio
Assignee: Panasonic Corporation

Under the circumstances that a lock object which performs a restriction control on an exclusive use of a sharable resource is granting a second information processor a right of prior use of the sharable resource over a first information processor, a time length of exclusive use during which the sharable resource is exclusively used by the second information processor is measured when an attempt to acquire the right of prior use requested by the first information processor for the lock object fails, and at least two standby operations are set, the at least two standby operations being carried out by the first information processor until the right of prior use of the sharable resource granted to the second information processor is no longer valid, and the time length of exclusive use is compared to a decision threshold value preset for evaluation of the time length of exclusive use so that one of the standby operations suitable for a comparison result is selected. 1. A resource exclusive control method used for exclusive control of a sharable resource between a plurality of information processors capable of concurrently executing processes, comprising: a step in which under the circumstances that a lock object which performs a restriction control on an exclusive use of the sharable resource is granting a second information processor a right of prior use of the sharable resource over a first information processor, a time length of exclusive use during which the sharable resource is exclusively used by the second information processor is measured, wherein the time length of exclusive use is a length of time from when the second information processor begins exclusive use of the sharable resource to when an attempt to acquire the right of prior use requested by the first information processor for the lock object fails, and a step in which at least two standby operations are set, the at least two standby operations being carried out by the first information processor until the ...
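
The "pick a standby operation from the measured blocking time" idea translates naturally into an adaptive lock wrapper: if the previous owner's measured hold time is below a decision threshold, spin briefly; otherwise block. The threshold, the spin loop, and the timing code below are illustrative assumptions, not the patented controller.

import threading
import time

class AdaptiveLock:
    DECISION_THRESHOLD = 0.001     # seconds; tune for the workload

    def __init__(self):
        self._lock = threading.Lock()
        self.last_hold = 0.0       # measured time length of exclusive use
        self._acquired_at = 0.0

    def acquire(self):
        if not self._lock.acquire(blocking=False):
            # The attempt to acquire failed: choose a standby operation based
            # on how long the resource is typically held exclusively.
            if self.last_hold < self.DECISION_THRESHOLD:
                while not self._lock.acquire(blocking=False):
                    pass                        # short hold expected -> spin
            else:
                self._lock.acquire()            # long hold expected -> block/sleep
        self._acquired_at = time.perf_counter()

    def release(self):
        self.last_hold = time.perf_counter() - self._acquired_at
        self._lock.release()

if __name__ == "__main__":
    lock = AdaptiveLock()
    counter = 0

    def worker():
        global counter
        for _ in range(1000):
            lock.acquire()
            counter += 1
            lock.release()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)    # 4000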

More
08-08-2013 publication date

Methods and Apparatus for Mobile Device Event Detection

Number: US20130205306A1
Author: Kelly Joe
Assignee: mCube, Incorporated

A computer-implemented method for determining an action for a user, implemented in a computing system programmed to perform the method includes receiving a first time series of physical perturbations with a first physical sensor in response to physical perturbations of the computing system, receiving a second time series of physical perturbations with a second physical sensor in response to the physical perturbations of the computing system, determining an event vector in response to the first time series of physical perturbations and in response to the second time series of physical perturbations, comparing the event vector to a first event signature to determine a first value, determining occurrence of a first event when the first value exceeds a first threshold, and determining a first action for the computing system in response to the determining in the computing system, occurrence of the first event. 1. A computer-implemented method for determining an action for a user , implemented in a computing system programmed to perform the method comprising:receiving in the computing system, a first time series of physical perturbations with a first physical sensor in response to physical perturbations of the computing system;receiving in the computing system, a second time series of physical perturbations with a second physical sensor in response to the physical perturbations of the computing system;determining in the computing system, an event vector in response to the first time series of physical perturbations and in response to the second time series of physical perturbations;comparing in the computing system, the event vector to a first event signature to determine a first value;determining in the computing system, occurrence of a first event when the first value exceeds a first threshold; anddetermining in the computing system, a first action for the computing system in response to the determining in the computing system, occurrence of the first event.2. The ...
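
The comparison step lends itself to a tiny numeric sketch: build an event vector from two sensor time series, score it against stored event signatures, and return the action whose signature clears its threshold. The feature choice (peak amplitude and mean per series), the cosine-similarity measure, and the signature values are assumptions made only for illustration.

import math

def event_vector(series_a, series_b):
    # Combine two time series of perturbations into one feature vector:
    # the peak absolute value and the mean of each series.
    feats = []
    for s in (series_a, series_b):
        feats.append(max(abs(x) for x in s))
        feats.append(sum(s) / len(s))
    return feats

def similarity(v, signature):
    # Cosine similarity between the event vector and an event signature.
    dot = sum(a * b for a, b in zip(v, signature))
    norm = math.sqrt(sum(a * a for a in v)) * math.sqrt(sum(b * b for b in signature))
    return dot / norm if norm else 0.0

SIGNATURES = {
    # event name: (signature vector, threshold, action)
    "tap":   ([9.0, 0.5, 2.0, 0.1], 0.95, "wake screen"),
    "shake": ([3.0, 0.0, 3.0, 0.0], 0.95, "undo typing"),
}

def detect(series_a, series_b):
    v = event_vector(series_a, series_b)
    for name, (sig, threshold, action) in SIGNATURES.items():
        if similarity(v, sig) > threshold:   # occurrence of the event
            return name, action
    return None, None

if __name__ == "__main__":
    accel = [0.1, 8.7, 0.2, 0.0]       # accelerometer perturbations
    gyro = [0.0, 1.9, 0.1, 0.0]        # gyroscope perturbations
    print(detect(accel, gyro))         # ('tap', 'wake screen')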

More
15-08-2013 publication date

EVENT NOTIFICATION MANAGEMENT

Number: US20130212599A1
Assignee: Apple Inc.

Systems and methods are provided for event notification. In one implementation, a method is provided. A determination is made as to whether a threshold associated with pending event notifications has been exceeded by an incoming event notification. A plurality of pending event notifications that can be combined are identified. Two or more event notifications are combined. 1. (canceled)2. A method comprising:determining, by a computer system, that a threshold number of pending event notifications in a queue has been exceeded by an incoming event notification;determining that two or more event notifications in the queue are associated with a common feature of the computer system; andcombining the two or more event notifications based on the common feature such that the number of pending event notifications, including the combined two or more event notifications, in the queue no longer exceeds the threshold number of event notifications, including replacing two or more event notifications with a single event notification indicating that a change has occurred with regard to the common feature.3. The method of claim 2 , further comprising:scanning the computer system to identify the combined event notifications associated with the common feature; andperforming a backup operation including backup data associated with the combined event notifications.4. The method of claim 3 , where scanning the computer system includes comparing current data related to the common feature with data from a previous backup to identify changed data.5. The method of claim 2 , where receiving an incoming event notification includes receiving an event notification at an event notification queue that identifies a change to a feature of the computer system.6. The method of claim 2 , further comprising:identifying one or more event notifications as protected, where protected event notifications cannot be combined.7. The method of claim 2 , further comprising:determining a number of event ...
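
A toy version of the combining policy, assuming notifications are small dicts in a list-based queue: when an incoming notification pushes the queue over the threshold, notifications that share a common feature are collapsed into a single "changed" notification for that feature, while protected notifications are left alone. The threshold, the grouping key, and the protected flag are illustrative guesses at one way to do it.

from collections import Counter

THRESHOLD = 5     # maximum number of pending event notifications

def combine_if_needed(pending, incoming):
    """Append the incoming notification; if the queue then exceeds the
    threshold, combine notifications that share a common feature."""
    pending.append(incoming)
    if len(pending) <= THRESHOLD:
        return pending
    counts = Counter(n["feature"] for n in pending if not n.get("protected"))
    combined, seen = [], set()
    for n in pending:
        feat = n["feature"]
        if n.get("protected") or counts[feat] < 2:
            combined.append(n)                     # nothing to merge with
        elif feat not in seen:
            seen.add(feat)
            combined.append({"feature": feat, "change": "changed"})  # merged
    return combined

if __name__ == "__main__":
    queue = [{"feature": "Documents/a.txt", "change": "modified"},
             {"feature": "Documents/a.txt", "change": "modified"},
             {"feature": "Photos", "change": "added"},
             {"feature": "Documents/a.txt", "change": "renamed"},
             {"feature": "Mail", "change": "modified", "protected": True}]
    queue = combine_if_needed(queue, {"feature": "Photos", "change": "removed"})
    for n in queue:
        print(n)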

More
15-08-2013 publication date

SYSTEM AND METHOD FOR MANAGING CONCURRENT EVENTS

Number: US20130212603A1
Assignee: TWILIO, INC.

A system and method that includes receiving an API request to a type of API resource; retrieving an API concurrency value for the API request; determining a comparison status associated with a comparison of the API concurrency value to a concurrency threshold; if the comparison status is within the concurrency threshold, transmitting the API request to an API processing resource; if the comparison status indicates the concurrency threshold is not satisfied, impeding processing of the API request; accounting for an increase in the API concurrency value if the API request is transmitted to an API processing resource; and accounting for a decrease in the API concurrency value at a time associated with the API processing resource completing processing of the API request. 1. A method comprising:receiving an application programming interface (API) request to a type of API resource;retrieving an API concurrency value for the API request;determining a comparison status associated with a comparison of the API concurrency value to a concurrency limit;in a first condition based at least in part on if the comparison status satisfies the concurrency limit, transmitting the API request to an API processing resource;in a second condition based at least in part on if the comparison status indicates the concurrency threshold is not satisfied, impeding processing of the API request;accounting for an increase in the API concurrency value if the API request is transmitted to an API processing resource; andaccounting for a decrease in the API concurrency value at a time associated with the API processing resource completing processing of the API request.2. The method of claim 1 , wherein the received API request is a request from an account; and wherein the retrieved API concurrency value is a value maintained for API concurrency usage of the account.3. The method of claim 2 , wherein the API request is an API request of a telecommunications platform claim 2 , and wherein the API ...
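
The accounting described above maps onto a counter guarded by a lock: compare the concurrency value against the limit, increment when a request is dispatched, decrement when the processing resource reports completion. The in-process counter below is a sketch under that assumption; a deployment like the one described would keep the value per account in shared storage rather than in one process.

import threading

class ConcurrencyGate:
    def __init__(self, limit):
        self.limit = limit          # concurrency limit for this API resource type
        self.value = 0              # current API concurrency value
        self.lock = threading.Lock()

    def try_admit(self):
        """Compare the concurrency value to the limit; admit or impede."""
        with self.lock:
            if self.value >= self.limit:
                return False        # limit not satisfied -> impede the request
            self.value += 1         # account for the dispatched request
            return True

    def complete(self):
        with self.lock:
            self.value -= 1         # account for the finished request

def handle_request(gate, request_id):
    if not gate.try_admit():
        return f"{request_id}: 429 concurrency limit exceeded"
    try:
        return f"{request_id}: 200 processed"
    finally:
        gate.complete()

if __name__ == "__main__":
    gate = ConcurrencyGate(limit=2)
    gate.value = 2                          # pretend two requests are in flight
    print(handle_request(gate, "req-3"))    # impeded
    gate.complete()                         # one in-flight request finishes
    print(handle_request(gate, "req-4"))    # admitted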

More
22-08-2013 publication date

INFORMATION PROCESSING DEVICE, SYSTEM, CONTROL METHOD, AND PROGRAM

Number: US20130219414A1
Author: Sato Tadashi
Assignee: NEC Corporation

An information processing device reduces a time of processing for adding an attribute name performed in each node. 110-. (canceled)11. An information processing device , comprising:an its own-segment memorizing unit that memorizes its own segment, the own segment being at least one segment among each segment made by dividing a range of a key into a plurality of segments so as to make the segments neighbor with each other, the key being generated about at least two attribute names using an attribute name and an attribute value, based on a predetermined order relation and being of a size-comparable form; anda judgment unit that judges whether a key generated from an attribute name and an attribute value being included in the own-segment or not.12. The information processing device according to claim 11 , further comprising:a reception unit that receives event information, the event information including an attribute name and an attribute value;a segment information transfer unit that, when an attribute name having a frequency included in the received event information being no smaller than a predetermined value is existing among the attribute names, transmits an end segment to another information processing device storing a segment neighboring the end segment, the end segment being a divided segment of the own segment and being a segment including a lower limit or an upper limit;a segment information receiving unit that receives a segment neighboring its own segment from another information processing device; anda processing object information storage unit that updates its own-segment memorizing unit using one of or both of the transmitted end segment and a segment received by the segment information receiving unit.13. The information processing device according to claim 12 , further comprising:a key information generation unit that generates a key of a size-comparable form based on an attribute name and an attribute value included in the event information, upon ...

More
29-08-2013 publication date

Recording Activity of Software Threads in a Concurrent Software Environment

Number: US20130227586A1

The present disclosure provides a method, computer program product, and activity recording system for identifying idleness in a processor via a concurrent software environment. A thread state indicator records an indication of a synchronization state of a software thread that is associated with an identification of the software thread. A time profiler identifies a processor of the computer system being idle and records an indication that the processor is idle. A dispatch monitor identifies a dispatch of the software thread to the processor. In response to the dispatch monitor determining the indication identifies that the processor is idle and the indication of a synchronization state of the software thread indicating the software thread ceases to execute in the processor, the dispatch monitor generates a record attributing the idleness of the processor to the software thread and the indicated synchronization state. 1. An activity recording system for identifying idleness in a processor executing software threads in a concurrent software environment of a computer system, the activity recording system comprising: a thread state indicator that records an indication of a synchronization state of a software thread in which the software thread ceases to execute in the processor of the computer system, wherein the indication of the synchronization state is associated with an identification of the software thread; a time profiler that identifies the processor of the computer system becoming idle and records an indication that the processor is idle; and a dispatch monitor that identifies a dispatch of the software thread to the processor and, in response to the recording of the indication that the processor is idle and the indication of the synchronization state of the software thread, generates a record attributing an idleness of the processor to the software thread and the indicated synchronization state. 2. The activity recording system of claim 1, wherein the ...

More
05-09-2013 publication date

Method for Processing Sensor Data and Computing Node

Number: US20130232402A1
Assignee: Huawei Technologies Co., Ltd.

A method for processing sensor data and a computing node are provided. The method is applied to a computing node, and the computing node includes a hardware layer, an OS running on the hardware layer, and a browser engine running on the OS, where the hardware layer includes a first sensor device. The method includes: sensing, by the first sensor device, a state change, generating sensor data, and transmitting the sensor data to the OS in form of an event; determining, by the OS, an event type of the event according to the sensor data, and transmitting the sensor data and the event type to the browser engine; determining, by the browser engine according to the event type, that the event has been registered, and executing processing logic of the event. Thus the written application is capable of running on different OSs, thereby enabling an application to run across platform. 1. A method for processing sensor data comprising:sensing, by a first sensor device of a computing node, a state change, wherein the computing node comprises a hardware layer, an operating system (OS) running on the hardware layer, and a browser engine running on the OS, wherein the hardware layer comprises the first sensor device;generating sensor data;transmitting the sensor data to the OS in a form of an event;determining, by the OS, an event type of the event according to the sensor data;transmitting the sensor data and the event type to the browser engine, wherein the sensor data is transmitted to the browser engine in the form of the event;determining, by the browser engine, according to the event type that the event has been registered; andexecuting processing logic of the event.2. The method according to claim 1 , wherein the browser engine comprises a browser Shell claim 1 , a web page parsing and rendering engine claim 1 , and a script parsing engine claim 1 , and wherein determining claim 1 , by the browser engine claim 1 , according to the event type that the event has been registered ...

More
05-09-2013 publication date

DYNAMIC USER INTERFACE AGGREGATION THROUGH SMART EVENTING WITH NON-INSTANTIATED CONTENT

Number: US20130232509A1

A published event from a first content element executing within a framework may be detected. In response, a registry may be searched for one or more registered events that match the published event, and if a matching registered event is found, a second content element that registered said matching registered event may be instantiated to start executing within the framework. The second content element is dynamically aggregated into the framework based on the published event without the first content element needing to have previous knowledge of the second content element, and without the second content element needing to have previous knowledge of the first content element. The framework also does not need to be designed initially to deploy the second content element. Which one or more content elements to aggregate into the framework may be determined at run time rather than at design time. 1. A method for dynamically aggregating content through smart eventing with non-instantiated content , comprising:executing on a processor a first content element within a framework; andin response to detecting a published event from the first content element executing within the framework, searching a registry for one or more registered events that match the published event, and if a matching registered event is found, instantiating a second content element that registered said matching registered event to start executing within the framework,wherein the second content element is dynamically aggregated into the framework based on the published event without the first content element needing to have previous knowledge of the second content element, and without the second content element needing to have previous knowledge of the first content element, and wherein the framework also does not need to be designed initially to deploy the second content element.2. The method of claim 1 , further including enabling one or more content elements to register dynamically with the framework ...
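
The publish/registry/instantiate flow can be shown in a few lines of Python: content element classes register the events they care about; when a running element publishes an event, the framework looks the event up and instantiates any matching element that is not yet running. The class names and the registry layout are assumptions made for the sketch.

class Framework:
    def __init__(self):
        self.registry = {}      # event name -> list of content element classes
        self.running = []       # instantiated content elements

    def register(self, event_name, element_cls):
        self.registry.setdefault(event_name, []).append(element_cls)

    def publish(self, event_name, payload):
        # Search the registry for registered events matching the published
        # event and instantiate the content elements that registered them.
        for element_cls in self.registry.get(event_name, []):
            element = element_cls()
            self.running.append(element)
            element.start(self, payload)

class CustomerList:
    # First content element: publishes an event, knows nothing about listeners.
    def start(self, framework, _payload):
        framework.publish("customer.selected", {"id": 42})

class OrderHistory:
    # Second content element: instantiated only when its registered event fires.
    def start(self, framework, payload):
        print(f"showing orders for customer {payload['id']}")

if __name__ == "__main__":
    fw = Framework()
    fw.register("app.start", CustomerList)
    fw.register("customer.selected", OrderHistory)
    fw.publish("app.start", None)     # -> "showing orders for customer 42"

Neither element references the other's class; which elements get aggregated is decided at run time by whatever happens to be in the registry.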

More
12-09-2013 publication date

MILESTONE MANAGER

Number: US20130239123A1
Author: Lowry Andrew
Assignee: MORGAN STANLEY

A milestone manager receives a milestone message from a first application. The milestone message includes information associated with a periodic event. The milestone manager applies a rule based process on the milestone message information and sends a trigger to a second application in response to the milestone. The trigger initiates processing of the second application in response to the milestone. 133-. (canceled)34. A method for managing processing activity of computer applications in a distributed computing environment , the method comprising:receiving, by at least one processor and from a first application, a first milestone message comprising first milestone information indicating a milestone type indicator for the first milestone message and one or more milestone parameter values describing the first milestone, wherein at least one of the one or more milestone parameter values is a first hierarchal attribute value indicating a first element in a hierarchal structure;receiving, by the at least one processor, a second milestone message comprising second milestone information indicating a milestone type indicator for the second milestone message and one or more milestone parameter values describing the second milestone, wherein at least one of the one or more milestone parameter values is a second hierarchal attribute value indicating a second element in the hierarchal structure;bridging up, by the at least one processor, the first and second milestone messages to derive an aggregated milestone message based at least in part on the hierarchal attribute values for each of the first and second milestone messages and the hierarchal structure, wherein the aggregated milestone message comprises an indication of a third element in the hierarchal structure, wherein the third element is a parent of the first element and the second element;applying, by the at least one processor, a rule on the aggregated milestone message; andsending, by the at least one processor, a ...

More
12-09-2013 publication date

Application level speculative processing

Number: US20130239125A1
Author: Francesco Iorio
Assignee: Autodesk Inc

One or more embodiments of the invention is a computer-implemented method for speculatively executing application event responses. The method includes the steps of identifying one or more event responses that could be issued for execution by an application being executed by a master process, for each event response, generating a child process to execute the event response, determining that a first event response included in the one or more event responses has been issued for execution by the application, committing the child process associated with the first event response as a new master process, and aborting the master process and all child processes other than the child process associated with the first event response.
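
The commit/abort idea can be imitated with Python's multiprocessing module: fork one child per candidate event response, let each compute speculatively, then keep only the result of the response that was actually issued and discard the rest. Using pipes for results and terminate() for "abort" are simplifying assumptions; the master/child commit semantics in the description are richer than this sketch.

import multiprocessing as mp
import time

def event_response(name, conn):
    # Speculatively compute the response to one possible user action.
    time.sleep(0.1)                       # stands in for expensive work
    conn.send(f"result of {name}")
    conn.close()

if __name__ == "__main__":
    candidates = ["open_file", "save_file", "render_preview"]
    children = {}
    for name in candidates:
        parent_conn, child_conn = mp.Pipe()
        p = mp.Process(target=event_response, args=(name, child_conn))
        p.start()
        children[name] = (p, parent_conn)

    issued = "save_file"                  # the response the application actually issued
    proc, conn = children[issued]
    print(conn.recv())                    # commit: use the speculative result
    proc.join()

    for name, (p, _) in children.items():
        if name != issued:
            p.terminate()                 # abort the other speculative children
            p.join()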

More
19-09-2013 publication date

VERIFYING SYNCHRONIZATION COVERAGE IN LOGIC CODE

Number: US20130247062A1
Assignee: INTERNATIONAL BUSINESS MACHINES

A computer implemented system and method for measuring synchronization coverage for one or more concurrently executed threads is provided. The method comprises updating an identifier of a first thread to comprise an operation identifier associated with a first operation, in response to determining that the first thread has performed the first operation; associating the identifier of the first thread with one or more resources accessed by the first thread; and generating a synchronization coverage model by generating a relational data structure of said one or more resources, wherein a resource is associated with at least the identifier of the first thread and an identifier of a second thread identifier, such that the second thread waits for the first thread before accessing said resource. 1. A computer implemented method for measuring synchronization coverage for one or more concurrently executed threads , the method comprising:updating an identifier of a first thread to comprise an operation identifier associated with a first operation, in response to determining that the first thread has performed the first operation;associating the identifier of the first thread with one or more resources accessed by the first thread; andgenerating a synchronization coverage model by generating a relational data structure of said one or more resources, wherein a resource is associated with at least the identifier of the first thread and an identifier of a second thread identifier, such that the second thread waits for the first thread before accessing said resource.2. The method of wherein the first operation comprises accessing the first resource.3. The method of wherein the first operation comprises accessing the first resource to read data from the first resource.4. The method of wherein the first operation comprises accessing the first resource to write data to the resource.5. The method of wherein the first operation comprises entry to a function.6. The method of wherein the ...

More
19-09-2013 publication date

Apparatus and method for executing multi-operating systems

Number: US20130247065A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An apparatus and method for executing multi-operating systems (OS) are provided. Resources allocated to the respective multi-OSs are managed by management applications of the multi-OSs. A processor executes a plurality of multi-OSs. Each of the plurality of multi-OSs executes the management application. Each of the plurality of multi-OSs regards a resource held by another multi-OS among the plurality of multi-OSs as used by the corresponding management application, thereby preventing the resource from being allocated to another application included in the multi-OS.

More
19-09-2013 publication date

Creating A Checkpoint Of A Parallel Application Executing In A Parallel Computer That Supports Computer Hardware Accelerated Barrier Operations

Number: US20130247069A1
Assignee: International Business Machines Corp

In a parallel computer executing a parallel application, where the parallel computer includes a number of compute nodes, with each compute node including one or more computer processors, the parallel application including a number of processes, and one or more of the processes executing a barrier operation, creating a checkpoint of a parallel application includes: maintaining, by each computer processor, global barrier operation state information, the global barrier operation state information includes an aggregation of each process's barrier operation state information; invoking, for each process of the parallel application, a checkpoint handler; saving, by each process's checkpoint handler as part of a checkpoint for the parallel application, the process's barrier operation state information; and exiting, by each process, the checkpoint handler.

More
26-09-2013 publication date

TECHNIQUES TO REMOTELY ACCESS OBJECT EVENTS

Number: US20130254780A1
Author: III Cummins Aiken, Mebane
Assignee: SAS INSTITUTE INC.

Techniques to remotely access object events are described. An apparatus may comprise a processor and a memory communicatively coupled to the processor. The memory may be operative to store a remote event bridge having a surrogate object that when executed by the processor is operative to allow an observer object for a first process to subscribe to an event of a subject object for a second process using the surrogate object. In this manner, the remote event bridge and the surrogate object operates as an interface between subject objects and observer objects without any modifications to either class of objects. Other embodiments are described and claimed. 1. A computer-implemented method , comprising:sending a subscription request from an observer object having an observer event handler to a subject object, the subscription request to subscribe to an object event of the subject object;receiving an event notification from the subject object via a remote event bridge, the event notification to indicate the object event has occurred; andresponsive to receiving the event notification, sending a reply to the subject object having one or more observer event arguments.2. The computer-implemented method of claim 1 , comprising receiving the event notification via a call to the observer event handler from a surrogate object of the remote event bridge.3. The computer-implemented method of claim 1 , comprising executing a remote object proxy to facilitate remoting operations by operating as a bridge and integrating various message protocols.4. The computer-implemented method of claim 5 , comprising sending the subscription request to subscribe to the object event of the subject object via the remote object proxy.5. The computer-implemented method of claim 1 , comprising sending the subscription request to subscribe to the object event of the subject object anonymously during runtime using the remote event bridge.6. The computer-implemented method of claim 1 , comprising ...
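
Within a single process the surrogate/bridge pattern looks like the sketch below: the observer subscribes through a bridge, the bridge installs a surrogate handler on the subject, and the surrogate forwards event notifications and returns the observer's event arguments, without either object knowing the other's class. The in-process "remote" bridge and all class names are illustrative assumptions; the described bridge spans two processes.

class Subject:
    """Subject object for the second process: fires object events."""
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def fire(self, event_name):
        # Each reply carries the observer's event arguments back to the subject.
        return [handler(event_name) for handler in self.handlers]

class RemoteEventBridge:
    """Bridges observers to subjects via surrogate handlers."""
    def subscribe(self, subject, observer_handler):
        def surrogate(event_name):
            # Forward the event notification across the bridge and hand the
            # observer's reply (its event arguments) back to the subject.
            return observer_handler(event_name)
        subject.subscribe(surrogate)

class Observer:
    """Observer object for the first process."""
    def on_event(self, event_name):
        print(f"observer saw {event_name}")
        return {"handled": True, "event": event_name}

if __name__ == "__main__":
    subject, observer, bridge = Subject(), Observer(), RemoteEventBridge()
    bridge.subscribe(subject, observer.on_event)
    print(subject.fire("data.ready"))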

Подробнее
26-09-2013 дата публикации

Deterministic Serialization of Access to Shared Resources In A Multi-processor System For Code Instructions Accessing Resources In a Non-Deterministic Order

Number: US20130254877A1
Assignee: International Business Machines Corp

Access to resources shared among multiple processes within a computer system is managed. Multiple program instances of an application are executed almost simultaneously on multiple processors for fault tolerance. The replication solution supports the recording and subsequent replay of the reservation events that grant the processes exclusive access rights to the shared resources, even when a single program code instruction may request access to a set of shared resources in a non-deterministic order.
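
A minimal record-and-replay sketch of the reservation-event idea, under the assumption that a lock wrapper on the primary logs who won each reservation and a replica consumes that log to force the same order; the class names and the list-based log are illustrative, not the patented mechanism.

```python
# Sketch only: the primary records the order in which threads win reservations
# on a shared resource; a replica replays that log so its threads acquire the
# resource in exactly the same order. Invented classes, not the IBM mechanism.
import threading

class RecordingLock:
    """Used on the primary instance: a normal lock that logs each grant."""
    def __init__(self, name, log, log_lock):
        self._name, self._lock = name, threading.Lock()
        self._log, self._log_lock = log, log_lock

    def acquire(self, thread_id):
        self._lock.acquire()
        with self._log_lock:
            self._log.append((self._name, thread_id))   # one reservation event

    def release(self):
        self._lock.release()

class ReplayingLock:
    """Used on a replica: grants the resource only in the recorded order."""
    def __init__(self, name, log, cursor, cond):
        self._name, self._log = name, log
        self._cursor, self._cond = cursor, cond         # shared replay position

    def acquire(self, thread_id):
        with self._cond:
            while self._log[self._cursor[0]] != (self._name, thread_id):
                self._cond.wait()                       # not this thread's turn yet

    def release(self):
        with self._cond:
            self._cursor[0] += 1
            self._cond.notify_all()

# record phase on the primary (shown sequentially for brevity)
log, log_lock = [], threading.Lock()
primary = RecordingLock("res_A", log, log_lock)
primary.acquire(thread_id=1); primary.release()
primary.acquire(thread_id=0); primary.release()
print(log)    # [('res_A', 1), ('res_A', 0)]: the order a replica must replay
```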

Publication date: 03-10-2013

METHOD AND APPARATUS FOR EFFICIENT INTER-THREAD SYNCHRONIZATION FOR HELPER THREADS

Number: US20130263145A1

In a multiprocessing computer system having a plurality of hardware threads that share a memory location, a monitor bit per hardware thread may be allocated in the memory location, with each allocated monitor bit corresponding to one of the plurality of hardware threads. A condition bit may be allocated for each of the plurality of hardware threads, the condition bit being allocated in each hardware thread's context. In response to detecting that the memory location is being accessed, it is determined whether a monitor bit corresponding to a hardware thread is set in the memory location. In response to determining that the monitor bit corresponding to a hardware thread is set in the memory location, a condition bit corresponding to the thread accessing the memory location is set in that hardware thread's context.

1. A method of synchronizing threads, comprising: allocating a bit per hardware thread in a memory location, in a multiprocessing computer system having a plurality of hardware threads, the plurality of hardware threads sharing the memory location, and each allocated bit corresponding to one of the plurality of hardware threads; allocating a condition bit for each of the plurality of hardware threads, the condition bit being allocated in each context of the plurality of hardware threads; in response to detecting the memory location being accessed, determining whether a bit corresponding to a hardware thread in the memory location is set; and in response to determining that the bit corresponding to a hardware thread is set in the memory location, setting a condition bit corresponding to a thread accessing the memory location, in the hardware thread's context.
2. The method of claim 1, wherein the memory location is a cache line in cache memory.
3. The method of claim 2, wherein a helper hardware thread performing data prefetching for an application hardware thread sets the bit in the memory location to monitor ...
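
A purely software model of the hardware mechanism may help: one monitor bit per hardware thread is attached to a shared word, and touching the word raises the condition bit in the context of every thread whose monitor bit is armed. All classes and field names below are invented for illustration.

```python
# Sketch only, in software: one monitor bit per hardware thread is attached to
# a shared word, and touching the word raises the condition bit in the context
# of every thread whose monitor bit is armed. All names are invented.

class ThreadContext:
    def __init__(self, tid):
        self.tid = tid
        self.condition_bit = False          # cheap for the owning thread to poll

class MonitoredWord:
    def __init__(self, contexts):
        self.value = 0
        self.contexts = {c.tid: c for c in contexts}
        self.monitor_bits = {c.tid: False for c in contexts}

    def set_monitor(self, tid):
        self.monitor_bits[tid] = True       # "tell me when someone touches this"

    def store(self, value, accessor_tid):
        self.value = value
        for tid, armed in self.monitor_bits.items():
            if armed and tid != accessor_tid:
                self.contexts[tid].condition_bit = True

# helper thread 1 prefetches for application thread 0 and wants to know when
# thread 0 actually writes the shared word
contexts = [ThreadContext(0), ThreadContext(1)]
word = MonitoredWord(contexts)
word.set_monitor(tid=1)
word.store(0xCAFE, accessor_tid=0)
print(contexts[1].condition_bit)            # True: no spinning on the word itself
```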

Publication date: 03-10-2013

COMPUTER-READABLE STORAGE MEDIUM HAVING INFORMATION PROCESSING PROGRAM STORED THEREIN, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM

Number: US20130263154A1
Author: ISHIHARA Susumu
Assignee: NINTENDO CO., LTD.

Object data that can be used in a predetermined application is stored in advance in the information processing apparatus. The information processing apparatus communicates with another, unspecified information processing apparatus that is within a predetermined range. When object data that can be used in the predetermined application is received through this communication, an object based on the received object data is caused to appear in a virtual space; when no such object data is received, an object based on the previously stored object data is caused to appear in the virtual space instead.

1. A computer-readable storage medium having stored therein an information processing program that causes a computer of an information processing apparatus having a communication function to function as: an object data storing section configured to store object data that can be used in a predetermined application; a communication section configured to communicate, by using the communication function, with another unspecified information processing apparatus that is within a predetermined range; a communication execution determination section configured to determine whether or not the communication with the other unspecified information processing apparatus that is within the predetermined range has been made by the communication section; and a controller configured to perform control, in a case where the communication execution determination section determines that the communication with the other information processing apparatus has been made, such that, when object data that can be used in the predetermined application is received by the communication, an object based on the received object data is caused to appear in a virtual space, and when object data that can be used in the predetermined application is not received by the communication, an object based on the object data previously ...
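
The fallback behaviour described here reduces to a small decision: spawn an object from data received over the local exchange if any arrived, otherwise from the data shipped with the application. The sketch below uses invented names (try_receive_nearby, spawn_object) and replaces the real wireless exchange with a stub.

```python
# Sketch only: spawn an object from data received from a nearby device if any
# arrived, otherwise from the object data stored with the application. The
# wireless exchange is replaced by a stub; all names are invented.

DEFAULT_OBJECT = {"kind": "npc", "name": "default_visitor"}

def try_receive_nearby():
    """Stand-in for the local exchange; returns object data or None."""
    return None                     # pretend no other device was in range

def spawn_object(virtual_space, object_data):
    virtual_space.append(object_data)

virtual_space = []
received = try_receive_nearby()
spawn_object(virtual_space, received if received is not None else DEFAULT_OBJECT)
print(virtual_space)                # [{'kind': 'npc', 'name': 'default_visitor'}]
```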

Publication date: 03-10-2013

OPERATION LOG COLLECTION METHOD AND DEVICE

Number: US20130263156A1
Assignee: Hitachi, Ltd.

To evaluate the service quality of an application, existing methods acquire and analyze the various events that occur in the Web browser on which the application is running. If all events are acquired and collected, however, this places a load on the Web browser or on the server that records the events. In the present invention, when the Web browser starts the application, it is connected to an event handler that acquires events related to user operations or application responses. When the event handler detects the occurrence of an event, the event is recorded as a log if it has not been recorded in the past; otherwise, the event is recorded as a log only if a script has been executed or data has been modified.

1. An operation log collection method for recording operation information on Web applications, the operation log collection method comprising: a program initialization step of building into a browser an event acquisition means for detecting an event upon reading a Web application and for acquiring the detected event; an operation log recording step which, upon acquisition of the event by the event acquisition means, determines whether it is necessary to record the acquired event and which records a log of the event whose recording is determined to be necessary; and an operation log output step of outputting the log of the events recorded in the operation log recording step.
2. An operation log collection method according to claim 1, wherein, upon acquisition of the event, the operation log recording step determines whether there is a data update in the acquired event and records the log of the event determined to have data updated therein.
3. An operation log collection method according to claim 1, wherein, upon acquisition of the event, the operation log recording step determines whether the acquired event has triggered event- ...
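
The filtering rule (record an event the first time it is seen, and afterwards only when it updates data or runs a script) can be sketched independently of any real browser API; the collector class and event dictionary fields below are assumptions.

```python
# Sketch only: record an event the first time its type is seen, and afterwards
# only when it updated data or executed a script. Independent of any real
# browser API; the collector class and event fields are assumptions.

class OperationLogCollector:
    def __init__(self):
        self._seen_types = set()
        self._log = []

    def on_event(self, event):
        first_time = event["type"] not in self._seen_types
        interesting = event.get("data_updated") or event.get("script_executed")
        if first_time or interesting:
            self._log.append(event)         # keep only events worth analysing
        self._seen_types.add(event["type"])

    def output(self):
        return list(self._log)

collector = OperationLogCollector()
collector.on_event({"type": "click", "target": "#save"})                        # new -> logged
collector.on_event({"type": "click", "target": "#save"})                        # repeat -> dropped
collector.on_event({"type": "click", "target": "#save", "data_updated": True})  # updates data -> logged
print(len(collector.output()))              # 2
```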
