Total found: 2306. Displayed: 197.
27-04-2009 дата публикации

SYSTEM ARCHITECTURE AND RELATED METHODS FOR DYNAMICALLY ADDING SOFTWARE COMPONENTS TO EXTEND THE FUNCTIONALITY OF SYSTEM PROCESSES

Номер: RU2353968C2

The invention relates to computer engineering and can be used in operating systems for automatically adding software components to system processes. The technical result of the invention is an extension of the multimedia functionality of system processes. The invention describes systems and methods for automatically installing and using filters for processing multimedia data. The installed filters are subsequently invoked by other system processes, such as device drivers, applications, and data acquisition and processing software. The filters are objects that can be used by multiple processes at any given time. Installed filters can be looked up and enumerated by a filter management service according to their categories. 8 independent and 15 dependent claims, 16 figures.
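
As an illustration of the kind of filter-management service the RU2353968C2 abstract describes, here is a minimal Python sketch of a registry that installs multimedia filters and enumerates them by category. The names (FilterRegistry, install, find_by_category) are hypothetical and are not taken from the patent; this is a sketch under those assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Filter:
    """A filter object that any number of client processes may invoke."""
    name: str
    category: str
    transform: Callable[[bytes], bytes]

@dataclass
class FilterRegistry:
    """Hypothetical system-level service that installs filters and
    enumerates them by category, as the abstract describes."""
    _by_category: Dict[str, List[Filter]] = field(default_factory=dict)

    def install(self, f: Filter) -> None:
        self._by_category.setdefault(f.category, []).append(f)

    def find_by_category(self, category: str) -> List[Filter]:
        # Enumeration order here is simply installation order (an assumption).
        return list(self._by_category.get(category, []))

if __name__ == "__main__":
    registry = FilterRegistry()
    registry.install(Filter("invert", "video", lambda data: bytes(255 - b for b in data)))
    for f in registry.find_by_category("video"):
        print(f.name, f.transform(b"\x00\x10\xff"))
```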

Подробнее
24-10-2017 дата публикации

MULTIFUNCTIONAL DEBUGGING DEVICE FOR MICROPROCESSOR SYSTEMS

Номер: RU2634197C1

The invention relates to the field of electronics and microprocessor engineering and can find wide application in the debugging, repair and operation of a broad range of microprocessor systems and devices, both existing and newly developed, as well as in studying and investigating the operating principles of such systems and devices in the practical parts of courses at educational institutions with the corresponding specialist training profile. The technical result is increased productivity and reduced labor intensity of the process of debugging digital microprocessor systems and devices. The design of the debugging device, which uses part of the memory of an external instrumentation computer to store the program of the target microprocessor system under debug and which includes an LPT printer-port interface for transferring programs and data to the microprocessor system under debug through a tri-state eight-bit buffer bus transceiver, as well as a synchronization device representing ...

Подробнее
24-01-2023 дата публикации

Computing panel 4Э8СВ-MSWTX

Номер: RU216236U1

The utility model relates to the field of computer engineering and can be used in data storage systems and in server solutions. A computing panel based on four eight-core microprocessors, comprising a peripheral interface controller, controllers, flash memory, a manager, input/output ports, and RAM connectors; the microprocessors used are fifth-generation Elbrus-8SV microprocessors with the Elbrus microarchitecture, with sixteen DDR4 RAM connectors with an effective frequency of up to 2400 MHz and a maximum capacity of up to 1024 GB; two KPI-2 units serve as the peripheral interface controller, one of the KPI-2 units being connected to the Elbrus-8SV microprocessor that acts as the master and makes it possible to use the input/output ports for exchanging data with the slave Elbrus-8SV microprocessors, while the second KPI-2 is connected to one of the slave Elbrus-8SV microprocessors; the input/output ports are connected to the two KPI-2 units through controllers. The aim of the utility model ...

Подробнее
08-04-2024 дата публикации

IMAGE FORMING METHOD, DEVICE, AND COMPUTER-READABLE INFORMATION STORAGE MEDIUM

Номер: RU2816914C1

The invention relates to image forming means. The technical result is an expanded range of means for controlling paper transport. An image forming method comprising the steps of: receiving a first image forming job and performing a first image forming operation according to a first recording-medium transport rule, and receiving a second image forming job that is performed according to a second recording-medium transport rule, wherein the recording-medium transport directions for at least one page of the first job and for at least one page of the second image forming job differ, wherein the image forming apparatus is provided with at least two paper sheet cassettes and the paper sheets loaded in the cassettes are placed in different orientations, wherein the first image forming job and the second image forming ...

Подробнее
27-03-2002 дата публикации

PROCESSOR FOR A HOMOGENEOUS COMPUTING ENVIRONMENT

Номер: RU2180969C1

The invention relates to computer engineering and can be used in high-performance systems for processing large volumes of data, including in real time. The technical result is extended functionality. The device comprises an input switch, an arithmetic logic unit, a shift register, a delay element, first and second selectors, an output switch, first and second delay blocks, a controller, a configuration block, and control buses. 1 dependent claim, 5 figures.

Подробнее
01-03-2021 дата публикации

Local perimeter section server for an integrated security complex

Номер: RU2743908C1

The invention relates to the field of technical security equipment for facilities. The technical result is an increase in the effectiveness of facility protection. The technical result of the claimed technical solution is achieved in that the local perimeter section server of the integrated security complex comprises a housing in which are installed power supplies, a backbone switch, a signal processing unit with an installed communication module and power module, and an input voltage protection module connected to the mains, wherein the server housing is made in the form of two cabinets nested one inside the other, the inner one of which is thermostatically controlled.

Подробнее
20-12-2008 дата публикации

ISOLATED COMPUTING ENVIRONMENT TIED TO THE CENTRAL PROCESSOR AND MOTHERBOARD

Номер: RU2007122339A
Принадлежит:

... 1. A computer configured to execute program code in an isolated computing environment, comprising: an isolated computing environment for executing program code; protected memory accessible only to said program code and inaccessible to second program code executed by another execution environment; logic for directing a processor to execute from the protected memory; and a timer for timing events, coupled to the logic, wherein said program code is activated in response to a signal from the timer. 2. The computer of claim 1, wherein said other execution environment comprises one of an operating system, a basic input/output system (BIOS), and a kernel. 3. The computer of claim 1, wherein said program code monitors the state of the computer. 4. The computer of claim 3, further comprising a processor, and the state of the computer is one of the state of a resource used by the operating system ...

Подробнее
20-03-2005 дата публикации

SYSTEM ARCHITECTURE AND RELATED METHODS FOR DYNAMICALLY ADDING SOFTWARE COMPONENTS TO EXTEND THE FUNCTIONALITY OF SYSTEM PROCESSES

Номер: RU2003129200A
Принадлежит:

... 1. A method of managing multimedia filters, the method being implemented on a computer, the computer having an operating system with a system-level service for installing multimedia filters, the method comprising the steps of: receiving, by the system service, a request to install one or more multimedia filters, and installing, by the system service, the one or more filters. 2. The method of claim 1, wherein the request to install one or more filters is received from one or more filters. 3. The method of claim 1, wherein the request to install one or more multimedia filters contains one or more of the following objects: one or more unique identifiers corresponding to categories associated with the multimedia filters, one or more unique identifiers corresponding to the multimedia filters, one or more names corresponding to the multimedia filters, one or more descriptions of the multimedia filters, and one or more digital ...

Подробнее
10-01-2011 дата публикации

ANIMATED DESKTOP

Номер: RU2009125411A
Принадлежит:

... 1. A method of forming an animated desktop for a processing device, the animated desktop having a background and a foreground, the method comprising: forming content based on a moving image in shared memory (206, 302, 208, 304, 204, 306); forming foreground content from foreground information (404); composing a scene that includes the formed content from the shared memory as at least part of the background and the formed foreground content, the foreground being placed over the background (408, 512, 610, 718); and presenting the scene as the animated desktop (410, 514, 610, 720). 2. The method of claim 1, wherein the step of forming the content based on the moving image in the shared memory and the step of composing the scene are performed in separate processes. 3. The method of claim 1, further comprising: forming content based on a second moving image ...

Подробнее
15-11-2018 дата публикации

Automatic alignment with a fuel pump

Номер: DE102017109944A1
Принадлежит:

The present invention relates to a method for automatic alignment with a fuel pump (14) for a vehicle (10), comprising the steps of: activating an alignment with a fuel pump (14) at a filling station (12), detecting at least one fuel pump (14) in the surroundings of the vehicle (10), calculating a trajectory (16) for moving the vehicle (10) from a current position to a refueling position (19) aligned with the fuel pump (14), and moving the vehicle (10) along the trajectory (16) into the refueling position (19). The present invention also relates to an assistance system (20) for automatic alignment with a fuel pump for a vehicle (10), the assistance system (20) being suitable for carrying out the above method. The present invention further relates to a vehicle (10) for automatic alignment of the vehicle (10) with a fuel pump (14), which ...

Подробнее
20-02-2008 дата публикации

Synchronising a Translation Lookaside Buffer to an Extended Paging Table

Номер: GB0002441039A
Принадлежит:

In a virtualisation based system, a Translation Lookaside Buffer (TLB) stores a mapping from a guest address to a host physical address. In response to an instruction and an operand, a logic circuit performs a synchronisation of a mapping from a guest address to a physical address of the host (host physical address) stored in the buffer with a corresponding mapping stored at least in part in an extended paging table (EPT). The synchronisation is based at least in part on the operand of the instruction which comprises at least one of a context descriptor and an EPT pointer. Preferably, the synchronisation comprises updating the mapping stored in the TLB based at least in part on the mapping stored in the EPT, where the mapping in the EPT is stored with the same guest address as the mapping stored in the TLB. The virtualisation based system may be a Virtual Machine Monitor.
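
A minimal Python model of the synchronisation the GB0002441039A abstract describes: a TLB caching guest-to-host-physical mappings is refreshed from an extended paging table for a given context. All names here (Tlb, ExtendedPagingTable, sync) are illustrative assumptions, not the patent's interfaces or the actual hardware mechanism.

```python
from typing import Dict, Tuple

class ExtendedPagingTable:
    """Authoritative guest-address -> host-physical-address mappings, per context."""
    def __init__(self):
        self._map: Dict[Tuple[int, int], int] = {}  # (context, guest_addr) -> host_phys

    def set(self, context: int, guest_addr: int, host_phys: int) -> None:
        self._map[(context, guest_addr)] = host_phys

    def lookup(self, context: int, guest_addr: int) -> int:
        return self._map[(context, guest_addr)]

class Tlb:
    """Cache of guest -> host-physical translations."""
    def __init__(self):
        self._entries: Dict[Tuple[int, int], int] = {}

    def fill(self, context: int, guest_addr: int, host_phys: int) -> None:
        self._entries[(context, guest_addr)] = host_phys

    def sync(self, ept: ExtendedPagingTable, context: int) -> None:
        # Re-derive every cached translation for this context from the EPT,
        # loosely mirroring the operand-driven synchronisation in the abstract.
        for (ctx, guest_addr) in list(self._entries):
            if ctx == context:
                self._entries[(ctx, guest_addr)] = ept.lookup(ctx, guest_addr)

if __name__ == "__main__":
    ept, tlb = ExtendedPagingTable(), Tlb()
    ept.set(context=1, guest_addr=0x1000, host_phys=0xA000)
    tlb.fill(1, 0x1000, 0x9000)          # stale translation in the TLB
    ept.set(1, 0x1000, 0xB000)           # EPT updated by the monitor
    tlb.sync(ept, context=1)
    print(hex(tlb._entries[(1, 0x1000)]))  # 0xb000
```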

Подробнее
02-10-2019 дата публикации

Performing a multiply-multiply-accumulate instruction

Номер: GB0002497698B
Принадлежит: INTEL CORP, Intel Corporation

Подробнее
07-03-2018 дата публикации

Hardware instruction generation unit for specialized processors

Номер: GB0002553442A
Принадлежит:

Methods, devices and systems are disclosed that interface a host computer to a specialized processor. In an embodiment, an instruction generation unit comprises attribute, decode, and instruction buffer stages. The attribute stage is configured to receive a host- program operation code and a virtual host-program operand from the host computer and to expand the virtual host-program operand into an operand descriptor. The decode stage is configured to receive the first operand descriptor and the host-program operation code, convert the host-program operation code to one or more decoded instructions for execution by the specialized processor, and allocate storage locations for use by the specialized processor. The instruction buffer stage is configured to receive the decoded instruction, place the one or more decoded instructions into one or more instruction queues, and issue decoded instructions from at least one of the one or more instruction queues for execution by the specialized processor ...

Подробнее
30-09-1998 дата публикации

Data processing

Номер: GB0009816776D0
Автор:
Принадлежит:

Подробнее
14-05-2008 дата публикации

Apparatus and method for an interface architecture for flexible and extensible media processing

Номер: GB0000806848D0
Автор:
Принадлежит:

Подробнее
17-05-2017 дата публикации

Cloud computing server interface

Номер: GB0201705079D0
Автор:
Принадлежит:

Подробнее
15-10-2007 дата публикации

INTEGRATED DATA PROCESSING CIRCUIT WITH SEVERAL PROGRAMMABLE PROCESSORS

Номер: AT0000374973T
Принадлежит:

Подробнее
17-10-2019 дата публикации

Method of board lumber grading using deep learning techniques

Номер: AU2018234390A1
Принадлежит: Spruson & Ferguson

A method of board lumber (Table 2) grading is performed in an industrial environment on a machine learning framework (12) configured as an interface to a machine learning-based deep convolutional network (20) that is trained end-to-end, pixels-to-pixels on semantic segmentation. The method uses deep learning techniques that are applied to semantic segmentation to delineate board lumber characteristics (Table 1), including their sizes and boundaries.

Подробнее
04-09-2003 дата публикации

PROFILE REFINEMENT FOR INTEGRATED CIRCUIT METROLOGY

Номер: AU2003215141A1
Принадлежит:

Подробнее
24-09-2015 дата публикации

ARCHITECTURAL MODE CONFIGURATION IN A COMPUTING SYSTEM

Номер: CA0002940911A1
Принадлежит:

A determination is made that a configuration architectural mode facility is installed in a computing environment that is configured for a plurality of architectural modes and has a defined power-on sequence that is to power-on the computing environment in one architectural mode of the plurality of architectural modes. Based on determining that the configuration architectural mode facility is installed, the computing environment is reconfigured to restrict use of the one architectural mode. The reconfiguring includes selecting a different power-on sequence to power-on the computing environment in another architectural mode of the plurality of architectural modes, wherein the another architectural mode is different from the one architectural mode, and executing the different power-on sequence to power-on the computing environment in the another architectural mode in place of the one architectural mode restricting use of the one architectural mode.

Подробнее
14-04-2020 дата публикации

VECTOR FIND ELEMENT NOT EQUAL INSTRUCTION

Номер: CA0002866878C

Processing of character data is facilitated. A Find Element Not Equal instruction is provided that compares data of multiple vectors for inequality and provides an indication of inequality, if inequality exists. An index associated with the unequal element is stored in a target vector register. Further, the same instruction, the Find Element Not Equal instruction, also searches a selected vector for null elements, also referred to as zero elements. A result of the instruction is dependent on whether the null search is provided, or just the compare.
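
A rough Python model of the comparison the CA0002866878C abstract describes: return the index of the first unequal element of two vectors, optionally stopping early at a zero (null) element. The function name and the exact result convention are assumptions made for illustration, not the instruction's architected behaviour.

```python
from typing import Sequence

def find_element_not_equal(a: Sequence[int], b: Sequence[int],
                           null_search: bool = False) -> int:
    """Return the index of the first position where a and b differ.

    If null_search is True, a zero element in `a` also terminates the scan,
    loosely mirroring the optional null search in the abstract. If neither
    condition occurs, return len(a) (no unequal element found).
    """
    for i, (x, y) in enumerate(zip(a, b)):
        if null_search and x == 0:
            return i
        if x != y:
            return i
    return len(a)

if __name__ == "__main__":
    print(find_element_not_equal([7, 7, 7, 7], [7, 7, 9, 7]))          # 2
    print(find_element_not_equal([7, 0, 7, 7], [7, 0, 7, 7], True))    # 1
```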

Подробнее
12-06-2013 дата публикации

Method, system and apparatus for multi-level processing

Номер: CN103154892A
Автор: Mekhiel Nagi
Принадлежит:

A Multi-Level Processor (200) for reducing the cost of synchronization overhead including an upper level processor (201) for taking control and issuing the right to use shared data and to enter critical sections directly to each of a plurality of lower level processors (202, 203...20n) at processor speed. In one embodiment the instruction registers of lower level parallel processors are mapped to the data memory of upper level processor (201). Another embodiment (1300) incorporates three levels of processors. The method includes mapping the instructions of lower level processors into the memory of an upper level processor and controlling the operation of lower level processors. A variant of the method and apparatus facilitates the execution of Single Instruction Multiple Data (SIMD) and single to multiple instruction and multiple data (SI>MIMD). The processor includes the ability to stretch the clock frequency to reduce power consumption.

Подробнее
24-08-2016 дата публикации

Disabling communication in a multi-processor system

Номер: CN0103154925B
Автор:
Принадлежит:

Подробнее
20-06-2003 дата публикации

Universal computer simulating other computers uses hierarchy of complexity levels in which low level machine emulates higher level machine

Номер: FR0002833729A1
Принадлежит:

The information processing device implements nested virtual machines, that is, one machine is emulated by another, hierarchically from the simplest to the most complex, meaning that a given machine fully emulates another, more complicated and elaborate machine, each virtual machine having an original architecture, i.e. one that differs from the architectures of the other machines in the hierarchy of virtual machines, each virtual machine being adapted to execute an emulation or dynamic compilation program simulating the more complex virtual architecture of the level immediately above, so that a virtual machine at a given level in the hierarchy has a simpler architecture than those of all higher-level virtual machines. The machine languages of the two lowest levels of virtual machines in the hierarchy are specifically designed to allow execution of all the operations of the level ...

Подробнее
12-06-2014 дата публикации

PROCESSING SYSTEM WITH SYNCHRONIZATION INSTRUCTION

Номер: WO2014088698A2
Принадлежит:

Подробнее
11-03-2021 дата публикации

SYNCHRONIZING SCHEDULING TASKS WITH ATOMIC ALU

Номер: US20210073029A1
Принадлежит: Imagination Technologies Ltd

A method of synchronizing a group of scheduled tasks within a parallel processing unit into a known state is described. The method uses a synchronization instruction in a scheduled task which triggers, in response to decoding of the instruction, an instruction decoder to place the scheduled task into a non-active state and forward the decoded synchronization instruction to an atomic ALU for execution. When the atomic ALU executes the decoded synchronization instruction, the atomic ALU performs an operation and check on data assigned to the group ID of the scheduled task and if the check is passed, all scheduled tasks having the particular group ID are removed from the non-active state.
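
A simplified Python sketch of the group synchronisation scheme in US20210073029A1: each task that reaches a sync point is parked, a per-group counter is updated atomically, and when the check passes the whole group is released. Threads and a condition variable stand in for the scheduler and the atomic ALU; the class and method names are illustrative assumptions, not the patent's design.

```python
import threading

class GroupSync:
    """Parks each task in a group until every member has hit the sync point."""
    def __init__(self, group_size: int):
        self.group_size = group_size
        self.count = 0
        self.generation = 0
        self.cond = threading.Condition()

    def sync(self) -> None:
        # The lock stands in for the atomic ALU: the count update and the
        # check happen as one indivisible step.
        with self.cond:
            gen = self.generation
            self.count += 1
            if self.count == self.group_size:
                self.count = 0
                self.generation += 1        # release this generation of tasks
                self.cond.notify_all()
            else:
                self.cond.wait_for(lambda: self.generation != gen)

def task(tid: int, barrier: GroupSync) -> None:
    print(f"task {tid} reached the sync point")   # the task would go non-active here
    barrier.sync()
    print(f"task {tid} resumed")

if __name__ == "__main__":
    barrier = GroupSync(group_size=3)
    threads = [threading.Thread(target=task, args=(i, barrier)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```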

Подробнее
01-09-2015 дата публикации

Graphics processing unit employing a standard processing unit and a method of constructing a graphics processing unit

Номер: US0009123128B2

Employing a general processing unit as a programmable function unit of a graphics pipeline and a method of manufacturing a graphics processing unit are disclosed. In one embodiment, the graphics pipeline includes: (1) accelerators, (2) an input output interface coupled to each of the accelerators and (3) a general processing unit coupled to the input output interface and configured as a programmable function unit of the graphics pipeline, the general processing unit configured to issue vector instructions via the input output interface to vector data paths for the programmable function unit.

Подробнее
22-07-2010 дата публикации

PROGRAMMABLE DEVICE FOR SOFTWARE DEFINED RADIO TERMINAL

Номер: US20100186006A1
Принадлежит: IMEC, Samsung Electronics

A programmable device suitable for software defined radio terminal is disclosed. In one aspect, the device includes a scalar cluster providing a scalar data path and a scalar register file and arranged for executing scalar instructions. The device may further include at least two interconnected vector clusters connected with the scalar cluster. Each of the at least two vector clusters provides a vector data path and a vector register file and is arranged for executing at least one vector instruction different from vector instructions performed by any other vector cluster of the at least two vector clusters.

Подробнее
29-04-2004 дата публикации

Parallel computer architecture of a cellular type, modifiable and expandable

Номер: US20040080628A1
Автор: Neven Dragojlovic
Принадлежит:

In this parallel computer, processing is distributed among a number of simple units with simple programs that consider only their immediate environment. Integration is achieved through a multilayer architecture and special three-dimensional Memory Units. The whole system is expandable for more complex processing.

Подробнее
27-01-2022 дата публикации

ONLINE HYPERPARAMETER TUNING IN DISTRIBUTED MACHINE LEARNING

Номер: US20220027359A1
Принадлежит:

The disclosed embodiments provide a system for performing online hyperparameter tuning in distributed machine learning. During operation, the system uses input data for a first set of versions of a statistical model for a set of entities to calculate a batch of performance metrics for the first set of versions. Next, the system applies an optimization technique to the batch to produce updates to a set of hyperparameters for the statistical model. The system then uses the updates to modulate the execution of a second set of versions of the statistical model for the set of entities. When a new entity is added to the set of entities, the system updates the set of hyperparameters with a new dimension for the new entity. 120-. (canceled)21. A method , comprising:sending a first version of a global statistical model, including hyperparameter values and values of other parameters that are not hyperparameters, to a plurality of local model trainers that are each configured to train a local version of the global statistical model;wherein the hyperparameter values include regularization values that are each associated with one of the local versions of the global statistical model and correspond to a level of tuning for a respective entity of a plurality of different entities;receiving, from the plurality of local model trainers, a plurality of local model performance metrics that correspond to user feedback on outputs produced by the local versions of the global statistical model;updating the hyperparameter values based on the received plurality of local model performance metrics;including the updated hyperparameter values in a second version of the global statistical model; andsending the second version of the global statistical model, including the updated hyperparameter values, to the plurality of local model trainers;wherein the updated hyperparameter values include updated regularization values that each correspond to a change in the level of tuning for the respective ...
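
A toy Python sketch of the feedback loop in US20220027359A1: performance metrics reported for the current model versions drive an update of per-entity regularization hyperparameters, which are then used for the next round, and a new entity simply gets its own hyperparameter dimension. The update rule (a crude multiplicative adjustment) and all names are assumptions; the abstract does not specify this particular optimizer.

```python
from typing import Dict

def tune_regularization(reg: Dict[str, float],
                        metrics: Dict[str, float],
                        target: float = 0.75,
                        step: float = 1.2) -> Dict[str, float]:
    """Nudge each entity's regularization strength based on its latest metric.

    Entities scoring below `target` get weaker regularization (more capacity to
    fit), entities above it get stronger regularization -- a stand-in for the
    optimization technique mentioned in the abstract.
    """
    updated = {}
    for entity, value in reg.items():
        score = metrics.get(entity, target)
        updated[entity] = value / step if score < target else value * step
    return updated

if __name__ == "__main__":
    hyperparams = {"entity_a": 0.10, "entity_b": 0.10}
    batch_metrics = {"entity_a": 0.62, "entity_b": 0.81}   # e.g. per-entity AUC
    hyperparams = tune_regularization(hyperparams, batch_metrics)
    # A new entity joining the set gets its own hyperparameter dimension.
    hyperparams["entity_c"] = 0.10
    print(hyperparams)
```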

Подробнее
15-11-2018 дата публикации

AUTOMATIC LABELING OF PRODUCTS VIA EXPEDITED CHECKOUT SYSTEM

Номер: US20180330196A1
Принадлежит:

A portable checkout unit automatically generates training data for an automatic checkout system as a customer collects items in a store. A customer uses an item scanner of portable checkout unit to generate a virtual shopping list of items collected in the shopping cart. When the customer adds a new item to the shopping cart or on some regular interval, the portable checkout unit captures images of the items contained by the shopping cart and can generate bounding boxes for each product in each image. The bounding boxes can be associated with item identifiers from previously-generated bounding boxes to identify the items captured by the bounding boxes. Each bounding box paired with an item identifier can then be used as training data for an automated checkout system.

Подробнее
25-10-2018 дата публикации

Double Blind Machine Learning Insight Interface Apparatuses, Methods and Systems

Номер: US20180308008A1
Принадлежит:

The Double Blind Machine Learning Insight Interface Apparatuses, Methods and Systems (“DBMLII”) transforms campaign configuration request, campaign optimization input inputs via DBMLII components into top features, machine learning configured user interface, translated commands, campaign configuration response outputs. A double blind machine learning request is obtained. A third party's shared dataset and corresponding external predictions data determined by the third party based on an unavailable dataset is determined. Proprietary data corresponding to the shared dataset is determined. A dataframe comprising at least subsets of the determined shared dataset, external predictions data, and proprietary data is generated. A set of top features from the dataframe is determined. Top features data is utilized to generate a machine learning structure. The generated machine learning structure is utilized to produce machine learning results. The machine learning results are translated into commands ...

Подробнее
17-11-2011 дата публикации

Executing an Instruction for Performing a Configuration Virtual Topology Change

Номер: US20110283280A1

In a logically partitioned host computer system comprising host processors (host CPUs) partitioned into a plurality of guest processors (guest CPUs) of a guest configuration, a perform topology function instruction is executed by a guest processor specifying a topology change of the guest configuration. The topology change preferably changes the polarization of guest CPUs, the polarization being related to the amount of a host CPU resource provided to a guest CPU.

Подробнее
02-11-2021 дата публикации

Detecting malicious network addresses within a local network

Номер: US0011165798B2
Принадлежит: Cujo LLC

The behavior analysis engine can also detect malicious network addresses that are sent to networked devices in the local network. The network traffic hub identifies network communications that are transmitted through the local network that contain network addresses. The network traffic hub transmits (or sends) the network address to the behavior analysis engine and the behavior analysis engine extracts network address features from the network address. The behavior analysis engine then applies an execution model to the execution features to determine a confidence score for the network address that represents the execution model's certainty that the network address is malicious. The behavior analysis engine uses the confidence score to provide instructions to the network traffic hub as to whether to allow the networked device to receive the network address.
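
A minimal Python illustration of the scoring flow in US0011165798B2: extract a few features from a network address, feed them to a model that yields a confidence score, and turn the score into an allow/block instruction for the hub. The features, weights and threshold below are invented for the example; they are not the patent's model.

```python
import math
from typing import Dict

def extract_features(address: str) -> Dict[str, float]:
    """Very small, hypothetical feature set for a URL or hostname."""
    return {
        "length": float(len(address)),
        "digits": float(sum(c.isdigit() for c in address)),
        "hyphens": float(address.count("-")),
    }

def confidence_score(features: Dict[str, float]) -> float:
    """Toy logistic model standing in for the behavior analysis engine."""
    weights = {"length": 0.02, "digits": 0.15, "hyphens": 0.30}
    z = -2.0 + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def decide(address: str, block_threshold: float = 0.8) -> str:
    """Instruction sent back to the network traffic hub."""
    score = confidence_score(extract_features(address))
    return "block" if score >= block_threshold else "allow"

if __name__ == "__main__":
    print(decide("example.com"))
    print(decide("a-very-long-1234567890-suspicious-host-9999.example"))
```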

Подробнее
17-10-2017 дата публикации

Execution of an instruction for performing a configuration virtual topology change

Номер: US0009792157B2

In a logically partitioned host computer system comprising host processors (host CPUs) partitioned into a plurality of guest processors (guest CPUs) of a guest configuration, a perform topology function instruction is executed by a guest processor specifying a topology change of the guest configuration. The topology change preferably changes the polarization of guest CPUs, the polarization being related to the amount of a host CPU resource provided to a guest CPU.

Подробнее
07-03-2019 дата публикации

DEFERRED RESPONSE TO A PREFETCH REQUEST

Номер: US2019073309A1
Принадлежит:

Modifying prefetch request processing. A prefetch request is received by a local computer from a remote computer. The local computer responds to a determination that execution of the prefetch request is predicted to cause an address conflict during an execution of a transaction of the local processor by comparing a priority of the prefetch request with a priority of the transaction. Based on a result of the comparison, the local computer modifies program instructions that govern execution of the program instructions included in the prefetch request to include program instruction to perform one or both of: (i) a quiesce of the prefetch request prior to execution of the prefetch request, and (ii) a delay in execution of the prefetch request for a predetermined delay period.
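
A small Python sketch of the decision described in US2019073309A1: when a remote prefetch request is predicted to conflict with a local transaction, compare priorities and either quiesce the prefetch or delay it. The priority encoding, the conflict prediction and the delay handling are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class PrefetchRequest:
    address: int
    priority: int          # higher value = more important (an assumption)

@dataclass
class LocalTransaction:
    addresses: Set[int]
    priority: int

def handle_prefetch(req: PrefetchRequest, txn: LocalTransaction,
                    delay_cycles: int = 100) -> str:
    """Return the action the local computer takes on the incoming prefetch."""
    conflict_predicted = req.address in txn.addresses
    if not conflict_predicted:
        return "execute now"
    if req.priority <= txn.priority:
        # The local transaction wins: quiesce the prefetch entirely.
        return "quiesce"
    # The prefetch wins, but the transaction still gets a grace period.
    return f"delay {delay_cycles} cycles, then execute"

if __name__ == "__main__":
    txn = LocalTransaction(addresses={0x100, 0x104}, priority=5)
    print(handle_prefetch(PrefetchRequest(address=0x200, priority=1), txn))
    print(handle_prefetch(PrefetchRequest(address=0x100, priority=1), txn))
    print(handle_prefetch(PrefetchRequest(address=0x104, priority=9), txn))
```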

Подробнее
16-04-2020 дата публикации

COMBINING INSTRUCTIONS FROM DIFFERENT BRANCHES FOR EXECUTION IN A SINGLE N-WAY VLIW PROCESSING ELEMENT OF A MULTITHREADED PROCESSOR

Номер: US20200117466A1
Принадлежит:

A data processing system includes a processor operable to execute a program partitioned into a number of discrete instructions, the processor having multiple processing elements each capable of executing more than one instruction per cycle, and an interface configured to read a first program and, on detecting a branch operation by that program creating m number of branches each having a different sequence of instructions, combine an instruction from one of the branches with an instruction from at least one of the other branches so as to cause a processing element to execute the combined instructions during a single cycle.

Подробнее
23-02-2021 дата публикации

Data processing systems for data testing to confirm data deletion and related methods

Номер: US0010929559B2
Принадлежит: OneTrust, LLC, ONETRUST LLC

In particular embodiments, a Personal Data Deletion System is configured to: (1) at least partially automatically identify and delete personal data that an entity is required to erase under one or more of the conditions discussed above; and (2) perform one or more data tests after the deletion to confirm that the system has, in fact, deleted any personal data associated with the data subject. The system may, for example, be configured to test to ensure the data has been deleted by: (1) submitting a unique token of data through a form to a system; (2) in response to passage of an expected data retention time, test the system by calling into the system after the passage of the data retention time to search for the unique token.
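
A compact Python sketch of the verification loop in US0010929559B2: submit a unique token through the normal intake path, let the declared retention period pass, then search for the token to confirm the deletion actually happened. The in-memory store and the function names are stand-ins for whatever system would be audited; only the token-based test idea comes from the abstract.

```python
import uuid

class TargetSystem:
    """Stand-in for the system under test; stores submitted form data."""
    def __init__(self):
        self._records = {}

    def submit_form(self, subject_id: str, payload: str) -> None:
        self._records[subject_id] = payload

    def delete_subject(self, subject_id: str) -> None:
        self._records.pop(subject_id, None)

    def search(self, token: str) -> bool:
        return any(token in v for v in self._records.values())

def deletion_test(system: TargetSystem) -> bool:
    """Return True if the unique token is gone after the retention period."""
    token = uuid.uuid4().hex                 # unique token of test data
    system.submit_form("test-subject", f"probe-{token}")
    system.delete_subject("test-subject")    # here: the retention period elapses
    return not system.search(token)

if __name__ == "__main__":
    print("deletion confirmed:", deletion_test(TargetSystem()))
```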

Подробнее
25-08-2020 дата публикации

System and method for developing run time self-modifying interaction solution through configuration

Номер: US0010756963B2
Принадлежит: PULZZE SYSTEMS, INC.

A system is provided. The system includes a processor, a memory, and an I/O device, an interaction engine unit stored in the memory and including a plurality of reusable software components. The plurality of the reusable software components is configured by a user, through a configuration process, to create at least one control flow and at least one service component representing at least one service. The at least one control flow executes a configured logic upon receipt of at least one event. The at least one control flow controls interactions among the at least one services or the at least one service to the at least one event. And, the interaction engine unit dynamically reconfigures the system configuration at run time based on at least one environmental condition.

Подробнее
16-09-2021 дата публикации

DATA PROCESSING SYSTEMS FOR FULFILLING DATA SUBJECT ACCESS REQUESTS AND RELATED METHODS

Номер: US20210286897A1
Принадлежит: OneTrust, LLC

Responding to a data subject access request includes receiving the request and identifying the requestor and source. In response to identifying the requestor and source, a computer processor determines whether the data subject access request is subject to fulfillment constraints, including whether the requestor or source is malicious. If so, then the computer processor denies the request or requests a processing fee prior to fulfillment. If not, then the computer processor fulfills the request.

Подробнее
22-09-2015 дата публикации

Synchronizing a translation lookaside buffer with an extended paging table

Номер: US0009141555B2
Принадлежит: Intel Corporation, INTEL CORP, INTEL CORPORATION

A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system.

Подробнее
17-08-2021 дата публикации

Analog processor comprising quantum devices

Номер: US0011093440B2
Принадлежит: D-WAVE SYSTEMS INC., D WAVE SYSTEMS INC

Analog processors for solving various computational problems are provided. Such analog processors comprise a plurality of quantum devices, arranged in a lattice, together with a plurality of coupling devices. The analog processors further comprise bias control systems each configured to apply a local effective bias on a corresponding quantum device. A set of coupling devices in the plurality of coupling devices is configured to couple nearest-neighbor quantum devices in the lattice. Another set of coupling devices is configured to couple next-nearest neighbor quantum devices. The analog processors further comprise a plurality of coupling control systems each configured to tune the coupling value of a corresponding coupling device in the plurality of coupling devices to a coupling. Such quantum processors further comprise a set of readout devices each configured to measure the information from a corresponding quantum device in the plurality of quantum devices.

Подробнее
21-02-2023 дата публикации

Data processing systems and methods for auditing data request compliance

Номер: US0011586762B2
Принадлежит: OneTrust, LLC

A privacy management system that is configured to process one or more data subject access requests and further configured to: (1) enable a data protection officer to submit an audit request; (2) perform an audit based on one or more parameters provided as part of the request (e.g., one or more parameters such as how long an average request takes to fulfill, one or more parameters related to logging and/or tracking data subject access requests and/or complaints from one or more particular customer advocacy groups, individuals, NGOs, etc.); and (3) provide one or more audit results to the officer (e.g., by displaying the results on a suitable display screen).

Подробнее
12-04-2022 дата публикации

Consent receipt management systems and related methods

Номер: US0011301589B2
Принадлежит: OneTrust, LLC

A consent receipt management system is configured to: (1) automatically cause a prior, validly received consent to expire (e.g., in response to a triggering event); and (2) in response to causing the previously received consent to expire, automatically trigger a recapture of consent. In particular embodiments, the system may, for example, be configured to cause a prior, validly received consent to expire in response to one or more triggering events.

Подробнее
10-05-2022 дата публикации

Data processing systems for assessing readiness for responding to privacy-related incidents

Номер: US0011328240B2
Принадлежит: OneTrust, LLC

Data processing systems and methods, according to various embodiments, are adapted for mapping various questions regarding a data breach from a master questionnaire to a plurality of territory-specific data breach disclosure questionnaires. The answers to the questions in the master questionnaire are used to populate the territory-specific data breach disclosure questionnaires and determine whether disclosure is required in territory. The system can automatically notify the appropriate regulatory bodies for each territory where it is determined that data breach disclosure is required.

Подробнее
12-03-2008 дата публикации

A MEMORY ARRANGEMENT FOR MULTI-PROCESSOR SYSTEMS

Номер: EP0001896983A2
Принадлежит:

Подробнее
17-08-2018 дата публикации

ARCHITECTURAL MODE CONFIGURATION IN A COMPUTING SYSTEM

Номер: RU2664413C2

The invention relates to network communication technologies. The technical result is an increase in data processing speed. The method comprises: determining, by a processor, that a configuration architectural mode facility is installed in a computing environment that is configured for a plurality of architectural modes and has a defined power-on sequence intended to power on the computing environment in one architectural mode of the plurality of architectural modes, the one architectural mode comprising a first instruction set architecture and having a first set of supported services; and reconfiguring, by the processor, the computing environment to restrict use of the one architectural mode, wherein the reconfiguring includes: selecting a different power-on sequence for powering on the computing environment in another architectural mode of the plurality of architectural modes, and executing the different power-on sequence ...

Подробнее
27-06-2015 дата публикации

SYSTEM AND METHOD FOR DISTRIBUTED COMPUTING

Номер: RU2554509C2

The invention relates to computer engineering. The technical result is an increase in the efficiency of distributed computing through the introduction of a parameter whose value directly or indirectly determines the program instruction that is to be executed next. A method of distributed computing in a distributed system containing more than one execution module interacting with one another and one or more memory modules associated with said execution modules and containing sequences of execution-module instructions, in which a program is executed on one or more execution modules and a copy of the program is loaded into the memory modules of all execution modules currently participating in the execution of the program, the program including one or more control transfer instructions, each of which contains at least one parameter whose value directly or indirectly determines the program instruction that is to be executed next. 3 independent and 49 dependent claims, 4 figures ...
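
A tiny Python interpreter sketch of the idea in RU2554509C2: a control-transfer instruction carries a parameter whose value determines which instruction runs next, so every execution module holding a copy of the program can follow the same jump. The instruction format and register names are assumptions made up for the example, not the patented encoding.

```python
def run(program, registers):
    """Execute a toy program where ('jump', reg) transfers control to the
    instruction index currently held in `reg` -- the parameter that directly
    determines which instruction is executed next."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "set":
            _, reg, value = op
            registers[reg] = value
            pc += 1
        elif op[0] == "print":
            print(op[1])
            pc += 1
        elif op[0] == "jump":
            pc = registers[op[1]]          # indirect transfer of control
        elif op[0] == "halt":
            break
    return registers

if __name__ == "__main__":
    program = [
        ("set", "r0", 4),      # 0: the jump target is taken from r0
        ("jump", "r0"),        # 1: control transfer with a parameter
        ("print", "skipped"),  # 2: never reached
        ("print", "skipped"),  # 3
        ("print", "reached"),  # 4
        ("halt",),             # 5
    ]
    run(program, registers={})
```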

Подробнее
12-07-2018 дата публикации

Computing unit and operating method therefor

Номер: DE102017200458A1
Принадлежит:

The invention relates to a computing unit (100; 100a; 100b; 100c; 100d; 100e; 100f; 100g) having at least one processing core (110a, 110b, 110c), a primary memory device (120), and at least one main interconnect unit (130) for connecting the at least one processing core (110a, 110b, 110c) to the primary memory device (120), wherein the computing unit (100) has at least one functional unit (140; 1400) and at least one external component (170; 170a; 180; 190; 200; 200') arranged externally to the at least one functional unit (140; 1400), wherein at least one direct data connection (DV1; DV2; DV3; DV4, DV5; DV6; a34, a35, a36) is provided between the at least one functional unit (140; 1400) and the at least one external component (170; 170a; 180; 190; 200; 200').

Подробнее
13-09-2018 дата публикации

INSTRUCTION SET ARCHITECTURES FOR FINE-GRAINED HETEROGENEOUS PROCESSING

Номер: DE102018000983A1
Принадлежит:

Instruction set architectures (ISA) for fine-grained processing and associated processors, methods and compilers. The ISA comprises instructions designed to be executed on processors with heterogeneous cores that implement different microarchitectures. Mechanisms are provided to enable corresponding code segments to be compiled/assembled for a target processor (or processor family) with heterogeneous cores, with corresponding code segments for specific types of processor core microarchitectures that are invoked dynamically at runtime through execution of the ISA instructions. The ISA instructions include both unconditional and conditional branch and call instructions, in addition to instructions that support processors with three or more different types of cores. The instructions are designed to support dynamic migration of instruction threads across heterogeneous cores, with essentially no ...

Подробнее
27-07-2016 дата публикации

Combining paths

Номер: GB0002524126B

Подробнее
15-04-2020 дата публикации

Processor system and accelerator

Номер: GB0002511672B
Принадлежит: UNIV WASEDA, Waseda University

Подробнее
02-04-2014 дата публикации

Unified, adaptive RAS for hybrid systems

Номер: GB0002506551A
Принадлежит:

A method, system, and computer program product for maintaining reliability in a computer system. In an example embodiment, the method includes managing workloads on a first processor with a first processor architecture by an agent process executing on a second processor with a second processor architecture. The method proceeds by activating redundant computation on the second processor by the agent process. The method continues by performing a same computation from a workload of the workloads at least twice. Finally, the method includes comparing results of the same computation. In this embodiment the first processor is coupled the second processor by a network, and the first processor architecture and second processor architecture are different architectures.

Подробнее
16-08-2006 дата публикации

Aliasing data processing registers

Номер: GB0002409062B
Принадлежит: ADVANCED RISC MACH LTD, ARM LIMITED

Подробнее
20-02-2008 дата публикации

Monitoring performance through an Extensible Firmware Interface

Номер: GB0002441043A
Принадлежит:

A system and method of monitoring a host processor in a system comprises configuring the host processor to provide health and performance information through an extensible firmware interface (EFI) 225 to a platform manageability component (PMC) which operates independently of the operating system on the host processor. The host processor communicates through a sensor driver 203 which connects to platform manageability runtime drivers 207, 209 and 211 in an EFI runtime environment 220. The platform manageability component comprises several interfaces 237, 239 and 231 corresponding to the runtime drivers which communicate with a capability modules (CM) 253a,b. These interfaces may include a sensor effector interface (SEI) 231, an external operational interface (EOI) 237 and a platform management administrative interface (PMAI) 239. In operation the CM receives the health and performance information and analyses it in order to recommend an action to recover from a fatal error condition. The ...

Подробнее
27-03-2013 дата публикации

Processor power management based on class and content instructions

Номер: GB0201302383D0
Автор:
Принадлежит:

Подробнее
06-05-2015 дата публикации

Controlling data flow between processors in a processing system

Номер: GB0201504979D0
Автор:
Принадлежит:

Подробнее
17-01-2018 дата публикации

Method and device for adaptively enhancing an output of a detector

Номер: GB0201720234D0
Автор:
Принадлежит:

Подробнее
19-06-2013 дата публикации

Performing a multiply-multiply-accumulate instruction

Номер: GB0002497698A
Принадлежит:

In one embodiment, the present invention includes a processor having multiple execution units, at least one of which includes a circuit having a multiply-accumulate (MAC) unit including multiple multipliers and adders, and to execute a user-level multiply-multiply-accumulate instruction to populate a destination storage with a plurality of elements each corresponding to an absolute value for a pixel of a pixel block. Other embodiments are described and claimed.
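
The GB0002497698A abstract says the instruction populates a destination with elements "each corresponding to an absolute value for a pixel of a pixel block", computed by a MAC unit with multiple multipliers and adders, but it does not spell out the operand semantics. The Python below is therefore only one plausible software model, under the assumption that the per-pixel absolute values are absolute differences between two blocks (as in video motion estimation); the function names are made up for the sketch.

```python
def abs_diff_block(block_a, block_b):
    """Per-pixel absolute differences of two equally sized pixel blocks
    (a guess at what the destination elements could hold)."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(block_a, block_b)]

def sum_abs_diff(block_a, block_b):
    """Accumulate the per-pixel absolute values (SAD), a common follow-up step."""
    return sum(sum(row) for row in abs_diff_block(block_a, block_b))

if __name__ == "__main__":
    a = [[10, 12, 11, 13]] * 4
    b = [[ 9, 15, 11, 10]] * 4
    print(abs_diff_block(a, b)[0])   # [1, 3, 0, 3]
    print(sum_abs_diff(a, b))        # 28
```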

Подробнее
09-07-2008 дата публикации

Parallel array architecture for a graphics processor

Номер: GB0000810493D0
Автор:
Принадлежит:

Подробнее
29-08-2018 дата публикации

Scheduling tasks

Номер: GB0002560059A
Принадлежит:

An arithmetic/logic unit holds data assigned to a group of tasks or threads. The tasks may be kernels in a graphics processor. The data may be a count or a set of flags. When one of the tasks in the group executes a synchronization instruction, the task is placed in a non-active state and the unit atomically modifies the data. The modification may involve incrementing or decrementing the count or setting a flag corresponding to the task. The unit then checks the data. This may involve determining if the count has reached a particular value or if all the flags are set. If the data passes the check, the unit signals to a scheduler that all the tasks should be placed in an active state. The unit may also reset the data to its initial state. The scheduler may maintain queues of tasks, which indicate whether they are in an active or non-active state.

Подробнее
15-03-2006 дата публикации

INSTRUCTIONS FOR PROCESSING AN ENCRYPTED MESSAGE

Номер: AT0000355552T
Принадлежит:

Подробнее
15-09-2008 дата публикации

PROCEDURE FOR THE CONFIGURATION OF A COMPUTER PROGRAMME

Номер: AT0000406610T
Принадлежит:

Подробнее
15-01-2016 дата публикации

Control device and method for controlling a movement of an element of an installation

Номер: AT512066B1
Принадлежит:

A control device is configured to control a movement of an element of an installation and includes a main processor and an auxiliary processor. The main processor has a first computer architecture. The auxiliary processor is connected to the main processor and includes a second computer architecture. The second computer architecture differs from the first computer architecture. The second computer architecture allows faster processing of predetermined signals than the first computer architecture. The auxiliary processor is configured (i) to read in an auxiliary-processor input signal from the main processor, and (ii) to determine an auxiliary-processor output signal and to output it to the main processor by using the auxiliary-processor input signal.

Подробнее
15-10-2015 дата публикации

Control device and method for controlling a movement of an element of an installation

Номер: AT512066A3
Принадлежит:

A control device is configured to control a movement of an element of an installation and includes a main processor and an auxiliary processor. The main processor has a first computer architecture. The auxiliary processor is connected to the main processor and includes a second computer architecture. The second computer architecture differs from the first computer architecture. The second computer architecture allows faster processing of predetermined signals than the first computer architecture. The auxiliary processor is configured (i) to read in an auxiliary-processor input signal from the main processor, and (ii) to determine an auxiliary-processor output signal and to output it to the main processor by using the auxiliary-processor input signal.

Подробнее
10-10-2019 дата публикации

Method and apparatus for object status detection

Номер: AU2018261257A1
Принадлежит: Spruson & Ferguson

A method of object status detection for objects supported by a shelf, from shelf image data, includes: obtaining a plurality of images of a shelf, each image including an indication of a gap on the shelf between the objects; registering the images to a common frame of reference; identifying a subset of the gaps having overlapping locations in the common frame of reference; generating a consolidated gap indication from the subset; obtaining reference data including (i) identifiers for the objects and (ii) prescribed locations for the objects within the common frame of reference; based on a comparison of the consolidated gap indication with the reference data, selecting a target object identifier from the reference data; and generating and presenting a status notification for the target product identifier.

Подробнее
26-08-2014 дата публикации

ANALOG PROCESSOR COMPRISING QUANTUM DEVICES

Номер: CA0002592084C
Принадлежит: D-WAVE SYSTEMS, INC.

... Analog processors for solving various computational problems are provided. Such analog processors comprise a plurality of quantum devices, arranged in a lattice, together with a plurality of coupling devices. The analog processors further comprise bias control systems each configured to apply a local effective bias on a corresponding quantum device. A set of coupling devices in the plurality of coupling devices is configured to couple nearest-neighbor quantum devices in the lattice. Another set of coupling devices is configured to couple next-nearest neighbor quantum devices. The analog processors further comprise a plurality of coupling control systems each configured to tune the coupling value of a corresponding coupling device in the plurality of coupling devices to a coupling. Such quantum processors further comprise a set of readout devices each configured to measure the information from a corresponding quantum device in the plurality of quantum devices. ...

Подробнее
19-04-2018 дата публикации

METHODS, SYSTEMS, AND MEDIA FOR PAIRING DEVICES TO COMPLETE A TASK USING AN APPLICATION REQUEST

Номер: CA0003040268A1
Принадлежит: GOWLING WLG (CANADA) LLP

Methods, systems, and media for pairing devices for completing tasks are provided. In some embodiments, the method comprises: identifying, at a first user device, an indication of a task to be completed; transmitting, by the first user device to a server, information indicating the task to be completed and identifying information corresponding to the first user device; determining whether a predetermined duration of time has elapsed; in response to determining that the predetermined duration of time has elapsed, transmitting, from the first user device to the server, a request to determine whether the task has been completed by a second user device; and in response to receiving, from the server, an indication that the task has been completed by the second user device, retrieving data corresponding to the task from the server.

Подробнее
07-09-2018 дата публикации

DETECTING MALICIOUS BEHAVIOR WITHIN LOCAL NETWORKS

Номер: CA0003054842A1
Принадлежит: BLAKE, CASSELS & GRAYDON LLP

A behavior analysis engine and a network traffic hub can identify malicious behavior within a local network containing the network traffic hub. The behavior analysis engine can execute executable files that are downloaded by networked devices in the local network in a sandbox environment and determine if the executable files are malicious. The behavior analysis engine can also identify malicious network addresses based on features of the network addresses. The behavior analysis engine may identify entities connected to a received entity and determine whether the entity is malicious based on whether the connected entities are malicious, and further may generate condensed versions of machine-learned models to be executed locally on network traffic hubs in local networks.

Подробнее
20-10-2019 дата публикации

MACHINE LEARNING ARTIFICIAL INTELLIGENCE SYSTEM FOR PREDICTING POPULAR HOURS

Номер: CA0003040646A1
Принадлежит: SMART & BIGGAR LLP

A system for generating a graphical user interface in a client device. The system may include a processor in communication with the client device and a database. The processor may execute: receiving a request for occupancy information of a specified merchant; obtaining a plurality of credit card authorizations associated with the merchant; generating a posted transaction array based on the credit card authorizations; removing outlier members of the posted transaction array by applying a threshold filter; generating a transaction frequency array based on the posted transaction array, the transaction frequency array comprising weekdays and aggregated transactions associated with the weekdays; modifying the transaction frequency array by applying a transformation to the aggregated transactions; generating a smoothed array by applying a kernel density estimate to the transaction frequency array; and generating a graphical user interface displaying information in the smoothed array.
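
A short Python sketch of the pipeline in CA0003040646A1: build an hourly array from posted card transactions, drop outliers with a threshold filter, and smooth the result with a kernel density estimate before it is displayed. numpy and scipy.stats.gaussian_kde are used here as convenient stand-ins; the abstract does not prescribe these libraries, and the threshold and bandwidth choices are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def popular_hours(transaction_hours, amounts, amount_threshold=500.0, grid=None):
    """Estimate relative busyness by hour of day from card authorizations.

    transaction_hours: hour-of-day (0..23) of each posted transaction
    amounts:           transaction amounts, used only for outlier filtering
    """
    hours = np.asarray(transaction_hours, dtype=float)
    amounts = np.asarray(amounts, dtype=float)

    # Threshold filter: drop outlier transactions (e.g. unusually large ones).
    kept = hours[amounts <= amount_threshold]

    # Kernel density estimate smooths the hourly frequency array.
    kde = gaussian_kde(kept)
    grid = np.arange(24.0) if grid is None else grid
    density = kde(grid)
    return grid, density / density.max()     # normalized, for display in a UI

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hours = np.clip(rng.normal(12.5, 2.0, 400), 0, 23)   # synthetic lunchtime peak
    amounts = rng.exponential(40.0, 400)
    grid, busyness = popular_hours(hours, amounts)
    print(int(grid[int(np.argmax(busyness))]), "is the busiest hour")
```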

Подробнее
10-09-2018 дата публикации

AN AUTOMATED SYSTEM FOR MEASUREMENT OF ZONE 1 IN ASSESSMENT OF SEVERITY OF RETINOPATHY OF PREMATURITY

Номер: CA0002960501A1
Принадлежит:

An automated method for diagnosing and evaluating severity of retinopathy of prematurity in a retina of a patient is provided that is superior to conventional techniques. A graphical user interface (GUI) is provided for receiving biographical information for the patient creating a patient record in a database via the GUI. A photograph of the retina of the patient is collected and placed in the patient record via the GUI. The photograph is then analyzed to determine vascular distributions within the retina. A zone 1 boundary is assigned to the retina based on a set of threshold levels with respect to the determined vascular distributions. A system for performing the automated method is also provided.

Подробнее
01-11-2018 дата публикации

SYSTEM AND METHOD FOR GENERATING PREDICTIVE INSIGHTS USING SELF-ADAPTIVE LEARNING

Номер: CA0003061544A1
Принадлежит: BLAKE, CASSELS & GRAYDON LLP

A method and system are provided for generating insights related to customer acquisition and/or retention for a provider of a product or service. The method includes obtaining historical and current data associated with customers; generating at least one predictive insight associated with customer acquisition and/or customer retention using the data and at least one model; for each interaction associated with contact with customers via a channel, collecting and storing outcome data comprising statistical attributes and data; and self- adapting the at least one model using the stored statistical attributes and data to learn new recommendation strategies for customer acquisition and/or customer retention activities.

Подробнее
08-11-2019 дата публикации

Probability-based guide

Номер: CN0110431566A
Автор:
Принадлежит:

Подробнее
08-11-2019 дата публикации

Fatigue crack growth prediction

Номер: CN0110431395A
Автор:
Принадлежит:

Подробнее
17-07-2019 дата публикации

Номер: KR0102001222B1
Автор:
Принадлежит:

Подробнее
07-02-2017 дата публикации

Device and method for assigning device pin functionality for multi-processor core devices

Номер: KR1020170013875A
Принадлежит:

... An embedded device comprises: a plurality of processor cores, each processor core having a plurality of peripheral devices, each peripheral device capable of having an output; a housing having a plurality of assignable external pins; and a plurality of peripheral pin select modules for each processing core, wherein each peripheral pin select module is programmably configured to assign an assignable external pin to one of the plurality of peripheral devices of one of the processor cores.

Подробнее
14-06-2019 дата публикации

Номер: KR1020190067189A
Автор:
Принадлежит:

Подробнее
01-09-2016 дата публикации

HARDWARE INSTRUCTION GENERATION UNIT FOR SPECIALIZED PROCESSORS

Номер: WO2016135712A1
Автор: JOHNSON, William
Принадлежит:

Methods, devices and systems are disclosed that interface a host computer to a specialized processor. In an embodiment, an instruction generation unit comprises attribute, decode, and instruction buffer stages. The attribute stage is configured to receive a host- program operation code and a virtual host-program operand from the host computer and to expand the virtual host-program operand into an operand descriptor. The decode stage is configured to receive the first operand descriptor and the host-program operation code, convert the host-program operation code to one or more decoded instructions for execution by the specialized processor, and allocate storage locations for use by the specialized processor. The instruction buffer stage is configured to receive the decoded instruction, place the one or more decoded instructions into one or more instruction queues, and issue decoded instructions from at least one of the one or more instruction queues for execution by the specialized processor ...

Подробнее
22-04-2008 дата публикации

Method and system for terminating write commands in a hub-based memory system

Номер: US0007363419B2

A memory hub receives downstream memory commands and processes each received downstream memory command to determine whether the memory command includes a write command directed to the memory hub. The memory hub operates in a first mode when the write command is directed to the hub to develop memory access signals adapted to be applied to memory devices. The memory hub operates in a second mode when the write command is not directed to the hub to provide the command's write data on a downstream output port adapted to be coupled to a downstream memory hub.
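
A minimal sketch of the two modes, assuming a simple chain of hubs (class and field names hypothetical, not from the patent): a write addressed to the hub is applied locally, otherwise its data is passed to the downstream port.

class MemoryHub:
    def __init__(self, hub_id, downstream=None):
        self.hub_id = hub_id
        self.local_memory = {}        # address -> value, stands in for attached memory devices
        self.downstream = downstream  # next MemoryHub in the chain, or None

    def handle_write(self, target_hub, address, data):
        if target_hub == self.hub_id:
            # First mode: develop memory access signals (modeled as a local store).
            self.local_memory[address] = data
        elif self.downstream is not None:
            # Second mode: provide the write data on the downstream output port.
            self.downstream.handle_write(target_hub, address, data)

leaf = MemoryHub(1)
root = MemoryHub(0, downstream=leaf)
root.handle_write(1, 0x40, 0xAB)
print(leaf.local_memory)  # {64: 171}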

Подробнее
15-04-2021 дата публикации

CONFIGURATION PARAMETER TRANSFER

Номер: US20210109884A1

Examples relate to configuration parameter transfer. An apparatus may include a memory resource storing executable instructions. Instructions may include instructions to receive a first signal from a host computing device. Instructions may further include instructions to initiate communications with the host computing device in response to receiving the first signal. Instructions may further include instructions to receive a configuration parameter from the host computing device in response to initiation of communications with the host computing device. The apparatus may further include a processing resource to execute the instructions stored on the memory resource.

Подробнее
14-08-2007 дата публикации

Cipher message assist instructions

Номер: US0007257718B2

A method, system and program product for enciphering or deciphering storage of a computing environment by specifying, via an instruction, a unit of storage to be enciphered or deciphered. The unit of storage to be enciphered or deciphered includes a plurality of pages which may be operated on in a chaining operation.

Подробнее
10-01-2017 дата публикации

System, method, and apparatus for data processing

Номер: US0009544262B2

A data process system including a unit receiving a mail data including an output data or a target output data via a network, a unit identifying a user-identification data to be associated with the output data based on an address data of a transmission source of the mail data by referring to first and second units, the first unit storing a first address data in correspondence with each user-identification data, the second unit storing a second address data in correspondence with each user-identification data, a unit storing data-identification data in correspondence with the output data in a unit in a case where the user-identification data is identified by referring to the second unit instead of by the first unit, a unit notifying the data-identification data via the network, and a unit transmitting the output data corresponding to the user-identification data or the data-identification data received via the network.

Подробнее
06-08-2015 дата публикации

CONTEXT CONFIGURATION

Номер: US20150220482A1
Принадлежит:

Certain examples described herein relate to configuring a call control context in a media gateway. The media gateway has a set of digital signal processors, each having one or more digital signal processor cores. The cores implement digital signal processor channels that are grouped into digital signal processor contexts. When a request to configure a call control context is received, certain examples described herein are configured to assign a set of digital signal processor contexts to process data streams associated with the call control context. In particular, certain examples described herein couple a first digital signal processor context to at least a second digital signal processor context using at least one digital signal processor channel in each of the first and second digital signal processor contexts.

Подробнее
09-01-2020 дата публикации

DATA TRANSFER PATH SELECTION

Номер: US20200014753A1
Принадлежит:

A system contains a network testing engine that sends test data along different paths of a network between a source and a destination, wherein each path contains a plurality of network nodes, and receives, in response to sending the test data, response data about the paths. The system further contains a network path characteristics engine that determines characteristics of each path based on the response data, and a delivery parameters engine that receives a request for delivery of a data load from the source to the destination and determines, based on the request, delivery parameters. Furthermore, the system contains the source and a path selection engine that determines a selected path of the different paths based on the characteristics of the paths and the delivery parameters, and sends the selected data path to the source, which sends the data load along the selected path to the destination. 1. A system, comprising: a network testing processor configured to send test data along a plurality of different paths of a network between a source and a destination, wherein each of the plurality of different paths comprises a plurality of nodes of the network, and receive, in response to sending the test data, response data comprising information about the plurality of different paths; a network path characteristics processor configured to determine a plurality of characteristics of each of the plurality of different paths based on the response data; determine a trend, the trend describing change over time of one or more characteristics of at least some of the plurality of different paths, based on at least one of the response data and the plurality of characteristics of each of the plurality of different paths, and at least one of prior response data and a plurality of prior characteristics of each of the plurality of different paths; determine a selected path for delivery of a data load, wherein the selected path is one of the plurality of different paths, ...
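
As a rough sketch of how characteristics and delivery parameters might be combined (the threshold names and the latency-first tie-break are assumptions, not taken from the application):

def select_path(path_characteristics, delivery_parameters):
    # path_characteristics: {path: {'latency_ms': ..., 'loss': ...}}
    # delivery_parameters:  {'max_latency_ms': ..., 'max_loss': ...}
    feasible = {
        name: stats for name, stats in path_characteristics.items()
        if stats['latency_ms'] <= delivery_parameters['max_latency_ms']
        and stats['loss'] <= delivery_parameters['max_loss']
    }
    if not feasible:
        return None
    return min(feasible, key=lambda name: feasible[name]['latency_ms'])

paths = {'A': {'latency_ms': 40, 'loss': 0.02}, 'B': {'latency_ms': 25, 'loss': 0.10}}
print(select_path(paths, {'max_latency_ms': 50, 'max_loss': 0.05}))  # -> 'A'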

Подробнее
08-10-2019 дата публикации

Remote direct memory access in computing systems

Номер: US0010437775B2

Distributed computing systems, devices, and associated methods of remote direct memory access (“RDMA”) packet routing are disclosed herein. In one embodiment, a server includes a main processor, a network interface card (“NIC”), and a field programmable gate array (“FPGA”) operatively coupled to the main processor via the NIC. The FPGA includes an inbound processing path having an inbound packet buffer configured to receive an inbound packet from the computer network, a NIC buffer, and a multiplexer between the inbound packet buffer and the NIC, and between the NIC buffer and the NIC. The FPGA also includes an outbound processing path having an outbound action circuit having an input to receive the outbound packet from the NIC, a first output to the computer network, and a second output to the NIC buffer in the inbound processing path.

Подробнее
15-11-2018 дата публикации

PROCESSING CORE WITH OPERATION SUPPRESSION BASED ON CONTRIBUTION ESTIMATE

Номер: US20180329723A1
Принадлежит: Tenstorrent Inc.

Processing cores with the ability to suppress operations, based on a contribution estimate for those operations, for purposes of increasing the overall performance of the core are disclosed. Associated methods that can be conducted by such processing cores are also disclosed. One such method includes generating a reference value for a composite computation. A complete execution of the composite computation generates a precise output and requires execution of a set of component computations. The method also includes generating a component computation approximation. The method also includes evaluating the component computation approximation with the reference value. The method also includes executing a partial execution of the composite computation using the component computation approximation to produce an estimated output. The method also includes suppressing the component computation, while executing the partial execution, based on the evaluation of the component computation approximation ...
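
A toy sketch of the suppression idea (not the patented method itself; the tolerance rule is an assumption): component computations whose cheap approximation is negligible relative to the reference value are skipped during the partial execution.

def composite_sum(components, reference, tolerance=0.01):
    # components: list of (approximate_value, exact_fn) pairs, one per component
    # computation; reference is the pre-computed reference value.
    total = 0.0
    for approx, exact_fn in components:
        if abs(approx) < tolerance * abs(reference):
            continue          # suppressed: estimated contribution is negligible
        total += exact_fn()   # partial execution keeps only the significant terms
    return total

terms = [(5.0, lambda: 5.2), (0.001, lambda: 0.0012), (3.0, lambda: 2.9)]
print(composite_sum(terms, reference=8.0))  # ~8.1, the 0.001 term is suppressed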

Подробнее
29-03-2012 дата публикации

Processor and method thereof

Номер: US20120079179A1
Автор: Kwon Taek Kwon
Принадлежит: SAMSUNG ELECTRONICS CO LTD

A processor and an operating method are described. By diversifying an L1 memory being accessed, based on an execution mode of the processor, an operating performance of the processor may be enhanced. By disposing a local/stack section in a system dynamic random access memory (DRAM) located external to the processor, the size of a scratch pad memory may be reduced without deteriorating performance. While a core of the processor is performing in a very long instruction word (VLIW) mode, the core may data-access a cache memory, and thus a bottleneck may not occur with respect to the scratch pad memory even though a memory access to the scratch pad memory occurs from an external component.

Подробнее
29-03-2012 дата публикации

Performing a multiply-multiply-accumulate instruction

Номер: US20120079252A1
Автор: Eric S. Sprangle
Принадлежит: Intel Corp

In one embodiment, the present invention includes a processor having multiple execution units, at least one of which includes a circuit having a multiply-accumulate (MAC) unit including multiple multipliers and adders, and to execute a user-level multiply-multiply-accumulate instruction to populate a destination storage with a plurality of elements each corresponding to an absolute value for a pixel of a pixel block. Other embodiments are described and claimed.
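
The abstract does not spell out the exact arithmetic, so the following is only one plausible reading (names and semantics assumed): a packed multiply-multiply-accumulate whose destination elements are absolute values.

def mmac(acc, a, b, c):
    # Hypothetical per-element semantics: dst[i] = |acc[i] + a[i]*b[i]*c[i]|
    assert len(acc) == len(a) == len(b) == len(c)
    return [abs(acc[i] + a[i] * b[i] * c[i]) for i in range(len(acc))]

print(mmac([1, -2, 0, 4], [2, 3, 1, 0], [1, 1, -5, 7], [3, 2, 2, 1]))  # [7, 4, 10, 4]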

Подробнее
19-04-2012 дата публикации

Intelligent architecture creator

Номер: US20120096420A1
Принадлежит: Individual

Systems and methods are disclosed to automatically generate a processor architecture for a custom integrated circuit (IC) described by a computer readable code. The IC has one or more timing and hardware constraints. The system extracts parameters defining the processor architecture from a static profile and a dynamic profile of the computer readable code; iteratively optimizes the processor architecture by changing one or more parameters until all timing and hardware constraints expressed as a cost function are met; and synthesizes the generated processor architecture into a computer readable description of the custom integrated circuit for semiconductor fabrication.
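
A minimal sketch of the iterate-until-constraints-met loop (the cost model and parameter names are invented for illustration, not taken from the application):

def optimize_architecture(candidates, cost, budget):
    best = None
    for params in candidates:                 # each candidate is one parameter setting
        if best is None or cost(params) < cost(best):
            best = params
        if cost(params) <= budget:            # timing/hardware constraints folded into the cost
            return params
    return best                               # otherwise return the best architecture seen

def toy_cost(p):                              # toy cost: weighted core count plus delay
    return 2 * p["cores"] + 10 * p["delay_ns"]

candidates = [{"cores": 8, "delay_ns": 2.0}, {"cores": 4, "delay_ns": 1.5}]
print(optimize_architecture(candidates, toy_cost, budget=25))  # {'cores': 4, 'delay_ns': 1.5}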

Подробнее
26-04-2012 дата публикации

Stall propagation in a processing system with interspersed processors and communication elements

Номер: US20120102299A1
Принадлежит: Individual

A processing system includes processors and dynamically configurable communication elements (DCCs) coupled together in an interspersed arrangement. A source device may transfer a data item through an intermediate subset of the DCCs to a destination device. The source and destination devices may each correspond to different processors, DCCs, or input/output devices, or mixed combinations of these. In response to detecting a stall after the source device begins transfer of the data item to the destination device and prior to receipt of all of the data item at the destination device, a stalling device is operable to propagate stalling information through one or more of the intermediate subset towards the source device. In response to receiving the stalling information, at least one of the intermediate subset is operable to buffer all or part of the data item.

Подробнее
20-09-2012 дата публикации

Multi-core distributed processing for machine vision applications

Номер: US20120239905A1
Принадлежит: Microscan Systems Inc

Embodiments of an apparatus including a first processor core having a local agent running thereon, the agent comprising a local process and a proxy agent, and a second processor core having a remote agent running thereon, the remote agent being an instance of the local agent. A shared memory is coupled to the first processor core and the second processor core, wherein the local agent and the remote agent communicate via the shared memory. Other embodiments are disclosed and claimed.

Подробнее
03-01-2013 дата публикации

Unified, workload-optimized, adaptive ras for hybrid systems

Номер: US20130007412A1
Принадлежит: International Business Machines Corp

A method, system, and computer program product for maintaining reliability in a computer system. In an example embodiment, the method includes managing workloads on a first processor with a first processor architecture by an agent process executing on a second processor with a second processor architecture. The method proceeds by activating redundant computation on the second processor by the agent process. The method continues by performing a same computation from a workload of the workloads at least twice. Finally, the method includes comparing results of the same computation. In this embodiment, the first processor is coupled to the second processor by a network, and the first processor architecture and second processor architecture are different architectures.

Подробнее
11-04-2013 дата публикации

PARALLEL COMPUTER ARCHITECTURE FOR COMPUTATION OF PARTICLE INTERACTIONS

Номер: US20130091341A1
Принадлежит: D.E. Shaw Research LLC

A computation system for computing interactions in a multiple-body simulation includes an array of processing modules arranged into one or more serially interconnected processing groups of the processing modules. Each of the processing modules includes storage for data elements and includes circuitry for performing pairwise computations between data elements each associated with a spatial location. Each of the pairwise computations makes use of a data element from the storage of the processing module and a data element passing through the serially interconnected processing modules. Each of the processing modules includes circuitry for selecting the pairs of data elements according to separations between spatial locations associated with the data elements. 1.-112. (canceled) 113. A computation system for computing interactions in a multiple-body simulation, said system comprising: an array of processing modules arranged into one or more serially interconnected processing groups of the processing modules; wherein each of the processing modules includes a storage for data elements and includes circuitry for performing pairwise computations between data elements each associated with a spatial location, each of the pairwise computations making use of a data element from the storage of the processing module and a data element passing through the serially interconnected processing modules; and wherein each of the processing modules includes circuitry for selecting the pairs of data elements according to separations between spatial locations associated with the data elements. 114. The system of claim 113, wherein the array of processing modules includes two or more serially interconnected processing groups. 115. The system of claim 113, wherein at least one of the groups of the processing modules includes a serial interconnection of two or more of the processing modules. 116. The system of claim 113, further comprising distribution modules for distributing data elements ...

Подробнее
18-04-2013 дата публикации

Cluster computing using special purpose microprocessors

Номер: US20130097406A1
Принадлежит: Advanced Cluster Systems Inc

In some embodiments, a computer cluster system comprises a plurality of nodes and a software package comprising a user interface and a kernel for interpreting program code instructions. In certain embodiments, a cluster node module is configured to communicate with the kernel and other cluster node modules. The cluster node module can accept instructions from the user interface and can interpret at least some of the instructions such that several cluster node modules in communication with one another and with a kernel can act as a computer cluster.

Подробнее
18-04-2013 дата публикации

Unified, workload-optimized, adaptive ras for hybrid systems

Номер: US20130097407A1
Принадлежит: International Business Machines Corp

A method, system, and computer program product for maintaining reliability in a computer system. In an example embodiment, the method includes managing workloads on a first processor with a first processor architecture by an agent process executing on a second processor with a second processor architecture. The method proceeds by activating redundant computation on the second processor by the agent process. The method continues by performing a same computation from a workload of the workloads at least twice. Finally, the method includes comparing results of the same computation. In this embodiment, the first processor is coupled to the second processor by a network, and the first processor architecture and second processor architecture are different architectures.

Подробнее
25-04-2013 дата публикации

Method, Apparatus, And System For Optimizing Frequency And Performance In A Multidie Microprocessor

Номер: US20130103928A1
Принадлежит:

With the progress toward multi-core processors, each core cannot readily ascertain the status of the other dies with respect to an idle or active status. A proposal for utilizing an interface to transmit core status among multiple cores in a multi-die microprocessor is discussed. Consequently, this facilitates thermal management by allowing an optimal setting for performance and frequency based on each core status. 1. A processor comprising: a plurality of cores; and a turbo mode logic to increase an operating frequency of at least one of the plurality of cores during a period in which at least one other of the plurality of cores is idle, wherein the at least one other of the plurality of cores is to communicate a core power status to the turbo mode logic via an interface. 2. The processor of claim 1, wherein the operating frequency is higher than a guaranteed frequency while the at least one other of the plurality of cores is idle. 3. The processor of claim 1, wherein the at least one of the plurality of cores is to use an available power of the at least one other of the plurality of cores to increase the operating frequency of the at least one of the plurality of cores. 4. The processor of claim 1, further comprising a plurality of phase locked loops (PLLs) coupled to the plurality of cores, wherein the one of the plurality of phase locked loops coupled to the at least one of the plurality of cores is to increase the operating frequency of the at least one of the plurality of cores. 5. The processor of claim 1, wherein the processor comprises a multi-die processor. 6. The processor of claim 1, wherein the plurality of cores are to send and receive their power status on one or more interfaces including the interface. 7. The processor of claim 6, wherein the interface comprises a serial interface. 8. The processor of claim 7, wherein the serial interface is one of a two wire interface and a dedicated serial link interface. 9. The processor of claim ...

Подробнее
02-05-2013 дата публикации

Circuit Arrangement for a Data Processing System and Method for Data Processing

Номер: US20130111189A1
Принадлежит: ROBERT BOSCH GMBH

A circuit arrangement for a data processing system is configured to process data in multiple modules. The circuit arrangement is configured to provide a clock as well as a time base and/or a base of at least one further physical quantity for each of the multiple modules. The circuit arrangement also comprises a central routing unit, which is connected to several of the multiple modules. Via the central routing unit, the modules can periodically exchange data based on the time base and/or on the base of the at least one further physical quantity. The several modules are configured to process data independently of and in parallel to other modules of the several modules.

Подробнее
09-05-2013 дата публикации

COPROCESSOR HAVING TASK SEQUENCE CONTROL

Номер: US20130117533A1
Автор: Hayek Jan
Принадлежит:

A coprocessor has: a processing unit for processing tasks in a data-processing system subject to at least one master processor; at least one storage module having memory areas, assignable in each case to the tasks, for storing data assigned to the tasks; and a buffer area for buffering instructions assigned to the tasks, the instructions including processing instructions, and upon retrieval of the processing instructions from the buffer area, the data stored in the storage module being processed on the basis of the processing instructions. 1.-10. (canceled) 11. A coprocessor in a data-processing system having at least one master processor, the coprocessor comprising: a processing unit for processing tasks; at least one storage module having memory areas allocated to the tasks and storing data assigned to the tasks; and a buffer area for buffering instructions assigned to the tasks, wherein the instructions include processing instructions, and upon retrieval of the processing instructions from the buffer area, the data stored in the storage module are processed on the basis of the processing instructions. 12. The coprocessor as recited in claim 11, further comprising: a flow register, wherein the instructions further include status instructions, and a status of the processing of the data on the basis of the processing instructions is indicated in the flow register. 13. The coprocessor as recited in claim 12, wherein at least one of: (i) the memory area is in the form of at least one of RAM memory and a register; and (ii) the buffer area is in the form of at least one of a FIFO and a sequentially operating RAM memory. 14. The coprocessor as recited in claim 12, further comprising: a finite state machine which indicates in the flow register the status of the processing. 15. The coprocessor as recited in claim 14, wherein the processing unit is configured to process tasks in accordance with the finite state machine. 16. The coprocessor as recited in claim 14, wherein the ...

Подробнее
06-06-2013 дата публикации

Method of debugging control flow in a stream processor

Номер: US20130145070A1
Принадлежит: Maxeler Technologies Ltd

Disclosed is a method of monitoring operation of programmable logic for a streaming processor, the method comprising: generating a graph representing the programmable logic to be implemented in hardware, the graph comprising nodes and edges connecting nodes in the graph; inserting, on each edge, monitoring hardware to monitor flow of data along the edge. Also disclosed is a method of monitoring operation of programmable logic for a streaming processor, the method comprising: generating a graph representing the programmable logic to be implemented in hardware, the graph comprising nodes and edges connecting the nodes in the graph; inserting, on at least one edge, data-generating hardware arranged to receive data from an upstream node and generate data at known values having the same flow control pattern as the received data for onward transmission to a connected node.

Подробнее
20-06-2013 дата публикации

Reducing issue-to-issue latency by reversing processing order in half-pumped simd execution units

Номер: US20130159666A1
Принадлежит: International Business Machines Corp

Techniques for reducing issue-to-issue latency by reversing processing order in half-pumped single instruction multiple data (SIMD) execution units are described. In one embodiment a processor functional unit is provided comprising a frontend unit, an execution core unit, a backend unit, an execution order control signal unit, a first interconnect coupled between an output and an input of the execution core unit, and a second interconnect coupled between an output of the backend unit and an input of the frontend unit. In operation, the execution order control signal unit generates a forwarding order control signal based on the parity of an applied clock signal on reception of a first vector instruction. This control signal is in turn used to selectively forward first and second portions of an execution result of the first vector instruction via the interconnects for use in the execution of a dependent second vector instruction.

Подробнее
27-06-2013 дата публикации

MULTIPROCESSOR SYSTEM AND SYNCHRONOUS ENGINE DEVICE THEREOF

Номер: US20130166879A1
Принадлежит:

The invention discloses a multiprocessor system and a synchronous engine device thereof. The synchronous engine includes: a plurality of storage queues, wherein one of the queues stores all synchronization primitives from one of the processors; a plurality of scheduling modules, selecting the synchronization primitives for execution from the plurality of storage queues and then, according to the type of the synchronization primitive, transmitting the selected synchronization primitives to corresponding processing modules for processing, the scheduling modules corresponding in a one-to-one relationship with the storage queues; a plurality of processing modules, receiving the transmitted synchronization primitives to execute different functions; a virtual synchronous memory structure module, using small memory space and mapping main memory spaces of all processors into a synchronization memory structure to realize the function of all synchronization primitives through a control logic; a main memory port, communicating with the virtual synchronous memory structure module to read and write the main memory of all processors, and initiate an interrupt request to processors; a configuration register, storing various configuration information required by processing modules. 1. A synchronous engine device of a multiprocessor, characterized in that the synchronous engine device includes: a plurality of storage queues, being configured to receive synchronization primitives transmitted by a plurality of processors, wherein one of the queues stores all synchronization primitives from one of the processors; a plurality of scheduling modules, being configured to select the synchronization primitives for execution from the plurality of storage queues and then according to the type of the synchronization primitive transmitting the selected synchronization primitives to corresponding processing modules for processing, the scheduling modules corresponding in a one-to-one relationship with the ...

Подробнее
04-07-2013 дата публикации

PROGRAMMABLE DEVICE FOR SOFTWARE DEFINED RADIO TERMINAL

Номер: US20130173884A1
Принадлежит:

A programmable device suitable for software defined radio terminal is disclosed. In one aspect, the device includes a scalar cluster providing a scalar data path and a scalar register file and arranged for executing scalar instructions. The device may further include at least two interconnected vector clusters connected with the scalar cluster. Each of the at least two vector clusters provides a vector data path and a vector register file and is arranged for executing at least one vector instruction different from vector instructions performed by any other vector cluster of the at least two vector clusters. 1. A programmable device comprising: a scalar portion providing a scalar data path and a scalar register file and configured to execute scalar instructions; and at least two interconnected vector portions, the vector portions being connected with the scalar portion, each of the at least two vector portions providing a vector data path and a vector register file and configured to execute at least one vector instruction different from vector instructions performed by any other vector portion of the at least two vector portions. 2. The programmable device of claim 1, wherein the scalar portion and each of the at least two vector portions are provided with a local storage unit configured to store respective instructions. 3. The programmable device of claim 1, further comprising a software controlled interconnect for data communication between the vector portions. 4. The programmable device of claim 1, wherein a first vector portion of the at least two vector portions comprises operators for arithmetic logic unit instructions and wherein a second vector portion comprises multiplication operators. 5. The programmable device of claim 1, further comprising a programming unit configured to provide the at least one vector instruction. 6. The programmable device of claim 1, further comprising a second scalar portion, wherein the at least two interconnected vector ...

Подробнее
11-07-2013 дата публикации

Performing A Multiply-Multiply-Accumulate Instruction

Номер: US20130179661A1
Автор: Eric Sprangle
Принадлежит: Individual

In one embodiment, the present invention includes a processor having multiple execution units, at least one of which includes a circuit having a multiply-accumulate (MAC) unit including multiple multipliers and adders, and to execute a user-level multiply-multiply-accumulate instruction to populate a destination storage with a plurality of elements each corresponding to an absolute value for a pixel of a pixel block. Other embodiments are described and claimed.

Подробнее
18-07-2013 дата публикации

PROCESSOR WITH TABLE LOOKUP AND HISTOGRAM PROCESSING UNITS

Номер: US20130185539A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

A processor includes a scalar processor core and a vector coprocessor core coupled to the scalar processor core. The scalar processor core is configured to retrieve an instruction stream from program storage, and pass vector instructions in the instruction stream to the vector coprocessor core. The vector coprocessor core includes a register file, a plurality of execution units, and a table lookup unit. The register file includes a plurality of registers. The execution units are arranged in parallel to process a plurality of data values. The execution units are coupled to the register file. The table lookup unit is coupled to the register file in parallel with the execution units. The table lookup unit is configured to retrieve table values from one or more lookup tables stored in memory by executing table lookup vector instructions in a table lookup loop. 1. A processor, comprising: a scalar processor core; and a vector coprocessor core coupled to the scalar processor core; the scalar processor core configured to retrieve an instruction stream from program storage, the instruction stream comprising scalar instructions executable by the scalar processor core and vector instructions executable by the vector processor core, and pass the vector instructions to the vector coprocessor core; the vector coprocessor core comprising a register file comprising a plurality of registers; a plurality of execution units arranged in parallel to process the data values, the execution units coupled to the register file; and a table lookup unit coupled to the register file in parallel with the execution units, the table lookup unit configured to retrieve table values from one or more lookup tables stored in memory by executing table lookup vector instructions in a table lookup loop; wherein the vector coprocessor core is configured to identify table lookup vector instructions forming a complete table lookup loop and, based on identification of a complete table lookup loop, execute the table lookup vector ...

Подробнее
18-07-2013 дата публикации

Processor with multi-level looping vector coprocessor

Номер: US20130185540A1
Принадлежит: Texas Instruments Inc

A processor includes a scalar processor core and a vector coprocessor core coupled to the scalar processor core. The scalar processor core includes a program memory interface through which the scalar processor retrieves instructions from a program memory. The instructions include scalar instructions executable by the scalar processor and vector instructions executable by the vector coprocessor core. The vector coprocessor core includes a plurality of execution units and a vector command buffer. The vector command buffer is configured to decode vector instructions passed by the scalar processor core, to determine whether vector instructions defining an instruction loop have been decoded, and to initiate execution of the instruction loop by one or more of the execution units based on a determination that all of the vector instructions of the instruction loop have been decoded.

Подробнее
18-07-2013 дата публикации

Processor with instruction variable data distribution

Номер: US20130185544A1
Принадлежит: Texas Instruments Inc

A vector processor includes a plurality of execution units arranged in parallel, a register file, and a plurality of load units. The register file includes a plurality of registers coupled to the execution units. Each of the load units is configured to load, in a single transaction, a plurality of the registers with data retrieved from memory. The loaded registers corresponding to different execution units. Each of the load units is configured to distribute the data to the registers in accordance with an instruction selectable distribution. The instruction selectable distribution specifies one of plurality of distributions. Each of the distributions specifies a data sequence that differs from the sequence in which the data is stored in memory.
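
The instruction-selectable distribution can be pictured as choosing one of several reorderings of the fetched data before it is written to the per-lane registers. The pattern names below are hypothetical, not the patent's:

def distribute_load(memory_block, pattern):
    if pattern == "sequential":       # lane i receives element i
        return list(memory_block)
    if pattern == "interleave2":      # even-indexed elements first, then odd-indexed
        return memory_block[0::2] + memory_block[1::2]
    if pattern == "broadcast":        # every lane receives element 0
        return [memory_block[0]] * len(memory_block)
    raise ValueError("unknown distribution")

data = [10, 11, 12, 13, 14, 15, 16, 17]
print(distribute_load(data, "interleave2"))  # [10, 12, 14, 16, 11, 13, 15, 17]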

Подробнее
18-07-2013 дата публикации

Method, apparatus, and system for optimizing frequency and performance in a multidie microprocessor

Номер: US20130185577A1
Принадлежит: Individual

With the progress toward multi-core processors, each core cannot readily ascertain the status of the other dies with respect to an idle or active status. A proposal for utilizing an interface to transmit core status among multiple cores in a multi-die microprocessor is discussed. Consequently, this facilitates thermal management by allowing an optimal setting for performance and frequency based on each core status.

Подробнее
25-07-2013 дата публикации

PROCESSOR CONTROL APPARATUS AND METHOD THEREFOR

Номер: US20130191613A1
Автор: Takahashi Tetsuya
Принадлежит: CANON KABUSHIKI KAISHA

Whether each of a plurality of processor cores is in a suspend state or operation state is detected. The processor utilization of a processor core of interest in the operation state is acquired. The number of processes assigned to the processor core of interest is obtained. The stop control or startup control of a processor core is performed based on the suspend state or operation state, the processor utilization, and the number of processes. 1. A control apparatus for controlling a multicore processor which has a plurality of processor cores, the apparatus comprising: a detector configured to detect a suspend state or an operation state of each of the plurality of processor cores; an acquisition section configured to acquire processor utilization of a processor core of interest in the operation state; an obtaining section configured to obtain a number of processes assigned to the processor core of interest; and a controller configured to perform stop control or startup control of a processor core, based on the acquired processor utilization and the obtained number of processes. 2. The apparatus according to claim 1, further comprising a memory which holds a processor core stop threshold value and a process stop threshold value as conditions for shifting a processor core in the operation state to the suspend state, wherein, in a case that there are a plurality of processor cores each of which has the processor utilization smaller than the processor core stop threshold value, has the number of processes smaller than the process stop threshold value, and is in the operation state, the controller performs stop control on the processor core of interest. 3. The apparatus according to claim 2, wherein, in a case that the number of processes assigned to the processor core of interest is not less than the process stop threshold value, the controller performs no stop control on the processor core of interest. 4. The apparatus according to claim 1, ...
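
The stop condition of claims 2 and 3 reduces to a two-threshold test; a minimal sketch (threshold values are placeholders):

def should_stop_core(utilization, num_processes, util_stop=0.10, proc_stop=2):
    # Stop control is performed only when BOTH the utilization and the number of
    # assigned processes are below their respective stop thresholds (claims 2-3).
    return utilization < util_stop and num_processes < proc_stop

print(should_stop_core(0.05, 1))  # True  -> candidate for stop control
print(should_stop_core(0.05, 3))  # False -> enough assigned processes to keep it running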

Подробнее
01-08-2013 дата публикации

DISPLAY APPARATUS, UPGRADING APPARATUS, CONTROL METHOD THEREOF AND DISPLAY SYSTEM

Номер: US20130194283A1
Автор: Kim Moon-Soo
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

The display apparatus includes: an image processor which processes an image signal; a display which displays an image based on the processed image signal; an interface to which an upgrading apparatus for processing the image signal is connected; and a controller which controls the interface to cut off power supplied to the upgrading apparatus upon receiving a user's selection to change the upgrading apparatus mode from an active mode to a passive mode. Thus, the active and passive modes of the upgrading apparatus are changed without removing the upgrading apparatus and without rebooting the display apparatus, thereby improving user's convenience. 1. A display apparatus comprising: a first image processor which processes an image signal; a display which displays an image based on the processed image signal; an interface to which an upgrading apparatus comprising a second image processor for processing the image signal is connected; and a controller which controls the interface to cut off power supplied to the upgrading apparatus upon receiving a user's selection to change an upgrading apparatus mode from an active mode to a passive mode. 2. The display apparatus according to claim 1, wherein the controller perceives the connected upgrading apparatus as a storage medium when the upgrading apparatus mode is the passive mode. 3. The display apparatus according to claim 1, wherein the controller controls the first image processor to process the image signal, instead of transmitting the image signal to the upgrading apparatus to be processed by the second image processor, when the upgrading apparatus mode is changed to the passive mode. 4. The display apparatus according to claim 1, wherein the controller controls the interface to supply power to the upgrading apparatus upon receiving a user's input to change the upgrading apparatus mode from the passive mode to the active mode. 5. The display apparatus according to claim 4, wherein, when the upgrading ...

Подробнее
19-09-2013 дата публикации

PROCESSOR, ELECTRONIC CONTROL UNIT AND GENERATING PROGRAM

Номер: US20130246736A1
Автор: HONTANI Kenji
Принадлежит: TOYOTA JIDOSHA KABUSHIKI KAISHA

A processor in which plural cores perform respective programs includes: a first own core execution point acquiring part configured to acquire first code block information if a first core executes an execution history recording instruction described at an execution history recording point in the program, the first code block information indicating, with a single address, a series of instructions executed by the first core; a first other core execution point acquiring part configured to acquire first execution address information of an instruction, the instruction being executed by a second core, if the first core executes the execution history recording instruction; and a first execution point information recording part configured to record the first code block information and the first execution address information in a shared memory in time series such that they are associated with each other. 1. A processor in which plural cores perform respective programs , comprising:a first own core execution point acquiring part configured to acquire first code block information if a first core executes an execution history recording instruction described at an execution history recording point in the program, the first code block information indicating, with a single address, a series of instructions executed by the first core;a first other core execution point acquiring part configured to acquire first execution address information of an instruction, the instruction being executed by a second core, if the first core executes the execution history recording instruction;a first execution point information recording part configured to record ID information of the core which executes the execution history recording instruction, the first code block information and the first execution address information in a shared memory in time series such that they are associated with each other;a second own core execution point acquiring part configured to acquire second code block ...

Подробнее
19-09-2013 дата публикации

Vector find element not equal instruction

Номер: US20130246751A1
Принадлежит: International Business Machines Corp

Processing of character data is facilitated. A Find Element Not Equal instruction is provided that compares data of multiple vectors for inequality and provides an indication of inequality, if inequality exists. An index associated with the unequal element is stored in a target vector register. Further, the same instruction, the Find Element Not Equal instruction, also searches a selected vector for null elements, also referred to as zero elements. A result of the instruction is dependent on whether the null search is provided, or just the compare.
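
A reference-model sketch of the described semantics (argument names assumed): return the index of the first mismatch between two vectors, optionally treating a zero (null) element in the first vector as a hit as well, or the vector length if nothing is found.

def find_element_not_equal(a, b, search_zero=True):
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y or (search_zero and x == 0):
            return i
    return len(a)

print(find_element_not_equal([3, 7, 7, 9], [3, 7, 8, 9]))         # -> 2 (first mismatch)
print(find_element_not_equal([3, 7, 0, 9], [3, 7, 0, 9]))         # -> 2 (null element)
print(find_element_not_equal([3, 7, 0, 9], [3, 7, 0, 9], False))  # -> 4 (no hit found)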

Подробнее
26-09-2013 дата публикации

Processing System With Interspersed Processors and Communication Elements Having Improved Wormhole Routing

Номер: US20130254515A1
Принадлежит: Coherent Logix, Incorporated

A processing system includes processors and dynamically configurable communication elements (DCCs) coupled together in an interspersed arrangement. A source device may transfer a data item through an intermediate subset of the DCCs to a destination device. The source and destination devices may each correspond to different processors, DCCs, or input/output devices, or mixed combinations of these. In response to detecting a stall after the source device begins transfer of the data item to the destination device and prior to receipt of all of the data item at the destination device, a stalling device is operable to propagate stalling information through one or more of the intermediate subset towards the source device. In response to receiving the stalling information, at least one of the intermediate subset is operable to buffer all or part of the data item. 1. A system , comprising:a plurality of processors; anda plurality of dynamically configurable communication elements, each comprising a plurality of communication ports, a first memory, and a routing engine;wherein said plurality of processors and said plurality of dynamically configurable communication elements are coupled together in an interspersed arrangement;wherein different pathways are operable to be created for transfer of one or more messages among different subsets of said dynamically configurable communication elements, wherein each respective message comprises a plurality of data elements, the plurality of data elements comprising a header and a body;wherein the header comprises one or more navigation units, wherein the one or more navigation units comprise navigation information for creating a first pathway for the respective message among a first subset of said dynamically configurable communication elements.2. The system of claim 1 ,wherein at least a subset of the one or more navigation units further comprise at least one control bit and navigation commands.3. The system of claim 2 ,wherein the ...

Подробнее
31-10-2013 дата публикации

PERFORMING A DETERMINISTIC REDUCTION OPERATION IN A PARALLEL COMPUTER

Номер: US20130290673A1

Performing a deterministic reduction operation in a parallel computer that includes compute nodes, each of which includes computer processors and a CAU (Collectives Acceleration Unit) that couples computer processors to one another for data communications, including organizing processors and a CAU into a branched tree topology in which the CAU is a root and the processors are children; receiving, from each of the processors in any order, dummy contribution data, where each processor is restricted from sending any other data to the root CAU prior to receiving an acknowledgement of receipt from the root CAU; sending, by the root CAU to the processors in the branched tree topology, in a predefined order, acknowledgements of receipt of the dummy contribution data; receiving, by the root CAU from the processors in the predefined order, the processors' contribution data to the reduction operation; and reducing, by the root CAU, the processors' contribution data. 1. A method of performing a deterministic reduction operation in a parallel computer , the parallel computer comprising a plurality of compute nodes , each compute node comprising a plurality of computer processors and a Collectives Acceleration Unit (CAU) , the CAU coupling computer processors of compute nodes to one another for data communications in a cluster data communications network , the method comprising:organizing a particular plurality of processors of a particular plurality of compute nodes of the parallel computer and a root CAU into a branched tree topology, wherein the root CAU comprises a root of the branched tree topology and the particular plurality of processors comprise children of the root CAU;receiving, by the root CAU from each of the processors in the particular plurality of processors of the branched tree topology, in any order, dummy contribution data, wherein each processor of the particular plurality of processors is restricted from sending any other data to the root CAU prior to ...

Подробнее
14-11-2013 дата публикации

PERFORMING A CYCLIC REDUNDANCY CHECKSUM OPERATION RESPONSIVE TO A USER-LEVEL INSTRUCTION

Номер: US20130305011A1
Принадлежит:

In one embodiment, the present invention includes a method for receiving incoming data in a processor and performing a checksum operation on the incoming data in the processor pursuant to a user-level instruction for the checksum operation. For example, a cyclic redundancy checksum may be computed in the processor itself responsive to the user-level instruction. Other embodiments are described and claimed. 1.-6. (canceled) 7. A processor comprising: a plurality of cores, wherein at least one of the cores comprises a cache; a plurality of general purpose registers; and a plurality of execution units comprising a store data unit, an integer execution unit, a floating point execution unit, and a single instruction multiple data (SIMD) execution unit, wherein at least one of the plurality of execution units comprises logic to: perform a cyclic redundancy check (CRC) operation in response to one or more CRC32 instructions executed in a 32-bit mode of operation or a 64-bit mode of operation, wherein the logic is to perform the CRC operation on one of a plurality of data sizes, including a data size of 8-bits, 16-bits, and 32-bits, and wherein the one or more CRC32 instructions are to indicate the data size on which to perform the CRC operation. 8. The processor of claim 7, wherein each CRC instruction is referenced by a respective opcode to perform the CRC operation on each data size. 9. The processor of claim 7, wherein the variable number of data sizes further comprises a data size of 64-bits in the 64-bit mode of operation. This application is a continuation of U.S. patent application Ser. No. 13/796,032, filed Mar. 12, 2013, which is a continuation of U.S. patent application Ser. No. 13/484,787, filed May 31, 2012, which is now U.S. Pat. No. 8,413,024 issued on Apr. 2, 2013, which is a continuation of U.S. patent application Ser. No. 13/097,462, filed Apr. 29, 2011, which is now U.S. Pat. No. 8,225,184 issued on Jul. 17, 2012, which is a continuation of U.S. ...

Подробнее
14-11-2013 дата публикации

Performing a cyclic redundancy checksum operation responsive to a user-level instruction

Номер: US20130305118A1
Принадлежит: Intel Corp

In one embodiment, the present invention includes a method for receiving incoming data in a processor and performing a checksum operation on the incoming data in the processor pursuant to a user-level instruction for the checksum operation. For example, a cyclic redundancy checksum may be computed in the processor itself responsive to the user-level instruction. Other embodiments are described and claimed.
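
A bit-serial reference model of accumulating such a checksum over 8-, 16- or 32-bit operands; the reflected CRC-32C (Castagnoli) polynomial 0x82F63B78 is assumed here, which is one common choice but is not stated in this abstract.

def crc32_step(crc, value, size_bits):
    # Fold one operand of the selected data size into the running CRC.
    crc ^= value & ((1 << size_bits) - 1)
    for _ in range(size_bits):
        crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc

crc = 0xFFFFFFFF
for byte in b"123456789":
    crc = crc32_step(crc, byte, 8)    # data size selected per instruction
print(hex(crc ^ 0xFFFFFFFF))          # 0xe3069283, the CRC-32C check value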

Подробнее
05-12-2013 дата публикации

SYSTEM AND METHOD FOR DISTRIBUTED COMPUTING

Номер: US20130326191A1
Принадлежит:

The invention refers to tightly coupled multiprocessor distributed computing systems. The proposed solution enables to develop distributed applications as usual monolithic applications with use of typical compilers and builders. These applications support complicated logic of interaction between elements executed in different nodes and, at that, have limited complexity of development. The invention determines requirements to a distributed application and a method of its execution, memory organization and system node interaction manner. 1. A method of distributed computing in a distributed system comprising of one or more than one interacting execution modules , one or more than one memory modules connected to the said execution modules and containing execution module instructions , in which the said execution modules support instructions of remote control transfer with return , characterized in that for memory addressing it is necessary to use distributed memory with base-displacement addressing for which:a memory block is allocated when control under the said control transfer instruction is obtained or when the first instruction in a memory module of an execution module is performed,the said memory block is deallocated when control in distributed stack of calls is returned,when transferring control from one execution module to another remotely, size of the said memory block is fixed, and current size relating to the said distributed memory and being a sum of sizes of all allocated blocks in distributed stack of calls is transferred,relative address in the said distributed memory is computed from the beginning of the first block, at that, collection of blocks in stack of distributed calls is considered as continuous memory,when executing instructions with distributed base-displacement addressing, an execution module controls overrun of its memory block,if an execution module detects overrun of its memory block, it requests data of the previous execution module in ...

Подробнее
02-01-2014 дата публикации

APPARATUS AND METHOD OF VECTOR UNIT SHARING

Номер: US20140006748A1
Принадлежит: COGNIVUE CORPORATION

A reconfigurable vector processor is described that allows the size of its vector units to be changed in order to process vectors of different sizes. The reconfigurable vector processor comprises a plurality of processor units. Each of the processor units comprises a control unit for decoding instructions and generating control signals, a scalar unit for processing instructions on scalar data, and a vector unit for processing instructions on vector data under control of control signals. The reconfigurable vector processor architecture also comprises a vector control selector for selectively providing control signals generated by one processor unit of the plurality of processor units to the vector unit of a different processor unit of the plurality of processor units. 1. A reconfigurable vector processor comprising: a plurality of processor units, each comprising a control unit for decoding instructions and generating control signals, a scalar unit for processing instructions on scalar data, and a vector unit for processing instructions on vector data based on the generated control signals; and a vector control selector for selectively providing control signals generated by one of the plurality of processor units to a vector unit associated with a different processor unit of the plurality of processor units. 2. The reconfigurable vector processor of claim 1, wherein the vector control selector comprises a vector control multiplexer associated with a first processor unit of the plurality of processor units for selectively coupling the vector unit of the first processor unit to the control unit of the first processor unit or to a control unit of a second processor unit of the plurality of processor units to selectively provide the one or more control signals generated by the first processor unit or the second processor unit to the vector unit of the first processor unit. 3. The reconfigurable vector processor of claim 1, wherein the vector control selector ...

Подробнее
02-01-2014 дата публикации

Source Code Level Multistage Scheduling Approach for Software Development and Testing for Multi-Processor Environments

Номер: US20140006751A1
Принадлежит: LSI Corporation

In one embodiment, a heterogeneous multi-processor computer system includes (i) a plurality of dedicated processors (DPs), each DP configured to implement one or more program modules during runtime operations; (ii) two or more control processors (CPs), each CP configured to run scheduling software for controlling the runtime operations by a corresponding subset of DPs; and (iii) one or more buses interconnecting the DPs and CPs. Each CP is configured to vary timing of implementation of the program modules for the corresponding subset of DPs based on resource availability, and each CP is configured to vary timing of data transfers by the corresponding subset of DPs based on resource availability. 1. A heterogeneous multi-processor computer system comprising: a plurality of dedicated processors (DPs), each DP configured to implement one or more program modules during runtime operations; a plurality of control processors (CPs), each CP configured to run scheduling software for controlling the runtime operations by a corresponding subset of DPs; and one or more buses interconnecting the DPs and CPs, wherein each CP is configured to vary timing of implementation of the program modules for the corresponding subset of DPs based on resource availability, and each CP is configured to vary timing of data transfers by the corresponding subset of DPs based on the resource availability. 2. The invention of claim 1, wherein the resource availability comprises one or more of processor availability, memory availability, and bus availability. 3. The invention of claim 1, wherein the runtime operations of the heterogeneous multi-processor computer system are implemented during an online processing phase of a software development scheme that further comprises an offline processing phase that generates the scheduling software for the heterogeneous multi-processor computer system. 4. The invention of claim 3, wherein the offline processing phase maps each program ...

Подробнее
06-02-2014 дата публикации

Programmable device for software defined radio terminal

Номер: US20140040594A1

A programmable device suitable for software defined radio terminal is disclosed. In one aspect, the device includes a scalar cluster providing a scalar data path and a scalar register file and arranged for executing scalar instructions. The device may further include at least two interconnected vector clusters connected with the scalar cluster. Each of the at least two vector clusters provides a vector data path and a vector register file and is arranged for executing at least one vector instruction different from vector instructions performed by any other vector cluster of the at least two vector clusters.

Подробнее
20-03-2014 дата публикации

INTELLIGENT ARCHITECTURE CREATOR

Номер: US20140082325A1
Принадлежит:

Systems and methods are disclosed to automatically generate a processor architecture for a custom integrated circuit (IC) described by a computer readable code. The IC has one or more timing and hardware constraints. The system extracts parameters defining the processor architecture from a static profile and a dynamic profile of the computer readable code; iteratively optimizes the processor architecture by changing one or more parameters until all timing and hardware constraints expressed as a cost function are met; and synthesizes the generated processor architecture into a computer readable description of the custom integrated circuit for semiconductor fabrication. 1. A method to automatically generate a processor architecture for a custom integrated circuit (IC) described by a computer readable code, the IC having at least one or more timing and hardware constraints, comprising: a. extracting parameters defining the processor architecture from a static profile and a dynamic profile of the computer readable code; b. iteratively optimizing the processor architecture by changing one or more parameters until all timing and hardware constraints expressed as a cost function are met and using a compiler to compile, assemble and link code for each processor architecture iteration to arrive at a customized architecture; and c. synthesizing the generated processor architecture into a computer readable description of the custom integrated circuit for semiconductor fabrication. 2. The method of claim 1, comprising optimizing processor scalarity and instruction grouping rules. 3. The method of claim 1, comprising optimizing the number of processor cores needed and automatically splitting an instruction stream to use the processor cores effectively. 4. The method of claim 1, wherein the processor architecture optimization comprises changing an instruction set, including reducing the number of instructions required and encoding the instructions to improve instruction ...

Подробнее
20-03-2014 дата публикации

VEHICLE ELECTRONIC CONTROLLER

Номер: US20140082326A1
Принадлежит: SUMITOMO WIRING SYSTEMS, LTD.

Some embodiments relate to a vehicle electronic controller having a microcomputer and a port expansion element, with reduced power consumption and radio noise. An MCU (microcomputer) performs determination processing that determines whether an output condition is established that is based on a signal that is input via a signal input port of the MCU. If the output condition is established, the MCU transmits a signal output instruction to a port expansion element via a communication port, and if not, the instruction is not transmitted. The port expansion element outputs a signal via a signal output port in response to an instruction from the MCU. The port expansion element automatically switches, depending on whether communication via the MCU is being suspended, between operation in a waiting mode in which the internal oscillation circuit is suspended, and operation in a normal mode in which the internal oscillation circuit is operated. 1. A vehicle electronic controller , comprising:a microcomputer having a signal input port and a first communication port, the microcomputer configured to provide an instruction; anda port expansion element having: (i) a second communication port electrically connected to the first communication port of the microcomputer, (ii) a plurality of signal output ports that output signals in response to receipt of the instruction from the microcomputer, and (iii) an internal oscillation circuit,wherein automatic switching is performed, based on whether communication via the second communication port is being suspended, the automatic switching including switching between a waiting mode in which the internal oscillation circuit is suspended, and a normal mode in which the internal oscillation circuit is operated, andwherein the microcomputer performs determination processing that determines whether an output condition is established that is based on a direct input signal that is input via the signal input port of the microcomputer, such that: 1) ...
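
A compact sketch of the MCU / port-expansion interaction described above: the MCU only communicates when the output condition is established, and the expansion element suspends its internal oscillator while communication is idle. The class names, threshold, and instruction format are assumptions.

```python
# Illustrative sketch of the MCU / port-expansion interaction (names invented).
class PortExpansionElement:
    def __init__(self):
        self.mode = "waiting"          # internal oscillation circuit suspended

    def receive(self, instruction):
        self.mode = "normal"           # communication resumes -> run the oscillator
        print(f"expansion element: output on port {instruction['port']}")

    def idle(self):
        self.mode = "waiting"          # no communication -> suspend the oscillator

class MCU:
    def __init__(self, expansion):
        self.expansion = expansion

    def poll(self, input_signal):
        # Determination processing: only talk to the expansion element when the
        # output condition holds, reducing power consumption and radio noise.
        if input_signal > 0.5:                      # output condition (assumed form)
            self.expansion.receive({"port": 3, "level": 1})
        else:
            self.expansion.idle()

mcu = MCU(PortExpansionElement())
mcu.poll(0.9)   # condition established -> instruction sent, normal mode
mcu.poll(0.1)   # condition not established -> no traffic, waiting mode
```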

Подробнее
02-01-2020 дата публикации

DATA PROCESSING SYSTEMS FOR IDENTITY VALIDATION OF DATA SUBJECT ACCESS REQUESTS AND RELATED METHODS

Номер: US20200004507A1
Принадлежит: OneTrust, LLC

In particular embodiments, a computer-implemented data processing method for responding to a data subject access request comprises: (A) receiving a data subject access request from a requestor comprising one or more request parameters; (B) validating an identity of the requestor by prompting the requestor to identify information associated with the requestor; (C) in response to validating the identity of the requestor, processing the request by identifying one or more pieces of personal data associated with the requestor, the one or more pieces of personal data being stored in one or more data repositories associated with a particular organization; and (D) taking one or more actions based at least in part on the data subject access request, the one or more actions including one or more actions related to the one or more pieces of personal data. 1. A computer-implemented data processing method for validating a data subject access request , the computer-implemented data processing method comprising:receiving, by at least one computer processor, a data subject access request provided by a requestor, wherein the data subject access request comprises a request for a particular organization to perform one or more actions with regard to one or more pieces of personal data associated with a data subject that the particular organization has obtained on the identified data subject;determining, by at least one computer processor, one or more authentication methods required to validate the requestor as the data subject of the data subject access request;providing, by at least one computer processor, the one or more authentication methods to the requestor, wherein each of the one or more authentication methods comprise prompting the requestor to provide particular information to validate the requestor as the data subject;determining, by at least one processor, whether to validate the requestor as the data subject based at least in part on the particular information provided by ...
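
A short sketch of the request flow in the abstract, assuming invented repository and identity-record structures: validate the requestor against information the organization already holds, then locate the personal data and act on it.

```python
# Illustrative flow only; the authentication check and repository layout are
# invented placeholders, not the patented system's data model.
def validate_identity(provided_info, expected_info):
    """Prompt-and-check step: the requestor must supply information that the
    organization already associates with the data subject."""
    return all(provided_info.get(k) == v for k, v in expected_info.items())

def process_dsar(request, repositories):
    expected = repositories["identity_records"][request["requestor"]]
    if not validate_identity(request["identity_proof"], expected):
        return {"status": "rejected", "reason": "identity not validated"}
    personal_data = [record
                     for repo in repositories["data_stores"]
                     for record in repo
                     if record["subject"] == request["requestor"]]
    # Take the requested action (access, deletion, ...) on the located data.
    return {"status": "fulfilled", "action": request["action"], "records": personal_data}

repos = {
    "identity_records": {"alice": {"postcode": "1010"}},
    "data_stores": [[{"subject": "alice", "field": "email"}], []],
}
print(process_dsar({"requestor": "alice", "action": "access",
                    "identity_proof": {"postcode": "1010"}}, repos))
```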

Подробнее
07-01-2021 дата публикации

PROCESSOR WITH TABLE LOOKUP UNIT

Номер: US20210004349A1
Принадлежит:

A processor includes a scalar processor core and a vector coprocessor core coupled to the scalar processor core. The scalar processor core is configured to retrieve an instruction stream from program storage, and pass vector instructions in the instruction stream to the vector coprocessor core. The vector coprocessor core includes a register file, a plurality of execution units, and a table lookup unit. The register file includes a plurality of registers. The execution units are arranged in parallel to process a plurality of data values. The execution units are coupled to the register file. The table lookup unit is coupled to the register file in parallel with the execution units. The table lookup unit is configured to retrieve table values from one or more lookup tables stored in memory by executing table lookup vector instructions in a table lookup loop.

1. A device comprising: a vector core configured to couple to a memory, wherein the vector core includes: a set of address generators; and a table lookup unit configured to: receive an instruction to retrieve a set of table values from a set of tables stored in the memory, wherein the instruction includes a first field that specifies a first address generator of the set of address generators that stores a table offset associated with a first table of the set of tables; and in response to the instruction, read the set of table values from the memory.
2. The device of claim 1, wherein the instruction includes a second field that specifies a number of the set of tables from which the set of table values are to be read.
3. The device of claim 1, wherein the instruction includes a second field that specifies a number of table values to be read from each table of the set of tables.
4. The device of claim 1, wherein the instruction includes a second field that specifies an offset of a first table value of the set of table values into the first table.
5. The device of claim 1, wherein the table lookup unit is ...

Подробнее
02-01-2020 дата публикации

CORE MAPPING

Номер: US20200004721A1
Принадлежит:

The disclosed technology is generally directed to peripheral access. In one example of the technology, stored configuration information is read. The stored configuration information is associated with mapping a plurality of independent execution environments to a plurality of peripherals such that the peripherals of the plurality of peripherals have corresponding independent execution environments of the plurality of independent execution environments. A configurable interrupt routing table is programmed based on the configuration information. An interrupt is received from a peripheral. The interrupt is routed to the corresponding independent execution environment based on the configurable interrupt routing table. 120-. (canceled)21. An apparatus , comprising:a plurality of processing cores;a plurality of peripherals; anda configurable interrupt routing table that selectively maps each of the plurality of peripherals to an individual processing core of the plurality of processing cores, wherein the mapping of each of the plurality of peripherals to the individual processing core is configurable while a lock bit of the apparatus is not set, wherein the mapping of each of the plurality of peripherals to the individual processing core is locked in response to the lock bit of the apparatus being set, and wherein, once locked, the mapping of each of the plurality of peripherals to the individual processing core remains locked until a reboot of the apparatus.22. The apparatus of claim 21 , wherein the configurable interrupt routing table includes a plurality of configuration registers.23. The apparatus of claim 21 , wherein a first processing core of the plurality of processing cores is associated with at least two independent execution environments.24. The apparatus of claim 23 , wherein a first independent execution environment associated with the first processing core is a Secure World operating environment of the first processing core claim 23 , and wherein a second ...
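
The claims describe an interrupt routing table whose peripheral-to-core mapping is configurable only while a lock bit is clear and, once the bit is set, stays locked until reboot. A toy Python model of that behaviour follows; the register layout and API are assumptions.

```python
# Illustrative model of a lockable interrupt routing table (not the real registers).
class InterruptRouter:
    def __init__(self, num_peripherals):
        self.route = {p: 0 for p in range(num_peripherals)}  # peripheral -> core
        self.lock_bit = False

    def map_peripheral(self, peripheral, core):
        if self.lock_bit:
            raise PermissionError("routing table is locked until reboot")
        self.route[peripheral] = core

    def set_lock(self):
        self.lock_bit = True            # cleared only by a reboot of the apparatus

    def deliver(self, peripheral):
        core = self.route[peripheral]
        print(f"interrupt from peripheral {peripheral} -> core {core}")

router = InterruptRouter(num_peripherals=4)
router.map_peripheral(2, core=1)
router.set_lock()
router.deliver(2)                        # routed to core 1
# router.map_peripheral(2, core=0)       # would raise: mapping locked until reboot
```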

Подробнее
02-01-2020 дата публикации

DATA PROCESSING SYSTEMS AND METHODS FOR AUTOMATICALLY DETECTING AND DOCUMENTING PRIVACY-RELATED ASPECTS OF COMPUTER SOFTWARE

Номер: US20200004762A1
Принадлежит: OneTrust, LLC

Data processing systems and methods according to various embodiments are adapted for automatically detecting and documenting privacy-related aspects of computer software. Particular embodiments are adapted for: (1) automatically scanning source code to determine whether the source code include instructions for collecting personal data; and (2) facilitating the documentation of the portions of the code that collect the personal data. For example, the system may automatically prompt a user for comments regarding the code. The comments may be used, for example, to populate: (A) a privacy impact assessment; (B) system documentation; and/or (C) a privacy-related data map. The system may comprise, for example, a privacy comment plugin for use in conjunction with a code repository. 1. A data processing computer system for automatically analyzing computer code to determine whether computer software associated with the computer code collects personal data , the system comprising:at least one computer processor; andcomputer memory storing computer-executable instructions for:automatically, by at least one computer processor, analyzing at least one segment of the computer code to determine whether the at least one segment of computer code comprises instructions for collecting one or more pieces of personal data;in response to determining that the at least one segment of computer code comprises instructions for collecting one or more pieces of personal data, automatically, by at least one computer processor, prompting a user to input particular information as to why the at least one segment of computer code comprises instructions for collecting the one or more pieces of personal data;receiving the particular information from the user; andat least partially in response to receiving the particular information from the user, using the particular information to complete at least one action selected from a group consisting of:(A) using the particular information to populate at least ...
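
A toy illustration of the scanning-and-documenting idea: flag code lines that appear to collect personal data and attach the developer's explanation for later use in an assessment or data map. The regex patterns and prompt are invented examples, not the product's detection rules.

```python
# Toy scanner: flag code lines that appear to collect personal data and attach
# developer-supplied comments. Patterns and behaviour are illustrative only.
import re

PERSONAL_DATA_PATTERNS = [
    re.compile(r"\b(email|ssn|phone|geolocation|date_of_birth)\b", re.I),
]

def scan_source(lines):
    findings = []
    for number, line in enumerate(lines, start=1):
        if any(p.search(line) for p in PERSONAL_DATA_PATTERNS):
            findings.append({"line": number, "code": line.strip()})
    return findings

def document_findings(findings, ask=input):
    documented = []
    for f in findings:
        reason = ask(f"Line {f['line']} collects personal data ({f['code']}). Why? ")
        documented.append({**f, "reason": reason})
    return documented   # could feed a privacy impact assessment or data map

source = ["user.email = form['email']", "total = a + b"]
print(scan_source(source))   # only the first line is flagged
```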

Подробнее
02-01-2020 дата публикации

SYSTEM AND METHOD FOR AUTOMATED MULTI-DIMENSIONAL NETWORK MANAGEMENT

Номер: US20200004767A1
Принадлежит:

Systems, methods, and devices for automated provisioning are disclosed herein. The system can include a memory including a user profile database having n-dimension attributes of a user. The system can include a user device and a source device. The system can include a server that can: generate and store a user profile in the user profile database and generate and store a characterization vector from the user profile. The server can identify a service for provisioning, receive updates to at least some of the attributes of the first user, and trigger regeneration of the characterization vector from the received inputs. The server can: regenerate the characterization vector, determine an efficacy of the provisioned services, and automatically identify a second service for provisioning for a second user based on the efficacy of the provisioned services to the first user. 1. (canceled)2. An automated multi-dimensional network management system comprising:a memory comprising: an electronic health records (EHR) database; and a network database comprising a plurality of nodes linked by a plurality of edges, at least some of the nodes corresponding to a user state, a user characteristic, and a remediation; and identify, via a machine-learning model, a first remediation to mitigate a likelihood of an adverse outcome identified in a risk profile based on the user state of a first user;', 'identify a data insufficiency based on missing data in the EHR database, wherein the data insufficiency prevents identification of a remediation;', 'select a medical service comprising a digital component and a non-digital component for provisioning to the first user, the medical service is selected to generate data to remedy the data insufficiency;', 'resolve the data insufficiency via provisioning of the selected medical service and receipt of electronic data generated from the provisioned medical service;', 'upon resolution of the data insufficiency, identify a second remediation to ...

Подробнее
03-01-2019 дата публикации

STREAMING ENGINE WITH SHORT CUT START INSTRUCTIONS

Номер: US20190004853A1
Принадлежит:

A streaming engine employed in a digital data processor specifies a fixed read only data stream recalled from memory. Streams are started by one of two types of stream start instructions. A stream start ordinary instruction specifies a register storing a stream start address and a register storing a stream definition template which specifies stream parameters. A stream start short-cut instruction specifies a register storing a stream start address and an implied stream definition template. A functional unit is responsive to a stream operand instruction to receive at least one operand from a stream head register. The stream template supports plural nested loops with short-cut start instructions limited to a single loop. The stream template supports data element promotion to larger data element size with sign extension or zero extension. A set of allowed stream short-cut start instructions includes various data sizes and promotion factors.

1. A digital data processor comprising: an instruction memory storing instructions each specifying a data processing operation and at least one data operand field; an instruction decoder connected to said instruction memory for sequentially recalling instructions from said instruction memory and determining said specified data processing operation and said specified at least one operand; at least one functional unit connected to said data register file and said instruction decoder for performing data processing operations upon at least one operand corresponding to an instruction decoded by said instruction decoder and storing results; and a streaming engine connected to said instruction decoder, the streaming engine including: an address generator for generating stream memory addresses corresponding to said stream of an instruction specified sequence of a plurality of data elements, and a stream head register storing a data element of said stream next to be used by said at least one functional unit; the streaming engine operable in response to a stream start instruction to recall from memory a ...

Подробнее
03-01-2019 дата публикации

EXECUTION OF AN INSTRUCTION FOR PERFORMING A CONFIGURATION VIRTUAL TOPOLOGY CHANGE

Номер: US20190004867A1
Принадлежит:

In a logically partitioned host computer system comprising host processors (host CPUs) partitioned into a plurality of guest processors (guest CPUs) of a guest configuration, a perform topology function instruction is executed by a guest processor specifying a topology change of the guest configuration. The topology change preferably changes the polarization of guest CPUs, the polarization being related to the amount of a host CPU resource provided to a guest CPU.

1. A computer-implemented method comprising: executing, by a processor, a perform topology function instruction to request a configuration change of a topology of a plurality of processors of a configuration, the executing comprising: obtaining, based on the perform topology function instruction, a requested horizontal polarization change of the topology; determining whether the requested horizontal polarization change may be performed; initiating the requested horizontal polarization change, based on determining the requested horizontal polarization change may be performed; rejecting the requested horizontal polarization change, based on determining the requested horizontal polarization change is not to be performed; and setting a condition code, by the processor, to a value indicating whether the requested horizontal polarization change is initiated or rejected.
2. The computer-implemented method according to claim 1, wherein the plurality of processors are guest processors and the configuration is a guest configuration in a logically partitioned computer system.
3. The computer-implemented method according to claim 1, wherein the perform topology function instruction comprises an opcode field and a register field for requesting the change in horizontal polarization.
4. The computer-implemented method according to claim 3, further comprising: obtaining from a function code (FC) field of a register specified by the register field a function code, the function code comprising a horizontal ...

Подробнее
07-01-2021 дата публикации

Privacy management systems and methods

Номер: US20210004740A1
Принадлежит: OneTrust LLC

Data processing systems and methods, according to various embodiments, are adapted for mapping various questions regarding a data breach from a master questionnaire to a plurality of territory-specific data breach disclosure questionnaires. The answers to the questions in the master questionnaire are used to populate the territory-specific data breach disclosure questionnaires and determine whether disclosure is required in territory. The system can automatically notify the appropriate regulatory bodies for each territory where it is determined that data breach disclosure is required.

Подробнее
03-01-2019 дата публикации

MONOLITHIC SILICON BRIDGE STACK INCLUDING A HYBRID BASEBAND DIE SUPPORTING PROCESSORS AND MEMORY

Номер: US20190006318A1
Принадлежит:

A semiconductive device stack, includes a baseband processor die with an active surface and a backside surface, and a recess in the backside surface. A recess-seated device is disposed in the recess, and a through-silicon via in the baseband processor die couples the baseband processor die at the active surface to the recess-seated die at the recess. A processor die is disposed on the baseband processor die backside surface, and a memory die is disposed on the processor die. The several dice are coupled by through-silicon via groups. 1. A semiconductive device stack , comprising:a baseband processor die including an active surface and a backside surface;a recess disposed in the backside surface;recess-seated device disposed in the recess; anda through-silicon via (TSV) in the baseband processor die that couples the active surface to the recess-seated die at the recess.2. The semiconductive device stack of claim 1 , further including:a processor die disposed on the baseband processor die backside surface, wherein the processor die is coupled to the baseband die through a TSV at the backside surface.3. The semiconductive device stack of claim 1 , further including:a processor die disposed on the baseband processor die backside surface, wherein the processor die is coupled to the baseband die through a TSV at the backside surface; anda memory die disposed on the processor die, wherein the processor die and the memory die communicate through a TSV in the processor die.4. The semiconductive device stack of claim 1 , further including:a processor die disposed on the baseband processor die backside surface, wherein the processor die is coupled to the baseband die through a TSV at the backside surface;a memory die disposed on the processor die, wherein the processor die and the memory die communicate through a TSV in the processor die;a redistribution layer (RDL) disposed on the active surface;a ball-grid array disposed on the RDL; anda package substrate coupled to the ball ...

Подробнее
20-01-2022 дата публикации

DATA PROCESSING SYSTEMS FOR GENERATING AND POPULATING A DATA INVENTORY

Номер: US20220019693A1
Принадлежит: OneTrust, LLC

A computer-implemented method for populating a privacy-related data model by: (1) providing a data model that comprises one or more respective populated or unpopulated fields; (2) determining that at least a particular one of the fields for a particular data asset is an unpopulated field; (3) at least partially in response to determining that the at least one particular field is unpopulated, automatically generating a privacy questionnaire comprising at least one question that, if properly answered, would result in a response that may be used to populate the at least one particular unpopulated field; (4) transmitting the privacy questionnaire to at least one individual; (5) receiving a response to the questionnaire, the response comprising a respective answer to the at least one question; and (6) in response to receiving the response, populating the at least one particular unpopulated field with information from the received response.

1. A method comprising: identifying, by computing hardware, a data inventory, wherein the data inventory defines a plurality of inventory attributes for a data asset; determining, by the computing hardware, that a first inventory attribute of the plurality of inventory attributes is unpopulated; responsive to determining that the first inventory attribute is unpopulated, automatically generating, by the computing hardware, an inventory questionnaire for populating the first inventory attribute, the inventory questionnaire comprising: a question soliciting an answer for populating the first inventory attribute; and a user-selectable risk indicium associated with the question; causing, by the computing hardware, a video display device to present a graphical user interface comprising the inventory questionnaire to a user; receiving, by the computing hardware as input via the graphical user interface, a response to the inventory questionnaire, the response comprising the answer to the question and an indication of a user selection of the ...

Подробнее
27-01-2022 дата публикации

HIGH-PERFORMANCE INPUT-OUTPUT DEVICES SUPPORTING SCALABLE VIRTUALIZATION

Номер: US20220027207A1
Принадлежит: Intel Corporation

Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances of a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from the guest pertaining to the virtual device, and determines whether the request from the guest is a fast-path operation to be passed directly to one of the one or more AI instances of the I/O device or a slow-path operation that is to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via the software executed by the electronic device.

1.-20. (canceled)
21. A processor comprising: an interface to an input/output (I/O) device; and a processor core to compose a virtual device including one or more assignable interface (AI) instances of a hosting function exposed by the I/O device, wherein the virtual device is to be utilized by a guest to be executed by the processor core, wherein the one or more AI instances are independently assignable to the guest via the virtual device to provide I/O device functionality to the guest.
22. The processor of claim 21, wherein the processor core is also to: emulate device resources of the I/O device via the virtual device; and intercept a request from the guest pertaining to the virtual device.
23. The processor of claim 22, wherein the processor core is also to determine whether the request from the guest is a first operation to be passed directly to one of the one or more AI instances of the I/O device or a second operation to be at least partially serviced via software to be executed by the core.
24. The processor of claim 23, wherein the processor core is also to, in response to determining that the request is ...
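
The fast-path/slow-path split can be sketched as a small dispatcher: requests on an assumed fast-path list go straight to an assignable interface instance, while everything else is emulated in host software. The operation names and the round-robin choice are illustrative assumptions.

```python
# Illustrative dispatcher for the fast-path / slow-path split described above.
FAST_PATH_OPS = {"submit_descriptor", "ring_doorbell"}   # assumed classification

class AssignableInterface:
    def __init__(self, ident):
        self.ident = ident

    def submit(self, request):
        return {"op": request["op"], "handled_by": f"AI instance {self.ident}"}

class VirtualDevice:
    def __init__(self, ai_instances):
        self.ai_instances = ai_instances     # assignable interfaces of the I/O device

    def handle(self, request):
        if request["op"] in FAST_PATH_OPS:
            # Fast path: forward directly to one of the AI instances.
            ai = self.ai_instances[request["queue"] % len(self.ai_instances)]
            return ai.submit(request)
        # Slow path: emulate the device resource in host software.
        return {"op": request["op"], "handled_by": "host software"}

vdev = VirtualDevice([AssignableInterface(0), AssignableInterface(1)])
print(vdev.handle({"op": "ring_doorbell", "queue": 3}))   # fast path -> AI instance 1
print(vdev.handle({"op": "configure_msix", "queue": 0}))  # slow path -> host software
```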

Подробнее
08-01-2015 дата публикации

PROCESSOR USING MINI-CORES

Номер: US20150012723A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

A mini-core and a processor using such a mini-core are provided in which functional units of the mini-core are divided into a scalar domain processor and a vector domain processor. The processor includes at least one such mini-core, and all or a portion of functional units from among the functional units of the mini-core operate based on an operation mode. 1. A mini-core comprising:a scalar domain processor configured to process scalar data;a vector domain processor configured to process vector data; anda pack/unpack functional unit (FU) configured to be shared by the scalar domain processor and the vector domain processor, and to process a conversion of data to be transmitted between the scalar domain processor and the vector domain processor.2. The mini-core of claim 1 , wherein the scalar domain processor comprises a scalar FU configured to process scalar data.3. The mini-core of claim 1 , wherein the pack/unpack FU is configured to convert multiple instances of scalar data to an instance of vector data claim 1 , and to generate an instance of scalar data by extracting an element at a predetermined position of the vector data.4. The mini-core of claim 1 , wherein the vector domain processor comprises:a vector load (LD)/store (ST) FU configured to process loading and storing of vector data; anda vector FU configured to process the vector data.5. The mini-core of claim 4 , wherein the vector domain processor comprises vector FUs and the vector domain processor operates by interconnecting the vector FUs to process vector data of a longer bit length than a bit-length processable by the vector FUs individually.6. The mini-core of claim 4 , wherein the vector domain processor further comprises:a vector memory configured to store the vector data.7. The mini-core of claim 1 , wherein the mini-core transmits the scalar data to another mini-core via a scalar data channel claim 1 , andthe mini-core transmits the vector data to the other mini-core via a vector data channel.8 ...
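
A minimal sketch of the pack/unpack functional unit's job, modelling the vector register as a wide integer made of fixed-width lanes. The 16-bit lane width is an arbitrary assumption.

```python
# Illustrative pack/unpack between scalar and vector domains, modelling the
# vector register as a wide word of fixed-width lanes (lane width is assumed).
LANE_BITS = 16

def pack(scalars):
    """Combine multiple scalar instances into one vector word."""
    word = 0
    for i, s in enumerate(scalars):
        word |= (s & ((1 << LANE_BITS) - 1)) << (i * LANE_BITS)
    return word

def unpack(vector_word, position):
    """Generate a scalar by extracting the element at the given lane position."""
    return (vector_word >> (position * LANE_BITS)) & ((1 << LANE_BITS) - 1)

v = pack([3, 1, 4, 1])       # scalar domain -> vector domain
print(hex(v), unpack(v, 2))  # -> 0x1000400010003, lane 2 holds 4
```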

Подробнее
14-01-2016 дата публикации

DIGITAL FILTER WITH A PIPELINE STRUCTURE, AND A CORRESPONDING DEVICE

Номер: US20160011625A1
Автор: Pirozzi Francesco
Принадлежит: STMICROELECTRONICS S.R.L.

A digital filter with a pipeline structure includes processing structures timed by respective clock signals. Each processing structure in turn is formed by a number of processing modules for processing input samples. A phase generator aligns the processing modules with the input samples so that each input sample is processed by a respective one of the processing modules. An up-sampling buffer and a down-sampling buffer are used when the processing structures operate at different clock frequencies (thus implementing different clock domains) so as to convert signal samples between the clock domains for processing in the processing structures. 1. A digital filter , comprising:a pipeline structure including a plurality of processing structures each timed by a respective clock signal, wherein each processing structure includes plural processing modules for processing input samples,a phase generator configured to align said processing modules to said input samples so that each said input sample is processed by a respective one of the processing modules in said processing structures,up-sampling and down-sampling buffers configured to be activated when said processing structures operate at different clock frequencies,a first clock domain, anda second clock domain,said up-sampling and down-sampling buffers configured to convert signal samples between said first clock domain and said second clock domain.2. The filter of claim 1 , wherein said phase generator is a centralized phase generator configured to distribute a phase alignment signal across the processing modules of said processing structures.3. The filter of claim 1 , wherein said phase generator is a distributed phase generator claim 1 , each processing module being provided with a dedicated phase alignment signal to be fed to a subsequent processing module along with the serialized data in the pipeline.4. The filter of any claim 1 , further including a clock generator configured to deliver a higher clock frequency of ...

Подробнее
14-01-2016 дата публикации

SYSTEMS AND DEVICES FOR QUANTUM PROCESSOR ARCHITECTURES

Номер: US20160012347A1
Автор: King Andrew Douglas
Принадлежит:

Quantum processor architectures employ unit cells tiled over an area. A unit cell may include first and second sets of qubits where each qubit in the first set crosses at least one qubit in the second set. Angular deviations between qubits in one set may allow qubits in the same set to cross one another. Each unit cell is positioned proximally adjacent at least one other unit cell. Communicatively coupling between qubits is realized through respective intra-cell and inter-cell coupling devices. 1. A quantum processor comprising: a first set of qubits and a second set of qubits, at least a portion of an acute-angled qubit in the first set of qubits oriented at an angle greater than zero degrees and less than ninety degrees to at least a portion of at least one qubit in the second set of qubits, at least one qubit in the first set of qubits crossing at least one qubit in the second set of qubits and the acute-angled qubit in the first set of qubits crossing at least one other qubit in the first set of qubits;', 'a first set of intra-cell coupling devices, wherein each coupling device in the first set of intra-cell coupling devices is positioned proximate a respective point where a respective one of qubits in the first set of qubits crosses one of the qubits in the second set of qubits and provides controllable communicative coupling between the qubit in the first set of qubits and the respective qubit in the second set of qubits; and, 'a plurality of unit cells tiled over an area such that each unit cell is positioned adjacent to at least one other unit cell, each unit cell comprisinga second set of intra-cell coupling devices, wherein each coupling device in the second set of intra-cell coupling devices is positioned proximate a respective point at which each at least one qubit in the first set of qubits crosses the at least one other qubit in the first set of qubits and provides controllable communicative coupling between the at least one qubit in the first set of ...

Подробнее
10-01-2019 дата публикации

METHOD AND SYSTEM FOR HIGH PERFORMANCE REAL TIME PATTERN RECOGNITION

Номер: US20190012293A1
Принадлежит:

Systems and methods supporting high performance real time pattern recognition by including time and regional multiplexing using high bandwidth, board-to-board communications channels, and 3D vertical integration. An array of processing boards can each be coupled a rear transition board, the array achieving time and regional multiplexing using high bandwidth board-to-board communications channels and 3D vertical integration. 1. A system supporting high performance real time data processing , comprising:an array of processing boards; anda rear transition board coupled to each of the processing boards, said coupling and array achieving time and regional multiplexing using high bandwidth board-to-board communications channels.2. The system of claim 1 , wherein the processing boards incorporate a full-mesh architecture permitting high bandwidth claim 1 , inter-board communication and wherein said processing boards conform to advanced telecommunications computing architecture specification standards.3. The system of further comprising a backplane configured to time-multiplex a high volume of incoming data in a manner that manages input and output demands of the system.4. The system of further comprising at least one mezzanine card per processing board.5. The system of wherein said at least one mezzanine card further comprises:a field programmable gate array;data processing circuitry; anda plurality of fiber optic transceivers.6. The system of wherein said data processing circuitry comprises pattern recognition circuitry implemented in a field programmable gate array.7. The system of wherein said data processing circuitry comprises pattern recognition circuitry implemented in a pattern recognition application specific integrated circuit.8. The system of further comprising:at least one through silicon via connecting a plurality of two dimensional integrated circuits configured on at least one of said mezzanine cards in order to provide pattern recognition associative memory ...

Подробнее
09-01-2020 дата публикации

High-performance input-output devices supporting scalable virtualization

Номер: US20200012530A1
Принадлежит: Intel Corp

Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances of a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from the guest pertaining to the virtual device, and determines whether the request from the guest is a fast-path operation to be passed directly to one of the one or more AI instances of the I/O device or a slow-path operation that is to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via the software executed by the electronic device.

Подробнее
09-01-2020 дата публикации

EXPLOITING INPUT DATA SPARSITY IN NEURAL NETWORK COMPUTE UNITS

Номер: US20200012608A1
Принадлежит:

A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.

1. (canceled)
2. A hardware circuit configured to implement a neural network comprising a plurality of neural network layers, the circuit comprising: a controller configured to: receive a batch of inputs for processing through a first neural network layer; and generate a compressed representation of inputs based on a respective value of each input in the batch of inputs; and an address register configured to store addresses identifying memory locations that store inputs in the compressed representation of inputs, wherein the circuit is operable to process the inputs in the compressed representation of inputs through the first neural network layer to generate an output for the neural network layer.
3. The circuit of claim 2, comprising a multiply accumulate cell configured to: receive an input in the compressed representation of inputs from a memory location identified by an address in the address register; and process the input through the first neural network layer, comprising performing a multiplication between the input and a corresponding weight value for the neural network layer to generate the output for the neural network layer.
4. ...
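
The core idea, storing an index of the memory locations whose activations are non-zero and feeding the compute units from that index, can be shown in a few lines of Python; the arrays and the dot-product use are illustrative.

```python
# Illustrative sketch of exploiting input sparsity: build an index of the
# non-zero activation addresses and iterate over that index instead of the
# full activation vector.
def build_index(activations):
    """Index of memory locations whose activation value is non-zero."""
    return [i for i, a in enumerate(activations) if a != 0]

def sparse_dot(activations, weights, index):
    """Multiply-accumulate only over the indexed (non-zero) activations."""
    return sum(activations[i] * weights[i] for i in index)

acts = [0, 0, 3, 0, 5, 0, 0, 1]
wts  = [2, 4, 1, 7, 2, 9, 6, 3]
idx = build_index(acts)            # -> [2, 4, 7]; zero-valued positions are skipped
print(sparse_dot(acts, wts, idx))  # 3*1 + 5*2 + 1*3 = 16
```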

Подробнее
09-01-2020 дата публикации

DATA PROCESSING SYSTEMS FOR PRIORITIZING DATA SUBJECT ACCESS REQUESTS FOR FULFILLMENT AND RELATED METHODS

Номер: US20200012813A1
Принадлежит: OneTrust, LLC

In various embodiments, a data subject request fulfillment system may be adapted to prioritize the processing of data subject access requests based on metadata of the data subject access request. For example, the system may be adapted for: (1) in response to receiving a data subject access request, obtaining metadata regarding the location from which the data subject access request is being made; (2) using the metadata to determine whether a priority of the data subject access request should be adjusted based on the obtained metadata; and (3) in response to determining that the priority of the data subject access request should be adjusted based on the obtained metadata, adjusting the priority of the data subject access request. 1. A computer-implemented data processing method for prioritizing data subject access requests , the method comprising:receiving from a requestor, by one or more processors, a data subject access request comprising one or more request parameters;applying, by the one or more processors, a prioritization level to the data subject access request based at least in part on the one or more request parameters;determining, by the one or more processors, one or more pieces of metadata associated with the data subject access request;determining, by the one or more processors based on the one or more pieces of metadata, a location from which the data subject access request is being made;determining, by the one or more processors, whether the location from which the data subject access request is being made relates to a priority of fulfilling the data subject access request;in response to determining that the location from which the data subject access request is being made relates to the priority of fulfilling the data subject access request, analyzing, by the one or more processors, the location from which the data subject access request is being made to determine whether the prioritization level of the data subject access request should be adjusted; ...
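
A rule-of-thumb sketch of prioritization by request-origin metadata: requests from regions with tighter response deadlines are moved up the queue. The deadline table is a made-up example, not a statement of any jurisdiction's actual requirements.

```python
# Illustrative prioritization by request-origin metadata. The deadline table is
# an invented example, not legal guidance.
RESPONSE_DAYS_BY_REGION = {"EU": 30, "CA-US": 45, "default": 60}

def prioritize(request):
    region = request.get("metadata", {}).get("region", "default")
    deadline_days = RESPONSE_DAYS_BY_REGION.get(region, RESPONSE_DAYS_BY_REGION["default"])
    priority = request.get("priority", 100)
    if deadline_days <= 30:          # tighter deadline -> raise the priority
        priority -= 50
    return {**request, "priority": priority, "deadline_days": deadline_days}

queue = [prioritize(r) for r in [
    {"id": 1, "metadata": {"region": "EU"}},
    {"id": 2, "metadata": {"region": "default"}},
]]
queue.sort(key=lambda r: r["priority"])   # the EU request is fulfilled first
print([r["id"] for r in queue])
```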

Подробнее
03-02-2022 дата публикации

DATA PROCESSING SYSTEMS FOR IDENTIFYING AND MODIFYING PROCESSES THAT ARE SUBJECT TO DATA SUBJECT ACCESS REQUESTS

Номер: US20220035946A1
Принадлежит: OneTrust, LLC

In particular embodiments, in response a data subject submitting a request to delete their personal data from an organization's systems, the system may: (1) automatically determine where the data subject's personal data is stored; (2) in response to determining the location of the data (which may be on multiple computing systems), automatically facilitate the deletion of the data subject's personal data from the various systems; and (3) determine a cause of the request to identify one or more processing activities or other sources that result in a high number of such requests. 1. A method comprising:analyzing, by computing hardware, metadata for each of a plurality of data subject access requests;identifying, by the computing hardware, a plurality of processing activities associated with the plurality of data subject access requests based on the metadata;identifying, by the computing hardware, a particular processing activity from the plurality of processing activities based on a set of rules; andin response to identifying the particular processing activity, causing performance, by the computing hardware, of an action with respect to the particular processing activity.2. The method of claim 1 , wherein causing performance of the action with respect to the particular processing activity comprises causing a modification of a type of data collected as part of the particular processing activity.3. The method of claim 1 , wherein causing performance of the action with respect to the particular processing activity comprises:generating a graphical user interface by configuring a presentation element configured for presenting data related to the particular processing activity and the plurality of data subject access requests based on the set of rules; andtransmitting an instruction to a user device to retrieve and present the graphical user interface on the user device.4. The method of claim 1 , wherein the set of rules define a rule identifying the particular processing ...

Подробнее
21-01-2016 дата публикации

Synchronizing a translation lookaside buffer with an extended paging table

Номер: US20160019164A1
Принадлежит: Intel Corp

A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system.

Подробнее
17-04-2014 дата публикации

Method, apparatus, and system for optimizing frequency and performance in a multidie microprocessor

Номер: US20140108849A1
Принадлежит: Individual

With the progress toward multi-core processors, each core cannot readily ascertain the status of the other dies with respect to an idle or active status. A proposal for utilizing an interface to transmit core status among multiple cores in a multi-die microprocessor is discussed. Consequently, this facilitates thermal management by allowing performance and frequency to be set optimally based on each core's status.

Подробнее
26-01-2017 дата публикации

Hybrid programmable many-core device with on-chip interconnect

Номер: US20170024355A1
Принадлежит: Altera Corp

The present invention provides a hybrid programmable logic device which includes a programmable field programmable gate array logic fabric and a many-core distributed processing subsystem. The device integrates both a fabric of programmable logic elements and processors in the same device, i.e., the same chip. The programmable logic elements may be sized and arranged such that place and route tools can address the processors and logic elements as a homogenous routing fabric. The programmable logic elements may provide hardware acceleration functions to the processors that can be defined after the device is fabricated. The device may include scheduling circuitry that can schedule the transmission of data on horizontal and vertical connectors in the logic fabric to transmit data between the programmable logic elements and processor in an asynchronous manner.

Подробнее
22-01-2015 дата публикации

Multiprocessor Fabric Having Configurable Communication that is Selectively Disabled for Secure Processing

Номер: US20150026451A1
Принадлежит:

Disabling communication in a multiprocessor fabric. The multiprocessor fabric may include a plurality of processors and a plurality of communication elements and each of the plurality of communication elements may include a memory. A configuration may be received for the multiprocessor fabric, which specifies disabling of communication paths between one or more of: one or more processors and one or more communication elements; one or more processors and one or more other processors; or one or more communication elements and one or more other communication elements. Accordingly, the multiprocessor fabric may be automatically configured in hardware to disable the communication paths specified by the configuration. The multiprocessor fabric may be operated to execute a software application according to the configuration.

1. A method for disabling communication in a multiprocessor fabric, the method comprising: receiving a configuration for the multiprocessor fabric, wherein the multiprocessor fabric comprises a plurality of processors and a plurality of communication elements, wherein the configuration specifies disabling of communication paths between one or more of: one or more processors and one or more communication elements; one or more processors and one or more other processors; or one or more communication elements and one or more other communication elements; automatically configuring, in hardware, the multiprocessor fabric to disable the communication paths specified by the configuration, wherein after said automatically configuring, the disabled communication paths are not restorable via software; and operating the multiprocessor fabric to execute a software application, wherein the multiprocessor fabric operates according to the configuration.
2. The method of claim 1, wherein said automatically configuring the multiprocessor fabric comprises setting register values corresponding to one or more processors and/or one or more communication elements to disable the ...
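
A toy model of the claimed mechanism, assuming a per-neighbour enable bit per node: applying the configuration clears the bits for the disabled paths and then locks them so software cannot restore them. The register encoding and class names are assumptions.

```python
# Toy model of disabling fabric communication paths via write-once enable bits.
class FabricNode:
    def __init__(self, ident, neighbours):
        self.ident = ident
        # One enable bit per neighbouring node; all paths start enabled.
        self.path_enable = {n: True for n in neighbours}
        self._locked = False

    def apply_configuration(self, disabled_paths):
        if self._locked:
            raise RuntimeError("configuration already applied; not restorable via software")
        for n in disabled_paths:
            self.path_enable[n] = False
        self._locked = True

    def send(self, neighbour, payload):
        if not self.path_enable[neighbour]:
            raise RuntimeError(f"path {self.ident}->{neighbour} disabled for secure processing")
        return f"{payload} delivered to {neighbour}"

node = FabricNode("P0", neighbours=["P1", "DMR0"])
node.apply_configuration(disabled_paths=["DMR0"])
print(node.send("P1", "data"))      # allowed path
# node.send("DMR0", "data")         # would raise: path disabled by the configuration
```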

Подробнее
10-02-2022 дата публикации

STREAMING ENGINE WITH SHORT CUT START INSTRUCTIONS

Номер: US20220043670A1
Принадлежит:

A streaming engine employed in a digital data processor specifies a fixed read only data stream recalled from memory. Streams are started by one of two types of stream start instructions. A stream start ordinary instruction specifies a register storing a stream start address and a register storing a stream definition template which specifies stream parameters. A stream start short-cut instruction specifies a register storing a stream start address and an implied stream definition template. A functional unit is responsive to a stream operand instruction to receive at least one operand from a stream head register. The stream template supports plural nested loops with short-cut start instructions limited to a single loop. The stream template supports data element promotion to larger data element size with sign extension or zero extension. A set of allowed stream short-cut start instructions includes various data sizes and promotion factors.

1. A device comprising: a memory; and a data load unit configured to: receive an instruction to fetch a plurality of data elements from the memory, wherein the instruction includes an op code, and wherein the instruction specifies a start address and a data path; and transmit the plurality of data elements in the data path.
2. The device of claim 1, wherein: the start address is specified by a scalar register in a scalar register file.
3. The device of claim 1, wherein: the data path is one of a first data stream and a second data stream.
4. The device of claim 1, wherein: the op code specifies an implied template.
5. The device of claim 4, wherein: the implied template includes a first set of bits, a second set of bits, and a third set of bits; the first set of bits corresponds to the op code; each of the second set of bits is a zero; and each of the third set of bits is a one.
6. The device of claim 4, wherein: the implied template specifies a predetermined number of iterations of an inner loop executed by the data load unit.
7. The ...

Подробнее
28-01-2021 дата публикации

System for Automated Data Engineering for Large Scale Machine Learning

Номер: US20210026818A1
Автор: Dai Wei, Xing Eric, Yu Weiren
Принадлежит:

Accordingly, a data engineering system for machine learning at scale is disclosed. In one embodiment, the data engineering system includes an ingest processing module having a schema update submodule and a feature statistics update submodule, wherein the schema update submodule is configured to discover new features and add them to a schema, and wherein the feature statistics update submodule collects statistics for each feature to be used in an online transformation, a record store to store data from a data source, and a transformation module, to receive a low dimensional data instance from the record store and to receive the schema and feature statistics from the ingest processing module, and to transform the low dimensional data instance into a high dimensional representation. One embodiment provides a method for data engineering for machine learning at scale, the method including calling a built-in feature transformation or defining a new transformation, specifying a data source and compressing and storing the data, providing ingest-time processing by automatically analyzing necessary statistics for features, and then generating a schema for a dataset for subsequent data engineering. Other embodiments are disclosed herein. 1. A method for data engineering for machine learning at scale , the method comprising:calling a built-in feature transformation or defining a new transformation;specifying a data source and compressing and storing the data;providing ingest-time processing by automatically analyzing necessary statistics for features, and then generating a schema for a dataset for subsequent data engineering.2. The method of claim 1 , further comprising the feature transformation connecting to models trained by upstream training systems claim 1 , and using the trained models to generate new features.3. The method of claim 2 , further comprising selectively caching features for better efficiency in a subsequent transformation.4. The method of claim 1 , wherein ...
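
A toy version of the ingest/transform pipeline: schema discovery and per-feature statistics at ingest time, then an online transformation of a low-dimensional record into a wider representation. The mean-centring and one-hot choices are illustrative, not the system's actual transformations.

```python
# Toy ingest/transform pipeline: schema discovery, feature statistics, and an
# online transformation to a wider representation (illustrative choices only).
from collections import defaultdict

class Ingest:
    def __init__(self):
        self.schema = {}                      # feature name -> type
        self.stats = defaultdict(lambda: {"n": 0, "sum": 0.0, "values": set()})

    def observe(self, record):
        for name, value in record.items():
            self.schema.setdefault(
                name, "numeric" if isinstance(value, (int, float)) else "categorical")
            s = self.stats[name]
            s["n"] += 1
            if self.schema[name] == "numeric":
                s["sum"] += value
            else:
                s["values"].add(value)

def transform(record, ingest):
    """Low-dimensional record -> higher-dimensional list using schema + stats."""
    out = []
    for name, kind in ingest.schema.items():
        s = ingest.stats[name]
        if kind == "numeric":
            out.append(record.get(name, 0.0) - s["sum"] / max(s["n"], 1))  # mean-centred
        else:
            out.extend(1.0 if record.get(name) == v else 0.0 for v in sorted(s["values"]))
    return out

ing = Ingest()
for r in [{"age": 30, "city": "oslo"}, {"age": 40, "city": "kyiv"}]:
    ing.observe(r)
print(transform({"age": 35, "city": "oslo"}, ing))   # -> [0.0, 0.0, 1.0]
```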

Подробнее
24-04-2014 дата публикации

ASYMMETRIC MESH NoC TOPOLOGIES

Номер: US20140115298A1
Принадлежит: Netspeed Systems

A method of interconnecting blocks of heterogeneous dimensions using a NoC interconnect with sparse mesh topology includes determining a size of a mesh reference grid based on dimensions of the chip, dimensions of the blocks of heterogeneous dimensions, relative placement of the blocks and a number of host ports required for each of the blocks of heterogeneous dimensions, overlaying the blocks of heterogeneous dimensions on the mesh reference grid based on based on a guidance floor plan for placement of the blocks of heterogeneous dimensions, removing ones of a plurality of nodes and corresponding ones of links to the ones of the plurality of nodes which are blocked by the overlaid blocks of heterogeneous dimensions, based on porosity information of the blocks of heterogeneous dimensions, and mapping inter-block communication of the network-on-chip architecture over remaining ones of the nodes and corresponding remaining ones of the links. 122-. (canceled)23. A system on chip , comprising:a plurality of blocks of substantially non-uniform shapes and dimensions;a plurality of routers; anda plurality of links between routers,wherein the plurality of blocks and the plurality of routers are interconnected by the plurality of links using a Network-on-Chip architecture with a sparse mesh topology, andwherein the sparse mesh topology comprises a sparsely populated mesh which is a subset of a full mesh having one or more of the plurality of routers or links removed, andwherein the plurality of blocks communicate among each other by routing messages over the remaining ones of the plurality of routers and links of the sparse mesh. This application claims the benefit of U.S. application Ser. No. 13/658,663, filed Oct. 23, 2012, the disclosure of which is hereby incorporated by reference.1. Technical FieldMethods and example embodiments described herein are generally directed to interconnect architecture, and more specifically, to network-on-chip system interconnect ...

Подробнее
29-01-2015 дата публикации

SYSTEMS AND METHODS FOR IMPROVING THE PERFORMANCE OF A QUANTUM PROCESSOR BY REDUCING ERRORS

Номер: US20150032994A1
Принадлежит:

Techniques for improving the performance of a quantum processor are described. Some techniques employ improving the processor topology through design and fabrication, reducing intrinsic/control errors, reducing thermally-assisted errors and methods of encoding problems in the quantum processor for error correction.

1. A hybrid computational system comprising: at least one quantum processor comprising a plurality of qubits and a plurality of couplers; a configuration subsystem communicatively coupled to configure the at least one quantum processor, the configuration subsystem including at least one digital processor, and at least one non-transitory computer-readable storage medium communicatively coupled to the at least one digital processor and that stores at least one of processor-executable instructions or data, where in use the at least one digital processor: receives a problem Hamiltonian defined over at least two of the qubits, the problem Hamiltonian having a ground state that encodes a solution to a computational problem; and during a first iteration on the computational problem: determines a plurality of change values for the problem Hamiltonian; updates the problem Hamiltonian to a new problem Hamiltonian using the plurality of change values; sends the new problem Hamiltonian to the at least one quantum processor; receives a changed solution set from the at least one quantum processor; and transforms the changed solution set to a solution set.
2. The hybrid computational system of claim 1, wherein, in use, the at least one digital processor further: returns the solution set.
3. The hybrid computational system of claim 1, wherein, in use, the at least one digital processor selects at random for each entry in the plurality of change values either a change value or a no-change value in order to determine the plurality of change values for the problem Hamiltonian.
4. The hybrid computational system of claim 3, wherein the change value is negative, the no- ...

Подробнее
04-02-2016 дата публикации

Memory mapping in a processor having multiple programmable units

Номер: US20160034420A1
Принадлежит: Intel Corp

The disclosure includes, in general, among other aspects, an apparatus having multiple programmable units integrated within a processor. The apparatus has circuitry to map addresses in a single address space to resources within the multiple programmable units where the single address space includes addresses for different ones of the resources in different ones of the multiple programmable units and where there is a one-to-one correspondence between respective addresses in the single address space and resources within the multiple programmable units.

Подробнее
01-02-2018 дата публикации

CONTEXT CONFIGURATION

Номер: US20180032474A9
Принадлежит:

Certain examples described herein relate to configuring a call control context in a media gateway. The media gateway has a set of digital signal processors, each having one or more digital signal processor cores. The cores implement digital signal processor channels that are grouped into digital signal processor contexts. When a request to configure a call control context is received, certain examples described herein are configured to assign a set of digital signal processor contexts to process data streams associated with the call control context. In particular, certain examples described herein couple a first digital signal processor context to at least a second digital signal processor context using at least one digital signal processor channel in each of the first and second digital signal processor contexts.

1. A media gateway comprising: one or more digital signal processors, the one or more digital signal processors comprising one or more digital signal processor cores, a digital signal processor core in the one or more digital signal processor cores implementing a plurality of digital signal processor channels, wherein the plurality of digital signal processor channels are grouped into one or more digital signal processor contexts, each digital signal processor context defining a group of communicatively-couplable digital signal processor channels that are able to exchange data by way of the digital signal processor core; and an interface to receive a request to configure a call control context for a call, the call control context comprising a plurality of terminations, each termination comprising an end point for a data stream associated with the call, wherein the controller is arranged to configure a set of digital signal processor channels in the plurality of digital signal processor contexts to process the set of data streams corresponding to the plurality of terminations, and wherein the controller is arranged to ...

Подробнее
17-02-2022 дата публикации

Systems, Apparatuses, And Methods For Fused Multiply Add

Номер: US20220050678A1
Принадлежит: Intel Corp

Embodiments of systems, apparatuses, and methods for fused multiply add. In some embodiments, a decoder decodes a single instruction having an opcode, a destination field representing a destination operand, and fields for a first, second, and third packed data source operand, wherein packed data elements of the first and second packed data source operand are of a first, different size than a second size of packed data elements of the third packed data operand. Execution circuitry then executes the decoded single instruction to perform, for each packed data element position of the destination operand, a multiplication of M N-sized packed data elements from the first and second packed data sources that correspond to a packed data element position of the third packed data source, an addition of the results from these multiplications to a full-sized packed data element of a packed data element position of the third packed data source, and storage of the addition result in a packed data element position of the destination corresponding to the packed data element position of the third packed data source, wherein M is equal to the full-sized packed data element divided by N.
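
A scalar Python model of the data movement: each full-sized destination lane accumulates the dot product of the M = full/N narrow lanes of the first two sources that overlay it, added to the corresponding lane of the third source. Element sizes are chosen for illustration only.

```python
# Scalar model of the fused multiply-add described above: each full-sized
# destination lane accumulates the products of the M = full_bits / narrow_bits
# narrow lanes of src1 and src2 that overlay it, plus the matching src3 lane.
def fused_multiply_add(src1, src2, src3, narrow_bits=16, full_bits=32):
    m = full_bits // narrow_bits          # narrow elements per full element
    result = []
    for i, acc in enumerate(src3):        # one full-sized lane at a time
        products = sum(src1[i * m + j] * src2[i * m + j] for j in range(m))
        result.append(acc + products)
    return result

# Two full 32-bit lanes, each covered by two 16-bit lanes of src1/src2.
print(fused_multiply_add(src1=[1, 2, 3, 4], src2=[5, 6, 7, 8], src3=[100, 200]))
# -> [100 + 1*5 + 2*6, 200 + 3*7 + 4*8] = [117, 253]
```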

30-01-2020 publication date

GENERATING REFINED OBJECT PROPOSALS USING DEEP-LEARNING MODELS

Number: US20200034653A1
Assignee:

In one embodiment, a feature map of an image having h×w pixels and a patch having one or more pixels of the image are received. The patch has been processed by a first set of layers of a convolutional neural network and contains an object centered within the patch. The patch is then processed using the feature map and one or more pixel classifiers of a classification layer of a deep-learning model, where the classification layer includes h×w pixel classifiers, with each pixel classifier corresponding to a respective pixel of the patch. Each of the pixel classifiers used to process the patch outputs a respective value indicating whether the corresponding pixel belongs to the object centered in the patch. 1. A method comprising:receiving a feature map of an input image having h×w pixels;receiving a patch of the input image, wherein the patch contains an object centered within the patch;processing the patch using the feature map and a classification layer of a deep-learning model, wherein the classification layer comprises h×w pixel classifiers, each pixel classifier corresponding to a respective pixel of the patch; andoutputting, by each of one or more of the h×w pixel classifiers, a value indicating whether the corresponding pixel belongs to the object centered in the patch.2. The method of claim 1 , wherein the feature map is represented by a first vector having no spatial dimensions.3. The method of claim 1 , wherein the pixel classifiers are locally connected.4. The method of claim 1 , wherein the pixel classifiers are fully connected.5. The method of claim 1 , further comprising:selecting a set of n×m pixel classifiers from among the h×w pixel classifiers; andprocessing the patch using the feature map and the selected set of n×m pixel classifiers.6. The method of claim 5 , further comprising:upsampling the respective output values of the selected set of n×m pixel classifiers to h×w.7. The method of claim 1 , wherein the deep-learning model comprises a ...
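
A toy sketch of the classification layer described above: h×w independent pixel classifiers, each applied to the same feature vector, together produce one score per pixel of the patch. The linear-plus-sigmoid form, shapes and names here are assumptions for illustration.

```python
# Toy sketch: h*w independent pixel classifiers over one feature vector.
# Each classifier has its own weights; the stacked outputs form an h x w
# mask of "does this pixel belong to the centered object" scores.
import numpy as np

def pixelwise_mask(feature, weights, bias, h, w):
    """feature: (d,) vector; weights: (h*w, d); bias: (h*w,)."""
    logits = weights @ feature + bias          # one logit per pixel classifier
    scores = 1.0 / (1.0 + np.exp(-logits))     # score that the pixel is on the object
    return scores.reshape(h, w)

rng = np.random.default_rng(0)
h, w, d = 4, 4, 16                             # tiny sizes for the example
feature = rng.normal(size=d)
weights = rng.normal(size=(h * w, d))
bias = np.zeros(h * w)

mask = pixelwise_mask(feature, weights, bias, h, w)
print(mask.shape)        # (4, 4) -- one score per patch pixel
```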

30-01-2020 publication date

METHOD AND APPARATUS FOR STACKING CORE AND UNCORE DIES HAVING LANDING SLOTS

Number: US20200035659A1
Author: RUSU Stefan
Assignee: Intel Corporation

A method is described for stacking a plurality of cores. For example, one embodiment comprises: mounting an uncore die on a package, the uncore die comprising a plurality of exposed landing slots, each landing slot including an inter-die interface usable to connect vertically to a cores die, the uncore die including a plurality of uncore components usable by cores within the cores die; and vertically coupling a first cores die comprising a first plurality of cores on top of the uncore die, the cores spaced on the first cores die to correspond to all or a first subset of the landing slots on the uncore die, each of the cores having an inter-die interface positioned to be communicatively coupled to a corresponding inter-die interface within a landing slot on the uncore die when the first cores die is vertically coupled on top of the uncore die. 1. A method comprising:providing a secure website to a user for configuring a custom server processor, the secure website including a graphical user interface (GUI);providing a first graphical element in the GUI, the first graphical element including a plurality of base die options from which a user is to select a base die and an associated package, each of the base die options associated with a respective number of landing slots and a respective plurality of supported I/O interface options;providing a second graphical element in the GUI, the second graphical element including a plurality of building block options selectable by the user to populate the landing slots in the base die, each of the building block options associated with a respective building block and includes one or more fields for the user to specify a number of horizontal landing slots and a number of vertical landing slots to be occupied by the building block on the base die;providing a third graphical element in the GUI, the third graphical element including a visual representation of a user configuration, the visual representation comprising one or more ...

12-02-2015 publication date

SYSTEMS AND DEVICES FOR QUANTUM PROCESSOR ARCHITECTURES

Number: US20150046681A1
Author: King Andrew Douglas
Assignee:

Quantum processor architectures employ unit cells tiled over an area. A unit cell may include first and second sets of qubits where each qubit in the first set crosses at least one qubit in the second set. Angular deviations between qubits in one set may allow qubits in the same set to cross one another. Each unit cell is positioned proximally adjacent at least one other unit cell. Communicatively coupling between qubits is realized through respective intra-cell and inter-cell coupling devices. 1. A quantum processor comprising: a first set of qubits and a second set of qubits, each of the qubits of the first and the second sets of qubits having a respective major axis, the major axes of the qubits of the first set parallel with one another along at least a majority of a length thereof, and the major axes of the qubits of the second set parallel with one another along at least a majority of a length thereof, the major axes of the qubits of the second set of qubits nonparallel with the major axes of the qubits of the first set of qubits, and each qubit in the first set of qubits crosses at least one qubit in the second set of qubits and at least one qubit in the first set of qubits crosses at least one other qubit in the first set of qubits, and for each unit cell none of the qubits of the respective unit cell cross any of the respective qubits of any other one of the unit cells;', 'a first set of intra-cell coupling devices, wherein each coupling device in the first set of intra-cell coupling devices is positioned proximate a respective point where a respective one of qubits in the first set of qubits crosses one of the qubits in the second set of qubits and provides controllable communicative coupling between the qubit in the first set of qubits and the respective qubit in the second set of qubits; and', 'a second set of intra-cell coupling devices, wherein each coupling device in the second set of intra-cell coupling devices is positioned proximate a respective ...

07-02-2019 publication date

System with programmable multi-context accelerator circuitry

Number: US20190042329A1
Assignee: Intel Corp

A system is provided that includes a host processor coupled to a programmable acceleration coprocessor. The coprocessor may include logic for implementing a physical function and multiple associated virtual functions. The coprocessor may include a static programmable resource interface circuit (PIC) configured to perform management functions and one or more partial reconfiguration regions, each of which can be loaded with an accelerator function unit (AFU). An AFU may further be partitioned into AFU contexts (AFCs), each of which can be mapped to one of the virtual functions. The PIC enables hardware discovery/enumeration and loading of device drivers such that security isolation and interface performance are maintained.

07-02-2019 publication date

Superimposing butterfly network controls for pattern combinations

Number: US20190042517A1
Assignee: Texas Instruments Inc

A multilayer butterfly network is shown that is operable to transform and align a plurality of fields from an input to an output data stream. Many transformations are possible with such a network, which may include separate control of each multiplexer. This invention supports a limited set of multiplexer control signals, which enables a similarly limited set of data transformations. This limited capability is offset by the reduced complexity of the multiplexer control circuits. This invention uses precalculated inputs and simple combinatorial logic to generate control signals for the butterfly network. Controls are independent for each layer and therefore depend only on the input and output patterns. Controls for the layers can be calculated in parallel.

07-02-2019 publication date

Dynamic Deep Learning Processor Architecture

Number: US20190042529A1
Assignee: Intel Corp

Methods and systems for dynamically reconfiguring a deep learning processor by operating the deep learning processor using a first configuration. The deep learning processor then tracks one or more parameters of a deep learning program executed using the deep learning processor in the first configuration. Based at least in part on the one or more parameters, the deep learning processor then reconfigures itself to a second configuration to enhance the efficiency of the deep learning processor executing the deep learning program.

06-02-2020 publication date

DATA PROCESSING SYSTEMS FOR GENERATING AND POPULATING A DATA INVENTORY

Number: US20200042543A1
Assignee: OneTrust, LLC

In particular embodiments, a data processing data inventory generation system is configured to: (1) generate a data model (e.g., a data inventory) for one or more data assets utilized by a particular organization; (2) generate a respective data inventory for each of the one or more data assets; and (3) map one or more relationships between one or more aspects of the data inventory, the one or more data assets, etc. within the data model. In particular embodiments, a data asset (e.g., data system, software application, etc.) may include, for example, any entity that collects, processes, contains, and/or transfers personal data (e.g., such as a software application, “internet of things” computerized device, database, website, data-center, server, etc.). For example, a first data asset may include any software or device (e.g., server or servers) utilized by a particular entity for such data collection, processing, transfer, storage, etc. 1. A computer-implemented data processing method of populating a data inventory with one or more data inventory attribute values , the method comprising:generating, by one or more processors, the data inventory for one or more data assets used in the collection or storage of personal data;storing, by the one or more processors, the data inventory in computer memory;modifying, by the one or more processors, the data inventory to include one or more data fields that each define a respective data inventory attribute of a plurality of data inventory attributes;determining, by the one or more processors, which of the one or more data fields of the data inventory are unpopulated data fields; requesting, by the one or more processors via an application programming interface to an application, data associated with the respective data inventory attribute value for the one or more of the unpopulated data fields,', 'receiving, by the one or more processors from the application, the data associated with the respective data inventory attribute ...

06-02-2020 publication date

DATA PROCESSING SYSTEMS FOR GENERATING AND POPULATING A DATA INVENTORY

Number: US20200042738A1
Assignee: OneTrust, LLC

A computer-implemented method for populating a privacy-related data model by: (1) providing a data model that comprises one or more respective populated or unpopulated fields; (2) determining that at least a particular one of the fields for a particular data asset is an unpopulated field; (3) at least partially in response to determining that the at least one particular field is unpopulated, automatically generating a privacy questionnaire comprising at least one question that, if properly answered, would result in a response that may be used to populate the at least one particular unpopulated field; (4) transmitting the privacy questionnaire to at least one individual; (5) receiving a response to the questionnaire, the response comprising a respective answer to the at least one question; and (6) in response to receiving the response, populating the at least one particular unpopulated field with information from the received response. 1. A computer-implemented data processing method for populating a privacy-related data model , the method comprising: an organization that owns or uses a particular primary data asset;', 'one or more departments within the organization that are responsible for the primary data asset;', 'one or more particular software applications that collect personal data for storage in, or use by, the primary data asset;', 'one or more particular data subjects, or categories of data subjects, from which information is collected for storage in, or use by, the primary data asset;', 'one or more particular types of personal data that are collected by each of the one or more particular software applications for storage in, or use by, the primary data asset;', 'one or more particular individuals, or categories of individuals, that are permitted to access the personal data stored in, or used by, the primary data asset;', 'one or more particular types of the personal data that each of the one or more particular individuals, or types of individuals, are ...

06-02-2020 publication date

DATA PROCESSING SYSTEMS FOR GENERATING AND POPULATING A DATA INVENTORY FOR PROCESSING DATA ACCESS REQUESTS

Number: US20200042743A1
Assignee: OneTrust, LLC

In particular embodiments, a data processing data inventory generation system is configured to: (1) generate a data model (e.g., a data inventory) for one or more data assets utilized by a particular organization; (2) generate a respective data inventory for each of the one or more data assets; and (3) map one or more relationships between one or more aspects of the data inventory, the one or more data assets, etc. within the data model. In particular embodiments, a data asset (e.g., data system, software application, etc.) may include any entity that collects, processes, contains, and/or transfers personal data (e.g., a software application, database, website, server, etc.). A data asset may include any software or device (e.g., server or servers) utilized by a particular entity for such data collection, processing, transfer, storage, etc. The system may then utilize the generated model to fulfil a data subject access request. 1. A computer-implemented data processing method for identifying one or more pieces of personal data associated with a data subject within a data system in order to fulfill a data subject access request , the method comprising:receiving, by one or more computer processors from a data subject, a data subject access request comprising one or more request parameters, wherein the data subject access request is a request for a particular organization to provide, to the data subject based at least in part on the one or more request parameters, one or more pieces of personal data associated with the data subject obtained by the particular organization; accessing a data model that defines one or more electronic links between the one or more data repositories and stores a plurality of data inventories that define a plurality of inventory attributes for each of the one or more data repositories;', 'scanning each of the plurality of data inventories to identify one or more data attributes associated with each of the one or more data inventories; and', ' ...

01-05-2014 publication date

Generating And Communicating Platform Event Digests From A Processor Of A System

Number: US20140122834A1
Assignee:

In an embodiment, a processor includes a plurality of counters each to provide a count of a performance metric of at least one core of the processor, a plurality of threshold registers each to store a threshold value with respect to a corresponding one of the plurality of counters, and an event logic to generate an event digest packet including a plurality of indicators each to indicate whether an event occurred based on a corresponding threshold value and a corresponding count value. Other embodiments are described and claimed. 1. A processor comprising:a plurality of cores;a plurality of counters each to provide a count value of a performance metric of at least one core of the processor;a plurality of threshold registers each to store a threshold value with respect to a corresponding one of the plurality of counters; andan event logic to generate an event digest packet including a plurality of indicators each to indicate whether an event occurred based on a corresponding threshold value and a corresponding count value, the event digest packet not including the corresponding threshold value and the corresponding count.2. The processor of claim 1 , wherein each of the plurality of indicators are to indicate whether the count value of the corresponding counter overflowed claim 1 , underflowed claim 1 , or crossed a corresponding threshold value.3. The processor of claim 1 , wherein the event logic is to communicate the event digest packet to a manageability engine of a peripheral controller coupled to the processor via a platform environment control interface transparently to an operating system (OS) executing on the processor.4. The processor of claim 3 , wherein the manageability engine is to communicate the event digest packet to a datacenter manager via a sideband channel.5. The processor of claim 4 , wherein the event logic is to enable communication of the event digest packet to the datacenter manager at a periodic interval claim 4 , without polling by the ...
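
A minimal sketch of the digest idea: each counter is compared against its threshold register and only the resulting one-bit indicators are packed into the event digest packet, never the raw counts or thresholds. The packing order and field sizes are illustrative.

```python
# Sketch: build an event digest packet from counters and thresholds.
# Only the per-event indicator bits go into the packet; the raw counts
# and threshold values themselves are deliberately left out.

def build_event_digest(counts, thresholds):
    assert len(counts) == len(thresholds)
    digest = 0
    for bit, (count, threshold) in enumerate(zip(counts, thresholds)):
        crossed = count >= threshold           # "event occurred" indicator
        digest |= int(crossed) << bit
    return digest

counts     = [120, 3, 987, 0]                  # e.g. cache misses, thermal events, ...
thresholds = [100, 10, 500, 1]
packet = build_event_digest(counts, thresholds)
print(f"{packet:04b}")                         # 0101 -> events 0 and 2 crossed
```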

18-02-2021 publication date

Data processing systems and methods for bundled privacy policies

Number: US20210049527A1
Assignee: OneTrust LLC

Data processing systems and methods, according to various embodiments, are adapted for determining an applicable privacy policy based on various criteria associated with a user and the associated product or service. User and product criteria may be obtained automatically and/or based on user input and analyzed by a privacy policy rules engine to determine the applicable policy. Text from the applicable policy can then be presented to the user. A default policy can be used when no particular applicable policy can be identified by the rules engine. Policies may be ranked or prioritized so that a policy can be selected in the event the rules engine identifies two conflicting policies based on the criteria.
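
A small, hypothetical sketch of the selection logic described above: a rules engine matches user and product criteria against per-policy rules, a priority rank breaks ties between conflicting matches, and an always-matching default policy is the fallback. The rule fields, policy names and priority scheme are invented.

```python
# Hypothetical sketch of a privacy-policy rules engine: match criteria,
# rank conflicting matches by priority, fall back to a default policy.

POLICIES = [
    # (name, criteria that must all match, priority: lower number wins)
    ("eu_policy",      {"region": "EU"},          1),
    ("health_policy",  {"product": "health_app"}, 2),
    ("default_policy", {},                        99),
]

def select_policy(criteria):
    matches = [
        (prio, name)
        for name, required, prio in POLICIES
        if all(criteria.get(k) == v for k, v in required.items())
    ]
    # The empty-criteria default always matches, so matches is never empty;
    # the lowest priority value wins when two policies conflict.
    return min(matches)[1]

print(select_policy({"region": "EU", "product": "health_app"}))  # eu_policy
print(select_policy({"region": "US", "product": "game"}))        # default_policy
```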

26-02-2015 publication date

Multi-core microprocessor configuration data compression and decompression system

Number: US20150055427A1
Assignee: Via Technologies Inc

An apparatus has a fuse array, a device programmer, and a plurality of cores. The fuse array is disposed on a die, where the fuse array comprises a plurality of semiconductor fuses. The device programmer is coupled to the fuse array and is configured to access the configuration data, to compress the configuration data to yield compressed configuration data, and to program the fuse array with the compressed configuration data. The plurality of cores is disposed separately on the die and is coupled to the fuse array, where each of the plurality of cores accesses and decompresses all of the compressed configuration data upon power-up/reset, for initialization of elements within the each of the plurality of cores.
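
A sketch of the compress-at-programming / decompress-at-reset flow. Run-length encoding stands in for whatever compression the device programmer actually applies (the abstract does not say which scheme is used), and the four-core loop just illustrates that every core decompresses the same fuse image independently.

```python
# Sketch: configuration data is compressed once by the device programmer
# and decompressed by every core at power-up/reset. Run-length encoding
# is used purely as a stand-in compression scheme.

def rle_compress(bits):
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))          # (value, run length)
        i = j
    return runs

def rle_decompress(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

config_bits = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]   # "configuration data"
fuse_image = rle_compress(config_bits)          # programmed into the fuse array

# Each core independently decompresses the same fuse image at reset.
for core_id in range(4):
    assert rle_decompress(fuse_image) == config_bits
print("all cores initialized from", fuse_image)
```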

25-02-2021 publication date

MESSAGE BASED GENERAL REGISTER FILE ASSEMBLY

Number: US20210055930A1
Assignee:

In an example, an apparatus comprises a plurality of execution units, and logic, at least partially including hardware logic, to assemble a general register file (GRF) message and hold the GRF message in storage in a data port until all data for the GRF message is received. Other embodiments are also disclosed and claimed. 120-. (canceled)21. An apparatus comprising a processor to:assemble, in an instruction cache of a graphics processor, a general register file (GRF) message for an instruction to be executed by a plurality of processing cores of the graphics processor;hold the GRF message in storage in a data port of the instruction cache until all data necessary to implement the instruction is assembled in the instruction cache; andoffload a completed GRF message to a secondary storage communicatively coupled to the instruction cache by a data bus comprising at least one FLOP structure.22. The apparatus of claim 21 , the processor to:match an internal storage capacity of the general register file with a memory footprint of the instruction; andstore one or more controls to assemble the GRF message based on the memory footprint.23. The apparatus of claim 22 , the processor to:generate a message ready signal.24. The apparatus of claim 24 , the processor to:monitor message fragments for the GRF message.25. The apparatus of claim 21 , the at least one FLOP structure comprising an even row of latches and an odd row of latches.26. The apparatus of claim 25 , wherein data is load balanced in the FLOP structure between the even row of latches and the odd row of latches to sustain a 64B/CLK cycle bus rate.27. An electronic device claim 25 , comprising:an instruction cache to receive a stream of instructions;an instruction unit to execute the stream of instructions;a general-purpose graphics processing compute block comprising a plurality of processing cores;a general register file (GRF) communicatively coupled to the plurality of processing cores; and assemble, in an ...

13-02-2020 publication date

SUPERIMPOSING BUTTERFLY NETWORK CONTROLS FOR PATTERN COMBINATIONS

Number: US20200050573A1
Assignee:

A multilayer butterfly network is shown that is operable to transform and align a plurality of fields from an input to an output data stream. Many transformations are possible with such a network, which may include separate control of each multiplexer. This invention supports a limited set of multiplexer control signals, which enables a similarly limited set of data transformations. This limited capability is offset by the reduced complexity of the multiplexer control circuits. This invention uses precalculated inputs and simple combinatorial logic to generate control signals for the butterfly network. Controls are independent for each layer and therefore depend only on the input and output patterns. Controls for the layers can be calculated in parallel. 1. An apparatus for data transformation of an input data word of 2^N sections, where N is an integer, comprising: a first input receiving a bit corresponding to a precalculated shuffle pattern; a second input receiving a bit corresponding to a precalculated replicate pattern; a third input receiving a bit corresponding to a precalculated rotate pattern; a first exclusive OR gate having a first input receiving the bit corresponding to the precalculated shuffle pattern, a second input receiving the bit corresponding to the precalculated replicate pattern, and an output; a second exclusive OR gate having a first input receiving the bit corresponding to the precalculated replicate pattern, a second input receiving the bit corresponding to the precalculated rotate pattern, and an output; a third exclusive OR gate having a first input receiving the bit corresponding to the precalculated rotate pattern, a second input receiving the bit corresponding to the precalculated shuffle pattern, and an output; and a control multiplexer having a first input receiving the bit corresponding to the precalculated shuffle pattern, a second input receiving the bit corresponding to the precalculated replicate pattern, a ...
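
For intuition, the sketch below runs a multilayer butterfly network in which each layer has a single control bit derived by XOR-ing bits of precalculated patterns. This mirrors the flavour of the simple combinatorial control generation claimed above, but the per-layer control equation and the pattern values are assumptions, not the patented per-multiplexer circuit.

```python
# Illustrative sketch of a multilayer butterfly network with per-layer
# controls. Deriving each control by XOR of precalculated pattern bits is
# an assumption made for the example, not the exact claimed logic.

def butterfly_layer(data, layer, swap):
    """Pair elements whose indices differ in bit `layer`; swap each pair if `swap`."""
    out = list(data)
    step = 1 << layer
    for i in range(len(data)):
        if swap and not (i & step):            # visit each pair once, from its low index
            out[i], out[i | step] = data[i | step], data[i]
    return out

def butterfly(data, layer_controls):
    for layer, swap in enumerate(layer_controls):
        data = butterfly_layer(data, layer, swap)
    return data

# Precalculated one-bit-per-layer patterns (invented values for illustration).
shuffle_pattern = [1, 0, 1]
rotate_pattern  = [0, 0, 1]
controls = [s ^ r for s, r in zip(shuffle_pattern, rotate_pattern)]  # [1, 0, 0]

print(butterfly(list("abcdefgh"), controls))   # only layer 0 swaps: b a d c f e h g
```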

26-02-2015 publication date

Apparatus and method for extended cache correction

Number: US20150058564A1
Assignee: Via Technologies Inc

An apparatus includes a semiconductor fuse array, a cache memory, and a plurality of cores. The semiconductor fuse array, into which the configuration data is programmed, is disposed on a die. The semiconductor fuse array has a first plurality of semiconductor fuses that is configured to store compressed cache correction data. The cache memory is disposed on the die. The plurality of cores is disposed on the die, where each of the plurality of cores is coupled to the semiconductor fuse array and the cache memory, and is configured to access the semiconductor fuse array upon power-up/reset, to decompress the compressed cache correction data, and to distribute the decompressed cache correction data to initialize the cache memory.

25-02-2021 publication date

METHODS, SYSTEMS, AND MEDIA FOR PAIRING DEVICES TO COMPLETE A TASK USING AN APPLICATION REQUEST

Number: US20210058488A1
Author: Pham Thien Van
Assignee:

Methods, systems, and media for pairing devices for completing tasks are provided. In some embodiments, the method comprises: identifying, at a first user device, an indication of a task to be completed; transmitting, by the first user device to a server, information indicating the task to be completed and identifying information corresponding to the first user device; determining whether a predetermined duration of time has elapsed; in response to determining that the predetermined duration of time has elapsed, transmitting, from the first user device to the server, a request to determine whether the task has been completed by a second user device; and in response to receiving, from the server, an indication that the task has been completed by the second user device, retrieving data corresponding to the task from the server. 1. A method comprising:identifying, at a first user device, a task to be completed;generating a passcode corresponding to the task to be completed and required to be entered by a user of a second user device;transmitting, by the first user device to a server: first information indicating the task to be completed; second information identifying the first user device; a request identifier; and the passcode corresponding to the task to be completed and required to be entered by the user of the second user device, wherein the request identifier is different from the passcode;determining whether a predetermined duration of time has elapsed;in response to determining that the predetermined duration of time has elapsed, transmitting, from the first user device to the server, a request to determine whether the task has been completed by the second user device; andin response to receiving, from the server, an indication that the task has been completed by the second user device, retrieving data corresponding to the task from the server.2. The method of claim 1 , further comprising generating an alphanumeric identifier that identifies the task to be ...
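
A sketch of the first-device flow from the abstract, with the server reduced to an in-memory object rather than a real network service; the RequestStore class, the passcode format and the fixed wait are all illustrative stand-ins.

```python
# Sketch of the first-device flow: register a task (with a passcode and a
# separate request identifier), wait a predetermined duration, then poll
# whether a second device completed it. The "server" is an in-memory dict.
import secrets
import time

class RequestStore:                             # stand-in for the server
    def __init__(self):
        self.tasks = {}
    def register(self, request_id, task, passcode, device_id):
        self.tasks[request_id] = {"task": task, "passcode": passcode,
                                  "device": device_id, "result": None}
    def complete(self, request_id, passcode, result):
        entry = self.tasks[request_id]
        if passcode == entry["passcode"]:       # second device must supply the passcode
            entry["result"] = result
    def result(self, request_id):
        return self.tasks[request_id]["result"]

server = RequestStore()
request_id = secrets.token_hex(4)               # request identifier != passcode
passcode = f"{secrets.randbelow(10**6):06d}"    # code the second-device user types in

server.register(request_id, "scan_document", passcode, device_id="phone-1")
server.complete(request_id, passcode, result="scan.pdf")   # done by the second device

time.sleep(0.01)                                 # stand-in for the predetermined duration
print(server.result(request_id))                 # scan.pdf
```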

13-02-2020 publication date

DATA PROCESSING SYSTEMS FOR DATA-TRANSFER RISK IDENTIFICATION, CROSS-BORDER VISUALIZATION GENERATION, AND RELATED METHODS

Number: US20200053130A1
Assignee: OneTrust, LLC

In particular embodiments, a Cross-Border Visualization Generation System is configured to: (1) identify one or more data assets associated with a particular entity; (2) analyze the one or more data assets to identify one or more data elements stored in the identified one or more data assets; (3) define a plurality of physical locations and identify, for each of the identified one or more data assets, a respective particular physical location of the plurality of physical locations; (4) analyze the identified one or more data elements to determine one or more data transfers between the one or more data systems in different particular physical locations; (5) determine one or more regulations that relate to the one or more data transfers; and (6) generate a visual representation of the one or more data transfers based at least in part on the one or more regulations. 1. A non-transitory computer-readable medium storing computer-executable instructions for:identifying one or more data assets associated with a particular entity;analyzing the one or more data assets to identify one or more data elements stored in the identified one or more data assets;defining a plurality of physical locations and identifying, for each of the identified one or more data assets, a respective particular physical location of the plurality of physical locations;analyzing the identified one or more data elements to determine one or more data transfers between the one or more data assets in different particular physical locations;determining one or more regulations that relate to the one or more data transfers;generating a visual representation of a map comprising the plurality of physical locations;superimposing an indicia for each of the one or more data assets that indicates the respective particular physical location of the plurality of physical locations for each of the one or more data assets; andmodifying the visual representation to indicate the one or more data transfers between the one ...

10-03-2022 publication date

DATA PROCESSING SYSTEMS FOR FULFILLING DATA SUBJECT ACCESS REQUESTS AND RELATED METHODS

Number: US20220075896A1
Assignee: OneTrust, LLC

Responding to a data subject access request includes receiving the request and identifying the requestor and source. In response to identifying the requestor and source, a computer processor determines whether the data subject access request is subject to fulfillment constraints, including whether the requestor or source is malicious. If so, then the computer processor denies the request or requests a processing fee prior to fulfillment. If not, then the computer processor fulfills the request. 1. A method comprising:providing, by computing hardware, a query interface that is accessible via a public data network and that is configured for querying a plurality of data storage systems included in a private data network;determining, with the computing hardware, that a plurality of queries comprising data subject access requests have been received via the query interface from an Internet Protocol (IP) address;responsive to determining that the plurality of queries have originated from the IP address, adding a processing constraint for the IP address to fulfillment constraint data in a data repository;receiving, via the query interface and the public data network, a query comprising a data subject access request from a computing device;determining, by the computing hardware, that the computing device is associated with the IP address;querying, by the computing hardware and using the IP address, the fulfillment constraint data from the data repository to identify the processing constraint;determining, by the computing hardware, that the data subject access request is subject to the processing constraint; andpreventing, based on the determining that the data subject access request is subject to the processing constraint, the plurality of data storage systems from executing processing operations or performing network communication for retrieving data responsive to the data subject access request from a plurality of data sources included in the private data network.2. The ...
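
A sketch, with an invented threshold and in-memory stores, of the gating step described above: count data subject access requests per source IP, record a processing constraint once the count crosses the limit, and stop fanning constrained requests out to the back-end data stores.

```python
# Sketch: per-IP fulfillment constraints for data subject access requests.
# The threshold and the in-memory stores are illustrative only.
from collections import defaultdict

QUERY_LIMIT = 3
query_counts = defaultdict(int)                 # queries seen per source IP
fulfillment_constraints = set()                 # IPs currently constrained

def handle_dsar(ip_address, request):
    query_counts[ip_address] += 1
    if query_counts[ip_address] > QUERY_LIMIT:
        fulfillment_constraints.add(ip_address)
    if ip_address in fulfillment_constraints:
        # Do not query the private data stores for a constrained source.
        return {"status": "denied", "reason": "processing constraint"}
    return {"status": "accepted", "request": request}

for i in range(5):
    print(handle_dsar("203.0.113.7", f"request-{i}"))
# Requests 0-2 are accepted; from the 4th request on, the IP is constrained.
```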

15-05-2014 publication date

EXPOSING HOST OPERATING SYSTEM SERVICES TO AN AUXILIARY PROCESSOR

Number: US20140136817A1
Assignee: QUALCOMM INCORPORATED

Aspect methods, systems and devices may be configured to perform two-way and/or reverse procedure calls in a computing device or across a network to offload the bulk of processing operations from a general purpose processor to an auxiliary processor, while performing operations that require access to context information locally on the general purpose processor (e.g., application processor, CPU, etc.). The two-way and/or reverse procedure calls allow an auxiliary processor to perform operations that include subroutines that require access to an application processor's or a calling process's context information, without requiring the calling process to send the context information to the auxiliary processor (e.g., as part of the procedure call/method invocation, etc.). 1. A method of executing general purpose application operations on an auxiliary processor, comprising: creating in an application processor of a computing device a first process and a second process, the first and second process having a first context; invoking by the second process a first service of the auxiliary processor, the first service causing the second process to enter a blocked state; invoking by the first process a second service of the auxiliary processor, the second service having a second context; unblocking the second process in response to receiving a communication from the first service of the auxiliary processor; performing, by the unblocked second process, context-based operations within the first context in the application processor; sending a result of performing context-based operations from the application processor to the auxiliary processor, the auxiliary processor performing additional operations based on the result of performing context-based operations to accomplish the second service; and receiving by the first process information generated in the auxiliary processor when accomplishing the second service. 2. The method of claim 1, wherein invoking a first service of the auxiliary ...

21-02-2019 publication date

RECONFIGURABLE MICROPROCESSOR HARDWARE ARCHITECTURE

Number: US20190056941A1
Author: Wang Xiaolin, Wu Qian
Assignee:

A reconfigurable, multi-core processor includes a plurality of memory blocks and programmable elements, including units for processing, memory interface, and on-chip cognitive data routing, all interconnected by a self-routing cognitive on-chip network. In embodiments, the processing units perform intrinsic operations in any order, and the self-routing network forms interconnections that allow the sequence of operations to be varied and both synchronous and asynchronous data to be transmitted as needed. A method for programming the processor includes partitioning an application into modules, determining whether the modules execute in series, program-driven parallel, or data-driven parallel, determining the data flow required between the modules, assigning hardware resources as needed, and automatically generating machine code for each module. In embodiments, Time Fields are added to the instruction format for all programming units that specify the number of clock cycles for which only one fetched and decoded instruction will be executed. 1. A reconfigurable and programmable multi-core processor architecture comprising at least one programmable unit that can execute Time Field instructions , wherein each Time Field instruction includes a Time Field opcode that specifies a number of clock cycles during which only a single fetch and decode of an instruction will be performed , followed by repeated executions of the instruction by functional units of the programmable unit.2. The processing architecture of claim 1 , wherein the instructions that the programmable unit is able to repeatedly perform during the clock cycles specified by the Time Field include at least one of:multiplication;addition;subtraction;left shift;right shift; andnormalization.3. The processing architecture of claim 1 , wherein the Time Field opcode contains an integer value that explicitly defines the number of clock cycles during which the single fetched and decoded instruction is repeatedly ...
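
A toy interpreter makes the Time Field idea concrete: each instruction is fetched and decoded once and then executed for the number of clock cycles in its Time Field. The tuple encoding and the opcodes below are invented for the sketch.

```python
# Toy interpreter: each instruction carries a Time Field giving the number
# of clock cycles during which the single fetched/decoded instruction is
# executed repeatedly. The tuple encoding below is invented for the sketch.

PROGRAM = [
    # (opcode, operand, time_field_cycles)
    ("add",    3, 4),     # fetch/decode once, execute for 4 cycles
    ("lshift", 1, 2),     # fetch/decode once, execute for 2 cycles
]

def run(program):
    acc, cycle = 0, 0
    for opcode, operand, time_field in program:     # one fetch/decode per entry
        for _ in range(time_field):                 # repeated execution, no refetch
            if opcode == "add":
                acc += operand
            elif opcode == "lshift":
                acc <<= operand
            cycle += 1
    return acc, cycle

acc, cycles = run(PROGRAM)
print(acc, cycles)    # 48 6  -> ((0 + 3*4) << 1) << 1 = 48, over 6 clock cycles
```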

01-03-2018 publication date

Synchronizing a translation lookaside buffer with an extended paging table

Number: US20180060247A1
Assignee: Intel Corp

A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system.

01-03-2018 publication date

Processor system and accelerator

Number: US20180060275A1
Assignee: WASEDA UNIVERSITY

A processor system is provided comprising at least one processor core provided on a semiconductor chip and including a processor, a memory and an accelerator. The memory includes an instruction area, a synchronization flag area and a data area. The accelerator starts acceleration processing and executes the task, even if the processor is executing other processing, upon confirming that a flag indicating that the processor has completed predetermined processing has been written into the synchronization flag area; it stores the data subjected to the acceleration processing into the data area and further writes a flag indicating the completion of the acceleration processing. The processor starts the task corresponding to a flag, even if the accelerator is executing other processing, upon confirming that the flag indicating the completion of the acceleration processing has been written into the synchronization flag area.
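
A threading-based sketch of the flag handshake described above: the processor writes data and a completion flag into shared memory, the accelerator polls the synchronization flag area, processes the data, and writes back its own completion flag. The flag names, polling interval and squaring task are illustrative.

```python
# Sketch of the flag handshake between a processor and an accelerator
# sharing one memory. Flag names and the polling interval are invented.
import threading
import time

memory = {"sync_flags": {}, "data": {}}          # instruction area omitted

def accelerator():
    while "processor_done" not in memory["sync_flags"]:
        time.sleep(0.001)                        # poll the synchronization flag area
    result = [x * x for x in memory["data"]["input"]]   # the accelerated task
    memory["data"]["output"] = result
    memory["sync_flags"]["accel_done"] = True    # completion flag for the processor

def processor():
    memory["data"]["input"] = [1, 2, 3, 4]
    memory["sync_flags"]["processor_done"] = True   # predetermined processing finished
    while "accel_done" not in memory["sync_flags"]:
        time.sleep(0.001)                        # keep going until the accelerator is done
    print("processor sees:", memory["data"]["output"])

t = threading.Thread(target=accelerator)
t.start()
processor()                                      # prints [1, 4, 9, 16]
t.join()
```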

22-05-2014 publication date

GRAPHIC PROCESSING UNIT VIRTUAL APPARATUS, GRAPHIC PROCESSING UNIT HOST APPARATUS, AND GRAPHIC PROCESSING UNIT PROGRAM PROCESSING METHODS THEREOF

Number: US20140139533A1
Assignee: INSTITUTE FOR INFORMATION INDUSTRY

A graphic processing unit (GPU) virtual apparatus, a GPU host apparatus and GPU program processing methods thereof are provided. The GPU virtual apparatus determines a priority of a GPU program, determines a processing order of the GPU program according to the priority, processes the GPU program according to the processing order, and transmits the processed GPU program to the GPU host apparatus. The GPU host apparatus receives the processed GPU program from the GPU virtual apparatus, determines a priority of the processed GPU program, determines a processing order of the processed GPU program according to the priority, further processes the processed GPU program according to the processing order, and transmits an operation result of the processed GPU program to the GPU virtual apparatus. 1. A graphic processing unit (GPU) virtual apparatus , comprising:a transmitting/receiving interface;a priority determining device, being configured to determine a priority of a GPU program; and determining a processing order of the GPU program according to the priority;', 'processing the GPU program according to the processing order;', 'transmitting a processed GPU program to a GPU host apparatus via the transmitting/receiving interface; and', 'receiving an operation result of the processed GPU program from the GPU host apparatus via the transmitting/receiving interface., 'a processor electrically connected to the transmitting/receiving interface and the priority determining device, being configured to execute the following operations2. The GPU virtual apparatus as claimed in claim 1 , wherein the processor stops processing a predetermined program so as to preferentially process the GPU program according to the processing order.3. The GPU virtual apparatus as claimed in claim 2 , wherein the processor further resumes processing of the predetermined program after having processed the GPU program.4. The GPU virtual apparatus as claimed in claim 1 , wherein the priority determining ...

22-05-2014 publication date

Processing System With Interspersed Processors With Multi-Layer Interconnect

Number: US20140143520A1
Assignee: Coherent Logix, Incorporated

Embodiments of a multi-processor array are disclosed that may include a plurality of processors and configurable communication elements coupled together in a interspersed arrangement. Each configurable communication element may include a local memory and a plurality of routing engines. The local memory may be coupled to a subset of the plurality of processors. Each routing engine may be configured to receive one or more messages from a plurality of sources, assign each received message to a given destination of a plurality of destinations dependent upon configuration information, and forward each message to assigned destination. The plurality of destinations may include the local memory, and routing engines included in a subset of the plurality of configurable communication elements. 1. An apparatus , comprising:a plurality of processors; and a local memory coupled to a subset of the plurality of processors; and', receive one or more messages from a plurality of sources, wherein each message of the one or more messages includes one or more data words;', 'assign each message of the one or more messages to a given destination of a plurality of destinations dependent upon configuration information;', 'forward each message of the one or more messages to the destination assigned to the message;, 'a plurality of routing engines, wherein each routing engine of the plurality of routing engines is configured to, the local memory; and', 'a set of routing engines comprising the plurality of routing engines included in a first subset of the plurality of configurable communication elements., 'wherein the plurality of destinations includes], 'a plurality of configurable communication elements coupled to the plurality of processors in an interspersed arrangement, wherein each configurable communication element of the plurality of configurable communication elements includes2. The apparatus of claim 1 , wherein the configuration information is included in at least one word of the ...

17-03-2022 publication date

EXPLOITING INPUT DATA SPARSITY IN NEURAL NETWORK COMPUTE UNITS

Number: US20220083480A1
Assignee:

A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index. 1. A computer-implemented method, comprising: receiving, by a computing device, a plurality of input activations, the input activations being provided, at least in part, from a source external to the computing device; determining, by a controller of the computing device, whether each of the plurality of input activations has one of a zero value or a non-zero value; storing, in a memory bank of the computing device, at least one of the input activations; generating, by the controller, an index comprising one or more memory address locations having input activation values that are non-zero values; and providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array, wherein the activations are provided, at least in part, from a memory address location associated with the index. This application is a continuation of U.S. patent application Ser. No. 16/514,562, filed Jul. 17, 2019, which is a continuation of U.S. patent application Ser. No. 15/336,066, filed on Oct. 27, 2016. The prior applications are incorporated herein by reference in their entirety. This ...
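
A short sketch of the storage-and-index step in the claim: keep the activations in a memory bank, index the addresses whose values are non-zero, and let only indexed activations reach the compute step. The list-based layout and the multiply-accumulate consumer are illustrative.

```python
# Sketch: store input activations, index the non-zero addresses, and feed
# only indexed activations to the compute units (multiply-accumulate here).

def store_and_index(activations):
    memory_bank = list(activations)                      # address -> activation value
    index = [addr for addr, v in enumerate(memory_bank) if v != 0]
    return memory_bank, index

def compute(memory_bank, index, weights):
    # Only addresses in the index reach the data bus / computational array,
    # so zero-valued activations cost no multiplications.
    return sum(memory_bank[addr] * weights[addr] for addr in index)

activations = [0, 3, 0, 0, 7, 0, 2, 0]
weights     = [5, 1, 9, 9, 2, 9, 4, 9]
bank, idx = store_and_index(activations)
print(idx, compute(bank, idx, weights))   # [1, 4, 6] 25
```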

17-03-2022 publication date

PRIVACY MANAGEMENT SYSTEMS AND METHODS

Number: US20220083934A1
Assignee: OneTrust, LLC

Data processing systems and methods, according to various embodiments, are adapted for mapping various questions regarding a data breach from a master questionnaire to a plurality of territory-specific data breach disclosure questionnaires. The answers to the questions in the master questionnaire are used to populate the territory-specific data breach disclosure questionnaires and determine whether disclosure is required in territory. The system can automatically notify the appropriate regulatory bodies for each territory where it is determined that data breach disclosure is required. 1. A method comprising: configuring a first prompt for requesting a first answer to a first master question of the master compliance readiness questionnaire, and', 'configuring a second prompt for requesting a second answer to a second master question of the master compliance readiness questionnaire;, 'generating, by computing hardware, a graphical user interface based on a master compliance readiness questionnaire for a first set of requirements for a first regulation and a second set of requirements for a second regulation applicable to operations performed by an entity, wherein generating the graphical user interface comprisesproviding, by the computing hardware, the graphical user interface for display, wherein displaying the graphical user interface involves providing the first prompt requesting the first answer to the first master question and providing the second prompt requesting the second answer to the second master question;receiving, by the computing hardware, the first answer and the second answer;accessing, by the computing hardware, an ontology that maps a data structure to the first set of requirements and the second set of requirements, wherein the data structure is configured to be populated via the master compliance readiness questionnaire;updating, by the computing hardware, a first element of the data structure for the entity with the first answer, wherein the ...

28-02-2019 publication date

Deferred response to a prefetch request

Number: US20190065378A1
Assignee: International Business Machines Corp

Modifying prefetch request processing. A prefetch request is received by a local computer from a remote computer. The local computer responds to a determination that execution of the prefetch request is predicted to cause an address conflict during an execution of a transaction of the local processor by determining an evaluation of the prefetch request prior to execution of the program instructions included in the prefetch request. The evaluation is based, at least in part, on (i) a comparison of a priority of the prefetch request with a priority of the transaction and (ii) a condition that exists in one or both of the local processor and the remote processor. Based on the evaluation, the local computer modifies program instructions that govern execution of the program instructions included in the prefetch request.

08-03-2018 publication date

COMPILER ARCHITECTURE FOR PROGRAMMABLE APPLICATION SPECIFIC INTEGRATED CIRCUIT BASED NETWORK DEVICES

Number: US20180067728A1
Assignee:

A processing network including a plurality of lookup and decision engines (LDEs) each having one or more configuration registers and a plurality of on-chip routers forming a matrix for routing the data between the LDEs, wherein each of the on-chip routers is communicatively coupled with one or more of the LDEs. The processing network further including an LDE compiler stored on a memory and communicatively coupled with each of the LDEs, wherein the LDE compiler is configured to generate values based on input source code that when programmed into the configuration registers of the LDEs cause the LDEs to implement the functionality defined by the input source code. 1. A processing network comprising:a plurality of lookup and decision engines (LDEs); andan LDE compiler stored on a non-transitory computer-readable memory and communicatively coupled with each of the LDEs, wherein the LDE compiler is configured to generate values based on input source code that enable the LDEs to implement functionality defined by the input source code, wherein the source code includes a plurality of assignment statements and a plurality of conditions, wherein the plurality of conditions describe when each of the assignment statements would be executed if a compiled version of the source code was executed, and further wherein the LDE compiler comprises a code parallelizer that based on the source code determines all logical execution permutations of the conditions permitted by the source code.2. The network of claim 1 , wherein the LDE compiler comprises a symbol mapper that creates one or more symbol tables that correlate one or more symbols of the input source code to one or more of the group consisting of an input layer claim 1 , a bit offset into the input layer claim 1 , and a length of the symbol.3. The network of claim 1 , wherein the LDE compiler comprises a code generator that generates instructions executable by the LDEs for one or more assignment statements of the source code.4. ...

10-03-2016 publication date

SYSTEMS AND METHODS FOR IMPROVING THE PERFORMANCE OF A QUANTUM PROCESSOR VIA REDUCED READOUTS

Number: US20160071021A1
Author: Raymond Jack
Assignee:

Techniques for improving the performance of a quantum processor are described. The techniques include reading out a fraction of the qubits in a quantum processor and utilizing one or more post-processing operations to reconstruct qubits of the quantum processor that are not read. The reconstructed qubits may be determined using a perfect sampler to provide results that are strictly better than reading all of the qubits directly from the quantum processor. The composite sample that includes read qubits and reconstructed qubits may be obtained faster than if all qubits of the quantum processor are read directly. 1. A computational system comprising: a plurality of qubits including a first set of qubits and a second set of qubits;', 'a plurality of coupling devices, wherein each coupling device provides controllable communicative coupling between two of the plurality of qubits;', 'a first readout subsystem responsive to a state of each of the qubits in the first set of qubits to generate a first set of detected samples, each detected sample in the first set of detected samples represents a respective one of the qubits in the first set of qubits;, 'at least one quantum processor comprisingat least one post-processing processor-based device communicatively coupled to the at least one quantum processor; and receives the first set of detected samples that represents the qubits in the first set of qubits; and', 'post-processes the first set of detected samples to generate a first set of derived samples, wherein each sample in the first set of derived samples represents a respective one of the qubits in the second set of qubits., 'at least one non-transitory computer-readable storage medium communicatively coupled to the at least one post-processing processor-based device and that stores at least one of processor-executable instructions or data, where in use the at least one post-processing processor-based device2. The computational system of wherein each coupling device is ...

27-02-2020 publication date

DYNAMIC THREAD STATUS RETRIEVAL USING INTER-THREAD COMMUNICATION

Number: US20200065159A1
Assignee:

A circuit arrangement and program product for dynamically providing a status of a hardware thread/hardware resource independent of the operation of the hardware thread/hardware resource using an inter-thread communication protocol. A master hardware thread may be configured to communicate status requests to associated slave hardware threads and/or hardware resources. Each slave hardware thread/hardware resource may be configured with hardware logic configured to automatically determine status information for the slave hardware thread/hardware resource and communicate a status response to the master hardware thread without interrupting processing of the slave hardware thread/hardware resource. 1. A circuit arrangement comprising: a slave hardware thread;', 'an inbox coupled to the slave hardware thread and configured to receive a status request from a master hardware thread disposed in a different integrated processor block; and', 'status logic coupled to the inbox and configured to determine a status associated with the slave hardware thread and communicate a status response to the master hardware thread based at least in part on the status, wherein the status logic is configured to determine the status associated with the slave hardware thread without interrupting processing of the slave hardware thread., 'a plurality of interconnected integrated processor blocks arranged in a network on a chip (NOC) configuration, a first integrated processor block among the plurality of integrated processor blocks comprising2. The circuit arrangement of claim 1 , wherein the first integrated processor block further comprises a status register configured to store status information for the slave hardware thread claim 1 , and the status logic is configured to determine the status associated with the slave hardware thread by analyzing the status register.3. The circuit arrangement of claim 1 , wherein the inbox is configured to receive a configuration message from the master ...

27-02-2020 publication date

DATA PROCESSING SYSTEMS FOR USE IN AUTOMATICALLY GENERATING, POPULATING, AND SUBMITTING DATA SUBJECT ACCESS REQUESTS

Number: US20200065519A1
Assignee: OneTrust, LLC

Computer systems and methods for: (1) analyzing electronic correspondence associated with a data subject (e.g., the emails within one or more email in-boxes associated with the data subject); (2) based on the analysis, identifying at least one entity that the data subject does not actively do business with (e.g., as evidenced by the fact that the data subject no longer opens emails from the entity, and/or has set up a rule to automatically delete emails received from the entity); and (3) in response to identifying the entity as an entity that the data subject no longer does business with, at least substantially automatically populating and/or submitting a data subject access request to the entity (e.g., to delete all personal information being processed by the entity). 1. A computer-implemented data processing method of automatically submitting a data subject access request, the method comprising: analyzing, by at least one computer processor, a plurality of pieces of electronic correspondence sent to a particular data subject from a particular entity; determining, by at least one computer processor based on the analysis, that the particular data subject has established a software rule to automatically direct electronic correspondence received from the particular entity to a particular correspondence folder; at least partially in response to determining that the particular data subject has established the software rule to automatically direct electronic correspondence received from the particular entity to the particular correspondence folder, determining, by at least one computer processor, that the particular data subject does not actively do business with the particular entity; and (1) a request to rectify inaccurate personal data of the particular data subject; (2) a request to access a copy of personal data of the particular data subject processed by the entity; (3) a request to restrict the processing of the personal data of the particular data ...

11-03-2021 publication date

DATA PROCESSING SYSTEMS FOR IDENTIFYING AND MODIFYING PROCESSES THAT ARE SUBJECT TO DATA SUBJECT ACCESS REQUESTS

Number: US20210073415A1
Assignee: OneTrust, LLC

In particular embodiments, in response to a data subject submitting a request to delete their personal data from an organization's systems, the system may: (1) automatically determine where the data subject's personal data is stored; (2) in response to determining the location of the data (which may be on multiple computing systems), automatically facilitate the deletion of the data subject's personal data from the various systems; and (3) determine a cause of the request to identify one or more processing activities or other sources that result in a high number of such requests. 1. A personal data processing and analysis system comprising: one or more processors; one or more data assets that store a plurality of personal data associated with a plurality of data subjects, each piece of the plurality of personal data being associated with a respective particular processing activity of a plurality of processing activities undertaken by an organization; and the computer memory stores one or more data models defining one or more data transfers among the one or more data assets; and receiving a plurality of data subject requests; analyzing each of the plurality of data subject requests to identify a respective associated processing activity of the plurality of processing activities; identifying a particular processing activity of the plurality of processing activities that is associated with at least a particular number of the plurality of data subject requests based on each respective associated processing activity; and analyzing each of the plurality of data subject requests to identify the respective associated processing activity of the plurality of processing activities comprises using one or more data mapping techniques to identify each respective associated processing activity; and using the one or more data mapping techniques to identify each respective associated processing activity comprises: accessing each of the one or more data models; and ...
