Всего найдено 54199. Отображено 199.
17-01-2019 дата публикации

СИСТЕМА И СПОСОБ ДОПОЛНЕНИЯ ИЗОБРАЖЕНИЯ СТИЛИЗОВАННЫМИ СВОЙСТВАМИ

Номер: RU2677573C2

Изобретение относится к процессу обработки изображения. Техническим результатом является расширение арсенала технических средств для дополнения объекта цифрового изображения стилизованными графическими свойствами. В способе дополнения объекта цифрового изображения стилизованными графическими свойствами идентифицируют на первом изображении первую версию объекта, обладающую первым набором графических свойств. Идентифицируют на втором изображении вторую версию объекта, обладающую вторым набором графических свойств. Извлекают первый и второй наборы графических свойств из первого и второго изображений. Создают третий набор графических свойств путем вычисления различий между первым и вторым наборами графических свойств. Используя третий набор графических свойств, дополняют первую версию объекта на первом изображении. 3 н. и 17 з.п. ф-лы, 8 ил., 1 табл.
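
Ниже — упрощённый набросок описанной в реферате схемы (не код из патента): наборы графических свойств представлены словарями с условными именами, третий набор вычисляется как разность второго и первого и затем применяется к первой версии объекта.

```python
# Набросок: графические свойства версии объекта представлены словарём {имя: число}.
def diff_properties(first: dict, second: dict) -> dict:
    """Третий набор свойств — поэлементная разность второго и первого наборов."""
    keys = set(first) | set(second)
    return {k: second.get(k, 0.0) - first.get(k, 0.0) for k in keys}

def augment(first: dict, delta: dict) -> dict:
    """Дополняем первую версию объекта вычисленной разностью свойств."""
    return {k: first.get(k, 0.0) + delta.get(k, 0.0) for k in set(first) | set(delta)}

# Пример: яркость и насыщенность объекта на двух изображениях (значения условные).
v1 = {"brightness": 0.40, "saturation": 0.30}
v2 = {"brightness": 0.55, "saturation": 0.45}
delta = diff_properties(v1, v2)   # {'brightness': 0.15, 'saturation': 0.15}
print(augment(v1, delta))         # свойства первой версии, дополненные разностью
```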

Подробнее
27-05-2013 дата публикации

УСТАНОВКА, СПОСОБ И СИСТЕМА КЭШИРОВАНИЯ

Номер: RU2483347C2
Принадлежит: ИНТЕЛ КОРПОРЕЙШН (US)

Изобретение относится к вычислительной технике. Технический результат заключается в улучшении кэширования трансляции адресов при виртуализации для направленного ввода/вывода (VTd). Устройство кэширования содержит: кэш-память для хранения одной или нескольких записей, в которой каждая запись соответствует запросу на доступ к памяти ввода-вывода и каждая запись должна содержать физический адрес гостя (GPA), соответствующий запросу на доступ к памяти ввода/вывода, и соответствующий физический адрес хозяина (НРА); и первую логику, которая получает первый запрос на доступ к памяти ввода/вывода от оконечного устройства и определяет, включает ли первый запрос на доступ к памяти ввода/вывода подсказку будущего доступа, связанную с адресом, причем подсказка будущего доступа должна указывать хозяину, может ли выполняться доступ к адресу в будущем, и записи в кэш-памяти, которые не содержат подсказку, соответствующую предыдущим запросам на доступ к памяти ввода/вывода, содержащим подсказки на будущий ...

Подробнее
27-02-2009 дата публикации

СПОСОБЫ И УСТРОЙСТВА ДЛЯ УПРЕЖДАЮЩЕГО УПРАВЛЕНИЯ ПАМЯТЬЮ

Номер: RU2348067C2

Изобретение относится к системам и способам упреждающего, гибкого и самонастраивающегося управления памятью. Техническим результатом является повышение эффективности управления памятью за счет загрузки и поддержания данных, которые, вероятно, потребуются в памяти, прежде чем эти данные действительно потребуются. Система содержит механизмы, направленные на контроль предшествующего использования памяти, анализ использования памяти, обновление памяти страницами высокой значимости, эффективность упреждающей выборки ввода/вывода и агрессивное управление диском. На основе информации об использовании памяти страницам присваиваются приоритеты согласно относительной значимости, и механизмы действуют для осуществления упреждающей выборки и/или поддержания в памяти более значимых страниц. Страницы выбираются с упреждением и поддерживаются в резервном наборе страниц с присвоенными приоритетами, включающем ряд поднаборов, причем более значимые страницы сохраняются в памяти с приоритетом перед менее ...
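
Идею резервного набора страниц с подсписками по приоритетам можно проиллюстрировать условным наброском на Python; структура набора и политика вытеснения здесь предположительные, а не взятые из патента.

```python
from collections import OrderedDict

class StandbySet:
    """Набросок резервного набора страниц с подсписками по приоритету:
    при нехватке места первыми вытесняются наименее значимые страницы."""
    def __init__(self, capacity: int, levels: int = 4):
        self.capacity = capacity
        self.levels = [OrderedDict() for _ in range(levels)]  # 0 — наименее значимые

    def __len__(self):
        return sum(len(level) for level in self.levels)

    def add(self, page: int, priority: int, data: bytes) -> None:
        """Страница, выбранная с упреждением, помещается в подсписок своего приоритета."""
        while len(self) >= self.capacity:
            self._evict_lowest()
        self.levels[priority][page] = data

    def _evict_lowest(self) -> None:
        for level in self.levels:          # идём от менее значимых к более значимым
            if level:
                level.popitem(last=False)  # FIFO внутри уровня
                return

    def take(self, page: int):
        """Обращение к странице: она уже в памяти, дорогое чтение с диска не нужно."""
        for level in self.levels:
            if page in level:
                return level.pop(page)
        return None
```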

Подробнее
18-04-2018 дата публикации

СПОСОБ, УСТРОЙСТВО И КОМПЬЮТЕРНЫЙ НОСИТЕЛЬ ДАННЫХ ДЛЯ ПЕРЕМЕЩЕНИЯ ДАННЫХ

Номер: RU2651216C2
Автор: ЧЖОУ Ян (CN)
Принадлежит: ЗТЕ КОРПОРЕЙШН (CN)

Изобретение относится к способу и устройству иерархического хранения данных. Технический результат изобретения заключается в возможности гибкого перемещения данных между устройствами хранения данных. Способ включает: запуск перемещения данных на основании политики динамического размещения данных; определение данных, которые должны быть перемещены в файлах, связанных с ранее перемещенными файлами, как данных, которые должны быть перемещены в соответствии с взаимной ассоциацией между файлами; и определение того, следует ли перемещать данные, которые должны быть перемещены, на основании механизма управления скоростью перемещения. 3 н. и 12 з.п. ф-лы, 4 ил.

Подробнее
24-09-2018 дата публикации

СЕЛЕКТИВНОЕ ОБЕСПЕЧЕНИЕ СОБЛЮДЕНИЯ ЦЕЛОСТНОСТИ КОДА, ОБЕСПЕЧИВАЕМОЕ МЕНЕДЖЕРОМ ВИРТУАЛЬНОЙ МАШИНЫ

Номер: RU2667713C2

Изобретение относится к вычислительной технике. Технический результат заключается в защите от вредоносных программ. Способ, реализованный в вычислительном устройстве, для защиты от вредоносных программ включает в себя исполняемый код, подлежащий исполнению виртуальным процессором виртуальной машины, причем виртуальная машина управляется менеджером виртуальной машины, определение, должна ли страница памяти быть исполняемой в режиме ядра или в пользовательском режиме, предоставление возможности в ответ на определение, что страница памяти не должна быть исполняемой в режиме ядра, определения операционной системой виртуальной машины, предоставить ли возможность исполнения исполняемого кода в пользовательском режиме. 2 н. и 8 з.п. ф-лы, 5 ил.

Подробнее
27-09-2002 дата публикации

СИСТЕМА ПЕРЕМЕЩЕНИЯ ДАННЫХ В РЕАЛЬНОМ ВРЕМЕНИ И СПОСОБ ПРИМЕНЕНИЯ РАЗРЕЖЕННЫХ ФАЙЛОВ

Номер: RU2190248C2

Изобретение относится к способу управления иерархической памятью в компьютерной сети. Техническим результатом является обеспечение перемещения файла данных и его обратного перемещения в памяти компьютерной сети без использования фиктивного файла. Способ для перемещения данных в реальном времени в сетевой компьютерной системе использует признак операционной системы - разреженный файл - для представления перемещенного файла. Расширенный файл занимает минимальную величину физического пространства в служебном файловом процессоре, но определяется как имеющий те же размер и атрибуты, что и исходный файл. Когда пользователь обращается к перемещенному файлу, этот файл кажется постоянно присутствующим в служебном файловом процессоре и прозрачно возвращается в служебный файловый процессор из оптимизированного положения хранения в системе управления иерархической памятью. 7 з.п. ф-лы, 6 ил., 3 табл.
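
Ниже — минимальный набросок той же идеи средствами обычной файловой системы: на месте перемещённого файла остаётся разреженная заглушка того же логического размера. Имена функций migrate/recall условные; предполагается, что исходный файл и архив находятся в одной файловой системе.

```python
import os

def migrate(path: str, archive_dir: str) -> str:
    """Переносим файл в «архив», а на его месте оставляем разреженный файл-заглушку
    того же логического размера: физического места он почти не занимает."""
    size = os.path.getsize(path)
    archived = os.path.join(archive_dir, os.path.basename(path))
    os.replace(path, archived)          # данные перемещены из служебного файлового процессора
    with open(path, "wb") as stub:
        stub.truncate(size)             # разреженная заглушка с исходным размером
    return archived

def recall(path: str, archived: str) -> None:
    """Прозрачный возврат: при обращении пользователя данные возвращаются на место."""
    os.replace(archived, path)
```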

Подробнее
04-10-2022 дата публикации

ОТВЕТ С ИДЕНТИФИКАТОРОМ ФУНКЦИИ ПРОЦЕССОРА ДЛЯ ВИРТУАЛИЗАЦИИ

Номер: RU2780973C2

Изобретение относится к области вычислительной техники. Технический результат заключается в повышении безопасности при работе реальных вычислительных машин с виртуальными машинами. Технический результат достигается за счет того, что в способе перед приемом исходящего запроса значения идентификатора (ID) функции процессора выполняются следующие этапы: прием посредством физического процессора значения ID функции процессора из супервизорного раздела; сохранение посредством физического процессора значения ID функции процессора в аппаратном регистре, доступном посредством физического процессора; прием посредством физического процессора исходящего из гостевого раздела запроса значения ID функции процессора; и предоставление, посредством физического процессора без вмешательства супервизорного раздела, запрошенного значения ID функции процессора в гостевой раздел из упомянутого аппаратного регистра, доступного посредством физического процессора. 3 н. и 17 з.п. ф-лы, 5 ил.

Подробнее
26-11-2018 дата публикации

Номер: RU2016143088A3
Автор:
Принадлежит:

Подробнее
04-04-2018 дата публикации

Устройство хранения данных

Номер: RU178459U1

Полезная модель относится к области систем хранения данных и, в частности к области многопротокольных устройств хранения данных, поддерживающих файловые и блочные протоколы доступа.1. Устройство хранения данных, содержащее контроллер, N твердотельных дисков (HDD), соединенных входами/выходами с контроллером, и два блока питания, отличающееся тем, что в него дополнительно введены кластер 1U серверов, первый вход/выход которого соединен с первым входом/выходом контроллера, коммутатор, второй вход/выход которого соединен со вторым входом/выходом кластера 1U серверов, а третий вход/выход - со вторым входом/выходом контроллера; блок IP мониторинга, первый вход/выход которого соединен с первым входом/выходом коммутатора; система питания в составе микроконтроллера, постоянного запоминающего устройства, оперативного запоминающего устройства, блока ввода информации, блока вывода информации, блока индикации, объединенных между собой шиной адреса и данных; блока дистанционного контроля и управления ...

Подробнее
12-09-2019 дата публикации

Устройство хранения данных

Номер: RU192299U1

Полезная модель относится к области систем хранения данных и, в частности, к области многопротокольных устройств хранения данных, поддерживающих файловые и блочные протоколы доступа и может быть использована для увеличения объема записи и надежного хранения, а также для повышения эффективности теплообмена жидкостного радиатора для охлаждения центрального процессора (CPU) и программируемой логической интегральной схемы FPGA.1. Устройство хранения данных, содержащее контроллер, N HDD, соединенные входами/выходами с контроллером, кластер 1U серверов с FPGA в составе сетевого адаптера с поддержкой компрессии, центрального процессора (CPU), первый вход/выход которого соединен со вторым входом/выходом сетевого адаптера с поддержкой компрессии, третий вход/выход соединен с первым входом/выходом контроллера, программируемая логическая интегральная схема FPGA, первый вход/выход которой соединен со вторым входом/выходом центрального процессора; коммутатор, второй вход/выход которого соединен со вторым ...

Подробнее
27-08-2013 дата публикации

УСТРОЙСТВО, СПОСОБ И СИСТЕМА УПРАВЛЕНИЯ МАТРИЦАМИ

Номер: RU2491616C2
Принадлежит: ИНТЕЛ КОРПОРЕЙШН (US)

Изобретение относится к вычислительной технике. Технический результат заключается в увеличении быстродействия. Устройство управления матрицами содержит коммутирующую матрицу системы на кристалле (OSF), используемую для связи процессора с логическим блоком, и память для хранения теневого адреса, соответствующего физическому адресу, в ответ на запрос на уровне пользователя, в котором OSF содержит логику для определения физического адреса из теневого адреса, выполненную с возможностью определять физический адрес и инвертировать один или несколько наивысших битов теневого адреса, чтобы определить физический адрес. 3 н. и 24 з.п. ф-лы, 7 ил.
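
Ключевой приём — получение физического адреса инвертированием нескольких старших битов теневого адреса — можно показать парой строк; разрядность адреса и число инвертируемых битов здесь выбраны условно.

```python
def shadow_to_physical(shadow_addr: int, addr_bits: int = 32, invert_bits: int = 2) -> int:
    """Определяем физический адрес, инвертируя несколько старших битов теневого адреса."""
    mask = ((1 << invert_bits) - 1) << (addr_bits - invert_bits)
    return shadow_addr ^ mask

phys = 0x1234_5678
shadow = shadow_to_physical(phys)       # преобразование симметрично: XOR обратим
assert shadow_to_physical(shadow) == phys
print(hex(shadow))
```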

Подробнее
07-05-2018 дата публикации

Абонентское сетевое устройство с виртуализированными сетевыми функциями

Номер: RU179300U1

Полезная модель относится к абонентским сетевым устройствам (vCPE). Технический результат заключается в расширении функций устройства. Устройство с виртуализированными сетевыми функциями, содержит микроконтроллер, постоянное запоминающее устройство, оперативное запоминающее устройство, связанные шиной с микроконтроллером, порты ввода информации, соединенные шиной адреса и данных с микроконтроллером, отличающееся тем, что в него дополнительно введены блок Wi-Fi, блок индикации, GPS Tracker, соединенные шиной адреса и данных с микроконтроллером, датчик температуры окружающей среды, датчик температуры системы охлаждения, выходы которых соединены с входами соответствующих портов ввода информации, модуль высокоскоростной обработки пакетных данных с неблокируемой высокоскоростной матрицей коммутации на базе ПЛИС (FPGA), первый вход/выход которого связан шиной с микроконтроллером, приемопередающие модули Ethernet, первые входы/выходы которых соединены шиной со вторыми входами/выходами модуля высокоскоростной ...

Подробнее
27-03-2008 дата публикации

ВИЗУАЛИЗАЦИЯ ПОЛЬЗОВАТЕЛЬСКОГО ИНТЕРФЕЙСА

Номер: RU2006133383A
Принадлежит:

... 1. Способ визуализации пользовательского интерфейса для устройства, причем способ, содержащий этапы, на которых обеспечивают множество модулей-деятелей, причем каждый из множества модулей-деятелей связан с соответствующим элементом пользовательского интерфейса и содержит один или более атрибутов, определяющих внешний вид и функциональные возможности соответствующего модуля-деятеля, и каждый из атрибутов модулей-деятелей содержит язык разметки; обеспечивают модуль визуализации для приема одного или более атрибутов от одного или более модулей-деятелей из множества модулей-деятелей и визуализируют пользовательский интерфейс исключительно в соответствии с принятыми атрибутами модулей-деятелей.2. Способ по п.1, в котором, если атрибут модуля-деятеля обновляется, то это обновление принимается модулем визуализации и пользовательский интерфейс обновляется соответствующим образом.3. Способ по п.2, в котором атрибут модуля-деятеля обновляется в ответ на обновление со стороны пользователя.4. Способ ...

Подробнее
27-11-2011 дата публикации

СПОСОБ И СИСТЕМА РАСКРУТКИ СТЕКА

Номер: RU2010119541A
Принадлежит:

... 1. Способ раскрутки стека, заключающийся в том, что в системе, содержащей вычислительное устройство, включающее в себя соединенные между собой центральный процессор, память и средство управления раскруткой стека, для каждой процедуры, участвующей в раскрутке стека, выполняют следующие операции:
- анализируют инструкции кода пролога процедуры или первые инструкции кода процедуры, при этом определяют те инструкции, которые уменьшают указатель вершины стека;
- вычисляют размер кадра, при этом суммируют величины, на которые указатель вершины стека должен быть уменьшен;
- анализируют инструкции кода пролога процедуры или первые инструкции кода процедуры, при этом определяют инструкцию, сохраняющую адрес возврата в вызывающую процедуру;
- определяют смещение относительно начала кадра, по которому находится адрес возврата;
- вычисляют начальные адреса кадров и
- считывают адрес возврата в каждом кадре.
2. Система раскрутки стека, содержащая вычислительное устройство, включающее в ...
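
Ниже — сильно упрощённая модель описанной раскрутки стека: «инструкции» пролога заданы кортежами, стек — списком слов. Формат инструкций и соглашение о смещениях — допущения для примера, а не детали патента.

```python
# Упрощённая модель раскрутки стека. Стек — список слов (вершина — индекс 0);
# пролог процедуры описан «инструкциями»:
#   ("sub_sp", n)     — уменьшение указателя вершины стека на n слов;
#   ("store_ra", off) — сохранение адреса возврата по смещению off от начала кадра.
def analyze_prologue(prologue):
    frame_size = sum(n for op, n in prologue if op == "sub_sp")        # размер кадра
    ra_offset = next(off for op, off in prologue if op == "store_ra")  # смещение адреса возврата
    return frame_size, ra_offset

def unwind(stack, sp, prologues):
    """Вычисляем начальный адрес каждого кадра и считываем адрес возврата в нём."""
    return_addresses = []
    for prologue in prologues:          # по одной процедуре на кадр, начиная с вершины стека
        frame_size, ra_offset = analyze_prologue(prologue)
        frame_base = sp + frame_size    # начало кадра
        return_addresses.append(stack[frame_base - 1 - ra_offset])
        sp = frame_base                 # переходим к кадру вызывающей процедуры
    return return_addresses

# Пример: два вложенных вызова; адреса возврата лежат внутри соответствующих кадров.
stack = [0xBEEF, 0, 0, 0xCAFE, 0, 0, 0, 0]
prologues = [[("sub_sp", 3), ("store_ra", 2)], [("sub_sp", 5), ("store_ra", 4)]]
print([hex(a) for a in unwind(stack, 0, prologues)])   # ['0xbeef', '0xcafe']
```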

Подробнее
10-10-2010 дата публикации

УПРАВЛЕНИЕ СОСТОЯНИЕМ РАСПРЕДЕЛЕННЫХ АППАРАТНЫХ СРЕДСТВ В ВИРТУАЛЬНЫХ МАШИНАХ

Номер: RU2009111225A
Принадлежит:

... 1. Система для управления операциями в среде виртуальных машин, содержащая:
по меньшей мере один объект прокси-драйвера, расположенный в первом разделе, причем указанный по меньшей мере один объект прокси-драйвера является прокси-драйвером для устройства;
по меньшей мере один объект драйвера устройства, расположенный в стеке драйверов во втором разделе, причем указанный по меньшей мере один объект драйвера сконфигурирован для управления упомянутым устройством;
по меньшей мере один объект первого фильтра, расположенный под упомянутым по меньшей мере одним объектом драйвера устройства в упомянутом стеке драйверов, причем упомянутый по меньшей мере один объект первого фильтра предоставляет интерфейс для упомянутого по меньшей мере одного объекта драйвера устройства, чтобы по меньшей мере один объект драйвера устройства участвовал в шинных функциях, включая управление упомянутым устройством; и
по меньшей мере один объект второго фильтра, расположенный над упомянутым по меньшей мере ...

Подробнее
10-06-2010 дата публикации

СПОСОБ И УСТРОЙСТВО ДЛЯ КЭШИРОВАНИЯ КОМАНД ПЕРЕМЕННОЙ ДЛИНЫ

Номер: RU2008147131A
Принадлежит:

... 1. Способ кэширования команд переменной длины, содержащий этапы, на которых:
записывают данные команд в строку кэша и
сохраняют резервную копию данных команд для одной или более позиций границы кэша.
2. Способ по п.1, в котором этап сохранения резервной копии данных команд для одной или более позиций границы кэша содержит этап, на котором копируют во вспомогательное запоминающее устройство данные команд для одной или более позиций границы кэша.
3. Способ по п.2, в котором вспомогательное запоминающее устройство содержит одно из массива тегов, ассоциативно связанного с кэшем команд, одного или более элементов резервного запоминающего устройства, включенного или ассоциативно связанного с кэшем команд, отдельного массива памяти и кэша верхнего уровня.
4. Способ по п.2, в котором этап копирования во вспомогательное запоминающее устройство данных команд для одной или более позиций границы кэша содержит этап, на котором копируют данные команд для позиции границы внутри строки кэша во ...

Подробнее
27-06-2015 дата публикации

УПРАВЛЕНИЕ ДУБЛИРОВАННЫМ ВИРТУАЛЬНЫМ ХРАНИЛИЩЕМ НА САЙТАХ ВОССТАНОВЛЕНИЯ

Номер: RU2013156675A
Принадлежит:

... 1. Устройство, содержащее:дублированное виртуальное хранилище дублированной виртуальной машины, включающее в себя, по меньшей мере, дублированный базовый виртуальный диск, по существу соответствующий первичному базовому виртуальному диску, который должен быть дублирован;приемник, выполненный с возможностью принимать множество копий разностных дисков множества типов копий, каждый из которых связан с первичным базовым виртуальным диском; имодуль управления дублированием, выполненный с возможностью размещать принятые копии разностных дисков упомянутого множества типов копии относительно дублированного базового виртуального диска, как разностные диски были бы размещены относительно первичного базового виртуального диска.2. Устройство по п. 1, в котором первый из упомянутого множества типов копий содержит согласованный с приложениями тип копии, основанный на данных приложения, которые были подготовлены к созданию копии, при этом второй из упомянутого множества типов копий содержит соответствующий ...

Подробнее
27-03-2008 дата публикации

КОНТЕЙНЕР ДАННЫХ ДЛЯ ДАННЫХ КОНТЕНТА ПОЛЬЗОВАТЕЛЬСКОГО ИНТЕРФЕЙСА

Номер: RU2006133385A
Принадлежит:

... 1. Способ предоставления пользовательского интерфейса мобильному устройству, содержащий этапы, на которых а) создают контейнер, содержащий исполняемый код для пользовательского интерфейса; один или более ресурсов контента, предназначенных для использования в пользовательском интерфейсе, и метаданные, относящиеся к каждому ресурсу контента, причем исполняемый код, каждый ресурс контента и метаданные сохраняют как преобразованные в последовательную форму объекты в контейнере; b) передают контейнер в одно или более мобильных устройств; с) извлекают содержимое контейнера в каждом мобильном устройстве; d) исполняют код, чтобы сгенерировать пользовательский интерфейс для мобильного устройства.2. Способ по п.1, дополнительно содержащий этап, на котором е) обрабатывают содержимое контейнера в формат, предназначенный для передачи в мобильное устройство, причем этап е) выполняют после этапа а) и перед этапом b).3. Способ по п.1, в котором метаданные, относящиеся к каждому ресурсу контента, относятся ...

Подробнее
27-02-2011 дата публикации

СПОСОБ И УСТРОЙСТВО ДЛЯ УСТАНОВКИ ПОЛИТИКИ КЭШИРОВАНИЯ В ПРОЦЕССОРЕ

Номер: RU2009131695A
Принадлежит:

... 1. Способ определения политики кэширования, содержащий этапы, на которых:
принимают информацию о политике кэширования, связанную с одним или более целевых запоминающих устройств, выполненных с возможностью хранения информации, используемой процессором; и
устанавливают одну или более политик кэширования, основываясь на информации о политике кэширования.
2. Способ по п.1, в котором этап приема информации о политике кэширования содержит этап приема представляемой по собственной инициативе информации о политике кэширования.
3. Способ по п.1, в котором этап приема информации о политике кэширования содержит этапы, на которых:
направляют обращение к памяти на одно из целевых запоминающих устройств; и
принимают информацию о политике кэширования в ответ на обращение к памяти.
4. Способ по п.3, в котором этап приема информации о политике кэширования в ответ на обращение к памяти содержит этап приема информации о политике кэширования, обеспеченной целевым запоминающим устройством, на ...

Подробнее
20-08-2011 дата публикации

УСТАНОВКА, СПОСОБ И СИСТЕМА КЭШИРОВАНИЯ

Номер: RU2010104040A
Принадлежит:

... 1. Устройство кэширования, содержащее:
кэш-память для хранения одной или нескольких записей, в которой каждая запись соответствует запросу на доступ к памяти между физическим адресом гостя (GPA) и физическим адресом хозяина (НРА); и
первую логику, которая получает первый запрос на доступ к памяти ввода/вывода от оконечного устройства и определяет, включает ли первый запрос на доступ к памяти ввода/вывода подсказку доступа, связанную с адресом,
в котором первая логика должна обеспечить обновление одного или нескольких битов соответствующей записи в кэш-памяти в ответ на определение, что первый запрос на доступ к памяти ввода/вывода включает подсказку.
2. Устройство по п.1, в котором оконечное устройство должно формировать запрос на доступ к памяти.
3. Устройство по п.1, дополнительно содержащее логику предварительной выборки для предварительной выборки данных в кэш-память в ответ на запрос, выданный оконечным устройством.
4. Устройство по п.1, в котором оконечное устройство включает ...

Подробнее
06-07-2006 дата публикации

DATENMIGRATIONSSYSTEM UND -VERFAHREN UNTER VERWENDUNG VON UNDICHTEN DATEIEN

Номер: DE0069636192D1
Автор: LAM TUNG

Подробнее
31-10-2013 дата публикации

Verwalten von komprimiertem Speicher unter Verwendung gestaffelter Interrupts

Номер: DE112011103408T5

Es werden Systeme und Verfahren zum Verwalten von Speicher bereitgestellt. Ein bestimmtes Verfahren kann das Auslösen einer Speicherkomprimierungsoperation beinhalten. Das Verfahren kann ferner das Auslösen eines ersten Interrupts beinhalten, der so konfiguriert ist, dass er als Reaktion auf eine erste erkannte Speicherstufe einen ersten Prozess beeinflusst, der auf einem Prozessor ausgeführt wird. Ein zweiter ausgelöster Interrupt kann so konfiguriert sein, dass er als Reaktion auf eine zweite erkannte Speicherstufe den ersten Prozess beeinflusst, der auf dem Prozessor ausgeführt wird, und ein dritter Interrupt kann ausgelöst werden, damit er als Reaktion auf eine dritte erkannte Speicherstufe den ersten Prozess beeinflusst, der auf dem Prozessor ausgeführt wird. Es wird zumindest die erste, die zweite oder die dritte Speicherstufe durch die Speicherkomprimierungsoperation beeinflusst.

Подробнее
13-09-2018 дата публикации

Anzeige eines Schreibvorgangs mit Löschen über eine Benachrichtigung von einem Plattenlaufwerk, das Blöcke einer ersten Blockgrösse innerhalb von Blöcken einer zweiten Blockgrösse emuliert

Номер: DE112012002641B4

Verfahren zur Emulation eines Plattenlaufwerks mit einer kleineren ersten Blockgröße durch ein Plattenlaufwerk mit einer größeren zweiten Blockgröße, wobei das Plattenlaufwerk über die Emulation jeweils eine Vielzahl emulierter Blöcke der ersten Blockgröße in jedem Block der zweiten Blockgröße speichert, aufweisend die Schritte::Empfangen einer Anfrage durch ein Plattenlaufwerk, mindestens einen Block einer ersten Blockgröße zu schreiben,Lesen eines ausgewählten Blocks der zweiten Blockgröße, in den der mindestens eine Block der ersten Blockgröße über die Emulation zu schreiben ist;Wenn beim Lesen des ausgewählten Blocks der zweiten Blockgröße ein Lesefehler auftritt, Durchführen der folgenden Verfahrensschritte durch das Plattenlaufwerk:Durchführen eines Schreibvorgangs mit Löschen an ausgewählten emulierten Blöcken der ersten Blockgröße, die das Erzeugen des Lesefehlers verursacht haben, indem diese Blöcke der ersten Blockgröße gelöscht und als nicht länger gültig angegeben werden;Verfolgen ...

Подробнее
14-08-2014 дата публикации

Mehrkernverknüpfung in einem Netzprozessor

Номер: DE112012004551T5
Принадлежит: CAVIUM, INC.

Ein Netzprozessor umfasst mehrere Prozessorkerne zum Verarbeiten von Paketdaten. Zum Bereitstellen von Zugang zu einem Speicher-Teilsystem für die Prozessorkerne leitet eine Verknüpfungsschaltung Kommunikationen zwischen den Prozessorkernen und dem L2-Cache und anderen Speichervorrichtungen. Die Prozessorkerne sind in mehrere Gruppen aufgeteilt, wobei jede Gruppe einen einzelnen Bus gemeinsam benutzt und der L2-Cache in eine Anzahl von Bänken aufgeteilt ist, wobei jede Bank Zugang zu einem getrennten Bus besitzt. Diese Verknüpfungsschaltung verarbeitet Anforderungen zum Speichern und Abrufen von Daten aus den Prozessorkernen über mehrere Busse und verarbeitet Antworten zum Rücksenden von Daten aus den Cache-Bänken. Als Ergebnis bietet der Netzprozessor Speicherzugang mit hoher Bandbreite für mehrere Prozessorkerne.

Подробнее
13-12-2018 дата публикации

Verfahren und Vorrichtungen zur Verwaltung eines Prozesses unter einer Speicherbeschränkung

Номер: DE112017001783T5
Принадлежит: Intel Corporation

Verfahren und Vorrichtungen zur Verwaltung eines Prozesses unter einer Speicherbeschränkung sind hierin offenbart. Ein beispielhaftes Verfahren beinhaltet das Erkennen, dass ein Prozess von einem Vordergrundbetriebsmodus in einen Hintergrundbetriebsmodus wechseln wird. Ein projizierter Out-of-Memory-Score wird ohne Wechseln des Prozesses in den Hintergrundbetriebsmodus berechnet. Der projizierte Out-of-Memory-Score wird mit einem Score-Grenzwert ohne Wechseln des Prozesses in den Hintergrundbetriebsmodus verglichen. Der Prozess wird ohne Wechseln des Prozesses in den Hintergrundbetriebsmodus beendet, wenn der projizierte Out-of-Memory-Score größer als der Score-Grenzwert ist.
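
Суть описанного способа (перед переводом процесса в фоновый режим вычисляется прогнозируемая оценка нехватки памяти и сравнивается с порогом) можно передать коротким наброском; сама формула оценки и значения порогов здесь условные.

```python
def projected_oom_score(rss_pages: int, total_pages: int, background_bonus: int = 300) -> int:
    """Условная прогнозируемая оценка: доля занятой памяти (0..1000) плюс надбавка
    за предполагаемый фоновый режим."""
    return rss_pages * 1000 // total_pages + background_bonus

def on_transition_to_background(proc: dict, total_pages: int, score_limit: int = 600) -> str:
    """Решение принимается до фактического перевода процесса в фоновый режим."""
    score = projected_oom_score(proc["rss_pages"], total_pages)
    if score > score_limit:
        return "terminate"            # завершить процесс, не переводя его в фон
    return "move_to_background"

print(on_transition_to_background({"rss_pages": 200_000}, total_pages=500_000))  # terminate
```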

Подробнее
23-10-2017 дата публикации

Zweiphasige Befehlspuffer zum Überlappen von IOMMU-Abbildung und Lesevorgängen zweitrangiger Datenspeicher

Номер: DE202017103915U1
Автор:
Принадлежит: GOOGLE INC.

Computerprogrammprodukt, das Programmcode umfasst, der, wenn er durch einen oder mehrere Prozessoren ausgeführt wird, Folgendes ausführt: Kopieren von Daten von einer gegebenen Adresse eines zweitrangigen Datenspeichers in einen internen Puffer eines Speichercontrollers in einer ersten Phase unter Verwendung eines oder mehrerer Prozessoren, wobei das Kopieren wenigstens teilweise während des Abbildens einer spezifizierten physikalischen Adresse in eine Eingabe/Ausgabe-Datenspeichermanagementeinheit (IOMMU) durch ein Betriebssystem stattfindet; Bestimmen, ob eine zweite Phase ausgelöst wird, mit dem einen oder den mehreren Prozessoren; und Kopieren der Daten aus dem internen Puffer des Speichercontrollers an die spezifizierte physikalische Adresse des dynamischen Schreib-Lese-Datenspeichers (DRAM) mit dem einen oder den mehreren Prozessoren, falls die zweite Phase ausgelöst wird.

Подробнее
15-12-2011 дата публикации

Persistenter Speicher für einen Prozessorhauptspeicher

Номер: DE102011076894A1
Принадлежит:

Der Gegenstand, der hier offenbart wird, bezieht sich auf ein System aus einem oder mehreren Prozessoren, das persistenten Speicher umfasst.

Подробнее
30-04-2009 дата публикации

RAID mit Hochleistungs- und Niedrigleistungsplattenspieler

Номер: DE602005013322D1

Подробнее
13-11-1986 дата публикации

Номер: DE0001774296C2

Подробнее
19-12-2002 дата публикации

Chip card memory management method has virtual addresses of defined virtual address zone assigned to physical addresses of memory locations

Номер: DE0010127179A1
Принадлежит:

The management method defines a virtual address zone for addressing the memory (100), before assigning virtual addresses (VA) of the virtual address zone to physical addresses (PA) of the memory locations (102), with accessing of the memory controlled using the virtual addresses.

Подробнее
26-11-1997 дата публикации

Managing compressed ROM image code

Номер: GB0009720316D0
Автор:
Принадлежит:

Подробнее
27-10-2010 дата публикации

Method and system for implementing a virtual storage pool in a virtual environment

Номер: GB0201015698D0
Автор:
Принадлежит:

Подробнее
15-02-2012 дата публикации

Checkpointing in speculative versioning caches

Номер: GB0201200165D0
Автор:
Принадлежит:

Подробнее
27-01-1988 дата публикации

METHOD OF RAPIDLY OPENING DISK FILES IDENTIFIED BY PATH NAMES

Номер: GB0008728924D0
Автор:
Принадлежит:

Подробнее
06-06-2007 дата публикации

Method and apparatus for pushing data into a processor cache

Номер: GB0002432942A
Принадлежит:

An arrangement is provided for using a centralized pushing mechanism to actively push data into a processor cache in a computing system with at least one processor. Each processor may comprise one or more processing units, each of which may be associated with a cache. The centralized pushing mechanism may predict data requests of each processing unit in the computing system based on each processing unit's memory access pattern. Data predicted to be requested by a processing unit may be moved from a memory to the centralized pushing mechanism which then sends the data to the requesting processing unit. A cache coherency protocol in the computing system may help maintain the coherency among all caches in the system when the data is placed into a cache of the requesting processing unit.

Подробнее
22-08-1990 дата публикации

DATA TRANSFER BUS WITH VIRTUAL MEMORY.

Номер: GB2228349A
Принадлежит:

An improved high speed data transfer bus with virtual memory capability is disclosed. The bus has particular applications in computer systems which employ peripheral devices. The bus allows high speed data transfer through the use of a virtual memory scheme. Moreover, the present invention minimizes the number of lines required to implement the bus. The present invention also minimizes the amount of time a particular device is required to wait before it can access the bus and complete a data transfer. Moreover, the present invention employs control signals that are driven both active and inactive, facilitating interfacing the bus to low-power CMOS technology.

Подробнее
25-04-1990 дата публикации

VIRTUAL MACHINE SYSTEM

Номер: GB0002224140A
Принадлежит:

A virtual machine system which includes a plurality of virtual machines by using a computer system of a multi-processor configuration having a plurality of real instruction processors and a real main storage which is divided into a plurality of storage regions to be allocated to the virtual machines, respectively. Each of the virtual machines is so organized as not to make access to the regions allocated to the other virtual machines. When one and the same virtual machine includes a plurality of real instruction processors, invalidation of entry of a buffer storage of another real instruction processor as conditioned by execution of a predetermined instruction by a real instruction processor is performed only for the other real instruction processor assigned to the same virtual machine as the real instruction processor and is inhibited from affecting the real instruction processors assigned to the other virtual machines.

Подробнее
01-03-1995 дата публикации

Critical line first paging system

Номер: GB0002259795B

Подробнее
23-03-1994 дата публикации

Hierarchic data storage system

Номер: GB0009401522D0
Автор:
Принадлежит:

Подробнее
21-10-2015 дата публикации

Graphics processing method for processing sub-primitives

Номер: GB0201515885D0
Автор:
Принадлежит:

Подробнее
14-02-1996 дата публикации

Multi-way set-associative cache memory

Номер: GB0002292237A
Принадлежит:

A two-or-more way set-associative cache memory includes both a set array 201 and a data array 221. The data array comprises multiple elements, each of which can contain a cache line. The set array comprises multiple sets, with each set in the set array corresponding to an element in the data array. Each set in the set array contains a tag and state information which indicate whether an address received by the cache memory matches the cache line contained in its corresponding element of the data array. If the tag of a particular set matches the address received by the cache memory, then the cache line associated with that particular set is the requested cache line. The state information of a particular set indicates the number of cache lines mapped into that particular set. The cache system has aspects of both a multi-way set-associated cache and a direct-mapped cache, and is applicable to any cache level. ...
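
Ниже — набросок двухканального наборно-ассоциативного кэша с отдельными массивом наборов (теги) и массивом данных; политика замещения и размеры выбраны условно и не взяты из описания.

```python
class TwoWaySetAssociativeCache:
    """Набросок: массив наборов хранит теги, массив данных — строки кэша;
    совпадение тега в наборе означает, что искомая строка найдена."""
    def __init__(self, num_sets: int = 64, line_size: int = 64, ways: int = 2):
        self.num_sets, self.line_size, self.ways = num_sets, line_size, ways
        self.tags = [[None] * ways for _ in range(num_sets)]   # массив наборов (теги)
        self.data = [[None] * ways for _ in range(num_sets)]   # массив данных (строки)

    def _index_and_tag(self, addr: int):
        line = addr // self.line_size
        return line % self.num_sets, line // self.num_sets

    def lookup(self, addr: int):
        index, tag = self._index_and_tag(addr)
        for way in range(self.ways):
            if self.tags[index][way] == tag:    # тег совпал — попадание
                return self.data[index][way]
        return None                              # промах

    def fill(self, addr: int, line: bytes) -> None:
        index, tag = self._index_and_tag(addr)
        way = self.tags[index].index(None) if None in self.tags[index] else 0  # простейшее замещение
        self.tags[index][way] = tag
        self.data[index][way] = line
```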

Подробнее
16-04-2014 дата публикации

Method and apparatus for using cache memory in a system that supports a low power state

Номер: GB0002506833A
Принадлежит:

A cache memory system is provided that uses multi-bit Error Correcting Code (ECC) with a low storage and complexity overhead. The cache memory system can be operated at very low idle power, without dramatically increasing transition latency to and from an idle power state due to loss of state.

Подробнее
03-06-1998 дата публикации

Managing partially compressed ROM image code

Номер: GB0002319865A
Принадлежит:

In an embedded microprocessor based computer system, compressed portions of a ROM image are decompressed only when accessed. A ROM image is built such that low use segments of the operating system are compressed (figs. 3 and 4). The operating system is initialised into a virtual address space with page table entries only for the uncompressed segments (figs. 6-8). An attempt to execute a compressed segment, 902, results in a page fault, 908. A page fault handler determines that the segment is compressed, 912, allocates a new page and decompresses the page into RAM 916, 918 for execution 906. The RAM copy of the segment is used for execution until the page is reused for another purpose, whereby later execution will cause a new page fault and reallocation. Compression reduces the size of the ROM image and since the entire image does not need to be expanded into RAM for execution, the overall component cost is reduced.
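
Механизм «распаковка сжатого сегмента по страничному промаху» можно смоделировать так: образ ПЗУ и таблица страниц представлены словарями, сжатие — модулем zlib; это иллюстрация идеи, а не реализация из патента.

```python
import zlib

PAGE_SIZE = 4096
rom_image = {                                       # образ ПЗУ
    0: b"\x90" * PAGE_SIZE,                         # часто используемый, несжатый сегмент
    1: zlib.compress(b"\x4f" * PAGE_SIZE),          # редко используемый, сжатый сегмент
}
compressed = {1}
page_table = {0: rom_image[0]}                      # отображены только несжатые сегменты

def access(page: int) -> bytes:
    """Обращение к странице; отсутствие записи в таблице страниц — «страничный промах»."""
    if page not in page_table:
        data = rom_image[page]
        if page in compressed:                      # обработчик промаха распаковывает сегмент в ОЗУ
            data = zlib.decompress(data)
        page_table[page] = data                     # копия в ОЗУ используется до вытеснения
    return page_table[page]

assert access(1) == b"\x4f" * PAGE_SIZE
```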

Подробнее
14-05-2003 дата публикации

Virtual memory and paging mechanism

Номер: GB0002381886A
Принадлежит:

A computer system supports virtual memory and a paging mechanism. When a new process is created, this occupies one or more memory region. At least a first memory region occupied by the process at a first virtual address has predefined, fixed, page characteristics (for example page size, page colour, page location). It turns out that these are not optimum for the performance of the process. In order to address this, a routine in a shared library is invoked to copy the component from the first memory region into a second memory region. The second memory region either has different page characteristics from the first memory region, or is modifiable to have such different page characteristics. The second memory region is reallocated in virtual memory so that it replaces the first memory region at the first virtual address. The overall consequence of this is that at least one component of the process can now operate at a more suitable page characteristic, thereby leading to improved performance ...

Подробнее
18-09-2013 дата публикации

Managing a stack of identifiers of free blocks in a storage pool using a hash based linked list

Номер: GB0002500292A
Принадлежит:

A pool of free storage blocks is managed using a stack of storage block identifiers. When a new block is needed, an identifier is popped from the top of the stack. When a block is freed, the identifier is pushed back onto the stack. Associated with the stack is a hash table, containing pointers to the first stack entry, which match the corresponding hash value. Associated with each identifier in the stack is a pointer to the next entry, which matches the hash value, forming a linked list. When an identifier is popped from the stack, the pointers are not changed. When a value is pushed onto the stack, the list is searched to determine if there is a duplicate identifier. If not the end of the list is identified as one of a null pointer, a pointer to an entry beyond the top of the stack or a pointer to an identifier, which maps to a different hash value.
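
Ниже — набросок стека идентификаторов свободных блоков со связными списками по хеш-значению: при извлечении указатели и хеш-таблица не трогаются, а при возврате идентификатора просматривается только одна цепочка для поиска дубликата. Способ связывания записей и условия остановки обхода здесь упрощены.

```python
class FreeBlockStack:
    """Набросок стека свободных блоков с хеш-таблицей «голов» цепочек."""
    def __init__(self, hash_buckets: int = 8):
        self.ids = []                        # стек идентификаторов блоков
        self.next = []                       # индекс следующей записи с тем же хешем
        self.buckets = hash_buckets
        self.heads = [None] * hash_buckets   # верхняя запись стека для каждого хеш-значения

    def pop(self) -> int:
        self.next.pop()                      # указатели остальных записей не изменяются
        return self.ids.pop()

    def push(self, block_id: int) -> None:
        h = block_id % self.buckets
        # Обход цепочки: останавливаемся на «пустом» указателе, указателе за вершиной
        # стека или записи с другим хеш-значением.
        i = self.heads[h]
        while i is not None and i < len(self.ids) and self.ids[i] % self.buckets == h:
            if self.ids[i] == block_id:
                raise ValueError(f"блок {block_id} уже в стеке свободных")
            i = self.next[i]
        prev = self.heads[h]
        link = prev if prev is not None and prev < len(self.ids) else None  # только «живые» записи
        self.ids.append(block_id)
        self.next.append(link)
        self.heads[h] = len(self.ids) - 1
```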

Подробнее
29-10-2014 дата публикации

Virtualization and dynamic resource allocation aware storage level reordering

Номер: GB0002513288A
Принадлежит:

A system and method for reordering storage levels in a virtualized environment includes identifying (302) a virtual machine VM to be transitioned and determining (304) a new storage level order for the VM. The new storage level order reduces a VM live state during a transition, and accounts for hierarchical shared storage memory and criteria imposed by an application to reduce recovery operations after dynamic resource allocation actions. The new storage level order recommendation is propagated (310) to VMs. The new storage level order is applied in the VMs. A different storage-level order is recommended (312) after the transition.

Подробнее
07-09-2005 дата публикации

Distributed computing

Номер: GB0000515536D0
Автор:
Принадлежит:

Подробнее
26-10-2005 дата публикации

Invalidating storage, clearing buffer entries

Номер: GB0000518901D0
Автор:
Принадлежит:

Подробнее
20-07-2016 дата публикации

Supporting multiple types of guests by a hypervisor

Номер: GB0002520909B

Подробнее
10-09-2008 дата публикации

Technique for using memory attributes

Номер: GB0000813998D0
Автор:
Принадлежит:

Подробнее
10-11-2021 дата публикации

Data processing

Номер: GB2586913B
Принадлежит: IOTECH SYSTEMS LIMITED

Подробнее
21-12-2011 дата публикации

Cache for a multiprocessor system which can treat a local access operation as a shared access operation

Номер: GB0002481232A
Принадлежит:

A data processing system has several processors 300, 320, 340, each with its own cache 310, 330, 350. Accesses by a processor to its cache may be local or shared. The processor contains a flag 307, which causes a local access to be treated as a global access. The operation may be a clean operation, an invalidate operation or a memory barrier operation issued by an operating system. The flag may be set, when a hypervisor moves a virtual machine from one processor to another. Local operations by the hypervisor may be treated as being local when the flag is set. The cache may be a data cache or a translation look aside buffer.

Подробнее
31-12-2008 дата публикации

Data storage and access

Номер: GB0000821737D0
Автор:
Принадлежит:

Подробнее
07-10-1992 дата публикации

MULTI-MODE MICROPROCESSOR WITH ELECTRICAL PIN FOR SELECTIVE REINITIALIZATION OF PROCESSOR STATE

Номер: GB0009217947D0
Автор:
Принадлежит:

Подробнее
15-02-2007 дата публикации

ADAPTIVE CACHE ALGORITHM FOR TEMPERATURE-SENSITIVE MEMORY

Номер: AT0000352065T
Принадлежит:

Подробнее
15-06-2006 дата публикации

DATA MIGRATION SYSTEM AND - PROCEDURES USING LEAKY FILES

Номер: AT0000328324T
Принадлежит:

Подробнее
14-05-1992 дата публикации

SYSTEM AND METHOD FOR VIRTUAL MEMORY MANAGEMENT

Номер: AU0000623446B2
Принадлежит:

Подробнее
26-03-2020 дата публикации

Memory allocation in a data analytics system

Номер: AU2018350897B2

A module manages memory in a computer. The module monitors usage of a primary memory associated with the computer. The primary memory stores memory blocks in a ready state. In response to primary memory usage by the memory blocks in the ready state exceeding a ready state threshold, the module compresses at least some of the memory blocks in the ready state to form memory blocks in a ready and compressed state. In response to primary memory usage by the memory blocks in the ready and compressed state exceeding a release threshold, the module releases at least some of the memory blocks in the ready and compressed state. In response to primary memory usage by the memory blocks in the compressed state exceeding a compressed threshold, the module transfers at least some memory blocks in the compressed state to a secondary memory associated with the computer.
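
Пороговую политику из реферата (сжатие блоков в состоянии «готов», освобождение сжатых «готовых», перенос «сжатых» во вторичную память) можно записать условным наброском; имена состояний и структура blocks выбраны для примера.

```python
import zlib

def usage(blocks: dict, state: str) -> int:
    """Использование первичной памяти блоками данного состояния (в байтах)."""
    return sum(len(b) for b in blocks[state])

def manage_memory(blocks: dict, ready_limit: int, release_limit: int,
                  compressed_limit: int, secondary: list) -> None:
    # 1. Блоки «готов» превысили свой порог — сжимаем часть из них.
    while blocks["ready"] and usage(blocks, "ready") > ready_limit:
        blocks["ready_compressed"].append(zlib.compress(blocks["ready"].pop(0)))
    # 2. «Готовые и сжатые» блоки превысили порог освобождения — освобождаем часть.
    while blocks["ready_compressed"] and usage(blocks, "ready_compressed") > release_limit:
        blocks["ready_compressed"].pop(0)
    # 3. Блоки в «сжатом» состоянии превысили порог — переносим во вторичную память.
    while blocks["compressed"] and usage(blocks, "compressed") > compressed_limit:
        secondary.append(blocks["compressed"].pop(0))
```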

Подробнее
10-11-2016 дата публикации

Система хранения данных с модулем хеширования

Номер: RU0000165821U1

Полезная модель относится к вычислительной технике, в частности к системам хранения данных. Полезная модель может быть применена для уменьшения времени поиска адресов блоков данных, хранимых в дисковой кэш-памяти системы хранения данных. Это достигается тем, что система хранения данных с модулем хеширования содержит дисковые устройства, дисковую кэш-память, управляющий процессор, системную шину, интерфейсы хост-узлов, интерфейсы дисковых устройств, системную память, хранящую управляющие таблицы в виде хеш-таблиц с цепочками коллизий и модуль хеширования, выполняющий поиск адреса блока данных, хранимого в дисковой кэш-памяти системы хранения данных, в одной из цепочек коллизий хеш-таблиц. Техническим результатом, обеспечиваемым приведенной совокупностью признаков, является ускоренный поиск адресов блоков данных, хранимых в дисковой кэш-памяти системы хранения данных, реализуемый модулем хеширования. Более быстрый поиск адресов блоков данных в цепочке коллизий хеш-таблицы в сравнении со списками и таблицами обусловлен тем, что количество адресов блоков данных в цепочке коллизий хеш-таблицы меньше, чем количество адресов блоков данных, хранимых в таблице или двусвязном кольцевом списке. Полезная модель может быть осуществлена на основе системы хранения данных прототипа. Система хранения данных должна иметь модуль хеширования. Управляющие таблицы, хранимые в системной памяти системы хранения данных, должны быть в виде хеш-таблиц с цепочками коллизий.
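
Идею поиска адреса блока данных в одной цепочке коллизий хеш-таблицы иллюстрирует следующий набросок; разбиение по хешу и имена полей условные.

```python
class BlockAddressIndex:
    """Набросок хеш-таблицы с цепочками коллизий для адресов блоков данных,
    хранимых в дисковой кэш-памяти: просматривать нужно только одну цепочку."""
    def __init__(self, buckets: int = 1024):
        self.buckets = buckets
        self.chains = [[] for _ in range(buckets)]   # цепочки коллизий

    def insert(self, lba: int, cache_addr: int) -> None:
        self.chains[lba % self.buckets].append((lba, cache_addr))

    def lookup(self, lba: int):
        for key, cache_addr in self.chains[lba % self.buckets]:  # одна короткая цепочка
            if key == lba:
                return cache_addr
        return None

idx = BlockAddressIndex()
idx.insert(lba=123456, cache_addr=0x7F00)
print(hex(idx.lookup(123456)))   # 0x7f00
```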

Подробнее
06-08-2018 дата публикации

Устройство хранения данных

Номер: RU0000182176U1

Полезная модель относится к области систем хранения данных и, в частности к области многопротокольных устройств хранения данных, поддерживающих файловые и блочные протоколы доступа, и может быть использована для увеличения объема записи и надежного хранения. 1. Устройство хранения данных, содержащее контроллер, N твердотельных дисков (HDD), соединенные входами/выходами с контроллером, отличающееся тем, что в него дополнительно введены кластер 1U серверов с FPGA в составе сетевого адаптера с поддержкой компрессии, центрального процессора (CPU), первый вход/выход которого соединен со вторым входом/выходом сетевого адаптера с поддержкой компрессии, третий вход/выход соединен с первым входом/выходом контроллера, программируемая логическая интегральная схема FPGA, первый вход/выход которой соединен со вторым входом/выходом центрального процессора; коммутатор, второй вход/выход которого соединен со вторым входом/выходом контроллера, третий вход/выход - с первым входом/выходом сетевого адаптера с поддержкой компрессии, а первый вход/выход является входом/выходом всего устройства. 2. Устройство по п. 1, отличающееся тем, что все его элементы выполнены с использованием цифровых технологий.

Подробнее
15-03-2022 дата публикации

Высокоплотный вычислительный узел

Номер: RU0000209333U1

Полезная модель относится к области вычислительной техники и может быть использована при создании высокопроизводительных вычислительных систем (ВВС). Высокоплотный вычислительный узел содержит корпус высотой 4U, блейд-сервера, коммуникационные устройства и блоки питания. Корпус разделен на переднюю и заднюю секции информационной и электрической объединительными платами; блейд-сервера устанавливаются в передней части корпуса, а коммуникационные устройства и блоки питания устанавливаются в задней части корпуса. Блейд-сервера и коммуникационные устройства с разных сторон подключаются к объединительным платам. Каждое коммуникационное устройство имеет три группы портов. Первыми группами портов коммуникационные устройства объединяются между собой по полносвязной топологии. Вторые группы портов коммуникационных устройств являются внешними выводами высокоплотного вычислительного узла, которые предназначены для объединения высокоплотных вычислительных узлов в единую высокопроизводительную вычислительную систему неограниченной производительности. К третьей группе портов каждого коммуникационного устройства подключены до двух соответствующих блейд-серверов. Блейд-сервера и коммуникационные устройства оборудованы контактной системой жидкостного охлаждения. Технический результат заключается в увеличении эффективности высокоплотного вычислительного узла. 5 ил.

Подробнее
19-01-2012 дата публикации

Caching using virtual memory

Номер: US20120017039A1
Автор: Julien MARGETTS
Принадлежит: PLX Technology Inc

In a first embodiment of the present invention, a method for caching in a processor system having virtual memory is provided, the method comprising: monitoring slow memory in the processor system to determine frequently accessed pages; for a frequently accessed page in slow memory: copy the frequently accessed page from slow memory to a location in fast memory; and update virtual address page tables to reflect the location of the frequently accessed page in fast memory.

Подробнее
19-01-2012 дата публикации

Managing extended raid caches using counting bloom filters

Номер: US20120017041A1
Автор: Ross E. Zwisler
Принадлежит: LSI Corp

Contentual metadata of an extended cache is stored within the extended cache. The contentual metadata of the extended cache is approximated utilizing a counting Bloom filter. The counting Bloom filter is stored within a primary cache. Contentual metadata of the primary cache is stored within the primary cache. One of a data read or a data write is executed without accessing the contentual metadata of the extended cache stored within the extended cache.
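
Счётный фильтр Блума, приближённо описывающий содержимое расширенного кэша и хранимый в первичном кэше, легко набросать на Python; выбор хеш-функции и размеров здесь произвольный.

```python
import hashlib

class CountingBloomFilter:
    """Набросок счётного фильтра Блума: приближённые метаданные расширенного кэша."""
    def __init__(self, size: int = 1 << 16, hashes: int = 3):
        self.size, self.hashes = size, hashes
        self.counters = [0] * size

    def _positions(self, key: bytes):
        for i in range(self.hashes):
            digest = hashlib.blake2b(key, salt=i.to_bytes(8, "little")).digest()
            yield int.from_bytes(digest[:8], "little") % self.size

    def add(self, key: bytes) -> None:
        for p in self._positions(key):
            self.counters[p] += 1

    def remove(self, key: bytes) -> None:
        for p in self._positions(key):
            self.counters[p] -= 1            # счётчики позволяют удалять элементы

    def might_contain(self, key: bytes) -> bool:
        return all(self.counters[p] > 0 for p in self._positions(key))

# Чтение: если фильтр отвечает «точно нет», метаданные расширенного кэша можно не читать.
cbf = CountingBloomFilter()
cbf.add(b"lba:42")
print(cbf.might_contain(b"lba:42"), cbf.might_contain(b"lba:43"))  # True False (с высокой вероятностью)
```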

Подробнее
19-01-2012 дата публикации

Multi-resolution cache monitoring

Номер: US20120017045A1
Принадлежит: SEAGATE TECHNOLOGY LLC

Multi-resolution cache monitoring devices and methods are provided. Multi-resolution cache devices illustratively have a cache memory, an interface, an information unit, and a processing unit. The interface receives a request for data that may be included in the cache memory. The information unit has state information for the cache memory. The state information is organized in a hierarchical structure. The process unit searches the hierarchical structure for the requested data.

Подробнее
09-02-2012 дата публикации

Semiconductor storage device with volatile and nonvolatile memories

Номер: US20120033496A1
Принадлежит: Individual

A semiconductor storage device includes a first memory area configured in a volatile semiconductor memory, second and third memory areas configured in a nonvolatile semiconductor memory, and a controller which executes following processing. The controller executes a first processing for storing a plurality of data by the first unit in the first memory area, a second processing for storing data outputted from the first memory area by a first management unit in the second memory area, and a third processing for storing data outputted from the first memory area by a second management unit in the third memory area.

Подробнее
09-02-2012 дата публикации

Apparatus and methods to concurrently perform per-thread as well as per-tag memory access scheduling within a thread and across two or more threads

Номер: US20120036509A1
Принадлежит: Sonics Inc

A method, apparatus, and system in which an integrated circuit comprises an initiator Intellectual Property (IP) core, a target IP core, an interconnect, and a tag and thread logic. The target IP core may include a memory coupled to the initiator IP core. Additionally, the interconnect can allow the integrated circuit to communicate transactions between one or more initiator Intellectual Property (IP) cores and one or more target IP cores coupled to the interconnect. A tag and thread logic can be configured to concurrently perform per-thread and per-tag memory access scheduling within a thread and across multiple threads such that the tag and thread logic manages tags and threads to allow for per-tag and per-thread scheduling of memory accesses requests from the initiator IP core out of order from an initial issue order of the memory accesses requests from the initiator IP core.

Подробнее
16-02-2012 дата публикации

Scatter-Gather Intelligent Memory Architecture For Unstructured Streaming Data On Multiprocessor Systems

Номер: US20120042121A1
Принадлежит: Individual

A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.

Подробнее
16-02-2012 дата публикации

Intelligent cache management

Номер: US20120042123A1
Автор: Curt Kolovson
Принадлежит: Curt Kolovson

An exemplary storage network, storage controller, and methods of operation are disclosed. In one embodiment, a method of managing cache memory in a storage controller comprises receiving, at the storage controller, a cache hint generated by an application executing on a remote processor, wherein the cache hint identifies a memory block managed by the storage controller, and managing a cache memory operation for data associated with the memory block in response to the cache hint received by the storage controller.

Подробнее
23-02-2012 дата публикации

Virtualization with fortuitously sized shadow page tables

Номер: US20120047348A1
Принадлежит: VMware LLC

One or more embodiments provides a shadow page table used by a virtualization software wherein at least a portion of the shadow page table shares computer memory with a guest page table used by a guest operating system (OS) and wherein the virtualization software provides a mapping of guest OS physical pages to machine pages.

Подробнее
23-02-2012 дата публикации

Computer system, control apparatus, storage system and computer device

Номер: US20120047502A1
Автор: Akiyoshi Hashimoto
Принадлежит: HITACHI LTD

The computer system includes a server being configured to manage a first virtual machine to which a first part of a server resource included in the server is allocated and a second virtual machine to which a second part of the server resource is allocated. The computer system also includes a storage apparatus including a storage controller and a plurality of storage devices and being configured to manage a first virtual storage apparatus to which a first storage area on the plurality of storage devices is allocated and a second virtual storage apparatus to which a second storage area on the plurality of storage devices is allocated. The first virtual machine can access to the first virtual storage apparatus but not the second virtual storage apparatus and the second virtual machine can access to the second virtual storage apparatus but not the first virtual storage apparatus.

Подробнее
01-03-2012 дата публикации

Method and apparatus for fuzzy stride prefetch

Номер: US20120054449A1
Автор: Shiliang Hu, Youfeng Wu
Принадлежит: Intel Corp

In one embodiment, the present invention includes a prefetching engine to detect when data access strides in a memory fall into a range, to compute a predicted next stride, to selectively prefetch a cache line using the predicted next stride, and to dynamically control prefetching. Other embodiments are also described and claimed.
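
Ниже — набросок «нечёткого» предвыборщика по шагу: если шаги обращений попадают в заданный диапазон, вычисляется предсказанный следующий шаг и формируется адрес строки для предвыборки. Границы диапазона и способ усреднения — допущения примера.

```python
class FuzzyStridePrefetcher:
    """Набросок предвыборщика: шаги в диапазоне [low, high] считаются «почти регулярными»."""
    def __init__(self, low: int = 56, high: int = 72, line: int = 64):
        self.low, self.high, self.line = low, high, line
        self.last_addr = None
        self.recent = []                          # последние наблюдавшиеся шаги

    def access(self, addr: int):
        prefetch_addr = None
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if self.low <= stride <= self.high:   # шаг попал в диапазон
                self.recent = (self.recent + [stride])[-4:]
                predicted = sum(self.recent) // len(self.recent)      # предсказанный шаг
                prefetch_addr = (addr + predicted) // self.line * self.line
        self.last_addr = addr
        return prefetch_addr                      # адрес строки для предвыборки или None

pf = FuzzyStridePrefetcher()
for a in (1000, 1064, 1126, 1190):
    print(pf.access(a))
```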

Подробнее
08-03-2012 дата публикации

Method and apparatus for handling critical blocking of store-to-load forwarding

Номер: US20120059971A1
Принадлежит: Advanced Micro Devices Inc

The present invention provides a method and apparatus for handling critical blocking of store-to-load forwarding. One embodiment of the method includes recording a load that matches an address of a store in a store queue before the store has valid data. The load is blocked because the store does not have valid data. The method also includes replaying the load in response to the store receiving valid data so that the valid data is forwarded from the store queue to the load.

Подробнее
15-03-2012 дата публикации

Scheduling of i/o writes in a storage environment

Номер: US20120066435A1
Принадлежит: Pure Storage Inc

A system and method for effectively scheduling read and write operations among a plurality of solid-state storage devices. A computer system comprises client computers and data storage arrays coupled to one another via a network. A data storage array utilizes solid-state drives and Flash memory cells for data storage. A storage controller within a data storage array comprises an I/O scheduler. The data storage controller is configured to receive requests targeted to the data storage medium, said requests including a first type of operation and a second type of operation. The controller is further configured to schedule requests of the first type for immediate processing by said plurality of storage devices, and queue requests of the second type for later processing by the plurality of storage devices. Operations of the first type may correspond to operations with an expected relatively low latency, and operations of the second type may correspond to operations with an expected relatively high latency.
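
Разделение запросов на операции первого типа (ожидаемо быстрые, обрабатываются сразу) и второго типа (ожидаемо долгие, ставятся в очередь на потом) можно показать минимальным планировщиком; отнесение конкретных операций к типам здесь условное.

```python
from collections import deque

LOW_LATENCY = {"read"}              # условное деление по ожидаемой задержке
HIGH_LATENCY = {"write", "trim"}

class IOScheduler:
    """Набросок планировщика ввода/вывода для твердотельных накопителей."""
    def __init__(self, device):
        self.device = device        # функция-заглушка, выполняющая операцию на устройстве
        self.deferred = deque()

    def submit(self, op: str, lba: int):
        if op in LOW_LATENCY:
            return self.device(op, lba)      # немедленная обработка
        self.deferred.append((op, lba))      # отложенная обработка

    def flush_deferred(self, batch: int = 8) -> None:
        """Вызывается позже (например, в «тихие» периоды): выполняем накопленные операции."""
        for _ in range(min(batch, len(self.deferred))):
            self.device(*self.deferred.popleft())

sched = IOScheduler(device=lambda op, lba: print(op, lba))
sched.submit("write", 10)      # откладывается
sched.submit("read", 20)       # выполняется сразу
sched.flush_deferred()         # запись выполняется позже
```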

Подробнее
15-03-2012 дата публикации

System and method of page buffer operation for memory devices

Номер: US20120066442A1
Принадлежит: Mosaid Technologies Inc

Systems and methods are provided for using page buffers of memory devices connected to a memory controller through a common bus. A page buffer of a memory device is used as a temporary cache for data which is written to the memory cells of the memory device. This can allow the memory controller to use memory devices as temporary caches so that the memory controller can free up space in its own memory.

Подробнее
29-03-2012 дата публикации

Hierarchical Memory Addressing

Номер: US20120075319A1
Автор: William James Dally
Принадлежит: Nvidia Corp

One embodiment of the present invention sets forth a technique for addressing data in a hierarchical graphics processing unit cluster. A hierarchical address is constructed based on the location of a storage circuit where a target unit of data resides. The hierarchical address comprises a level field indicating a hierarchical level for the unit of data and a node identifier that indicates which GPU within the GPU cluster currently stores the unit of data. The hierarchical address may further comprise one or more identifiers that indicate which storage circuit in a particular hierarchical level currently stores the unit of data. The hierarchical address is constructed and interpreted based on the level field. The technique advantageously enables programs executing within the GPU cluster to efficiently access data residing in other GPUs using the hierarchical address.

Подробнее
29-03-2012 дата публикации

Cache with Multiple Access Pipelines

Номер: US20120079204A1
Принадлежит: Texas Instruments Inc

Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags.

Подробнее
29-03-2012 дата публикации

Method and apparatus for reducing processor cache pollution caused by aggressive prefetching

Номер: US20120079205A1
Автор: Patrick Conway
Принадлежит: Advanced Micro Devices Inc

A method and apparatus for controlling a first and second cache is provided. A cache entry is received in the first cache, and the entry is identified as having an untouched status. Thereafter, the status of the cache entry is updated to accessed in response to receiving a request for at least a portion of the cache entry, and the cache entry is subsequently cast out according to a preselected cache line replacement algorithm. The cast out cache entry is stored in the second cache according to the status of the cast out cache entry.
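
A hedged Python sketch of the untouched/accessed bookkeeping described above, using LRU as a stand-in for the preselected replacement algorithm; here only entries that were actually accessed are cast out into the second cache, so untouched prefetches do not pollute it. All names and the eviction policy are assumptions.

from collections import OrderedDict

class TwoLevelCache:
    # L1 entries carry an 'accessed' flag; on LRU eviction, only entries that
    # were actually accessed are stored in L2 (untouched prefetches are dropped).
    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()   # addr -> (data, accessed_flag)
        self.l2 = OrderedDict()
        self.l1_size, self.l2_size = l1_size, l2_size

    def prefetch(self, addr, data):
        self._insert_l1(addr, data, accessed=False)   # 'untouched' status

    def read(self, addr):
        if addr in self.l1:
            data, _ = self.l1[addr]
            self.l1[addr] = (data, True)              # status becomes 'accessed'
            self.l1.move_to_end(addr)
            return data
        return self.l2.get(addr)

    def _insert_l1(self, addr, data, accessed):
        if len(self.l1) >= self.l1_size:
            victim, (vdata, vaccessed) = self.l1.popitem(last=False)
            if vaccessed:                             # cast out according to status
                self._insert_l2(victim, vdata)
        self.l1[addr] = (data, accessed)

    def _insert_l2(self, addr, data):
        if len(self.l2) >= self.l2_size:
            self.l2.popitem(last=False)
        self.l2[addr] = data

c = TwoLevelCache(l1_size=2, l2_size=2)
c.prefetch(0xA, "a")
c.prefetch(0xB, "b")
c.read(0xA)                      # 0xA is now 'accessed'
c.prefetch(0xC, "c")             # evicts untouched 0xB without polluting L2
print(list(c.l2.keys()))         # []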

Подробнее
05-04-2012 дата публикации

Tracking written addresses of a shared memory of a multi-core processor

Номер: US20120084498A1
Принадлежит: LSI Corp

Described embodiments provide a method of controlling processing flow in a network processor having one or more processing modules. A given one of the processing modules loads a script into a compute engine. The script includes instructions for the compute engine. The given one of the processing modules loads a register file into the compute engine. The register file includes operands for the instructions of the loaded script. A tracking vector of the compute engine is initialized to a default value, and the compute engine executes the instructions of the loaded script based on the operands of the loaded register file. The compute engine updates corresponding portions of the register file with updated data corresponding to the executed script. The tracking vector tracks the updated portions of the register file. The compute engine provides the tracking vector and the updated register file to the given one of the processing modules.

Подробнее
05-04-2012 дата публикации

Circuit and method for determining memory access, cache controller, and electronic device

Номер: US20120084513A1
Автор: Kazuhiko Okada
Принадлежит: Fujitsu Semiconductor Ltd

A memory access determination circuit includes a counter that switches between a first reference value and a second reference value in accordance with a control signal to generate a count value based on the first reference value or the second reference value. A controller performs a cache determination based on an address that corresponds to the count value and outputs the control signal in accordance with the cache determination. A changing unit changes the second reference value in accordance with the cache determination.

Подробнее
12-04-2012 дата публикации

Method for managing and tuning data movement between caches in a multi-level storage controller cache

Номер: US20120089782A1
Принадлежит: LSI Corp

A method for managing data movement in a multi-level cache system having a primary cache and a secondary cache. The method includes determining whether an unallocated space of the primary cache has reached a minimum threshold; selecting at least one outgoing data block from the primary cache when the primary cache has reached the minimum threshold; initiating a de-stage process for de-staging the outgoing data block from the primary cache; and terminating the de-stage process when the unallocated space of the primary cache has reached an upper threshold. The de-stage process further includes determining whether a cache hit has occurred in the secondary cache before; storing the outgoing data block in the secondary cache when the cache hit has occurred in the secondary cache before; generating and storing metadata regarding the outgoing data block; and deleting the outgoing data block from the primary cache.
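
The de-stage loop might look roughly like the following Python sketch, with illustrative classes standing in for the primary and secondary caches and the thresholds expressed as free-space counts; the victim choice and metadata contents are assumptions.

class PrimaryCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}                 # block_id -> data, insertion order = age

    def free_space(self):
        return self.capacity - len(self.blocks)

class SecondaryCache:
    def __init__(self):
        self.blocks, self.metadata, self.hit_history = {}, {}, set()

def destage(primary, secondary, min_free, upper_free):
    # Start de-staging when free space is at or below min_free and stop once it
    # recovers to upper_free; a block goes to the secondary cache only if the
    # secondary cache has seen a hit for it before.
    if primary.free_space() > min_free:
        return
    while primary.free_space() < upper_free and primary.blocks:
        block_id = next(iter(primary.blocks))        # oldest block as the victim
        data = primary.blocks.pop(block_id)          # delete from the primary cache
        if block_id in secondary.hit_history:
            secondary.blocks[block_id] = data
            secondary.metadata[block_id] = {"destaged": True}

p, s = PrimaryCache(capacity=4), SecondaryCache()
p.blocks.update({i: "data%d" % i for i in range(4)})
s.hit_history.add(1)
destage(p, s, min_free=0, upper_free=2)
print(p.blocks, s.blocks)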

Подробнее
12-04-2012 дата публикации

Query sampling information instruction

Номер: US20120089816A1
Принадлежит: International Business Machines Corp

A measurement sampling facility takes snapshots of the central processing unit (CPU) on which it is executing at specified sampling intervals to collect data relating to tasks executing on the CPU. The collected data is stored in a buffer, and at selected times, an interrupt is provided to remove data from the buffer to enable reuse thereof. The interrupt is not taken after each sample, but in sufficient time to remove the data and minimize data loss.

Подробнее
19-04-2012 дата публикации

Cache memory device, cache memory control method, program and integrated circuit

Номер: US20120096213A1
Автор: Kazuomi Kato
Принадлежит: Panasonic Corp

A cache memory device is provided that performs a line size determination process for determining a refill size in advance of the refill process performed at cache miss time. In the line size determination process, the numbers of reads and writes of the management target lines that belong to a set are acquired (S51). When the read counts completely match one another and the write counts completely match one another (S52: Yes), the refill size is determined to be large (S54); otherwise (S52: No), the refill size is determined to be small (S55).
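
A compact Python rendering of the S51-S55 decision, with illustrative small/large refill sizes:

def determine_refill_size(lines, small_size=32, large_size=128):
    # Steps S51-S55: gather the read/write counts of the management-target
    # lines in the set; refill large only if the counts are fully uniform.
    reads = [line["reads"] for line in lines]
    writes = [line["writes"] for line in lines]
    uniform = len(set(reads)) == 1 and len(set(writes)) == 1
    return large_size if uniform else small_size

print(determine_refill_size([{"reads": 3, "writes": 1}, {"reads": 3, "writes": 1}]))  # 128
print(determine_refill_size([{"reads": 3, "writes": 1}, {"reads": 5, "writes": 1}]))  # 32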

Подробнее
19-04-2012 дата публикации

System and Method for the Synchronization of a File in a Cache

Номер: US20120096228A1
Автор: David Thomas, Scott Wells
Принадлежит: Individual

The present invention provides a system and method for bi-directional synchronization of a cache. One embodiment of the system of this invention includes a software program stored on a computer readable medium. The software program can be executed by a computer processor to receive a database asset from a database; store the database asset as a cached file in a cache; determine if the cached file has been modified; and if the cached file has been modified, communicate the cached file directly to the database. The software program can poll a cached file to determine if the cached file has changed. Thus, bi-directional synchronization can occur.

Подробнее
26-04-2012 дата публикации

Multiplexing Users and Enabling Virtualization on a Hybrid System

Номер: US20120102138A1
Принадлежит: International Business Machines Corp

A method, hybrid server system, and computer program product support multiple users in an out-of-core processing environment. At least one accelerator system in a plurality of accelerator systems is partitioned into a plurality of virtualized accelerator systems. A private client cache is configured on each virtualized accelerator system in the plurality of virtualized accelerator systems. The private client cache of each virtualized accelerator system stores data that is either accessible only by that private client cache or accessible by other private client caches associated with a common data set. Each user in a plurality of users is assigned to a virtualized accelerator system from the plurality of virtualized accelerator systems.

Подробнее
03-05-2012 дата публикации

Storage device cache

Номер: US20120110258A1
Автор: Jack Lakey, Ron WATTS
Принадлежит: SEAGATE TECHNOLOGY LLC

Implementations described and claimed herein provide a method and system for comparing a storage location related to a new write command on a storage device with storage locations of a predetermined number of write commands stored in a first table to determine frequency of write commands to the storage location. If the frequency is determined to be higher than a first threshold, the data related to the write command is stored in a write cache.
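
A possible shape of this frequency check in Python, where the "first table" is a bounded history of recent write locations and the threshold value is an arbitrary illustration:

from collections import Counter, deque

class WriteCachePolicy:
    # Keep the storage locations of the last N write commands (the 'first
    # table'); cache the data of a new write once its location has appeared
    # more often than the threshold.
    def __init__(self, history_size=64, threshold=3):
        self.history = deque(maxlen=history_size)
        self.counts = Counter()
        self.threshold = threshold
        self.write_cache = {}

    def on_write(self, lba, data):
        if len(self.history) == self.history.maxlen:
            self.counts[self.history[0]] -= 1     # oldest entry is about to fall out
        self.history.append(lba)
        self.counts[lba] += 1
        if self.counts[lba] > self.threshold:     # frequently written location
            self.write_cache[lba] = data

policy = WriteCachePolicy(history_size=8, threshold=2)
for _ in range(3):
    policy.on_write(lba=500, data=b"hot block")
print(500 in policy.write_cache)                  # True after the third write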

Подробнее
10-05-2012 дата публикации

Hybrid Server with Heterogeneous Memory

Номер: US20120117312A1
Принадлежит: International Business Machines Corp

A method, hybrid server system, and computer program product, for managing access to data stored on the hybrid server system. A memory system residing at a server is partitioned into a first set of memory managed by the server and a second set of memory managed by a set of accelerator systems. The set of accelerator systems are communicatively coupled to the server. The memory system comprises heterogeneous memory types. A data set stored within at least one of the first set of memory and the second set of memory that is associated with at least one accelerator system in the set of accelerator systems is identified. The data set is transformed from a first format to a second format, wherein the second format is a format required by the at least one accelerator system.

Подробнее
10-05-2012 дата публикации

Apparatus and method for accessing cache memory

Номер: US20120117326A1
Автор: Jui-Yuan Lin, Yen-Ju Lu
Принадлежит: Realtek Semiconductor Corp

The present invention relates to an apparatus and a method for accessing a cache memory. The cache memory comprises a level-one memory and a level-two memory. The apparatus for accessing the cache memory according to the present invention comprises a register unit and a control unit. The control unit receives a first read command and a reject datum of the level-one memory and stores the reject datum of the level-one memory to the register unit. Then the control unit reads and stores a stored datum of the level-two memory to the level-one memory according to the first read command.

Подробнее
10-05-2012 дата публикации

Invalidating a Range of Two or More Translation Table Entries and Instruction Therefore

Номер: US20120117356A1
Принадлежит: International Business Machines Corp

An instruction is provided to perform invalidation of an instruction specified range of segment table entries or region table entries. The instruction can be implemented by software emulation, hardware, firmware or some combination thereof.

Подробнее
17-05-2012 дата публикации

Secondary Cache Memory With A Counter For Determining Whether to Replace Cached Data

Номер: US20120124291A1
Принадлежит: International Business Machines Corp

A selective cache includes a set configured to receive data evicted from a number of primary sets of a primary cache. The selective cache also includes a counter associated with the set. The counter is configured to indicate a frequency of access to data within the set. A decision whether to replace data in the set with data from one of the primary sets is based on a value of the counter.
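
One way to read the counter-based replacement decision, sketched in Python for a single selective set; the threshold and reset behavior are assumptions, not taken from the patent:

class SelectiveSet:
    # A counter tracks how often data in the selective set is accessed; evicted
    # primary-cache data replaces the resident data only while the counter is
    # below a threshold, i.e. while the resident data is not proving useful.
    def __init__(self, threshold=4):
        self.data = None                  # (addr, value) or None
        self.access_counter = 0
        self.threshold = threshold

    def lookup(self, addr):
        if self.data is not None and self.data[0] == addr:
            self.access_counter += 1
            return self.data[1]
        return None

    def offer_evicted(self, addr, value):
        if self.data is None or self.access_counter < self.threshold:
            self.data = (addr, value)
            self.access_counter = 0

s = SelectiveSet(threshold=2)
s.offer_evicted(0x10, "victim A")
s.lookup(0x10)
s.lookup(0x10)                            # counter reaches the threshold
s.offer_evicted(0x20, "victim B")         # rejected: resident data is being reused
print(hex(s.data[0]))                     # 0x10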

Подробнее
24-05-2012 дата публикации

Signal processing system, integrated circuit comprising buffer control logic and method therefor

Номер: US20120131241A1
Принадлежит: FREESCALE SEMICONDUCTOR INC

A signal processing system comprising buffer control logic arranged to allocate a plurality of buffers for the storage of information fetched from at least one memory element. Upon receipt of fetched information to be buffered, the buffer control logic is arranged to categorise the information to be buffered according to at least one of: a first category associated with sequential flow and a second category associated with change of flow, and to prioritise respective buffers from the plurality of buffers storing information relating to the first category associated with sequential flow ahead of buffers storing information relating to the second category associated with change of flow when allocating a buffer for the storage of the fetched information to be buffered.

Подробнее
24-05-2012 дата публикации

Correlation-based instruction prefetching

Номер: US20120131311A1
Автор: Yuan C. Chou
Принадлежит: Oracle International Corp

The disclosed embodiments provide a system that facilitates prefetching an instruction cache line in a processor. During execution of the processor, the system performs a current instruction cache access which is directed to a current cache line. If the current instruction cache access causes a cache miss or is a first demand fetch for a previously prefetched cache line, the system determines whether the current instruction cache access is discontinuous with a preceding instruction cache access. If so, the system completes the current instruction cache access by performing a cache access to service the cache miss or the first demand fetch, and also prefetching a predicted cache line associated with a discontinuous instruction cache access which is predicted to follow the current instruction cache access.

Подробнее
31-05-2012 дата публикации

Method and apparatus for selectively performing explicit and implicit data line reads

Номер: US20120136857A1
Автор: Greggory D. Donley
Принадлежит: Advanced Micro Devices Inc

A method and apparatus are described for selectively performing explicit and implicit data line reads. When a data line request is received, a determination is made as to whether there are currently sufficient data resources to perform an implicit data line read. If there are not currently sufficient data resources to perform an implicit data line read, a time period (number of clock cycles) before sufficient data resources will become available to perform an implicit data line read is estimated. A determination is then made as to whether the estimated time period exceeds a threshold. An explicit tag request is generated if the estimated time period exceeds the threshold. If the estimated time period does not exceed the threshold, the generation of a tag request is delayed until sufficient data resources become available. An implicit tag request is then generated.
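
The explicit/implicit selection reduces to a small decision function; the Python sketch below uses illustrative parameter names and a caller-supplied cycle estimate:

def choose_tag_request(free_data_slots, cycles_until_free, threshold_cycles):
    # Implicit read when data resources are available now; otherwise either
    # issue an explicit tag-only request (wait too long) or delay the decision.
    if free_data_slots > 0:
        return "implicit"
    if cycles_until_free > threshold_cycles:
        return "explicit"
    return "delay"

print(choose_tag_request(free_data_slots=0, cycles_until_free=12, threshold_cycles=8))  # explicit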

Подробнее
07-06-2012 дата публикации

Method and apparatus of route guidance

Номер: US20120143504A1
Принадлежит: Google LLC

Systems and methods of route guidance on a user device are provided. In one aspect, a system and method transmit partitions of map data to a client device. Each map partition may contain road geometries, road names, road network topology, or any other information needed to provide turn-by-turn navigation or driving directions within the partition. Each map partition may be encoded with enough data to allow the partitions to be stitched together to form a larger map. Map partitions may be fetched along each route to be used in the event of a network outage or other loss of network connectivity. For example, if a user deviates from the original route and a network outage occurs, the map data may be assembled and a routing algorithm may be applied to the map data in order to direct the user back to the original route.

Подробнее
07-06-2012 дата публикации

Dynamic adjustment of read/write ratio of a disk cache

Номер: US20120144109A1
Принадлежит: International Business Machines Corp

Embodiments of the invention are directed to optimizing the performance of a split disk cache. In one embodiment, a disk cache includes a primary region having a read portion and a write portion, and one or more smaller sample regions, each also including a read portion and a write portion. The primary region and each sample region have an independently adjustable ratio of the read portion to the write portion. Cached reads are distributed among the read portions of the primary and sample regions, while cached writes are distributed among the write portions of the primary and sample regions. The performance of the primary region and the performance of the sample regions are tracked, such as by obtaining a hit rate for each region during a predefined interval. The read/write ratio of the primary region is then selectively adjusted according to the performance of the one or more sample regions.
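
A simplified Python sketch of the adjustment step: each sample region reports the hit rate observed for its candidate read/write split, and the primary region's ratio is nudged toward the best performer. The step size and tuple layout are assumptions.

def adjust_primary_ratio(primary_ratio, sample_results, step=0.05):
    # sample_results: (read fraction of the sample region, observed hit rate).
    # Move the primary region's read fraction one step toward the best sample.
    best_ratio, _ = max(sample_results, key=lambda pair: pair[1])
    if best_ratio > primary_ratio:
        return min(primary_ratio + step, best_ratio)
    if best_ratio < primary_ratio:
        return max(primary_ratio - step, best_ratio)
    return primary_ratio

print(adjust_primary_ratio(0.50, [(0.60, 0.82), (0.40, 0.71)]))   # moves toward 0.60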

Подробнее
07-06-2012 дата публикации

Recommendation based caching of content items

Номер: US20120144117A1
Принадлежит: Microsoft Corp

Content item recommendations are generated for users based on metadata associated with the content items and a history of content item usage associated with the users. Each content item recommendation identifies a user and a content item and includes a score that indicates how likely the user is to view the content item. Based on the content item recommendations, and constraints of one or more caches, the content items are selected for storage in one or more caches. The constraints may include users that are associated with each cache, the geographical location of each cache, the size of each cache, and/or costs associated with each cache such as bandwidth costs. The content items stored in a cache are recommended to users associated with the cache.

Подробнее
07-06-2012 дата публикации

Read-ahead processing in networked client-server architecture

Номер: US20120144123A1
Принадлежит: International Business Machines Corp

Various embodiments for read-ahead processing in a networked client-server architecture by a processor device are provided. Read messages are grouped by a plurality of unique sequence identifications (IDs), where each of the sequence IDs corresponds to a specific read sequence, consisting of all read and read-ahead requests related to a specific storage segment that is being read sequentially by a thread of execution in a client application. The storage system uses the sequence id value in order to identify and filter read-ahead messages that are obsolete when received by the storage system, as the client application has already moved to read a different storage segment. Basically, a message is discarded when its sequence id value is less recent than the most recent value already seen by the storage system. The sequence IDs are used by the storage system to determine corresponding read-ahead data to be loaded into a read-ahead cache.
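
The obsolescence filter can be expressed very compactly; the Python sketch below keeps the most recent sequence ID per client thread and drops read-ahead messages carrying older IDs (the per-thread keying is an assumption):

class ReadAheadFilter:
    # Remember the most recent sequence ID seen per client thread; a read-ahead
    # message with an older ID refers to a segment the client has already left.
    def __init__(self):
        self.latest_seq = {}

    def accept(self, thread_id, seq_id):
        if seq_id < self.latest_seq.get(thread_id, -1):
            return False                  # obsolete read-ahead: discard
        self.latest_seq[thread_id] = seq_id
        return True

f = ReadAheadFilter()
print(f.accept("t1", 5), f.accept("t1", 5), f.accept("t1", 3))   # True True False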

Подробнее
07-06-2012 дата публикации

Apparatus, method, and system for instantaneous cache state recovery from speculative abort/commit

Номер: US20120144126A1
Принадлежит: Intel Corp

An apparatus and method are described herein for providing instantaneous, efficient cache state recovery upon an end of speculative execution. Speculatively accessed entries of a cache memory are marked as speculative, which may be on a thread specific basis. Upon an end of speculation, the speculatively marked entries are transitioned in parallel by a speculative port to their appropriate, thread specific, non-speculative coherency state; these parallel transitions allow for instantaneous commit or recovery of speculative memory state.

Подробнее
07-06-2012 дата публикации

Custom atomics using an off-chip special purpose processor

Номер: US20120144128A1
Принадлежит: Advanced Micro Devices Inc

An apparatus for executing an atomic memory transaction comprises a processing core in a multi-processing core system, where the processing core is configured to store an atomic program in a cache line. The apparatus further comprises an atomic program execution unit that is configured to execute the atomic program as a single atomic memory transaction with a guarantee of forward progress.

Подробнее
14-06-2012 дата публикации

Systems and methods for background destaging storage tracks

Номер: US20120151148A1
Принадлежит: International Business Machines Corp

Systems and methods for background destaging storage tracks from cache when one or more hosts are idle are provided. One system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. Also provided are physical computer storage mediums including a computer program product for performing the above method.

Подробнее
14-06-2012 дата публикации

Cache Line Fetching and Fetch Ahead Control Using Post Modification Information

Номер: US20120151150A1
Принадлежит: LSI Corp

A method is provided for performing cache line fetching and/or cache fetch ahead in a processing system including at least one processor core and at least one data cache operatively coupled with the processor. The method includes the steps of: retrieving post modification information from the processor core and a memory address corresponding thereto; and the processing system performing, as a function of the post modification information and the memory address retrieved from the processor core, cache line fetching and/or cache fetch ahead control in the processing system.

Подробнее
14-06-2012 дата публикации

Systems and methods for managing cache destage scan times

Номер: US20120151151A1
Принадлежит: International Business Machines Corp

Systems and methods for managing destage scan times in a cache are provided. One system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. Physical computer storage mediums including a computer program product for performing the above method are also provided.

Подробнее
14-06-2012 дата публикации

Reading core data in a ring bus type multicore system

Номер: US20120151152A1
Автор: Aya Minami, Yohichi Miwa
Принадлежит: International Business Machines Corp

The present invention provides a ring bus type multicore system including one memory, a main memory controller for connecting the memory to a ring bus, and multiple cores connected along the ring bus. Each core further includes a cache interface and a cache controller for controlling or managing that interface. The cache controller of each core connected to the ring bus snoops the data named in a request through the cache interface; when the core's cache holds the data, the core receives the request and returns the data to the requester core, and when the core's cache does not hold the data, the main memory controller reads the data from the memory and sends it to the requester core.

Подробнее
14-06-2012 дата публикации

System and method for maintaining a data redundancy scheme in a solid state memory in the event of a power loss

Номер: US20120151253A1
Автор: Robert L. Horn
Принадлежит: Western Digital Technologies Inc

Embodiments of the invention are directed to systems and methods for reducing an amount of backup power needed to provide power fail safe preservation of a data redundancy scheme such as RAID that is implemented in solid state storage devices where new write data is accumulated and written along with parity data. Because new write data cannot be guaranteed to arrive in integer multiples of stripe size, a full stripe's worth of new write data may not exist when power is lost. Various embodiments use truncated RAID stripes (fewer storage elements per stripe) to save cached write data when a power failure occurs. This approach allows the system to maintain RAID parity data protection in a power fail cache flush case even though a full stripe of write data may not exist, thereby reducing the amount of backup power needed to maintain parity protection in the event of power loss.

Подробнее
14-06-2012 дата публикации

Enhanced Coherency Tracking with Implementation of Region Victim Hash for Region Coherence Arrays

Номер: US20120151297A1
Принадлежит: International Business Machines Corp

A method and system for precisely tracking lines evicted from a region coherence array (RCA) without requiring eviction of the lines from a processor's cache hierarchy. The RCA is a set-associative array which contains region entries consisting of a region address tag, a set of bits for the region coherence state, and a line-count for tracking the number of region lines cached by the processor. Tracking of the RCA is facilitated by a non-tagged hash table of counts represented by a Region Victim Hash (RVH). When a region is evicted from the RCA, and lines from the evicted region still reside in the processor's caches (i.e., the region's line-count is non-zero), the RCA line-count is added to the corresponding RVH count. The RVH count is decremented by the value of the region line count following a subsequent processor cache eviction/invalidation of the region previously evicted from the RCA.

Подробнее
21-06-2012 дата публикации

System and method for handling io to drives in a raid system

Номер: US20120159067A1
Принадлежит: LSI Corp

A system and method for handling IO to drives in a RAID system is described. In one embodiment, the method includes providing a multiple disk system with a predefined strip size. An IO request with a logical block address is received for execution on the multiple disk system. A plurality of sub-IO requests with a sub-strip size is generated, where the sub-strip size is smaller than the strip size. The generated sub-IO requests are executed on the multiple disk system. In one embodiment, a cache line size substantially equal to the sub-strip size is assigned to process the IO request.
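
A small Python sketch of splitting one IO request into sub-strip-aligned sub-IO requests; the (LBA, length) tuple representation is an illustrative assumption.

def split_io(lba, length, sub_strip):
    # Break one IO request into (start LBA, length) sub-IO requests aligned to
    # the sub-strip size.
    subs, offset, remaining = [], lba, length
    while remaining > 0:
        span = min(sub_strip - (offset % sub_strip), remaining)
        subs.append((offset, span))
        offset += span
        remaining -= span
    return subs

print(split_io(lba=6, length=10, sub_strip=4))   # [(6, 2), (8, 4), (12, 4)]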

Подробнее
21-06-2012 дата публикации

Protecting Data During Different Connectivity States

Номер: US20120159078A1
Принадлежит: Microsoft Corp

Aspects of the subject matter described herein relate to data protection. In aspects, during a backup cycle, backup copies may be created for files that are new or that have changed since the last backup. If external backup storage is not available, the backup copies may be stored in a cache located on the primary storage. If backup storage is available, the backup copies may be stored in the backup storage device and backup copies that were previously stored in the primary storage may be copied to the backup storage. The availability of the backup storage may be detected and used to seamlessly switch between backing up files locally and remotely as availability of the backup storage changes.

Подробнее
21-06-2012 дата публикации

Direct Access To Cache Memory

Номер: US20120159082A1
Принадлежит: International Business Machines Corp

Methods and apparatuses are disclosed for direct access to cache memory. Embodiments include receiving, by a direct access manager that is coupled to a cache controller for a cache memory, a region scope zero command describing a region scope zero operation to be performed on the cache memory; in response to receiving the region scope zero command, generating a direct memory access region scope zero command, the direct memory access region scope zero command having an operation code and an identification of the physical addresses of the cache memory on which the operation is to be performed; sending the direct memory access region scope zero command to the cache controller for the cache memory; and performing, by the cache controller, the direct memory access region scope zero operation in dependence upon the operation code and the identification of the physical addresses of the cache memory.

Подробнее
28-06-2012 дата публикации

Weather adaptive environmentally hardened appliances

Номер: US20120167093A1
Принадлежит: International Business Machines Corp

Embodiments of the present invention provide a method, system and computer program product for weather adaptive environmentally hardened appliances. In an embodiment of the invention, a method for weather adaptation of an environmentally hardened computing appliance includes determining a location of an environmentally hardened computing appliance. Thereafter, a weather forecast including a temperature forecast can be retrieved for a block of time at the location. As a result, a cache policy for a cache of the environmentally hardened computing appliance can be adjusted to account for the weather forecast.

Подробнее
05-07-2012 дата публикации

Cache Result Register for Quick Cache Information Lookup

Номер: US20120173825A1
Принадлежит: FREESCALE SEMICONDUCTOR INC

Each level of cache within a memory hierarchy of a device is configured with a cache results register (CRR). The caches are coupled to a debugger interface via a peripheral bus. The device is placed in debug mode, and a debugger forwards a transaction address (TA) of a dummy transaction to the device. On receipt of the TA, the device processor forwards the TA via the system bus to the memory hierarchy to initiate an address lookup operation within each level of cache. For each cache in which the TA hits, the cache controller (debug) logic updates the cache's CRR with Hit, Way, and Index values, identifying the physical storage location within the particular cache at which the corresponding instruction/data is stored. The debugger retrieves information about the hit/miss status, the physical storage location and/or a copy of the data via direct requests over the peripheral bus.

Подробнее
05-07-2012 дата публикации

Apparatus and method for determining a cache line in an n-way set associative cache

Номер: US20120173844A1
Принадлежит: LSI Corp

A method and apparatus for determining a cache line in an N-way set associative cache are disclosed. In one example embodiment, a key associated with a cache line is obtained. A main hash is generated using a main hash function on the key. An auxiliary hash is generated using an auxiliary hash function on the key. A bucket in a main hash table residing in an external memory is determined using the main hash. An entry in a bucket in an auxiliary hash table residing in an internal memory is determined using the determined bucket and the auxiliary hash. The cache line in the main hash table is determined using the determined entry in the auxiliary hash table.
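
A rough Python model of the two-table lookup: the main hash selects a bucket, the auxiliary hash is matched against the small internal-memory table for that bucket, and the matching way identifies the cache line in the external-memory table. The use of salted SHA-256 for both hashes and the table sizes are assumptions.

import hashlib

def _hash(key, salt, buckets):
    digest = hashlib.sha256(salt + key).digest()
    return int.from_bytes(digest[:8], "big") % buckets

class TwoTableCacheIndex:
    # The main hash picks a bucket of the large external-memory table; the
    # auxiliary hash is matched against the small internal-memory table for
    # that bucket, and the matching way locates the cache line.
    def __init__(self, buckets, ways):
        self.buckets, self.ways = buckets, ways
        self.aux = [[None] * ways for _ in range(buckets)]    # internal memory
        self.main = [[None] * ways for _ in range(buckets)]   # external memory

    def insert(self, key, line):
        b = _hash(key, b"main", self.buckets)
        a = _hash(key, b"aux", 1 << 16)
        for way in range(self.ways):
            if self.aux[b][way] is None:
                self.aux[b][way] = a
                self.main[b][way] = line
                return True
        return False                                          # bucket is full

    def lookup(self, key):
        b = _hash(key, b"main", self.buckets)
        a = _hash(key, b"aux", 1 << 16)
        for way, aux_val in enumerate(self.aux[b]):
            if aux_val == a:
                return self.main[b][way]
        return None

idx = TwoTableCacheIndex(buckets=64, ways=4)
idx.insert(b"flow-123", "cache line payload")
print(idx.lookup(b"flow-123"))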

Подробнее
12-07-2012 дата публикации

Method and system for dynamic templatized query language in software

Номер: US20120179720A1
Принадлежит: eBay Inc

A system to automatically generate query language in software is described. The system receives a request for data that is persistently stored in a database. The system selects a predefined query template from a number of query templates based on the request. The system utilizes the query template to receive content from at least one different source, the first source being a prototype data object. The system generates a query statement based on the query template that includes the content. Finally the system queries the database using the query statement to retrieve the requested data.

Подробнее
12-07-2012 дата публикации

Global instructions for spiral cache management

Номер: US20120179872A1
Автор: Volker Strumpen
Принадлежит: International Business Machines Corp

A method of operation of a pipelined cache memory supports global operations within the cache. The cache may be a spiral cache, with a move-to-front M2F network for moving values from a backing store to a front-most tile coupled to a processor or lower-order level of a memory hierarchy and a spiral push-back network for pushing out modified values to the backing-store. The cache controller manages application of global commands by propagating individual commands to the tiles. The global commands may provide zeroing, flushing and reconciling of the given tiles. Commands for interrupting and resuming interrupted global commands may be implemented, to reduce halting or slowing of processing while other global operations are in process. A line detector within each tile supports reconcile and flush operations, and a line patcher in the controller provides for initializing address ranges with no processor intervention.

Подробнее
12-07-2012 дата публикации

Using ephemeral stores for fine-grained conflict detection in a hardware accelerated stm

Номер: US20120179875A1
Принадлежит: Individual

A method and apparatus for fine-grained filtering in a hardware accelerated software transactional memory system is herein described. A data object, which may have an arbitrary size, is associated with a filter word. The filter word is in a first default state when no access, such as a read, from the data object has occurred during a pendency of a transaction. Upon encountering a first access, such as a first read, from the data object, access barrier operations including an ephemeral/private store operation to set the filter word to a second state are performed. Upon a subsequent/redundant access, such as a second read, the access barrier operations are elided to accelerate the subsequent access, based on the filter word being set to the second state to indicate a previous access occurred.

Подробнее
12-07-2012 дата публикации

Mechanism to support flexible decoupled transactional memory

Номер: US20120179877A1
Принадлежит: UNIVERSITY OF ROCHESTER

The present invention employs three decoupled hardware mechanisms: read and write signatures, which summarize per-thread access sets; per-thread conflict summary tables, which identify the threads with which conflicts have occurred; and a lazy versioning mechanism, which maintains the speculative updates in the local cache and employs a thread-private buffer (in virtual memory) only in the rare event of an overflow. The conflict summary tables allow lazy conflict management to occur locally, with no global arbitration (they also support eager management). All three mechanisms are kept software-accessible, to enable virtualization and to support transactions of arbitrary length.

Подробнее
12-07-2012 дата публикации

Adaptively preventing out of memory conditions

Номер: US20120179889A1
Автор: Kirk J. Krauss
Принадлежит: International Business Machines Corp

A computer-implemented method of preventing an out-of-memory condition can include evaluating usage of virtual memory of a process executing within a computer, detecting a low memory condition in the virtual memory for the process, and selecting at least one functional program component of the process according to a component selection technique. The method also can include sending a notification to each selected functional program component and, responsive to receiving the notification, each selected functional program component releasing at least a portion of a range of virtual memory reserved on behalf of the selected functional program component.

Подробнее
19-07-2012 дата публикации

Semiconductor device including plural chips stacked to each other

Номер: US20120182778A1
Автор: Homare Sato
Принадлежит: Elpida Memory Inc

Such a device is disclosed that includes a first semiconductor chip including a plurality of first terminals, a plurality of second terminals, and a first circuit coupled between the first and second terminals and configured to control combinations of the first terminals to be electrically connected to the second terminals, and a second semiconductor chip including a plurality of third terminals coupled respectively to the second terminals, an internal circuit, and a second circuit coupled between the third terminals and the internal circuit and configured to activate the internal circuit when a combination of signals appearing at the third terminals indicates a chip selection.

Подробнее
19-07-2012 дата публикации

System and method for accessing really simple syndication (rss) enabled content using session initiation protocol (sip) signaling

Номер: US20120185573A1
Принадлежит: International Business Machines Corp

A system and associated method for subscribing to Really Simple Syndication (RSS) enabled content using the Session Initiation Protocol (SIP) are disclosed. An application server coupled to a Hypertext Transfer Protocol (HTTP) server in the Internet intermediates a SIP message and a request for an RSS feed. An end device requests subscription to the RSS feed in a SIP message. The HTTP server enables the application server to subscribe to the RSS feed and to track changes in the RSS feed over the Internet by use of a Serving Call/Session Control Function (S-CSCF) servicing the SIP message. The HTTP server enables the end device subscribing to the RSS feed to fetch the web content from the media cache in a later part of the subscription by providing updates to the application server.

Подробнее
19-07-2012 дата публикации

Method and system for cache endurance management

Номер: US20120185638A1
Принадлежит: Sandisk IL Ltd

A system and method for cache endurance management is disclosed. The method may include the steps of querying a storage device with a host to acquire information relevant to a predicted remaining lifetime of the storage device, determining a download policy modification for the host in view of the predicted remaining lifetime of the storage device and updating the download policy database of a download manager in accordance with the determined download policy modification.

Подробнее
19-07-2012 дата публикации

Computer architectures using shared storage

Номер: US20120185725A1
Принадлежит: Boeing Co

A method includes providing a persistent common view of a virtual shared storage system. The virtual shared storage system includes a first shared storage system and a second shared storage system, and the persistent common view includes information associated with data and instructions stored at the first shared storage system and the second shared storage system. The method includes automatically updating the persistent common view to include third information associated with other data and other instructions stored at a third shared storage system in response to adding the third shared storage system to the virtual shared storage system.

Подробнее
26-07-2012 дата публикации

Blow molding apparatus

Номер: US20120189727A1
Принадлежит: Nissei ASB Machine Co Ltd

A blow molding apparatus includes an injection molding station (12) that injection-molds preforms (1A) held by N (N is an integer equal to or larger than 2) rows of holding plates (30), a temperature control station (14) that performs a temperature control operation on the preforms (1A), a blow molding station (16) that blow-molds the preforms into containers, and a row pitch change section (130) that changes the row pitch of the N rows of holding plates so that P1<P3<P2 is satisfied, P1 being the row pitch of the N rows of holding plates when they hold the preforms, P2 being the row pitch of the N rows of holding plates when they hold the containers, and P3 being the row pitch of the N rows of holding plates when they hold the preforms that are transferred to N rows of blow molds that are opened.

Подробнее
26-07-2012 дата публикации

Managing Access to a Cache Memory

Номер: US20120191917A1
Принадлежит: International Business Machines Corp

Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area.
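
A minimal Python sketch of per-area locking, assuming a hash of the key selects the cache area; the area count and data layout are illustrative.

import threading

class PartitionedCache:
    # The cache is divided into areas, each guarded by its own lock, so threads
    # updating different areas do not serialize on a single global lock.
    def __init__(self, areas=8):
        self.areas = [dict() for _ in range(areas)]
        self.locks = [threading.Lock() for _ in range(areas)]

    def _area_of(self, key):
        return hash(key) % len(self.areas)

    def update(self, key, value):
        i = self._area_of(key)
        with self.locks[i]:      # only the holder of this lock may update area i
            self.areas[i][key] = value

    def get(self, key):
        i = self._area_of(key)
        with self.locks[i]:
            return self.areas[i].get(key)

cache = PartitionedCache()
cache.update("page:42", b"contents")
print(cache.get("page:42"))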

Подробнее
02-08-2012 дата публикации

Guest to native block address mappings and management of native code storage

Номер: US20120198122A1
Автор: Mohammad Abdallah
Принадлежит: Soft Machines Inc

A method for managing mappings of storage on a code cache for a processor. The method includes storing a plurality of guest address to native address mappings as entries in a conversion look aside buffer, wherein the entries indicate guest addresses that have corresponding converted native addresses stored within a code cache memory, and receiving a subsequent request for a guest address at the conversion look aside buffer. The conversion look aside buffer is indexed to determine whether there exists an entry that corresponds to the index, wherein the index comprises a tag and an offset that is used to identify the entry that corresponds to the index. Upon a hit on the tag, the corresponding entry is accessed to retrieve a pointer to the corresponding block of converted native instructions in the code cache memory. The corresponding block of converted native instructions is then fetched from the code cache memory for execution.
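
A hedged Python sketch of the tag/index/offset lookup into such a conversion look aside buffer, with made-up field widths and a direct-mapped structure for brevity:

class ConversionLookasideBuffer:
    # The guest address is split into a tag, an index and an offset; the indexed
    # entry's tag is compared, and a hit yields a pointer into the code cache.
    INDEX_BITS, OFFSET_BITS = 8, 6           # made-up widths

    def __init__(self):
        self.entries = [None] * (1 << self.INDEX_BITS)   # index -> (tag, native_ptr)

    def _split(self, guest_addr):
        offset = guest_addr & ((1 << self.OFFSET_BITS) - 1)
        index = (guest_addr >> self.OFFSET_BITS) & ((1 << self.INDEX_BITS) - 1)
        tag = guest_addr >> (self.OFFSET_BITS + self.INDEX_BITS)
        return tag, index, offset

    def install(self, guest_addr, native_ptr):
        tag, index, _ = self._split(guest_addr)
        self.entries[index] = (tag, native_ptr)

    def lookup(self, guest_addr):
        tag, index, offset = self._split(guest_addr)
        entry = self.entries[index]
        if entry and entry[0] == tag:        # hit on the tag
            return entry[1] + offset         # pointer into the code cache
        return None                          # miss: the guest block must be converted

clb = ConversionLookasideBuffer()
clb.install(0x40001280, native_ptr=0x9000)
print(hex(clb.lookup(0x40001280)))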

Подробнее
02-08-2012 дата публикации

Memory Attribute Sharing Between Differing Cache Levels of Multilevel Cache

Номер: US20120198166A1
Принадлежит: Texas Instruments Inc

The level one memory controller maintains a local copy of the cacheability bit of each memory attribute register. The level two memory controller is the initiator of all configuration read/write requests from the CPU. Whenever a configuration write is made to a memory attribute register, the level one memory controller updates its local copy of the memory attribute register.

Подробнее
02-08-2012 дата публикации

Binary Rewriting in Software Instruction Cache

Номер: US20120198169A1
Принадлежит: International Business Machines Corp

Mechanisms are provided for dynamically rewriting branch instructions in a portion of code. The mechanisms execute a branch instruction in the portion of code. The mechanisms determine if a target instruction of the branch instruction, to which the branch instruction branches, is present in an instruction cache associated with the processor. Moreover, the mechanisms directly branch execution of the portion of code to the target instruction in the instruction cache, without intervention from an instruction cache runtime system, in response to a determination that the target instruction is present in the instruction cache. In addition, the mechanisms redirect execution of the portion of code to the instruction cache runtime system in response to a determination that the target instruction cannot be determined to be present in the instruction cache.

Подробнее
02-08-2012 дата публикации

Address-based hazard resolution for managing read/write operations in a memory cache

Номер: US20120198178A1
Принадлежит: International Business Machines Corp

One embodiment provides a cached memory system including a memory cache and a plurality of read-claim (RC) machines configured for performing read and write operations dispatched from a processor. According to control logic provided with the cached memory system, a hazard is detected between first and second read or write operations being handled by first and second RC machines. The second RC machine is suspended and a subset of the address bits of the second operation at specific bit positions are recorded. The subset of address bits of the first operation at the specific bit positions are broadcast in response to the first operation being completed. The second operation is then re-requested.

Подробнее
09-08-2012 дата публикации

Coordinated writeback of dirty cachelines

Номер: US20120203968A1
Принадлежит: International Business Machines Corp

A data processing system includes a processor core and a cache memory hierarchy coupled to the processor core. The cache memory hierarchy includes at least one upper level cache and a lowest level cache. A memory controller is coupled to the lowest level cache and to a system memory and includes a physical write queue from which the memory controller writes data to the system memory. The memory controller initiates accesses to the lowest level cache to place into the physical write queue selected cachelines having spatial locality with data present in the physical write queue.

Подробнее
16-08-2012 дата публикации

Managing read requests from multiple requestors

Номер: US20120210022A1
Автор: Alexander B. Beaman
Принадлежит: Apple Computer Inc

Techniques are disclosed for managing data requests from multiple requestors. According to one implementation, when a new data request is received, a determination is made as to whether a companion relationship should be established between the new data request and an existing data request. Such a companion relationship may be appropriate under certain conditions. If a companion relationship is established between the new data request and an existing data request, then when data is returned for one request, it is used to satisfy the other request as well. This helps to reduce the number of data accesses that need to be made to a data storage, which in turn enables system efficiency to be improved.

Подробнее
16-08-2012 дата публикации

Shared cache for a tightly-coupled multiprocessor

Номер: US20120210069A1
Принадлежит: Plurality Ltd

Computing apparatus (11) includes a plurality of processor cores (12) and a cache (10), which is shared by and accessible simultaneously to the plurality of the processor cores. The cache includes a shared memory (16), including multiple block frames of data imported from a level-two (L2) memory (14) in response to requests by the processor cores, and a shared tag table (18), which is separate from the shared memory and includes table entries that correspond to the block frames and contain respective information regarding the data contained in the block frames.

Подробнее
23-08-2012 дата публикации

Secure management of keys in a key repository

Номер: US20120213369A1
Принадлежит: International Business Machines Corp

A method for managing keys in a computer memory including receiving a request to store a first key to a first key repository, storing the first key to a second key repository in response to the request, and storing the first key from the second key repository to the first key repository within said computer memory based on a predetermined periodicity.

Подробнее
23-08-2012 дата публикации

Recycling of cache content

Номер: US20120215981A1
Принадлежит: International Business Machines Corp

A method of operating a storage system comprises detecting a cut in an external power supply, switching to a local power supply, preventing receipt of input/output commands, copying content of cache memory to a local storage device and marking the content of the cache memory that has been copied to the local storage device. When a resumption of the external power supply is detected, the method continues by charging the local power supply, copying the content of the local storage device to the cache memory, processing the content of the cache memory with respect to at least one storage volume and receiving input/output commands. When detecting a second cut in the external power supply, the system switches to the local power supply, prevents receipt of input/output commands, and copies to the local storage device only the content of the cache memory that is not marked as present.

Подробнее
23-08-2012 дата публикации

Cache and a method for replacing entries in the cache

Номер: US20120215985A1
Автор: Douglas B. Hunt
Принадлежит: Advanced Micro Devices Inc

A processor is provided. The processor includes a cache having a plurality of entries, each of the plurality of entries having a tag array and a data array, and a remapper configured to create at least one identifier, each identifier being unique to a process of the processor, and to assign a respective identifier to the tag array for the entries related to a respective process. The remapper is further configured to determine a replacement value for the entries related to each identifier.

Подробнее
30-08-2012 дата публикации

Universal cache management system

Номер: US20120221768A1
Автор: Prasad V. Bagal, Rich Long
Принадлежит: Oracle International Corp

Techniques for universal cache management are described. In an example embodiment, a plurality of caches are allocated, in volatile memory of a computing device, to a plurality of data-processing instances, where each one of the plurality of caches is exclusively allocated to a separate one of the plurality of data-processing instances. A common cache is allocated in the volatile memory of the computing device, where the common cache is shared by the plurality of data-processing instances. Each instance of the plurality of data-processing instances is configured to: identify a data block in the particular cache allocated to that instance, where the data block has not been changed since the data block was last persistently written to one or more storage devices; cause the data block to be stored in the common cache; and remove the data block from the particular cache. Data blocks in the common cache are maintained without being persistently written to the one or more storage devices.

Подробнее
30-08-2012 дата публикации

Opportunistic block transmission with time constraints

Номер: US20120221792A1
Принадлежит: Endeavors Technology Inc

A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.

Подробнее
30-08-2012 дата публикации

Secure caching technique for shared distributed caches

Номер: US20120221867A1
Принадлежит: International Business Machines Corp

The present invention relates to a secure caching technique for shared distributed caches. A method in accordance with an embodiment of the present invention includes: encrypting a key K to provide a secure key, the key K corresponding to a value to be stored in a cache; and storing the value in the cache using the secure key.

Подробнее
30-08-2012 дата публикации

Three stage power up in computer storage system

Номер: US20120221879A1
Принадлежит: International Business Machines Corp

Following a loss of power, a storage system switches to a local power supply. The system switches to the local power supply, prevents the receipt of input/output commands and copies the content of cache memory to a local storage device. On detecting resumption of external power, the system charges a local power supply, copies the content of the local storage device to the cache memory and processes the content of the cache memory with respect to at least one storage volume. When the charge stored on the local power supply exceeds the charge required to copy the content of the cache memory to the local storage device by a predetermined amount, the system allows the receipt of input/output commands using a reduced portion of the cache memory. Once the charge stored on the local power supply has reached a predetermined level, the system allows the receipt of input/output commands using all cache memory.

Подробнее
06-09-2012 дата публикации

Systems and methods thereto for acceleration of web pages access using next page optimization, caching and pre-fetching techniques

Номер: US20120226766A1
Принадлежит: Limelight Networks Inc

A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests.

Подробнее
06-09-2012 дата публикации

Binary tree based multilevel cache system for multicore processors

Номер: US20120226867A1
Автор: Muhammad Ali Ismail
Принадлежит: Individual

A binary tree based multi-level cache system for multi-core processors is described, together with its two possible implementations, the LogN and LogN+1 models, which maintain a true pyramid.

Подробнее
06-09-2012 дата публикации

File server apparatus, management method of storage system, and program

Номер: US20120226869A1
Принадлежит: Hitachi Solutions Ltd

When the storage capacity of a file server is expanded using an online storage service, the upper limit on file size imposed by the online storage service is eliminated and the communication cost is reduced. A kernel module that includes logical volumes on the online storage service divides a file into fixed-length block files and stores and manages the block files to avoid the upper-limit constraint on file size. When a READ/WRITE request is issued to a mounted file system, only the necessary block files are downloaded from the online storage service and used, based on the offset value and size information, to optimize the communication and realize the communication cost reduction.
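
The offset-to-block-file mapping can be sketched in a few lines of Python; the block-file naming scheme and sizes below are hypothetical, not taken from the described system.

def blocks_for_request(offset, size, block_size):
    # From the request's offset and size, compute which fixed-length block
    # files must be downloaded; the rest of the file stays remote.
    first = offset // block_size
    last = (offset + size - 1) // block_size
    return ["file.part%08d" % i for i in range(first, last + 1)]

# a 10 KiB read near the 1 MiB mark, with 64 KiB block files
print(blocks_for_request(offset=1044480, size=10240, block_size=65536))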

Подробнее
06-09-2012 дата публикации

Method, apparatus, and system for speculative execution event counter checkpointing and restoring

Номер: US20120227045A1
Принадлежит: Intel Corp

An apparatus, method, and system are described herein for providing programmable control of performance/event counters. An event counter is programmable to track different events, as well as to be checkpointed when speculative code regions are encountered. So when a speculative code region is aborted, the event counter is able to be restored to its pre-speculation value. Moreover, the difference between the cumulative event count of committed and uncommitted execution and the count for committed execution represents the event count/contribution of uncommitted execution. From information on the uncommitted execution, hardware/software may be tuned to enhance future execution and avoid wasted execution cycles.

Подробнее
13-09-2012 дата публикации

Cache System and Processing Apparatus

Номер: US20120233377A1
Принадлежит: Individual

According to an embodiment, a cache system includes a volatile cache memory, a nonvolatile cache memory, an address decoder, and an evacuation unit. The nonvolatile cache memory has a capacity equal to the volatile cache memory. The address decoder designates a same line to the volatile cache memory and the nonvolatile cache memory. The evacuation unit stores data which is inputted from the volatile cache memory and outputs the stored data to the volatile cache memory.

Подробнее
13-09-2012 дата публикации

Managing shared memory used by compute nodes

Номер: US20120233409A1
Автор: Jonathan Ross, Jork Loeser
Принадлежит: Microsoft Corp

A technology can be provided for managing shared memory used by a plurality of compute nodes. An example system can include a shared globally addressable memory to enable access to shared data by the plurality of compute nodes. A memory interface can process memory requests sent to the shared globally addressable memory from the plurality of processors. A memory write module can be included for the memory interface to allocate memory locations in the shared globally addressable memory and write read-only data to the globally addressable memory from a writing compute node. In addition, a read module for the memory interface can map read-only data in the globally addressable shared memory as read-only for subsequent accesses by the plurality of compute nodes.

Подробнее
13-09-2012 дата публикации

Protecting Large Objects Within an Advanced Synchronization Facility

Номер: US20120233411A1
Принадлежит: Advanced Micro Devices Inc

A system and method are disclosed for allowing protection of larger areas than memory lines by monitoring accessed and dirty bits in page tables. More specifically, in some embodiments, a second associative structure with a different granularity is provided to filter out a large percentage of false positives. By providing the associative structure with sufficient size, the structure exactly specifies a region in which conflicting cache lines lie. If entries within this region are evicted from the structure, enabling the tracking for the entire index filters out a substantial number of false positives (depending on a granularity and a number of indices present). In some embodiments, this associative structure is similar to a translation look aside buffer (TLB) with 4 k, 2M entries.

Подробнее
20-09-2012 дата публикации

Resource sharing to reduce implementation costs in a multicore processor

Номер: US20120239883A1
Принадлежит: Individual

A processor may include several processor cores, each including a respective higher-level cache; a lower-level cache including several tag units each including several controllers, where each controller corresponds to a respective cache bank configured to store data, and where the controllers are concurrently operable to access their respective cache banks; and an interconnect network configured to convey data between the cores and the lower-level cache. The controllers in a given tag unit may share access to a resource that may include one or more of an interconnect egress port coupled to the interconnect network, an interconnect ingress port coupled to the interconnect network, a test controller, or a data storage structure.

20-09-2012 publication date

Flash storage device with read disturb mitigation

Number: US20120239990A1
Assignee: Stec Inc

A method for managing a flash storage device includes initiating a read request and reading requested data from a first storage block of a plurality of storage blocks in the flash storage device based on the read request. The method further includes incrementing a read count for the first storage block and moving the data in the first storage block to an available storage block of the plurality of storage blocks when the read count reaches a first threshold value.
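A minimal sketch of the read-count threshold, assuming an in-memory model of storage blocks; READ_DISTURB_THRESHOLD and the relocation policy are illustrative assumptions.

READ_DISTURB_THRESHOLD = 1000  # illustrative first threshold value

blocks = {0: "data-A", 1: "data-B", 2: None, 3: None}  # None marks an available block
read_counts = {blk: 0 for blk in blocks}

def relocate(block_id):
    # Move the data to an available block before read disturb corrupts it.
    target = next(b for b, d in blocks.items() if d is None)
    blocks[target] = blocks[block_id]
    blocks[block_id] = None
    read_counts[target] = 0

def read(block_id):
    data = blocks[block_id]
    read_counts[block_id] += 1
    if read_counts[block_id] >= READ_DISTURB_THRESHOLD:
        relocate(block_id)
    return data

for _ in range(READ_DISTURB_THRESHOLD):
    read(0)
print(blocks)  # data-A has been moved away from the heavily read block 0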

27-09-2012 publication date

Communication device, communication method, and computer-readable recording medium storing program

Number: US20120246402A1
Author: Shunsuke Akimoto
Assignee: NEC Corp

A communication device that reduces the processing time for installing data from a disc storage medium onto multiple servers is provided. A protocol serializer 10 of a communication device 5 serializes read requests received from servers A1 to A2 for target data stored on a disc storage medium K into a processing order. A cache controller 11 determines, in the order of the serialized read requests, whether the target data corresponding to the read requests are present in a cache memory 4 and, if present, receives the target data from the cache memory 4 via a memory controller 12. If not present, the cache controller 11 acquires the target data from the disc storage medium K via a DVD/CD controller 13. The protocol serializer 10 then sends the target data acquired by the cache controller 11 to the server that transmitted the read request corresponding to that target data.
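A minimal sketch of serving serialized read requests from a cache with fallback to the disc, assuming plain dictionaries model the cache memory and the disc contents; all names are illustrative.

from collections import deque

disc_medium = {0: "sector-0", 1: "sector-1", 2: "sector-2"}  # stands in for the DVD/CD
cache_memory = {}

request_queue = deque()  # the protocol serializer puts requests into processing order

def submit(server, sector):
    request_queue.append((server, sector))

def process_requests():
    responses = []
    while request_queue:
        server, sector = request_queue.popleft()
        if sector in cache_memory:        # hit: serve from cache
            data = cache_memory[sector]
        else:                             # miss: read the disc and fill the cache
            data = disc_medium[sector]
            cache_memory[sector] = data
        responses.append((server, data))  # reply to the requesting server
    return responses

submit("server-A1", 1)
submit("server-A2", 1)   # second request for the same sector becomes a cache hit
print(process_requests())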

27-09-2012 publication date

Register file segments for supporting code block execution by using virtual cores instantiated by partitionable engines

Number: US20120246450A1
Author: Mohammad Abdallah
Assignee: Soft Machines Inc

A system for executing instructions using a plurality of register file segments for a processor. The system includes a global front end scheduler for receiving an incoming instruction sequence, wherein the global front end scheduler partitions the incoming instruction sequence into a plurality of code blocks of instructions and generates a plurality of inheritance vectors describing interdependencies between instructions of the code blocks. The system further includes a plurality of virtual cores of the processor coupled to receive code blocks allocated by the global front end scheduler, wherein each virtual core comprises a respective subset of resources of a plurality of partitionable engines, and wherein the code blocks are executed by using the partitionable engines in accordance with a virtual core mode and in accordance with the respective inheritance vectors. A plurality of register file segments is coupled to the partitionable engines for providing data storage.

04-10-2012 publication date

Extending Cache for an External Storage System into Individual Servers

Number: US20120254509A1
Assignee: International Business Machines Corp

Mechanisms are provided for extending cache for an external storage system into individual servers. Certain servers may have cards with cache in the form of dynamic random access memory (DRAM) and non-volatile storage, such as flash memory or solid-state drives (SSDs), which may be viewed as actual extensions of the external storage system. In this way, the storage system is distributed across the storage area network (SAN) into various servers. Several new semantics are used in communication between the cards and the storage system to keep the read caches coherent.

04-10-2012 publication date

Method for giving read commands and reading data, and controller and storage system using the same

Number: US20120254522A1
Author: Chih-Kang Yeh
Assignee: Phison Electronics Corp

A method for giving a read command to a flash memory chip to read data to be accessed by a host system is provided. The method includes receiving a host read command; determining whether the received host read command follows the last host read command; if so, giving a cache read command to read data from the flash memory chip; and if not, giving a general read command and the cache read command to read data from the flash memory chip. Accordingly, the method can effectively reduce the time needed to execute host read commands by using the cache read command to combine host read commands that access contiguous physical addresses and to pre-read the data stored at the next physical address.
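A rough sketch of the decision rule, assuming a controller that remembers the address following the last host read; issue_general_read and issue_cache_read are illustrative stand-ins for the actual flash commands.

last_end_address = None  # physical address following the last host read

def issue_general_read(addr):
    print(f"general read command at {addr:#x}")

def issue_cache_read(addr):
    print(f"cache read command at {addr:#x}")

def handle_host_read(start_address, length):
    global last_end_address
    if start_address == last_end_address:
        # The request continues the previous one, so only a cache read
        # command is needed for the data already pre-read by the chip.
        issue_cache_read(start_address)
    else:
        # Non-sequential request: a general read first, then a cache read
        # to pre-read the data at the next physical address.
        issue_general_read(start_address)
        issue_cache_read(start_address + length)
    last_end_address = start_address + length

handle_host_read(0x1000, 0x200)  # general read plus cache read
handle_host_read(0x1200, 0x200)  # follows the last read: cache read only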

04-10-2012 publication date

Cache memory allocation process based on TCP/IP network and/or storage area network array parameters

Number: US20120254533A1
Assignee: LSI Corp

An apparatus comprising a controller, one or more host devices and one or more storage devices. The controller may be configured to store and/or retrieve data in response to one or more input/output requests. The one or more host devices may be configured to present the input/output requests. The one or more storage devices may be configured to store and/or retrieve the data. The controller may include a cache memory configured to store the input/output requests. The cache memory may be configured as a memory allocation table to store and/or retrieve a compressed version of a portion of the data in response to one or more network parameters. The compressed version may be retrieved from the memory allocation table instead of the storage devices based on the input/output requests to improve overall storage throughput.

04-10-2012 publication date

Method of generating code executable by processor

Number: US20120254551A1
Assignee: WASEDA UNIVERSITY

Provided is a method of generating code with a compiler, including the steps of: analyzing a program executed by a processor; analyzing the data necessary to execute the respective tasks included in the program; determining, based on the results of the analysis, whether a boundary of the data used by the divided tasks is consistent with a management unit of a cache memory; and, in a case where it is determined that the boundary of the data used by the divided tasks is not consistent with the management unit of the cache memory, generating code that provides a non-cacheable area so that the data to be stored in the management unit containing the boundary is not temporarily stored in the cache memory, and code that stores an arithmetic processing result held in the management unit containing the boundary into the non-cacheable area.
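A rough sketch of the alignment test that drives the decision, assuming a 64-byte cache management unit; this only illustrates the consistency check, not the compiler or the generated code.

CACHE_LINE = 64  # assumed size of the cache management unit, in bytes

def boundary_is_consistent(start_byte, end_byte):
    """True if the data range used by a divided task starts and ends on
    management-unit boundaries, so no unit is shared with another task."""
    return start_byte % CACHE_LINE == 0 and end_byte % CACHE_LINE == 0

# Task A uses bytes [0, 256); task B uses bytes [256, 500).
print(boundary_is_consistent(0, 256))    # True: the data can stay cacheable
# False: the last management unit is only partially owned by task B, so the
# compiler would emit code that keeps it in a non-cacheable area instead.
print(boundary_is_consistent(256, 500))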

11-10-2012 publication date

Data Storage and Data Sharing in a Network of Heterogeneous Computers

Number: US20120259953A1
Author: Ilya Gertner
Assignee: Network Disk Inc

A network of PCs includes an I/O channel adapter and network adapter, and is configured for management of a distributed cache memory stored in the plurality of PCs interconnected by the network. The use of standard PCs reduces the cost of the data storage system. The use of the network of PCs permits building large, high-performance, data storage systems.

11-10-2012 publication date

Load multiple and store multiple instructions in a microprocessor that emulates banked registers

Number: US20120260042A1
Assignee: Via Technologies Inc

A microprocessor supports an instruction set architecture that specifies: processor modes, architectural registers associated with each mode, and a load multiple instruction that instructs the microprocessor to load data from memory into specified ones of the registers. Direct storage holds data associated with a first portion of the registers and is coupled to an execution unit to provide the data thereto. Indirect storage holds data associated with a second portion of the registers and cannot directly provide the data to the execution unit. Which architectural registers are in the first and second portions varies dynamically based upon the current processor mode. If a specified register is currently in the first portion, the microprocessor loads data from memory into the direct storage, whereas if in the second portion, the microprocessor loads data from memory into the direct storage and then stores the data from the direct storage to the indirect storage.
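A rough software model of the direct/indirect split, assuming two processor modes and a tiny register map; the mode names and register names are illustrative assumptions, not the actual banked-register layout.

direct_storage = {}    # registers the execution unit can read directly
indirect_storage = {}  # emulated banked registers, not directly readable

# Which architectural registers live in direct storage depends on the mode.
DIRECT_REGS = {
    "user": {"r0", "r1", "sp_usr"},
    "irq":  {"r0", "r1", "sp_irq"},
}

def load_multiple(mode, regs, memory, base):
    """Model of a load-multiple: words at base, base+1, ... go into regs."""
    for offset, reg in enumerate(regs):
        direct_storage[reg] = memory[base + offset]  # always loaded via direct storage
        if reg not in DIRECT_REGS[mode]:
            # Register is currently emulated: move the value on to indirect storage.
            indirect_storage[reg] = direct_storage.pop(reg)

memory = {100: 11, 101: 22}
load_multiple("user", ["r0", "sp_irq"], memory, 100)
print(direct_storage)    # {'r0': 11}
print(indirect_storage)  # {'sp_irq': 22}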

18-10-2012 publication date

Distributed storage network including memory diversity

Number: US20120265937A1
Assignee: Cleversafe Inc

A dispersed storage (DS) unit includes a processing module and a plurality of hard drives. The processing module is operable to maintain states for at least some of the plurality of hard drives. The processing module is further operable to receive a memory access request regarding an encoded data slice and identify a hard drive of the plurality of hard drives based on the memory access request. The processing module is further operable to determine a state of the hard drive. When the hard drive is in a read state and the memory access request is a write request, the processing module is operable to queue the write request, change from the read state to a write state in accordance with a state transition process, and, when in the write state, perform the write request to store the encoded data slice in the hard drive.
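A minimal sketch of the per-drive state handling, assuming a two-state read/write model with a simple queue; the immediate state transition shown here is an illustrative simplification of the state transition process.

from collections import deque

class HardDriveModel:
    def __init__(self):
        self.state = "read"
        self.pending_writes = deque()
        self.slices = {}

    def write_request(self, slice_name, encoded_slice):
        if self.state == "read":
            # Queue the write, then change to the write state.
            self.pending_writes.append((slice_name, encoded_slice))
            self.transition_to_write()
        else:
            self.slices[slice_name] = encoded_slice

    def transition_to_write(self):
        self.state = "write"
        while self.pending_writes:
            name, data = self.pending_writes.popleft()
            self.slices[name] = data   # perform the queued write request

drive = HardDriveModel()
drive.write_request("slice-17", b"encoded data slice")
print(drive.state, drive.slices)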

25-10-2012 publication date

Efficient data prefetching in the presence of load hits

Number: US20120272004A1
Assignee: Via Technologies Inc

A memory subsystem in a microprocessor includes a first-level cache, a second-level cache, and a prefetch cache configured to speculatively prefetch cache lines from a memory external to the microprocessor. The second-level cache and the prefetch cache are configured to allow the same cache line to be simultaneously present in both. If a request by the first-level cache for a cache line hits in both the second-level cache and in the prefetch cache, the prefetch cache invalidates its copy of the cache line and the second-level cache provides the cache line to the first-level cache.
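A minimal sketch of the dual-hit rule, assuming dictionaries model the second-level cache and the prefetch cache; all names are illustrative.

l2_cache = {}        # second-level cache
prefetch_cache = {}  # speculatively prefetched lines (may duplicate L2 lines)

def l1_request(line):
    """Service a first-level cache request for a cache line."""
    hit_l2 = line in l2_cache
    hit_pf = line in prefetch_cache
    if hit_l2 and hit_pf:
        # Line present in both: the prefetch cache invalidates its copy
        # and the second-level cache supplies the data.
        del prefetch_cache[line]
        return l2_cache[line]
    if hit_pf:
        return prefetch_cache[line]
    if hit_l2:
        return l2_cache[line]
    return None  # miss: would be fetched from external memory

l2_cache[0x40] = "line from L2"
prefetch_cache[0x40] = "line from prefetcher"
print(l1_request(0x40))        # served by the second-level cache
print(0x40 in prefetch_cache)  # False: the duplicate copy was invalidated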

01-11-2012 publication date

Distributed shared memory

Number: US20120278392A1
Author: Lior Aronovich, Ron Asher
Assignee: International Business Machines Corp

Systems and methods for implementing a distributed shared memory (DSM) in a computer cluster in which an unreliable underlying message passing technology is used, such that the DSM efficiently maintains coherency and reliability. DSM agents residing on different nodes of the cluster process access permission requests of local and remote users on specified data segments via handling procedures, which provide for recovering lost ownership of a data segment while ensuring exclusive ownership of the segment among the DSM agents, detecting and resolving a no-owner messaging deadlock, pruning obsolete messages, and recovering the latest contents of a data segment whose ownership has been lost.

08-11-2012 publication date

Selecting an auxiliary storage medium for writing data of real storage pages

Number: US20120284458A1
Assignee: International Business Machines Corp

An auxiliary storage medium is selected for writing data of a set of one or more pages being paged-out from real memory. The auxiliary storage medium is selected from among a plurality of auxiliary storage media, including differing types of storage media, based on characteristics of the plurality of storage media and/or the attributes of the data being written to the auxiliary storage media.
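A minimal sketch of choosing among auxiliary storage media by matching page attributes to media characteristics; the media list and the selection rule are illustrative assumptions.

# Candidate auxiliary storage media with a few coarse characteristics.
media = [
    {"name": "flash-SCM", "latency_us": 50,   "wear_sensitive": True},
    {"name": "disk",      "latency_us": 8000, "wear_sensitive": False},
]

def select_medium(page):
    """Pick a medium for a page being paged out of real memory."""
    if page["likely_reused_soon"]:
        # Favor the lowest-latency medium for pages expected back quickly.
        return min(media, key=lambda m: m["latency_us"])
    # Otherwise prefer a medium without wear concerns, if one exists.
    candidates = [m for m in media if not m["wear_sensitive"]] or media
    return candidates[0]

print(select_medium({"likely_reused_soon": True})["name"])   # flash-SCM
print(select_medium({"likely_reused_soon": False})["name"])  # disk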

08-11-2012 publication date

Method and apparatus for saving power by efficiently disabling ways for a set-associative cache

Number: US20120284462A1
Assignee: Individual

A method and apparatus for disabling ways of a cache memory in response to history-based usage patterns is herein described. Way-predicting logic keeps track of cache accesses to the ways and determines whether accesses to some ways are to be disabled to save power, based upon way power signals having a logical state representing a predicted miss to the way. One or more counters associated with the ways count accesses, wherein a power signal is set to the logical state representing a predicted miss when one of the one or more counters reaches a saturation value. Control logic adjusts the one or more counters associated with the ways according to the accesses.
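A minimal sketch of the counter-based way-disable rule, assuming a 4-way cache and a saturation value of 3; counting non-hits per way until saturation is an illustrative reading of the abstract.

NUM_WAYS = 4
SATURATION = 3  # illustrative saturation value

counters = [0] * NUM_WAYS          # one counter per way
way_disabled = [False] * NUM_WAYS  # power signal: True = predicted miss, way powered down

def record_access(hit_way):
    """Adjust the counters after an access that hit in hit_way."""
    for way in range(NUM_WAYS):
        if way == hit_way:
            counters[way] = 0                 # recently used: keep powered
        elif counters[way] < SATURATION:
            counters[way] += 1
            if counters[way] == SATURATION:
                way_disabled[way] = True      # predict a miss, disable to save power

for _ in range(3):
    record_access(hit_way=0)  # only way 0 is being used
print(way_disabled)           # [False, True, True, True]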

15-11-2012 publication date

Application optimization in a network system

Number: US20120290687A1
Assignee: International Business Machines Corp

A network system includes multiple network resource information handling systems (IHSs) for managing applications and application communications. An IHS operating system initializes an application optimizer to provide application acceleration capability to application optimizers, such as application delivery controllers (ADCs) and wide area network (WAN) optimizer controllers (WOCs), within the network system. Upon receipt of a server application request message (SARM), a network system server responds with a RESTful application optimizer message (RAOM) that includes protocol, policy, and other application optimizer information pertaining to the requesting SARM. Application optimizers may include clients, ADCs, and WOCs that reside within the message communication path between client and server. Application optimizers may store protocol, policy, and other information from the RAOM to populate application table data. Application optimizers intercept messages between network resources of the network system and apply message policies to improve message performance, thereby improving application performance within the network system. Application acceleration provides improvements in quality of experience (QoE) and quality of service (QoS).
