Total found: 389. Displayed: 122.
Publication date: 23-02-2017

DISTRIBUTING A PLURALITY OF TRACKS TO ADD TO CACHE TO LISTS ASSIGNED TO PROCESSORS

Number: US20170052903A1
Assignee:

Provided are a computer program product, system, and method for distributing a plurality of tracks to add to cache to lists assigned to processors. Tracks stored in the cache are indicated in lists, wherein there is one list for each of a plurality of processors. Each of the processors processes the list for that processor to process the tracks in the cache indicated on the list. A determination is made as to whether the lists for the processors are unbalanced in their indicated numbers of tracks. For each of the lists, a determination is made of a number of received tracks to assign to the lists in response to determining that the lists are unbalanced. For each of the lists assigned at least one of the received tracks, indication is made of the determined number of the received tracks in the list. 1. A computer program product for managing tracks in a storage in a cache accessed by a plurality of processors , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:indicating tracks in the storage stored in the cache in lists, wherein there is one list for each of the plurality of processors, wherein each of the processors processes the list for that processor to process the tracks in the cache indicated on the list;receiving a plurality of tracks to add to the cache;determining whether the lists for the processors are unbalanced in their indicated numbers of tracks;for each of the lists, determining a number of the received tracks to assign to the lists in response to determining that the lists are unbalanced; andfor each of the lists assigned at least one of the received tracks, indicating the determined number of the received tracks in the list.2. The computer program product of claim 1 , wherein the determining whether the lists are unbalanced comprises:determining a skew factor of the lists based on the number of the ...
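
By way of illustration, here is a minimal Python sketch of the balancing step this abstract describes: per-processor lists, an unbalance test, and distribution of newly received tracks. The class name `CacheLists`, the skew-factor definition, and the 1.5 threshold are assumptions made for the example, not details taken from the patent.

```python
from collections import deque

class CacheLists:
    """Per-processor lists of tracks in the cache, rebalanced when new tracks arrive."""

    def __init__(self, num_processors, skew_threshold=1.5):
        self.lists = [deque() for _ in range(num_processors)]
        self.skew_threshold = skew_threshold  # assumed tunable; the patent only speaks of a skew factor

    def _unbalanced(self):
        sizes = [len(lst) for lst in self.lists]
        smallest = min(sizes)
        # Illustrative skew factor: ratio of the largest list to the smallest.
        skew = max(sizes) / smallest if smallest else float("inf")
        return skew > self.skew_threshold

    def add_tracks(self, tracks):
        """Distribute a batch of received tracks over the per-processor lists."""
        if self._unbalanced():
            # Unbalanced: give each received track to whichever list is currently shortest.
            for track in tracks:
                min(self.lists, key=len).append(track)
        else:
            # Balanced: simple round-robin distribution.
            for i, track in enumerate(tracks):
                self.lists[i % len(self.lists)].append(track)

lists = CacheLists(num_processors=4)
lists.add_tracks([f"track-{i}" for i in range(10)])
print([len(lst) for lst in lists.lists])
```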

Publication date: 06-12-2016

Determining adjustments of storage device timeout values based on synchronous or asynchronous remote copy state

Number: US0009513827B1

A determination is made as to whether a plurality of storage volumes controlled by a processor complex are secondary storage volumes that are in an asynchronous copy relationship with a plurality of primary storage volumes. A storage device timeout value for a storage device that stores the plurality of storage volumes is changed from a predetermined low value to a predetermined high value, wherein the predetermined high value is indicative of a greater duration of time than the predetermined low value, in response to determining that each of the plurality of storage volumes controlled by the processor complex and stored in the storage device are secondary storage volumes that are in the asynchronous copy relationship with the plurality of primary storage volumes.
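
A minimal sketch of the timeout rule described above, assuming placeholder names (`Volume`, `adjust_timeout`) and placeholder low/high values; the patent only specifies that the high value denotes a longer duration than the low value.

```python
from dataclasses import dataclass

LOW_TIMEOUT_SECS = 30    # placeholder values; the text only says "low" and "high"
HIGH_TIMEOUT_SECS = 300

@dataclass
class Volume:
    is_secondary: bool
    copy_mode: str  # "sync" or "async"

def adjust_timeout(volumes_on_device):
    """Return the timeout to use for the storage device holding these volumes."""
    all_async_secondaries = all(
        v.is_secondary and v.copy_mode == "async" for v in volumes_on_device
    )
    # Only when every volume is an asynchronous-copy secondary can the device
    # afford to wait longer before timing out.
    return HIGH_TIMEOUT_SECS if all_async_secondaries else LOW_TIMEOUT_SECS

print(adjust_timeout([Volume(True, "async"), Volume(True, "async")]))  # 300
print(adjust_timeout([Volume(True, "sync"), Volume(True, "async")]))   # 30
```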

Publication date: 23-02-2017

DISTRIBUTING TRACKS TO ADD TO CACHE TO PROCESSOR CACHE LISTS BASED ON COUNTS OF PROCESSOR ACCESS REQUESTS TO THE CACHE

Number: US20170052822A1
Assignee:

Provided are a computer program product, system, and method for distributing tracks to add to cache to processor cache lists based on counts of processor access requests to the cache. There are a plurality of lists, wherein there is one list for each of the plurality of processors. A determination is made as to whether the counts of processor accesses of tracks are unbalanced. A first caching method is used to select one of the lists to indicate a track to add to the cache in response to determining that the counts are unbalanced. A second caching method is used to select one of the lists to indicate the track to add to the cache in response to determining that the counts are balanced. The first and second caching methods provide different techniques for selecting one of the lists. 1. A computer program product for managing tracks in a storage in a cache accessed by a plurality of processors , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:providing a plurality of lists, wherein there is one list for each of the plurality of processors;maintaining a count of a number access requests for each of the processors resulting in one of the tracks being maintained in the cache;receiving a track to add to cache for a request from an initiating processor comprising one of the processors;determining whether the counts for the processors are unbalanced;using a first caching method to select one of the lists to indicate the track to add to the cache in response to determining that the counts are unbalanced; andusing a second caching method to select one of the lists to indicate the track to add to the cache in response to determining that the counts are balanced, wherein the first and second caching methods provide different techniques for selecting one of the lists.2. The computer program product of claim 1 , wherein the first ...
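
The following sketch assumes two concrete selection methods (the least-loaded list versus the initiating processor's own list) purely for illustration; this excerpt does not spell out the patent's first and second caching methods.

```python
from collections import deque

class CountBasedLists:
    """Per-processor cache lists plus per-processor access counts; the method used
    to pick a list for a new track depends on whether the counts are unbalanced.
    Both selection methods below are placeholders, not the patent's methods."""

    def __init__(self, num_processors, skew_threshold=2.0):
        self.lists = [deque() for _ in range(num_processors)]
        self.counts = [0] * num_processors
        self.skew_threshold = skew_threshold

    def record_access(self, processor):
        self.counts[processor] += 1

    def _unbalanced(self):
        lo = min(self.counts)
        return (max(self.counts) / lo if lo else float("inf")) > self.skew_threshold

    def add_track(self, initiating_processor, track):
        if self._unbalanced():
            # First caching method (assumed): send the track to the least-loaded list.
            target = min(range(len(self.lists)), key=lambda i: self.counts[i])
        else:
            # Second caching method (assumed): keep the track on the initiator's list.
            target = initiating_processor
        self.lists[target].append(track)

cl = CountBasedLists(num_processors=2)
cl.record_access(0); cl.record_access(0); cl.record_access(1)
cl.add_track(initiating_processor=1, track="t1")
```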

Publication date: 02-01-2018

RAID data loss prevention

Number: US0009858148B2

A method for preventing data loss in a RAID includes monitoring the age of storage drives making up a RAID. When a storage drive in the RAID reaches a specified age, the method individually tests the storage drive by subjecting the storage drive to a stress workload test. This stress workload test may be designed to place additional stress on the storage drive while refraining from adding stress to other storage drives in the RAID. In the event the storage drive fails the stress workload test (e.g., the storage drive cannot adequately handle the additional workload or generates errors in response to the additional workload), the method replaces the storage drive with a spare storage drive and rebuilds the RAID. In certain embodiments, the method tests the storage drive with greater frequency as the age of the storage drive increases. A corresponding system and computer program product are also disclosed.
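
A rough Python sketch of the age-triggered test-and-replace loop; the 40,000-hour age threshold, the error-rate check, and the helper objects are assumptions made for the example.

```python
SPECIFIED_AGE_HOURS = 40_000  # placeholder threshold; the text just says "a specified age"

def stress_test(drive):
    """Stand-in for the isolated stress workload: returns True if the drive passes."""
    # A real test would drive extra I/O at this device only and check for errors.
    return drive["error_rate"] < 0.01

def rebuild(raid_drives):
    print("rebuilding RAID onto", [d["id"] for d in raid_drives])

def monitor_raid(raid_drives, spare):
    """Test aging members of a RAID and replace any drive that fails the test."""
    for i, drive in enumerate(raid_drives):
        if drive["power_on_hours"] < SPECIFIED_AGE_HOURS:
            continue
        if stress_test(drive):
            continue
        # Failed drive: swap in the spare and rebuild the array onto it.
        raid_drives[i] = spare
        rebuild(raid_drives)

monitor_raid(
    [{"id": "d0", "power_on_hours": 41_000, "error_rate": 0.05},
     {"id": "d1", "power_on_hours": 10_000, "error_rate": 0.00}],
    spare={"id": "spare", "power_on_hours": 0, "error_rate": 0.0},
)
```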

Publication date: 23-02-2017

ASSIGNING CACHE CONTROL BLOCKS AND CACHE LISTS TO MULTIPLE PROCESSORS TO CACHE AND DEMOTE TRACKS IN A STORAGE SYSTEM

Number: US20170052902A1
Assignee:

Provided are a computer program product, system, and method for assigning cache control blocks and cache lists to multiple processors to cache and demote tracks in a storage system. Cache control blocks are assigned to processors. A track added to the cache for one of the processors is assigned one of the cache control blocks assigned to the processor. There are a plurality of lists one list for each of the processors and the cache control blocks assigned to the processor. A track to add to cache for a request is received from an initiating processor comprising one of the processors. One of the cache control blocks assigned to the initiating processor is allocated for the track to add to the cache. The track to add to the cache is indicated on the list for the initiating processor. 1. A computer program product for managing tracks in a storage in a cache accessed by a plurality of processors , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:providing assignments of cache control blocks to the processors, wherein a track added to the cache for one of the processors is assigned one of the cache control blocks assigned to the processor;providing a plurality of lists, wherein there is one list for each of the processors and the cache control blocks assigned to the processor;receiving a track to add to cache for a request from an initiating processor comprising one of the processors;allocating one of the cache control blocks assigned to the initiating processor for the track to add to the cache; andindicating the track to add to the cache on the list for the initiating processor.2. The computer program product of claim 1 , wherein the control blocks assigned to each of the processors comprises a range of control block index values claim 1 , wherein each of the control block index values map to a location in the cache.3. The ...

Publication date: 20-03-2018

Assigning cache control blocks and cache lists to multiple processors to cache and demote tracks in a storage system

Number: US0009921974B2

Provided are a computer program product, system, and method for assigning cache control blocks and cache lists to multiple processors to cache and demote tracks in a storage system. Cache control blocks are assigned to processors. A track added to the cache for one of the processors is assigned one of the cache control blocks assigned to the processor. There are a plurality of lists one list for each of the processors and the cache control blocks assigned to the processor. A track to add to cache for a request is received from an initiating processor comprising one of the processors. One of the cache control blocks assigned to the initiating processor is allocated for the track to add to the cache. The track to add to the cache is indicated on the list for the initiating processor.

Publication date: 04-05-2017

ADJUSTING ACTIVE CACHE SIZE BASED ON CACHE USAGE

Number: US20170124000A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for adjusting active cache size based on cache usage. An active cache in at least one memory device caches tracks in a storage during computer system operations. An inactive cache in the at least one memory device is not available to cache tracks in the storage during the computer system operations. During caching operations in the active cache, information is gathered on cache hits to the active cache and cache hits that would occur if the inactive cache was available to cache data during the computer system operations. The gathered information is used to determine whether to configure a portion of the inactive cache as part of the active cache for use during the computer system operations.
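
A small sketch of the bookkeeping this abstract describes: counting hits served by the active cache and hits that only the inactive portion would have served. The `CacheSizer` name and the 10% benefit threshold are illustrative assumptions.

```python
class CacheSizer:
    """Track hits the active cache got and hits the inactive portion *would* have
    gotten, then decide whether activating part of the inactive cache is worthwhile."""

    def __init__(self):
        self.active_hits = 0
        self.would_be_hits = 0   # hits only a larger (active + inactive) cache would serve
        self.total_accesses = 0

    def record_access(self, in_active, in_inactive_shadow):
        self.total_accesses += 1
        if in_active:
            self.active_hits += 1
        elif in_inactive_shadow:
            self.would_be_hits += 1

    def should_expand_active_cache(self):
        if not self.total_accesses:
            return False
        # Expand if the extra hit ratio the inactive cache would contribute is significant.
        return self.would_be_hits / self.total_accesses > 0.10

sizer = CacheSizer()
sizer.record_access(in_active=False, in_inactive_shadow=True)
print(sizer.should_expand_active_cache())  # True after one shadow hit
```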

Publication date: 25-10-2016

Interacting with a remote server over a network to determine whether to allow data exchange with a resource at the remote server

Number: US0009479525B2

Provided are a computer program product, system, and method for interacting with a remote server over a network to determine whether to allow data exchange with a resource at the remote server. Detection is made of an attempt to exchange data with the remote resource over the network. At least one computer instruction is executed to perform at least one interaction with the server over the network to request requested server information for each of the at least one interaction. At least one instance of received server information is received. A determination is made whether the at least one instance of the received server information satisfies at least one security requirement. A determination is made of whether to prevent the exchanging of data with the remote resource based on whether the at least one instance of the received server information satisfies the at least one security requirement.
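
A hedged sketch of the gating logic: gather server information through one or more interactions, then allow the exchange only if the security requirements are met. The callables and example checks are placeholders, not the patent's actual requirements.

```python
def allow_exchange(fetch_server_info, security_requirements):
    """Gate a data exchange on information gathered from the remote server.

    `fetch_server_info` is an assumed list of callables that each perform one
    interaction with the server; `security_requirements` are predicates over
    the collected responses."""
    responses = [fetch() for fetch in fetch_server_info]
    satisfied = all(req(responses) for req in security_requirements)
    return satisfied  # False means the exchange with the remote resource is prevented

# Example: require the server to report TLS support and a known software version.
ok = allow_exchange(
    [lambda: {"tls": True}, lambda: {"version": "2.1"}],
    [lambda rs: any(r.get("tls") for r in rs),
     lambda rs: any(r.get("version", "").startswith("2.") for r in rs)],
)
print(ok)  # True
```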

Publication date: 23-02-2017

DETERMINING ADJUSTMENTS OF STORAGE DEVICE TIMEOUT VALUES BASED ON SYNCHRONOUS OR ASYNCHRONOUS REMOTE COPY STATE

Number: US20170052724A1
Assignee:

A determination is made as to whether a plurality of storage volumes controlled by a processor complex are secondary storage volumes that are in an asynchronous copy relationship with a plurality of primary storage volumes. A storage device timeout value for a storage device that stores the plurality of storage volumes is changed from a predetermined low value to a predetermined high value, wherein the predetermined high value is indicative of a greater duration of time than the predetermined low value, in response to determining that each of the plurality of storage volumes controlled by the processor complex and stored in the storage device are secondary storage volumes that are in the asynchronous copy relationship with the plurality of primary storage volumes. 120-. (canceled)21. A method , comprising:determining that at least one storage volume of a plurality of storage volumes controlled by a processor complex and stored in a storage device is a secondary storage volume that is in a synchronous copy relationship with a primary storage volume of a plurality of primary storage volumes, wherein a storage device timeout value is to be assigned to one of at least two values comprising a predetermined low value and a predetermined high value, wherein the predetermined high value is indicative of a greater duration of time than the predetermined low value; andassigning the storage device timeout value to the predetermined low value.22. The method of claim 21 , wherein a request from a host times out if data corresponding to the request is not retrieved from the storage device within a time indicated by the storage device timeout value.23. A method claim 21 , comprising:determining that at least one storage volume of a plurality of storage volumes controlled by a processor complex and stored in a storage device is a primary storage volume that is in an asynchronous or a synchronous copy relationship with at least one secondary storage volume of a plurality of ...

Publication date: 20-04-2017

POPULATING A SECOND CACHE WITH TRACKS FROM A FIRST CACHE WHEN TRANSFERRING MANAGEMENT OF THE TRACKS FROM A FIRST NODE TO A SECOND NODE

Number: US20170109284A1
Assignee:

Provided are a computer program product, system, and method for populating a second cache with tracks from a first cache when transferring management of the tracks from a first node to a second node. Management of a first group of tracks in the storage managed by the first node is transferred to the second node managing access to a second group of tracks in the storage. After the transferring the management of the tracks, the second node manages access to the first and second groups of tracks and caches accessed tracks from the first and second groups in the second cache of the second node. The second cache of the second node is populated with the tracks in a first cache of the first node 1. A computer program product for transferring management of access to tracks from first node to a second node that manage access to a storage , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations , the operations comprising:transferring management of a first group of tracks in the storage managed by the first node to the second node managing access to a second group of tracks in the storage, wherein after the transferring the management of the tracks, the second node manages access to the first and second groups of tracks and caches accessed tracks from the first and second groups in a second cache of the second node; andpopulating the second cache of the second node with the tracks in a first cache of the first node.2. The computer program product of claim 1 , wherein the populating the second cache comprises:transferring tracks in the first cache not already in the second cache to the second cache.3. The computer program product of claim 1 , wherein the populating the second cache with the tracks in the first cache comprises:transferring a list of tracks in the first cache to the second node;accessing, by the second node, tracks in the list that are not already ...

Publication date: 12-01-2017

INTERACTING WITH A REMOTE SERVER OVER A NETWORK TO DETERMINE WHETHER TO ALLOW DATA EXCHANGE WITH A RESOURCE AT THE REMOTE SERVER

Number: US20170013010A1
Assignee:

Provided are a computer program product, system, and method for interacting with a remote server over a network to determine whether to allow data exchange with a resource at the remote server. Detection is made of an attempt to exchange data with the remote resource over the network. At least one computer instruction is executed to perform at least one interaction with the server over the network to request requested server information for each of the at least one interaction. At least one instance of received server information is received. A determination is made whether the at least one instance of the received server information satisfies at least one security requirement. A determination is made of whether to prevent the exchanging of data with the remote resource based on whether the at least one instance of the received server information satisfies the at least one security requirement. 1. A computer program product for managing a computational device access to a remote resource at a server over a network , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes in the computational device to perform operations , the operations comprising:detecting an attempt by the computational device to exchange data with the remote resource over the network;executing at least one computer instruction to perform at least one interaction by the computational device with the server over the network to request server information for each of the at least one interaction in response to detecting the attempt to exchange data with the remote resource;receiving server information in response to each of the at least one interaction with the server for the requested server information;determining from the received server information whether the received server information satisfies at least one security requirement;and determining whether to prevent the computational device from exchanging of ...

Publication date: 05-09-2017

Performance-based multi-mode task dispatching in a multi-processor core system for extreme temperature avoidance

Number: US0009753773B1

In one embodiment of multi-mode task dispatching for extreme temperature avoidance, a performance-based dispatching mode includes a heating sub-mode in which heat generating non-system workload tasks are dispatched to idle processor cores of a set of available processor cores to raise the temperature of processing cores receiving a heat generating task. The heating sub-mode is entered if a multi-processor core temperature such as the ambient temperature of a CPU complex, for example, is below a sub-mode temperature threshold value. In this manner, the ambient temperature of the CPU complex may be prevented from reaching or maintaining a level which causes the CPU complex to fully or partially shut down due to low temperatures. Other features and aspects may be realized, depending upon the particular application.
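
A simplified sketch of the heating sub-mode: when the ambient temperature is below a threshold, idle cores receive a heat-generating task before normal dispatching proceeds. The threshold value and the core/task representations are assumptions for the example.

```python
HEATING_THRESHOLD_C = 5.0   # illustrative sub-mode threshold, not a value from the patent

def dispatch(tasks, cores, ambient_temp_c, heat_generating_task):
    """Performance-based dispatching with a heating sub-mode (illustrative).

    `cores` is an assumed list of dicts with 'busy' flags; `heat_generating_task`
    is a stand-in for a non-system workload used purely to warm idle cores."""
    assignments = []
    if ambient_temp_c < HEATING_THRESHOLD_C:
        # Heating sub-mode: give every idle core a heat-generating task so the
        # CPU complex does not cool to the point of shutting down.
        for core in cores:
            if not core["busy"]:
                assignments.append((core["id"], heat_generating_task))
    # Normal performance-based dispatching for the real workload.
    for task, core in zip(tasks, sorted(cores, key=lambda c: c["busy"])):
        assignments.append((core["id"], task))
    return assignments

print(dispatch(["t1"], [{"id": 0, "busy": False}, {"id": 1, "busy": True}],
               ambient_temp_c=2.0, heat_generating_task="warmup"))
```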

Publication date: 08-06-2017

COMMUNICATIONS TO A PLURALITY OF CLOUD STORAGES VIA A PLURALITY OF COMMUNICATIONS PROTOCOLS THAT CHANGE OVER TIME

Number: US20170163772A1
Assignee:

Provided are a method, a system, and a computer program product in which a computational device transmits, via a first communications protocol, a first set of data to a first cloud storage maintained by a first entity. The computational device transmits, via a second communications protocol, a second set of data to a second cloud storage maintained by a second entity.

Publication date: 25-05-2017

INTELLIGENT STRESS TESTING AND RAID REBUILD TO PREVENT DATA LOSS

Number: US20170147437A1

A method for intelligently rebuilding a RAID includes subjecting a storage drive in an existing RAID to a stress workload test by placing the storage drive in a RAID 1 configuration with a spare storage drive. In the event the storage drive fails the stress workload test but can still be read, the method uses the RAID 1 configuration to copy recoverable data from the failing storage drive to the spare storage drive. The method uses other storage drives in the existing RAID to reconstruct, on the spare storage drive, data that is not recoverable from the failing storage drive. Either before or after all non-recoverable data has been reconstructed on the spare storage drive, the method logically replaces, in the existing RAID, the failing storage drive with the spare storage drive. A corresponding system and computer program product are also disclosed. 1. A method for intelligently rebuilding a RAID , the method comprising:subjecting a storage drive in an existing RAID to a stress workload test by placing the storage drive in a RAID 1 configuration with a spare storage drive;in the event the storage drive (hereinafter “the failing storage drive”) fails the stress workload test but can still be read, using the RAID 1 configuration to copy recoverable data from the failing storage drive to the spare storage drive;using other storage drives in the existing RAID to reconstruct, on the spare storage drive, data not recoverable from the failing storage drive; andlogically replacing, in the existing RAID, the failing storage drive with the spare storage drive.2. The method of claim 1 , wherein subjecting the storage drive to the stress workload test comprises performing the stress workload test when the storage drive reaches a specified age.3. The method of claim 1 , wherein subjecting the storage drive to the stress workload test comprises simultaneously refraining from stressing other storage drives in the RAID during the stress workload test.4. The method of claim 1 , ...
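
An illustrative sketch of the mirror-assisted rebuild: readable blocks are copied from the failing drive, unreadable ones are reconstructed from the remaining members, and the spare logically replaces the failing drive. The data structures and the `reconstruct` helper are stand-ins, not the patented implementation.

```python
def reconstruct(addr, other_drives):
    # Stand-in for a parity/XOR reconstruction of one block from the other members.
    return f"rebuilt@{addr}"

def rebuild_with_mirror(failing_drive, spare, other_drives, raid):
    """Illustrative flow for the mirror-assisted rebuild described above."""
    recovered = []
    for block in failing_drive["blocks"]:
        if block["readable"]:
            # RAID 1 pairing lets readable data be copied straight to the spare.
            recovered.append(block["data"])
        else:
            # Unreadable data is reconstructed from the other members of the RAID.
            recovered.append(reconstruct(block["addr"], other_drives))
    spare["blocks"] = recovered
    raid["members"] = [spare if d is failing_drive else d for d in raid["members"]]
    return raid

failing = {"blocks": [{"addr": 0, "readable": True, "data": "A"},
                      {"addr": 1, "readable": False, "data": None}]}
spare = {"blocks": []}
raid = {"members": [failing, {"blocks": []}]}
print(rebuild_with_mirror(failing, spare, raid["members"][1:], raid)["members"][0]["blocks"])
```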

Publication date: 24-11-2016

DETERMINING ADJUSTMENTS OF STORAGE DEVICE TIMEOUT VALUES BASED ON SYNCHRONOUS OR ASYNCHRONOUS REMOTE COPY STATE

Number: US20160342349A1
Assignee:

A determination is made as to whether a plurality of storage volumes controlled by a processor complex are secondary storage volumes that are in an asynchronous copy relationship with a plurality of primary storage volumes. A storage device timeout value for a storage device that stores the plurality of storage volumes is changed from a predetermined low value to a predetermined high value, wherein the predetermined high value is indicative of a greater duration of time than the predetermined low value, in response to determining that each of the plurality of storage volumes controlled by the processor complex and stored in the storage device are secondary storage volumes that are in the asynchronous copy relationship with the plurality of primary storage volumes.

Publication date: 28-03-2017

Determination of memory access patterns of tasks in a multi-core processor

Number: US0009606835B1

A plurality of processing entities in which a plurality of tasks are executed are maintained. Memory access patterns are determined for each of the plurality of tasks by dividing a memory associated with the plurality of processing entities into a plurality of memory regions, and for each of the plurality of tasks, determining how many memory accesses take place in each of the memory regions, by incrementing a counter associated with each memory region in response to a memory access. Each of the plurality of tasks are allocated among the plurality of processing entities, based on the determined memory access patterns for each of the plurality of tasks.
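
A minimal sketch of the per-task, per-region access counters; the 1 MiB region size and the `AccessProfiler` name are assumptions. Tasks whose hot regions coincide could then be placed on the same processing entity.

```python
from collections import defaultdict

REGION_SIZE = 1 << 20  # assume 1 MiB regions; the text does not fix a region size

class AccessProfiler:
    """Per-task counters of accesses to each memory region (illustrative)."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # task -> region -> count

    def record(self, task_id, address):
        region = address // REGION_SIZE
        self.counts[task_id][region] += 1

    def hottest_region(self, task_id):
        regions = self.counts[task_id]
        return max(regions, key=regions.get) if regions else None

profiler = AccessProfiler()
profiler.record("taskA", 0x100)
profiler.record("taskA", 0x200)
profiler.record("taskA", 0x200000)
print(profiler.hottest_region("taskA"))  # region 0
```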

Publication date: 25-05-2017

RAID DATA LOSS PREVENTION

Number: US20170147436A1

A method for preventing data loss in a RAID includes monitoring the age of storage drives making up a RAID. When a storage drive in the RAID reaches a specified age, the method individually tests the storage drive by subjecting the storage drive to a stress workload test. This stress workload test may be designed to place additional stress on the storage drive while refraining from adding stress to other storage drives in the RAID. In the event the storage drive fails the stress workload test (e.g., the storage drive cannot adequately handle the additional workload or generates errors in response to the additional workload), the method replaces the storage drive with a spare storage drive and rebuilds the RAID. In certain embodiments, the method tests the storage drive with greater frequency as the age of the storage drive increases. A corresponding system and computer program product are also disclosed. 1. A method for preventing data loss in RAIDs , the method comprising:monitoring the age of storage drives making up a RAID;when a storage drive in the RAID reaches a specified age, individually testing the storage drive by subjecting the storage drive to a stress workload test, wherein the stress workload test isolates the storage drive to not place additional stress on other storage drives making up the RAID;determining whether the storage drive passed or failed the stress workload test; andin the event the storage drive failed the stress workload test, replacing the storage drive with a spare storage drive and rebuilding the RAID.2. The method of claim 1 , wherein monitoring the age of the storage drives comprises monitoring the age relative to a life expectancy published by a vendor of the storage drives.3. The method of claim 1 , wherein testing the storage drive comprises placing the storage drive in a RAID 1 configuration with another storage drive not belonging to the RAID.4. The method of claim 1 , further comprising testing the storage drive with greater ...

Publication date: 04-10-2016

Alternative port error recovery with limited system impact

Number: US0009459972B2

Various embodiments for troubleshooting a network device in a computing storage environment by a processor. In response to an error in a specific port, an alternative error recovery operation is initiated on the port by performing at least one of initiating a silent recovery operation by reloading a failed instruction, taking the port offline, cleaning up any active transactions associated with the port, performing a hardware reset operation port, and bringing the port online.

Publication date: 04-05-2017

DETERMINING CACHE PERFORMANCE USING A GHOST CACHE LIST INDICATING TRACKS DEMOTED FROM A CACHE LIST OF TRACKS IN A CACHE

Number: US20170124001A1
Assignee:

Provided are a computer program product, system, and method for determining cache performance using a ghost cache list. Tracks in the cache are indicated in a cache list. A track demoted from the cache is indicated in a ghost cache list in response to demoting the track in the cache. The demoted track is not indicated in the cache list. During caching operations, information is gathered on a number of cache hits comprising accesses to tracks indicated in the cache list and a number of ghost cache hits comprising accesses to tracks indicated in the ghost cache list. The gathered information on the cache hits and the ghost cache hits is used to generate information on cache performance improvements that would occur if the cache was increased in size to cache tracks in the ghost cache list. 1. A computer program product for managing a cache in a computer system to cache tracks stored in a storage , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:indicating tracks in the cache in a cache list;indicating a track demoted from the cache in a ghost cache list in response to demoting the track in the cache, wherein the demoted track is not indicated in the cache list;during caching operations, gathering information on a number of cache hits comprising accesses to tracks indicated in the cache list and a number of ghost cache hits comprising accesses to tracks indicated in the ghost cache list; andusing the gathered information on the cache hits and the ghost cache hits to generate information on cache performance improvements that would occur if the cache was increased in size to cache tracks in the ghost cache list.2. The computer program product of claim 1 , wherein the tracks indicated in the ghost cache list are not stored in the cache.3. The computer program product of claim 1 , wherein the operations further comprise: ...
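
A compact sketch of an LRU cache list paired with a ghost list of demoted tracks; ghost hits count accesses that a larger cache would have served. Sizes and names are illustrative assumptions.

```python
from collections import OrderedDict

class GhostCache:
    """LRU cache list plus a ghost list of recently demoted tracks, used only to
    count hits a larger cache would have served."""

    def __init__(self, size, ghost_size):
        self.size, self.ghost_size = size, ghost_size
        self.cache = OrderedDict()   # track -> None, in LRU order
        self.ghost = OrderedDict()
        self.hits = 0
        self.ghost_hits = 0

    def access(self, track):
        if track in self.cache:
            self.hits += 1
            self.cache.move_to_end(track)
            return
        if track in self.ghost:
            # This access would have been a hit if the cache were larger.
            self.ghost_hits += 1
            del self.ghost[track]
        self.cache[track] = None
        if len(self.cache) > self.size:
            demoted, _ = self.cache.popitem(last=False)
            self.ghost[demoted] = None            # demoted track goes on the ghost list
            if len(self.ghost) > self.ghost_size:
                self.ghost.popitem(last=False)

gc = GhostCache(size=2, ghost_size=2)
for t in ["a", "b", "c", "a"]:
    gc.access(t)
print(gc.hits, gc.ghost_hits)  # 0 1 -> "a" would have hit with a bigger cache
```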

Publication date: 28-09-2017

ENCRYPTION AND DECRYPTION OF DATA IN A CLOUD STORAGE BASED ON INDICATIONS IN METADATA

Number: US20170279812A1
Assignee:

Provided are a method, a system, and a computer program product in which metadata associated with encrypted data is maintained in a cloud computing environment, wherein the metadata indicates whether reading of information in the encrypted data is restricted geographically. A controller provides a decryption code for decrypting the encrypted data to a cloud server located in a geographical location, based on whether the metadata indicates whether the reading of information in the encrypted data is restricted geographically.

Publication date: 30-01-2018

Intelligent stress testing and RAID rebuild to prevent data loss

Number: US0009880903B2

A method for intelligently rebuilding a RAID includes subjecting a storage drive in an existing RAID to a stress workload test by placing the storage drive in a RAID 1 configuration with a spare storage drive. In the event the storage drive fails the stress workload test but can still be read, the method uses the RAID 1 configuration to copy recoverable data from the failing storage drive to the spare storage drive. The method uses other storage drives in the existing RAID to reconstruct, on the spare storage drive, data that is not recoverable from the failing storage drive. Either before or after all non-recoverable data has been reconstructed on the spare storage drive, the method logically replaces, in the existing RAID, the failing storage drive with the spare storage drive. A corresponding system and computer program product are also disclosed.

Publication date: 28-09-2017

DISTRIBUTION OF DATA IN CLOUD STORAGE BASED ON POLICIES MAINTAINED IN METADATA

Number: US20170279890A1
Assignee:

Provided are a method, a system, and a computer program product in which metadata associated with data is maintained, wherein the metadata indicates whether storage of the data is restricted geographically. A controller receives a request to store the data in cloud storage comprising a plurality of cloud servers located in a plurality of geographical locations. The controller determines where to store the data in the cloud storage, by interpreting the metadata.

Publication date: 08-06-2017

METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR DISTRIBUTED STORAGE OF DATA IN A HETEROGENEOUS CLOUD

Number: US20170163731A1
Assignee:

Provided are a method, a system, and a computer program product in which a computational device stores a first part of data in a first cloud storage maintained by a first entity. A second part of the data is stored in a second cloud storage maintained by a second entity.

Publication date: 08-06-2017

DISTRIBUTED STORAGE OF DATA IN A LOCAL STORAGE AND A HETEROGENEOUS CLOUD

Number: US20170160951A1
Assignee:

Provided are a method, a system, and a computer program product in which a storage controller determines a plurality of parts of a dataset. At least one part of the dataset is stored in a local storage coupled to the storage controller. At least one other part of the dataset in one or more cloud storages coupled to the storage controller. 1. A method comprising:determining, by a storage controller, a plurality of parts of a dataset;storing at least one part of the dataset in local storage coupled to the storage controller; andstoring at least one other part of the dataset in one or more cloud storages coupled to the storage controller, wherein the one or more cloud storages include a first cloud storage and a second cloud storage, the method further comprising:communicating, by the storage controller, via a first communications protocol with the first cloud storage;communicating, by the storage controller, via a second communications protocol that is different from the first communications protocol with the second cloud storage, wherein the first communications protocol provides a relatively higher level of security than the second communications protocol, and wherein the second communication protocol provides a relatively lower level of security than the first communications protocol;changing the first communications protocol that provides a relatively higher level of security than the second communications protocol, to another communications protocol, in response to an elapse of a first predetermined amount of time; andchanging the second communications protocol that provides a relatively lower level of security than the first communications protocol, to a different communications protocol, in response to an elapse of a second predetermined amount of time.2. The method of claim 1 , wherein the at least one part of the dataset stored in the local storage requires greater security than the at least one other part of the dataset that is stored in the one or more ...
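
A sketch of the protocol-rotation idea in the claim, assuming made-up protocol names and rotation intervals; it only shows a channel that switches to another protocol once a fixed amount of time has elapsed.

```python
import itertools, time

# Assumed protocol pools, ordered high-security first; names are illustrative only.
HIGH_SECURITY = itertools.cycle(["https+mTLS", "sftp"])
LOW_SECURITY = itertools.cycle(["https", "ftps"])

class CloudChannel:
    """Rotates the protocol used to talk to one cloud storage after a fixed interval."""

    def __init__(self, protocols, rotate_after_secs):
        self.protocols = protocols
        self.rotate_after = rotate_after_secs
        self.current = next(protocols)
        self.since = time.monotonic()

    def protocol(self):
        if time.monotonic() - self.since >= self.rotate_after:
            self.current = next(self.protocols)    # change protocol after the elapsed time
            self.since = time.monotonic()
        return self.current

secure_cloud = CloudChannel(HIGH_SECURITY, rotate_after_secs=3600)
open_cloud = CloudChannel(LOW_SECURITY, rotate_after_secs=7200)
print(secure_cloud.protocol(), open_cloud.protocol())
```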

Publication date: 20-04-2017

POPULATING A SECONDARY CACHE WITH UNMODIFIED TRACKS IN A PRIMARY CACHE WHEN REDIRECTING HOST ACCESS FROM A PRIMARY SERVER TO A SECONDARY SERVER

Number: US20170111468A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for populating a secondary cache with unmodified tracks in a primary cache when redirecting host access from a primary server to a secondary server. Host access to tracks is redirected from the primary server to the secondary server. Prior to the redirecting, updates to tracks in the primary storage were replicated to the secondary server. After the redirecting host access to the secondary server, host access is directed to the secondary server and the secondary storage. A secondary cache at the secondary server is populated with unmodified tracks in a primary cache at the primary server when the host access was redirected to the secondary server to make available to the host access redirected to the secondary server.

Publication date: 21-11-2017

Adjusting active cache size based on cache usage

Number: US0009824030B2

Provided are a computer program product, system, and method for adjusting active cache size based on cache usage. An active cache in at least one memory device caches tracks in a storage during computer system operations. An inactive cache in the at least one memory device is not available to cache tracks in the storage during the computer system operations. During caching operations in the active cache, information is gathered on cache hits to the active cache and cache hits that would occur if the inactive cache was available to cache data during the computer system operations. The gathered information is used to determine whether to configure a portion of the inactive cache as part of the active cache for use during the computer system operations.

Publication date: 23-02-2017

USING CACHE LISTS FOR MULTIPLE PROCESSORS TO CACHE AND DEMOTE TRACKS IN A STORAGE SYSTEM

Number: US20170052897A1
Assignee:

Provided are a computer program product, system, and method for using cache lists for multiple processors to cache and demote tracks in a storage system. Tracks in the storage stored in the cache are indicated in lists, wherein there is one list for each of a plurality of processors. Each of the processors processes the list for that processor to process the tracks in the cache indicated on the list. A determination is made of one of the lists from which to select one of the tracks in the cache indicated in the determined list to demote. The selected track is demoted from the cache. 1. A computer program product for managing tracks in a storage in a cache accessed by a plurality of processors , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:indicating tracks in the storage stored in the cache in lists, wherein there is one list for each of the plurality of processors, wherein each of the processors processes the list for that processor to process the tracks in the cache indicated on the list;determining one of the lists from which to select one of the tracks in the cache indicated in the determined list to demote; anddemoting the selected track from the cache.2. The computer program product of claim 1 , wherein for each of the lists claim 1 , there is a separate lock that needs to be obtained to add and move track identifiers for the tracks in the cache in the list.3. The computer program product of claim 2 , wherein the lock is obtained on the determined list to perform the demoting of the selected track indicated in the determined list.4. The computer program product of claim 1 , wherein the determination of one of the lists comprise processing the lists to determine the list having an entry indicating the track that has been in the cache for a longest period of time.5. The computer program product of claim 4 , ...
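
A sketch of demotion from per-processor lists: the list whose oldest entry has been cached longest is chosen, and only that list's lock is taken. Timestamps and names are illustrative assumptions.

```python
import threading
from collections import deque

class DemoteLists:
    """One track list (and one lock) per processor; demotion picks the list whose
    oldest entry has been cached longest."""

    def __init__(self, num_processors):
        self.lists = [deque() for _ in range(num_processors)]
        self.locks = [threading.Lock() for _ in range(num_processors)]

    def add(self, processor, track, timestamp):
        with self.locks[processor]:
            self.lists[processor].append((timestamp, track))

    def demote_one(self):
        # Choose the list whose head (oldest entry) has the smallest timestamp.
        candidates = [(lst[0][0], i) for i, lst in enumerate(self.lists) if lst]
        if not candidates:
            return None
        _, idx = min(candidates)
        with self.locks[idx]:                 # only this list's lock is needed
            return self.lists[idx].popleft()[1]

dl = DemoteLists(num_processors=2)
dl.add(0, "t0", timestamp=5)
dl.add(1, "t1", timestamp=3)
print(dl.demote_one())  # t1, the track that has been cached the longest
```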

Publication date: 29-08-2017

Performance-based multi-mode task dispatching in a multi-processor core system for high temperature avoidance

Number: US0009747139B1

In one embodiment, performance-based multi-mode task dispatching for high temperature avoidance in accordance with the present description, includes selecting processor cores as available to receive a dispatched task. Tasks are dispatched to a set of available processor cores for processing in a performance-based dispatching mode. If monitored temperature rises above a threshold temperature value, task dispatching logic switches to a thermal-based dispatching mode. If a monitored temperature falls below another threshold temperature value, dispatching logic switches back to the performance-based dispatching mode. If a monitored temperature of an individual processor core rises above a threshold temperature value, the processor core is redesignated as unavailable to receive a dispatched task. If the temperature of an individual processor core falls below another threshold temperature value, the processor core is redesignated as available to receive a dispatched task. Other features and aspects ...

Publication date: 28-11-2017

Interacting with a remote server over a network to determine whether to allow data exchange with a resource at the remote server

Number: US0009832218B2

Provided are a computer program product, system, and method for interacting with a remote server over a network to determine whether to allow data exchange with a resource at the remote server. Detection is made of an attempt to exchange data with the remote resource over the network. At least one computer instruction is executed to perform at least one interaction with the server over the network to request requested server information for each of the at least one interaction. At least one instance of received server information is received. A determination is made whether the at least one instance of the received server information satisfies at least one security requirement. A determination is made of whether to prevent the exchanging of data with the remote resource based on whether the at least one instance of the received server information satisfies the at least one security requirement.

Publication date: 07-11-2017

Determining cache performance using a ghost cache list indicating tracks demoted from a cache list of tracks in a cache

Number: US0009811474B2

Provided are a computer program product, system, and method for determining cache performance using a ghost cache list. Tracks in the cache are indicated in a cache list. A track demoted from the cache is indicated in a ghost cache list in response to demoting the track in the cache. The demoted track is not indicated in the cache list. During caching operations, information is gathered on a number of cache hits comprising accesses to tracks indicated in the cache list and a number of ghost cache hits comprising accesses to tracks indicated in the ghost cache list. The gathered information on the cache hits and the ghost cache hits is used to generate information on cache performance improvements that would occur if the cache was increased in size to cache tracks in the ghost cache list.

Publication date: 23-02-2017

USING CACHE LISTS FOR PROCESSORS TO DETERMINE TRACKS TO DEMOTE FROM A CACHE

Number: US20170052898A1
Assignee:

Provided are a computer program product, system, and method for using cache lists for processors to determine tracks in a storage to demote from a cache. Tracks in the storage stored in the cache are indicated in lists. There is one list for each of a plurality of processors. Each of the processors processes the list for that processor to process the tracks in the cache indicated on the list. There is a timestamp for each of the tracks indicated in the lists indicating a time at which the track was added to the cache. Tracks indicated in each of the lists having timestamps that fall within a range of timestamps are demoted. 1. A computer program product for managing tracks in a storage in a cache accessed by a plurality of processors , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:indicating tracks in the storage stored in the cache in lists, wherein there is one list for each of the plurality of processors, wherein each of the processors processes the list for that processor to process the tracks in the cache indicated on the list, wherein there is a timestamp for each of the tracks indicated in the lists indicating a time at which the track was added to the cache; anddemoting tracks indicated in each of the lists having timestamps that fall within a range of timestamps.2. The computer program product of claim 1 , wherein the operations further comprise:determining the range of the timestamps from timestamps for tracks indicated in one of the lists.3. The computer program product of claim 1 , wherein the operations further comprise:determining whether the demoting of the tracks in the lists from the cache within the range of timestamps resulted in a demotion of a percentage of the cache; andperforming at least one additional iteration of determining a new range of timestamps and demoting tracks indicated in each of ...

Publication date: 17-11-2016

PROCESSOR THREAD MANAGEMENT

Number: US20160335132A1
Assignee:

Provided are a computer program product, system, and method for managing processor threads of a plurality of processors. In one embodiment, a parameter of performance of the computing system is measured, and the configurations of one or more processor nodes are dynamically adjusted as a function of the measured parameter of performance. In this manner, the number of processor threads being concurrently executed by the plurality of processor nodes of the computing system may be dynamically adjusted in real time as the system operates to improve the performance of the system as it operates under various operating conditions. It is appreciated that systems employing processor thread management in accordance with the present description may provide other features in addition to or instead of those described herein, depending upon the particular application.

Publication date: 06-03-2018

Determining adjustments of storage device timeout values based on synchronous or asynchronous remote copy state

Number: US0009910609B2

A determination is made as to whether a plurality of storage volumes controlled by a processor complex are secondary storage volumes that are in an asynchronous copy relationship with a plurality of primary storage volumes. A storage device timeout value for a storage device that stores the plurality of storage volumes is changed from a predetermined low value to a predetermined high value, wherein the predetermined high value is indicative of a greater duration of time than the predetermined low value, in response to determining that each of the plurality of storage volumes controlled by the processor complex and stored in the storage device are secondary storage volumes that are in the asynchronous copy relationship with the plurality of primary storage volumes.

Publication date: 09-01-2018

Validation of storage volumes that are in a peer to peer remote copy relationship

Number: US0009864534B1

A peer to peer remote copy operation is performed between a primary storage controller and a secondary storage controller, to establish a peer to peer remote copy relationship between a primary storage volume and a secondary storage volume. Subsequent to indicating completion of the peer to peer remote copy operation to a host, a determination is made as to whether the primary storage volume and the secondary storage volume have identical data, by performing operations of staging data of the primary storage volume from auxiliary storage of the primary storage controller to local storage of the primary storage controller, and transmitting the data of the primary storage volume that is staged, to the secondary storage controller for comparison with data of the secondary storage volume stored in an auxiliary storage of the secondary storage controller.

Publication date: 16-01-2018

Processor thread management

Number: US0009870275B2

Provided are a computer program product, system, and method for managing processor threads of a plurality of processors. In one embodiment, a parameter of performance of the computing system is measured, and the configurations of one or more processor nodes are dynamically adjusted as a function of the measured parameter of performance. In this manner, the number of processor threads being concurrently executed by the plurality of processor nodes of the computing system may be dynamically adjusted in real time as the system operates to improve the performance of the system as it operates under various operating conditions. It is appreciated that systems employing processor thread management in accordance with the present description may provide other features in addition to or instead of those described herein, depending upon the particular application.

Publication date: 24-10-2017

Communicating health status when a management console is unavailable for a server in a mirror storage environment

Number: US0009800481B1

Provided are a computer program product, system, and method for communicating health status when a management console is unavailable for a server in a mirror storage environment. A determination at a first server is made that a management console is unavailable over the console network. The first server determines a health status at the first server and the first storage in response to determining that the management console cannot be reached over the console network. The health status indicates whether there are errors or no errors at the first server and the first storage. The first server transmits the determined health status to the second server over a mirroring network mirroring data between the first storage and a second storage managed by the second server. The determined health status is forwarded to an administrator.

Publication date: 06-01-2022

USING A CHARACTERISTIC OF A PROCESS INPUT/OUTPUT (I/O) ACTIVITY AND DATA SUBJECT TO THE I/O ACTIVITY TO DETERMINE WHETHER THE PROCESS IS A SUSPICIOUS PROCESS

Number: US20220004628A1
Assignee:

Provided are a computer program product, system, and method for detecting a security breach in a system managing access to a storage. Process Input/Output (I/O) activity by a process accessing data in a storage is monitored. A determination is made of a characteristic of the data subject to the I/O activity from the process. A determination is made as to whether a characteristic of the process I/O activity as compared to the characteristic of the data satisfies a condition. The process initiating the I/O activity is characterized as a suspicious process in response to determining that the condition is satisfied. A security breach is indicated in response to characterizing the process as the suspicious process. 125-. (canceled)26. A computer program product for detecting a security breach in a system managing access to a storage , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:monitoring Input/Output (I/O) activity by a process accessing data in a storage;determining a last access time the data subject to the monitored I/O activity was last accessed prior to being accessed by the monitored I/O activity;determining whether a difference of a process access time the monitored I/O activity accessed the data and the last access time for the data satisfies a condition;characterizing the process as a suspicious process in response to determining that the condition is satisfied; andindicating a security breach in response to characterizing the process as the suspicious process.27. The computer program product of claim 26 , wherein the condition comprises determining whether the difference exceeds a threshold claim 26 , wherein the process is characterized as the suspicious process if the difference exceeds the threshold.28. The computer program product of claim 26 , wherein the monitoring is performed with respect to a ...
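
A minimal sketch of the staleness test in the claims: a process is flagged when the gap between its access time and the data's previous access time exceeds a threshold. The six-month threshold is a placeholder, not a value from the patent.

```python
import time

STALENESS_THRESHOLD_SECS = 180 * 24 * 3600  # assumed: data untouched for roughly six months

def is_suspicious(process_access_time, data_last_access_time,
                  threshold=STALENESS_THRESHOLD_SECS):
    """A process touching data that had not been accessed for a long time is flagged."""
    return (process_access_time - data_last_access_time) > threshold

now = time.time()
if is_suspicious(now, now - 200 * 24 * 3600):
    print("security breach indicated for this process")
```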

Publication date: 02-01-2020

Determining when to replace a storage device using a machine learning module

Number: US20200004434A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for using a machine learning module to determine when to replace a storage device. Input on attributes of the storage device is provided to a machine learning module to produce an output value. A determination is made whether the output value indicates to replace the storage device. Indication is made to replace the storage device in response to determining that the output value indicates to replace the storage device.
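
A toy sketch of the decision step: feed device attributes to a trained model and replace the device when the output crosses a threshold. The toy model, attribute names, and threshold are assumptions; this excerpt does not define them.

```python
def should_replace(attributes, model, threshold=0.5):
    """Feed device attributes to a trained model and compare its output with a
    replacement threshold."""
    output = model(attributes)           # e.g. an expected-remaining-life score in [0, 1]
    return output <= threshold           # low remaining life -> replace the device

# Toy stand-in for a trained machine learning module.
toy_model = lambda a: max(0.0, 1.0 - a["read_errors"] / 100 - a["age_years"] / 10)

print(should_replace({"read_errors": 40, "age_years": 7}, toy_model))  # True
```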

Publication date: 02-01-2020

DETERMINING WHEN TO REPLACE A STORAGE DEVICE BY TRAINING A MACHINE LEARNING MODULE

Number: US20200004435A1
Assignee:

Provided are a computer program product, system, and method for using a machine learning module to determine when to replace a storage device. Input on attributes of the storage device is provided to a machine learning module to produce an output value. A determination is made whether the output value indicates to replace the storage device. Indication is made to replace the storage device in response to determining that the output value indicates to replace the storage device. 18-.9. A computer program product for determining when to replace a storage device deployed within a computing environment , the computer program product comprising a computer readable storage medium storing computer readable program code that when executed performs operations , the operations comprising:updating dynamic attributes for the storage device in response to access requests to the storage device;detecting a failure of the storage device; determining input comprising the dynamic attributes of the storage device that failed; and', 'using the input to train a machine learning module to produce an output value indicating no expected remaining life of the storage device; and', 'after training the machine learning module for the storage device that failed, executing the machine learning module to produce an output value based on dynamic storage attributes of an operational storage device to determine an expected remaining life of the operational storage device., 'in response to detecting the failure of the storage device, performing10. The computer program product of claim 9 , wherein to train the machine learning module comprises:executing the machine learning module with the input to produce a current output value of the storage device that failed;determining a margin of error of the current output value and an output value indicating no expected remaining life of the storage device that failed; andusing the margin of error and the input to train weights and biases of nodes in the ...

Publication date: 02-01-2020

DETERMINING WHEN TO PERFORM A DATA INTEGRITY CHECK OF COPIES OF A DATA SET USING A MACHINE LEARNING MODULE

Number: US20200004437A1
Assignee:

Provided are a computer program product, system, and method for using a machine learning module to determine when to perform a data integrity check of copies of a data set. Input on storage attributes of a plurality of storage units, each storage unit of the storage units storing a copy of a data set, is provided to a machine learning module to produce an output value. A determination is made as to whether the output value indicates to perform a data integrity check of the copies of the data set. A determination is made as to whether the copies of the data set on different storage units are inconsistent in response to determining to perform the data integrity check. At least one of the copies of the data set is corrected to synchronize all the copies of the data set. 1. A computer program product for checking data integrity of copies of a data set , the computer program product comprising a computer readable storage medium storing computer readable program code that when executed performs operations , the operations comprising:providing input on storage attributes of a plurality of storage units, each storage unit of the storage units storing a copy of a data set, to a machine learning module to produce an output value;determining whether the output value indicates to perform a data integrity check of the copies of the data set;determining whether the copies of the data set on different storage units are inconsistent in response to determining to perform the data integrity check; andcorrecting at least one of the copies of the data set to synchronize all the copies of the data set.2. The computer program product of claim 1 , wherein the correcting the at least one of the copies of the data set to synchronize all the copies of the data set comprises performing one of:using parity information for each copy of the data set of the copies of the data set to perform a parity check of the copy of the data set and if the copy of the data set has errors, using the parity ...

Publication date: 02-01-2020

Determining when to perform a data integrity check of copies of a data set by training a machine learning module

Number: US20200004439A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for using a machine learning module to determine when to perform a data integrity check of copies of a data set. Input on storage attributes of a plurality of storage units, each storage unit of the storage units storing a copy of a data set, is provided to a machine learning module to produce an output value. A determination is made as to whether the output value indicates to perform a data integrity check of the copies of the data set. A determination is made as to whether the copies of the data set on different storage units are inconsistent in response to determining to perform the data integrity check. At least one of the copies of the data set is corrected to synchronize all the copies of the data set.

Publication date: 02-01-2020

Determining when to perform error checking of a storage unit by using a machine learning module

Номер: US20200004623A1
Принадлежит: International Business Machines Corp

Provided are a computer program product, system, and method for using a machine learning module to determine when to perform error checking of a storage unit. Input on attributes of at least one storage device comprising the storage unit is provided to a machine learning module to produce an output value. An error check frequency is determined from the output value. A determination is made as to whether the error check frequency indicates to perform an error checking operation with respect to the storage unit. The error checking operation is performed in response to determining that the error check frequency indicates to perform the error checking operation.
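
A minimal sketch of how an output value might be turned into an error-check frequency; the interval mapping and scaling constants are assumptions, not taken from the patent.

```python
import time

def check_interval_seconds(output_value, min_interval=60, max_interval=86_400):
    """output_value near 1.0 -> check rarely; near 0.0 -> check often."""
    return min_interval + output_value * (max_interval - min_interval)

def maybe_error_check(output_value, last_check_time, run_check):
    if time.time() - last_check_time >= check_interval_seconds(output_value):
        run_check()
        return time.time()                          # new "last checked" timestamp
    return last_check_time

last = maybe_error_check(0.05, last_check_time=0.0,
                         run_check=lambda: print("running scrub of storage unit"))
```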

Подробнее
02-01-2020 дата публикации

DETERMINING WHEN TO PERFORM ERROR CHECKING OF A STORAGE UNIT BY TRAINING A MACHINE LEARNING MODULE

Номер: US20200004625A1
Принадлежит:

Provided are a computer program product, system, and method for using a machine learning module to determine when to perform error checking of a storage unit. Input on attributes of at least one storage device comprising the storage unit is provided to a machine learning module to produce an output value. An error check frequency is determined from the output value. A determination is made as to whether the error check frequency indicates to perform an error checking operation with respect to the storage unit. The error checking operation is performed in response to determining that the error check frequency indicates to perform the error checking operation. 1.-9. (canceled) 10. A computer program product for error checking data in a storage unit, the computer program product comprising a computer readable storage medium storing computer readable program code that when executed performs operations, the operations comprising: determining to train a machine learning module; and in response to determining to train the machine learning module, performing: determining inputs comprising attributes of at least one storage device of the storage unit; training the machine learning module to produce a desired output value indicating to perform an error checking operation of the storage unit from the determined inputs in response to detecting the error; and executing the machine learning module to produce an output value used to determine whether to perform an error checking operation with respect to the storage unit. 11. The computer program product of claim 10, wherein the operations further comprise: detecting an error while performing the error checking operation, wherein the determining to train the machine learning module occurs in response to detecting the error; and setting the desired output value to an output value indicating to perform error checking to use to train the machine learning module in response to detecting the error. 12. The computer program ...

Подробнее
02-01-2020 дата публикации

USING ALTERNATE RECOVERY ACTIONS FOR INITIAL RECOVERY ACTIONS IN A COMPUTING SYSTEM

Номер: US20200004634A1
Принадлежит:

Provided are a computer program product, system, and method for using alternate recovery actions for initial recovery actions in a computing system. An initial recovery table provides initial recovery actions to perform for errors detected in the computing system. An alternate recovery table is received including at least one alternate recovery action for at least one of the initial recovery actions. An alternative recovery action provided for an initial recovery action specifies a different recovery path involving at least one of a different action and a different component in the computing system than involved in the initial recovery action. A determination is made as to whether to use the initial recovery action in the initial recovery table for a detected error or the alternate recovery action in the alternate recovery table. The determined initial recovery action or alternate recovery action is used to address the detected error. 1.-23. (canceled) 24. A computer program product for performing a recovery action upon detecting an error in a computing system, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations, the operations comprising: detecting an error in the computing system; determining whether to use an initial recovery action for the detected error or an alternate recovery action for the initial recovery action, wherein an alternative recovery action provided for an initial recovery action specifies a different recovery path involving at least one of a different action and a different component in the computing system than involved in the initial recovery action for which the alternative recovery action is provided; and using the initial recovery action or the alternate recovery action determined to use to address the detected error. 25. The computer program product of claim 24, wherein the operations further comprise: maintaining a ...
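
A minimal sketch of the table lookup, assuming a simple keying by error code and a selection flag; the error codes, action names, and selection rule are illustrative, not from the patent.

```python
# Initial recovery table: error code -> default recovery action.
initial_recovery = {
    "E_PATH_TIMEOUT": "reset_adapter",
    "E_TRACK_CRC":    "re-read_from_mirror",
}
# Alternate recovery table: overrides selected entries with a different recovery path.
alternate_recovery = {
    "E_PATH_TIMEOUT": "failover_to_redundant_adapter",   # different component/path
}

def recovery_action(error_code, use_alternate):
    if use_alternate and error_code in alternate_recovery:
        return alternate_recovery[error_code]
    return initial_recovery[error_code]

print(recovery_action("E_PATH_TIMEOUT", use_alternate=True))
```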

Подробнее
02-01-2020 дата публикации

TRANSFER TRACK FORMAT INFORMATION FOR TRACKS IN CACHE AT A PRIMARY STORAGE SYSTEM TO A SECONDARY STORAGE SYSTEM TO WHICH TRACKS ARE MIRRORED TO USE AFTER A FAILOVER OR FAILBACK

Номер: US20200004649A1
Принадлежит:

Provided are a computer program product, system, and method to transfer track format information for tracks in cache at a primary storage system to a secondary storage system to which tracks are mirrored to use after a failover or failback. In response to a failover from the primary storage system to the secondary storage system, the primary storage system adds a track identifier of the track and track format information indicating a layout of data in the track, indicated in track metadata for the track in the primary storage, to a cache transfer list. The primary storage system transfers the cache transfer list to the secondary storage system to use the track format information in the cache transfer list for a track staged into the secondary cache having a track identifier in the cache transfer list. 123-. (canceled)24. A computer program product for performing a failover from a primary storage system having a primary cache and a primary storage to a secondary storage system having a secondary cache and a secondary storage , the computer program product comprising a computer readable storage medium having computer readable program code executed in the primary storage system to perform operations , the operations comprising:in response to failover from the primary storage system to the secondary storage system, for each track in the primary cache, adding a track identifier of the track and track format information indicating metadata of the track, including a layout of data in the track, to a cache transfer list; andtransferring the cache transfer list to the secondary storage system to cause the secondary storage system to use the track format information transferred with the cache transfer list for a track staged into the secondary cache from the secondary storage having a track identifier in the cache transfer list after the failover.25. The computer program product of claim 24 , wherein prior to the failover while the primary storage system comprises an active ...

Подробнее
03-01-2019 дата публикации

PREVENTING UNEXPECTED POWER-UP FAILURES OF HARDWARE COMPONENTS

Номер: US20190004581A1
Принадлежит:

In one embodiment, a method includes determining a plurality of hardware components of a system. The method also includes power cycling a first hardware component of the plurality of hardware components of the system according to a dynamic schedule. Also, the method includes determining whether the first hardware component experienced a power-up failure resulting from the power cycling. Moreover, the method includes outputting an indication to replace and/or repair the first hardware component in response to a determination that the first hardware component experienced the power-up failure resulting from the power cycling. Other systems, methods, and computer program products for preventing unexpected power-up failures of individual hardware components are described in accordance with more embodiments. 1. A method, comprising: determining a plurality of hardware components of a system; power cycling a first hardware component of the plurality of hardware components of the system according to a dynamic schedule; determining whether the first hardware component experienced a power-up failure resulting from the power cycling; and outputting an indication to replace and/or repair the first hardware component in response to a determination that the first hardware component experienced the power-up failure resulting from the power cycling. 2. The method as recited in claim 1, further comprising: routing input and/or output (I/O) requests that are destined for the first hardware component to a first redundant hardware component for processing thereof prior to and during the power cycling of the first hardware component, wherein the first redundant hardware component is configured to provide full back-up redundancy for the first hardware component within the system; and resuming sending I/O requests to the first hardware component in response to a determination that the first hardware component did not experience the power-up failure after the power cycling. 3. The method as ...
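
A minimal sketch of the flow, using a hypothetical component API (the Component class, route_io_to, and power_cycle are stand-ins invented for illustration): reroute a component's I/O to its redundant partner, power-cycle it on schedule, and flag it for repair if it fails to power up.

```python
class Component:
    def __init__(self, name):
        self.name, self.healthy = name, True
    def route_io_to(self, other):       # stand-in for real path/failover switching
        print(f"routing I/O from {self.name} to {other.name}")
    def power_cycle(self):              # stand-in; returns True on successful power-up
        print(f"power cycling {self.name}")
        return True

def scheduled_power_cycle(component, backup):
    component.route_io_to(backup)               # drain I/O before the cycle
    if component.power_cycle():
        backup.route_io_to(component)           # resume normal routing
    else:
        component.healthy = False
        print(f"replace/repair {component.name}")

scheduled_power_cycle(Component("adapter-0"), Component("adapter-1"))
```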

Подробнее
03-01-2019 дата публикации

POPULATING A SECOND CACHE WITH TRACKS FROM A FIRST CACHE WHEN TRANSFERRING MANAGEMENT OF THE TRACKS FROM A FIRST NODE TO A SECOND NODE

Номер: US20190004951A1
Принадлежит:

Provided are a computer program product, system, and method for populating a second cache with tracks from a first cache when transferring management of the tracks from a first node to a second node. Management of a first group of tracks in the storage managed by the first node is transferred to the second node managing access to a second group of tracks in the storage. After transferring the management of the tracks, the second node manages access to the first and second groups of tracks and caches accessed tracks from the first and second groups in the second cache of the second node. The second cache of the second node is populated with the tracks in a first cache of the first node. 1.-22. (canceled) 23. A computer program product for transferring tracks from a first node having a first cache to a second node having a second cache, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations, the operations comprising: providing a cache transfer list of tracks in the first cache to transfer to the second cache to manage by the second node; transferring a first number of tracks indicated in the cache transfer list maintained in the first cache from the first cache directly to the second cache; and transferring a second number of tracks indicated in the cache transfer list maintained in the first cache and stored in a storage from the storage to the second cache. 24. The computer program product of claim 23, wherein the operations further comprise: managing, by the second node, access to the tracks in the cache transfer list after transferring the tracks from the first cache to the second cache. 25. The computer program product of claim 23, wherein the transferring of the first number of tracks from the first cache and the transferring of the second number of tracks from the storage are performed in parallel. 26. The computer program product of claim 23, ...

Подробнее
27-01-2022 дата публикации

USING MULTI-TIERED CACHE TO SATISFY INPUT/OUTPUT REQUESTS

Номер: US20220027267A1
Принадлежит:

A computer-implemented method, according to one approach, includes: determining whether to satisfy an I/O request using a first tier of memory in a secondary cache by inspecting a bypass indication in response to determining that the input/output (I/O) request includes a bypass indication. The secondary cache is coupled to a primary cache and a data storage device. The secondary cache also includes the first tier of memory and a second tier of memory. Moreover, in response to determining to satisfy the I/O request using the first tier of memory in the secondary cache, the I/O request is satisfied using the first tier of memory in the secondary cache. The updated data is also destaged from the secondary cache to the data storage device in response to determining that data associated with the I/O request has been updated as the result of satisfying the I/O request using the secondary cache. 1. A system , comprising:a processor, wherein the processor is coupled to: a primary cache, a secondary cache, and a data storage device, wherein the secondary cache includes a first tier of memory and a second tier of memory, wherein performance characteristics associated with the first tier of memory in the secondary cache are greater than performance characteristics associated with the second tier of memory in the secondary cache; and in response to determining that an input/output (I/O) request includes a bypass indication, determine, by the processor, whether to satisfy the I/O request using the first tier of memory in the secondary cache by inspecting the bypass indication;', 'in response to determining to satisfy the I/O request using the first tier of memory in the secondary cache, satisfy, by the processor, the I/O request using the first tier of memory in the secondary cache; and', 'in response to determining that data associated with the I/O request has been updated as the result of satisfying the I/O request using the secondary cache, destage, by the processor, the ...
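
A minimal sketch of the data model only, under assumed semantics: the secondary cache has a fast tier and a slower tier, a bypass hint on the request decides whether the fast tier may be used, and updated data is destaged to the backing storage afterwards. The class and parameter names are hypothetical.

```python
class SecondaryCache:
    def __init__(self):
        self.fast_tier, self.slow_tier = {}, {}    # higher- vs lower-performance memory

    def satisfy(self, key, value, bypass_fast_tier, storage):
        tier = self.slow_tier if bypass_fast_tier else self.fast_tier
        tier[key] = value                          # satisfy the I/O in the chosen tier
        storage[key] = value                       # destage updated data to storage
        return value

storage = {}
cache = SecondaryCache()
cache.satisfy("track-17", b"new data", bypass_fast_tier=False, storage=storage)
```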

Подробнее
11-01-2018 дата публикации

ADJUSTING ACTIVE CACHE SIZE BASED ON CACHE USAGE

Номер: US20180011799A1
Принадлежит:

Provided are a computer program product, system, and method for adjusting active cache size based on cache usage. An active cache in at least one memory device caches tracks in a storage during computer system operations. An inactive cache in the at least one memory device is not available to cache tracks in the storage during the computer system operations. During caching operations in the active cache, information is gathered on cache hits to the active cache and cache hits that would occur if the inactive cache was available to cache data during the computer system operations. The gathered information is used to determine whether to configure a portion of the inactive cache as part of the active cache for use during the computer system operations. 124-. (canceled)25. A computer program product for managing cache in at least one memory device in a computer system to cache tracks stored in a storage , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:maintaining an active cache list indicating tracks in an active cache comprising a first portion of the at least one memory device to cache the tracks in the storage during computer system operations;maintaining an inactive cache list indicating tracks demoted from the active cache;during caching operations, gathering information on active cache hits comprising access requests to tracks indicated in the active cache list and inactive cache hits comprising access requests to tracks indicated in the inactive cache list; andusing the gathered information to determine whether to provision a second portion of the at least one memory device unavailable to cache user data to be part of the active cache for use to cache user data during the computer system operations.26. The computer program product of claim 25 , wherein the operations further comprise:providing an active Least ...
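
A minimal sketch of the hit-gathering idea, with assumed capacity, counters, and growth threshold: hits against a "ghost" list of demoted tracks approximate the hits a larger active cache would have captured.

```python
from collections import OrderedDict

CAPACITY = 4                                        # active cache size (illustrative)
active, inactive = OrderedDict(), OrderedDict()     # track id -> None, kept in LRU order
active_hits = inactive_hits = 0

def access(track):
    global active_hits, inactive_hits
    if track in active:
        active_hits += 1
        active.move_to_end(track)
        return
    if track in inactive:
        inactive_hits += 1                          # would have hit in a larger active cache
        inactive.pop(track)
    active[track] = None
    if len(active) > CAPACITY:                      # demote the LRU track to the inactive list
        demoted, _ = active.popitem(last=False)
        inactive[demoted] = None

def should_grow_active_cache(min_ratio=0.10):
    total = active_hits + inactive_hits
    return total > 0 and inactive_hits / total >= min_ratio

for t in ["a", "b", "c", "d", "e", "a", "b"]:       # "a" and "b" come back after demotion
    access(t)
print("grow active cache:", should_grow_active_cache())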

Подробнее
17-01-2019 дата публикации

Avoid out of space outage in a thinly provisioned box

Номер: US20190018595A1
Принадлежит: International Business Machines Corp

A computer determines free space of the thinly provisioned box and calculates a time of consumption of the free space. The computer increases a dispatch and a priority of a clean-up job based on determination that the time of consumption is below a threshold time of consumption value. The increase of the dispatch is performed by deletion of dirty extents from the thinly provisioned box. The priority of the clean-up job represents a priority for execution of a cleaning program on the thinly provisioned box, where the clean-up job deletes the dirty extents from the thinly provisioned box. The computer executes the clean-up job before allocation of a new extent in the free space of the thinly provisioned box based on determination that the free space is below a critical level value, where the new extent may reduce the free space of the thinly provisioned box.
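
A minimal sketch of the free-space calculation and priority escalation; the units, threshold hours, and critical level are illustrative assumptions.

```python
def time_to_exhaustion(free_gb, consumption_gb_per_hour):
    if consumption_gb_per_hour <= 0:
        return float("inf")
    return free_gb / consumption_gb_per_hour         # hours of free space left

def cleanup_priority(free_gb, consumption_gb_per_hour,
                     threshold_hours=24, critical_free_gb=10):
    if free_gb < critical_free_gb:
        return "run-before-any-new-allocation"        # reclaim dirty extents before allocating
    if time_to_exhaustion(free_gb, consumption_gb_per_hour) < threshold_hours:
        return "high"
    return "normal"

print(cleanup_priority(free_gb=50, consumption_gb_per_hour=5))
```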

Подробнее
24-01-2019 дата публикации

CONCURRENT DATA ERASURE AND REPLACEMENT OF PROCESSORS

Номер: US20190026229A1

A method for concurrently erasing data on a processor and preparing the processor for removal from a computing system is disclosed. In one embodiment, such a method includes determining tasks queued to be executed on a processor and reassigning the tasks to a different processor, such as to a different processor in the same cluster as the processor. The method further prevents new tasks from being assigned to the processor. The method waits for currently executing tasks on the processor to complete. Once the currently executing tasks are complete, the method initiates a cache-hostile job on the processor to evict entries in cache of the processor. Once the cache-hostile job is complete, the method enables the processor to be removed from a computing system such as a storage system controller. A corresponding system and computer program product are also disclosed. 1. A method for concurrently erasing data on a processor and preparing the processor for removal from a computing system , the method comprising:determining tasks queued to be executed on an identified processor belonging to a cluster of processors;reassigning the tasks to a different processor within the cluster;preventing new tasks from being assigned to the identified processor;waiting for currently executing tasks on the identified processor to complete prior to initiating a cache-hostile job on the identified processor to replace entries in cache of the identified processor;once the currently executing tasks are complete, initiating the cache-hostile job on the identified processor; andonce the cache-hostile job is complete, removing the identified processor from the cluster.2. (canceled)3. (canceled)4. The method of claim 1 , wherein the cluster is implemented within a storage controller.5. The method of claim 1 , wherein the cache-hostile job allocates an amount of memory at least as large as the cache.6. The method of claim 1 , wherein the cache-hostile job performs at least one of: randomly writing ...

Подробнее
31-01-2019 дата публикации

TRANSFER TRACK FORMAT INFORMATION FOR TRACKS IN CACHE AT A PRIMARY STORAGE SYSTEM TO A SECONDARY STORAGE SYSTEM TO WHICH TRACKS ARE MIRRORED TO USE AFTER A FAILOVER OR FAILBACK

Номер: US20190034302A1
Принадлежит:

Provided are a computer program product, system, and method to transfer track format information for tracks in cache at a primary storage system to a secondary storage system to which tracks are mirrored to use after a failover or failback. In response to a failover from the primary storage system to the secondary storage system, the primary storage system adds a track identifier of the track and track format information indicating a layout of data in the track, indicated in track metadata for the track in the primary storage, to a cache transfer list. The primary storage system transfers the cache transfer list to the secondary storage system to use the track format information in the cache transfer list for a track staged into the secondary cache having a track identifier in the cache transfer list. 1. A computer program product for performing a failover from a primary storage system having a primary cache and a primary storage to a secondary storage system having a secondary cache and a secondary storage , the computer program product comprising a computer readable storage medium having computer readable program code executed in the primary and the secondary storage systems to perform operations , the operations comprising:mirroring data from the primary storage system to the secondary storage system;initiating a failover from the primary storage system to the secondary storage system;in response to the failover, for each track indicated in a cache list of tracks in the primary cache, adding, by the primary storage system, a track identifier of the track and track format information indicating a layout of data in the track, indicated in track metadata for the track in the primary storage, to a cache transfer list;transferring, by the primary storage system, the cache transfer list to the secondary storage system; andusing, by the secondary storage system after the failover, the track format information transferred with the cache transfer list for a track staged ...

Подробнее
31-01-2019 дата публикации

Transfer track format information for tracks in cache at a first processor node to a second process node to which the first processor node is failing over

Номер: US20190034303A1
Принадлежит: International Business Machines Corp

Provided are a computer program product, system, and method for managing failover from a first processor node including a first cache to a second processor node including a second cache. Storage areas assigned to the first processor node are reassigned to the second processor node. For each track indicated in a cache list of tracks in the first cache for the reassigned storage areas, the first processor node adds a track identifier of the track and track format information indicating a layout and format of data in the track to a cache transfer list. The first processor node transfers the cache transfer list to the second processor node. The second processor node uses the track format information transferred with the cache transfer list to process read and write requests to tracks in the reassigned storage areas staged into the second cache.

Подробнее
31-01-2019 дата публикации

SAVING TRACK METADATA FORMAT INFORMATION FOR TRACKS DEMOTED FROM CACHE FOR USE WHEN THE DEMOTED TRACK IS LATER STAGED INTO CACHE

Номер: US20190034355A1
Принадлежит:

Provided are a computer program product, system, and method for saving track metadata format information for tracks demoted from cache for use when the demoted track is later staged into cache. A track is demoted from the cache and indicated in a demoted track list. The track format information for the demoted track is saved, wherein the track format information indicates a layout of data in the track. The saved track format information for the demoted track is used when the demoted track is staged back into the cache. 1. A computer program product for managing tracks in storage cached in a cache , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations , the operations comprising:demoting a track from the cache;indicating the demoted track in a demoted track list;saving track format information for the demoted track, wherein the track format information indicates a layout of data in the track; andusing the saved track format information for the demoted track when the demoted track is staged back into the cache.2. The computer program product of claim 1 , wherein the track format information comprises a track format code defined in a track format table associating track format codes with track format metadata.3. The computer program product of claim 1 , wherein the track format information comprises a track format code defined in a track format table associating track format codes with track format metadata claim 1 , wherein the using the saved track format information for the demoted track when the demoted track is staged back into the cache comprises:staging a track from the storage into the cache;generating a cache control block for the staged track;determining whether there is a saved track format information for the staged track that was saved when the staged track was previously demoted; andincluding the saved track format information for the ...

Подробнее
07-02-2019 дата публикации

PROVIDING TRACK FORMAT INFORMATION WHEN MIRRORING UPDATED TRACKS FROM A PRIMARY STORAGE SYSTEM TO A SECONDARY STORAGE SYSTEM

Номер: US20190042096A1
Принадлежит:

Provided are a computer program product, system, and method for providing track format information when mirroring updated tracks from a primary storage system to a secondary storage system. The primary storage system determines a track to mirror to the secondary storage system and determines whether there is track format information for the track to mirror. The track format information indicates a format and layout of data in the track, indicated in track metadata for the track. The primary storage system sends the track format information to the secondary storage system, in response to determining there is the track format information and mirrors the track to mirror to the secondary storage system. The secondary storage system uses the track format information for the track in the secondary cache when processing a read or write request to the mirrored track. 1. A computer program product for mirroring data from a primary storage system having a primary cache and a primary storage to a secondary storage system having a secondary cache and a secondary storage , the computer program product comprising a computer readable storage medium having computer readable program code executed in the primary and the secondary storage systems to perform operations , the operations comprising:determining, by the primary storage system, a track to mirror from the primary storage system to the secondary storage system;determining, by the primary storage system, whether there is track format information for the track to mirror that the primary storage system maintains for caching the track to mirror in the primary cache, wherein the track format information indicates a format and layout of data in the track, indicated in track metadata for the track;sending, by the primary storage system, the track format information to the secondary storage system, in response to determining there is the track format information;mirroring, by the primary storage system, the track to mirror to the ...

Подробнее
06-02-2020 дата публикации

DETERMINING WHEN TO SEND MESSAGE TO A COMPUTING NODE TO PROCESS ITEMS USING A MACHINE LEARNING MODULE

Номер: US20200042367A1
Принадлежит:

Provided are a computer program product, system, and method for determining when to send message to a computing node to process items using a machine learning module. A send message threshold indicates a send message parameter value for a send message parameter indicating when to send a message to the computing node with at least one requested item to process. Information related to sending of messages to the computing node to process requested items is provided to a machine learning module to produce a new send message parameter value for the send message parameter indicating when to send the message, which is set to the send message parameter value. A message is sent to the computing node to process at least one item in response to the current value satisfying the condition with respect to the send message parameter value. 1. A computer program product for sending a message to a computing node indicating a number of requested items to process at the computing node , wherein the computer program product comprises a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:providing a send message threshold indicating a send message parameter value for a send message parameter indicating when to send a message to the computing node with at least one requested item to process;providing to a machine learning module information related to sending of messages to the computing node to process requested items;receiving, from the machine learning module having processed the provided information, a new send message parameter value for the send message parameter indicating when to send the message;setting the send message parameter value to the new send message parameter value;determining whether a current value of the send message parameter satisfies a condition with respect to the send message parameter value; andsending a message to the computing node to process at least one ...
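
A minimal sketch under one assumed interpretation: the send message parameter is treated as a batch-size threshold, items accumulate until the current count satisfies the threshold, and the threshold can be replaced by a new value from the machine learning module. Names are hypothetical.

```python
class MessageBatcher:
    def __init__(self, send_message_parameter):
        self.threshold = send_message_parameter     # value produced by the ML module
        self.pending = []

    def add_item(self, item, send):
        self.pending.append(item)
        if len(self.pending) >= self.threshold:     # condition on the parameter satisfied
            send(self.pending)                      # send one message with the batched items
            self.pending = []

    def update_threshold(self, new_value):          # new send message parameter value
        self.threshold = new_value

batcher = MessageBatcher(send_message_parameter=3)
for i in range(5):
    batcher.add_item(f"task-{i}", send=lambda items: print("sending", items))
```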

Подробнее
06-02-2020 дата публикации

DETERMINING WHEN TO SEND MESSAGE TO A COMPUTING NODE TO PROCESS ITEMS BY TRAINING A MACHINE LEARNING MODULE

Номер: US20200042368A1
Принадлежит:

Provided are a computer program product, system, and method for determining when to send message to a computing node to process items by training a machine learning module. A machine learning module receives as input information related to sending of messages to the computing node to process items and outputs a send message parameter value for a send message parameter indicating when to send a message to the computing node. The send message parameter value is adjusted based on a performance condition and a performance condition threshold to produce an adjusted send message parameter value. The machine learning module is retrained with the input information related to the sending of messages to produce the adjusted send message parameter value. The retrained machine learning module is used to produce a new send message parameter value used to determine when to send a message. 1. A computer program product for sending a message to a computing node indicating a number of requested items to process at the computing node , wherein the computer program product comprises a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:providing a machine learning module that receives as input information related to sending of messages to the computing node to process items and that outputs a send message parameter value for a send message parameter indicating when to send a message to the computing node indicating at least one item to process;determining at least one of a first margin of error of a message processing time and a second margin of error of a processor utilization;adjusting the send message parameter value based on at least one of the first margin of error and the second margin of error to produce an adjusted send message parameter value;retraining the machine learning module with the input information related to the sending of messages to produce the adjusted send ...

Подробнее
06-02-2020 дата публикации

DETERMINING SECTORS OF A TRACK TO STAGE INTO CACHE BY TRAINING A MACHINE LEARNING MODULE

Номер: US20200042905A1
Принадлежит:

Provided are a computer program product, system, and method for determining sectors of a track to stage into cache by training a machine learning module. A machine learning module receives as input performance attributes of system components affected by staging tracks from the storage to the cache and outputs a staging strategy comprising one of a plurality of staging strategies, each indicating at least one of a plurality of sectors of a track to stage into the cache. A margin of error is determined based on a current value of a performance attribute and a threshold of the performance attribute. An adjusted staging strategy is determined based on the margin of error. The machine learning module is retrained with current performance attributes to output the adjusted staging strategy. 1. A computer program product for determining tracks to stage into cache from a storage, wherein the computer program product comprises a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising: providing a machine learning module that receives as input performance attributes of system components affected by staging tracks from the storage to the cache and outputs a staging strategy comprising one of a plurality of staging strategies, wherein each staging strategy indicates at least one of a plurality of sectors of a track to stage into the cache; determining a margin of error based on a current value of a performance attribute and a threshold of the performance attribute; determining an adjusted staging strategy of the plurality of staging strategies based on the margin of error; retraining the machine learning module with current performance attributes to output the adjusted staging strategy; and using the retrained machine learning module to output one of the staging strategies to use to determine sectors of a track to stage into the cache for a requested track not in the cache. 2. The computer ...

Подробнее
06-02-2020 дата публикации

DETERMINING SECTORS OF A TRACK TO STAGE INTO CACHE USING A MACHINE LEARNING MODULE

Номер: US20200042906A1
Принадлежит:

Provided are a computer program product, system, and method for determining sectors of a track to stage into cache using a machine learning module. Performance attributes of system components affected by staging tracks from the storage to the cache are provided to a machine learning module. An output is received, from the machine learning module having processed the provided performance attributes, indicating a staging strategy indicating sectors of a track to stage into the cache comprising one of a plurality of staging strategies. Sectors of an accessed track that is not in the cache are staged into the cache according to the staging strategy indicated in the output. 1. A computer program product for determining data to stage into cache from a storage , wherein the computer program product comprises a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:providing performance attributes of system components affected by staging tracks from the storage to the cache to a machine learning module;receiving, from the machine learning module having processed the performance attributes, an output indicating a staging strategy indicating sectors of a track to stage into the cache comprising one of a plurality of staging strategies; andstaging sectors of an accessed track that is not in the cache according to the staging strategy indicated in the output.2. The computer program product of claim 1 , wherein the plurality of staging strategies include at least a plurality of a partial track staging to stage all sectors from a requested sector of a track claim 1 , a sector staging to stage only the requested sectors of the track claim 1 , and a full track staging to stage all sectors of the track.3. The computer program product of claim 2 , wherein the performance attributes provided to the machine learning module comprise a plurality of:cache misses indicating a number of ...

Подробнее
15-02-2018 дата публикации

PROVIDING EXCLUSIVE USE OF CACHE ASSOCIATED WITH A PROCESSING ENTITY OF A PROCESSOR COMPLEX TO A SELECTED TASK

Номер: US20180046506A1
Принадлежит:

A plurality of processing entities are maintained in a processor complex. In response to determining that a task is a critical task, the critical task is dispatched to a scheduler, wherein it is preferable to prioritize execution of critical tasks over non-critical tasks. In response to dispatching the critical task to the scheduler, the scheduler determines which processing entity of the plurality of processing entities has a least amount of processing remaining to be performed for currently scheduled tasks. Tasks queued on the determined processing entity are moved to other processing entities, and the currently scheduled tasks on the determined processing entity are completed. In response to moving tasks queued on the determined processing entity to other processing entities and completing the currently scheduled tasks on the determined processing entity, the critical task is dispatched on the determined processing entity. 1. A method , comprising:maintaining a plurality of processing entities in a processor complex;in response to determining that a task is a critical task, dispatching the critical task to a scheduler, wherein it is preferable to prioritize execution of critical tasks over non-critical tasks;in response to dispatching the critical task to the scheduler, determining, by the scheduler, which processing entity of the plurality of processing entities has a least amount of processing remaining to be performed for currently scheduled tasks;moving tasks queued on the determined processing entity to other processing entities, and completing the currently scheduled tasks on the determined processing entity; andin response to moving tasks queued on the determined processing entity to other processing entities and completing the currently scheduled tasks on the determined processing entity, dispatching the critical task on the determined processing entity.2. The method of claim 1 , the method further comprising:in response to determining that the task is a ...
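
A minimal sketch of the scheduling decision, under an assumed model where each processing entity is represented by a list of queued task costs (the first item standing for the currently running task): pick the entity with the least remaining work, move its queued tasks elsewhere, and place the critical task on it.

```python
def dispatch_critical(entities, critical_task):
    """entities: dict name -> list of queued task costs (first item = currently running)."""
    target = min(entities, key=lambda e: sum(entities[e]))
    others = [e for e in entities if e != target]
    queued = entities[target][1:]                   # keep only the currently running task
    entities[target] = entities[target][:1]
    for i, task in enumerate(queued):               # move queued tasks to other entities
        entities[others[i % len(others)]].append(task)
    entities[target].append(critical_task)          # critical task runs next on the target
    return target

queues = {"cpu0": [5, 2, 2], "cpu1": [1, 1], "cpu2": [4, 3]}
print("critical task dispatched to", dispatch_critical(queues, critical_task=1))
```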

Подробнее
15-02-2018 дата публикации

RESERVING A CORE OF A PROCESSOR COMPLEX FOR A CRITICAL TASK

Номер: US20180046507A1
Принадлежит:

A plurality of cores are maintained in a processor complex. A core of the plurality of cores is reserved for execution of critical tasks, wherein it is preferable to prioritize execution of critical tasks over non-critical tasks. A scheduler receives a task for scheduling in the plurality of cores. In response to determining that the task is a critical task, the task is scheduled for execution in the reserved core. 1. A method , comprising:maintaining a plurality of cores in a processor complex;reserving a core of the plurality of cores for execution of critical tasks, wherein it is preferable to prioritize execution of critical tasks over non-critical tasks;receiving, by a scheduler, a task for scheduling in the plurality of cores; andin response to determining that the task is a critical task, scheduling the task for execution in the reserved core.2. The method of claim 1 , the method further comprising:in response to determining that the task is a non-critical task, scheduling the task for execution in a non-reserved core of the plurality of cores.3. The method of claim 2 , wherein the reserved core is exclusively used for the execution of critical tasks.4. The method of claim 2 , wherein the reserved core has a clean L1 cache and a clean L2 cache at a time at which the critical task is scheduled for execution in the reserved core.5. The method of claim 4 , wherein the critical task executes faster by using the clean L1 cache and the clean L2 cache of the reserved core claim 4 , in comparison to scheduling the critical task on any non-reserved core of the plurality of cores claim 4 , wherein in non-reserved cores L1 and L2 cache are shared among a plurality of tasks.6. The method of claim 1 , wherein the plurality of cores are sufficiently high in number claim 1 , such that reserving a single core of the plurality of cores as the reserved core for the execution of the critical tasks does not affect processing speed for execution of non-critical tasks.7. The ...

Подробнее
13-02-2020 дата публикации

OPTIMIZING SYNCHRONOUS I/O FOR zHYPERLINK

Номер: US20200050384A1

A method for dynamically adjusting utilization of I/O processing techniques includes providing functionality to execute a plurality of I/O processing techniques. The I/O processing techniques include a first I/O processing technique that uses a higher performance communication path for transmitting I/O and a second I/O processing technique that uses a lower performance communication path for transmitting I/O. The method automatically increases use of the first I/O processing technique and reduces use of the second I/O processing technique when the set of conditions is satisfied. Similarly, the method automatically increases use of the second I/O processing technique and reduces use of the first I/O processing technique when the set of conditions is not satisfied. A corresponding system and computer program product are also disclosed. 1. A method for optimizing I/O performance between a host system and a storage system , the method comprising:managing different physicals paths for transmitting I/O requests between a host system and a storage system, the different physical paths comprising a higher performance communication path and a lower performance communication path;monitoring conditions on the storage system, the conditions indicating availability of resources on the storage system;communicating information about the conditions from the storage system to the host system;determining whether the conditions satisfy certain thresholds;in response to determining that the conditions satisfy the thresholds, automatically increasing, by the host system, a proportion of the I/O requests that are processed over the higher performance communication path and decreasing a proportion of the I/O requests that are processed over the lower performance communication path; andin response to determining that the conditions do not satisfy the thresholds, automatically decreasing, by the host system, the proportion of the I/O requests that are processed over the higher performance ...
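
A minimal sketch of the proportional adjustment; the specific condition metrics, thresholds, and step size are illustrative assumptions, not the patented conditions.

```python
def adjust_split(high_path_share, conditions, step=0.05):
    """conditions: dict of storage-system metrics; thresholds below are illustrative."""
    healthy = conditions["free_task_slots"] > 32 and conditions["cache_free_pct"] > 20
    if healthy:
        high_path_share = min(1.0, high_path_share + step)   # shift I/O to the faster path
    else:
        high_path_share = max(0.0, high_path_share - step)   # shift I/O back to the slower path
    return high_path_share

share = 0.50
share = adjust_split(share, {"free_task_slots": 64, "cache_free_pct": 35})
print(f"route {share:.0%} of I/O over the higher-performance path")
```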

Подробнее
13-02-2020 дата публикации

TRANSACTION OPTIMIZATION DURING PERIODS OF PEAK ACTIVITY

Номер: US20200051045A1

A method includes establishing, for a transaction processing system, a maximum number of transactions that the transaction processing system can optimally handle at a time, as well as an optimal transaction rate. The method monitors a current number of transactions being processed by the transaction processing system. Incoming transactions that would cause the current number to exceed the maximum number are received into a queue, and transactions are released from the queue in accordance with the optimal transaction rate. The method further monitors a number of transactions waiting in the queue. When the number reaches an upper threshold, the method declines to admit additional transactions into the queue. When the number reaches a lower threshold, the method begins to admit additional transactions into the queue. A corresponding system and computer program product are also disclosed. 1. A method to prevent overloading of a transaction processing system , the method comprising:establishing, for a transaction processing system, a maximum number of transactions that the transaction processing system can optimally handle at a time, as well as an optimal transaction rate;monitoring a current number of transactions being processed by the transaction processing system;when the current number exceeds the maximum number, diverting incoming transactions into a queue;releasing transactions from the queue to the transaction processing system in accordance with the optimal transaction rate;monitoring a number of transactions waiting in the queue;when the number reaches an upper threshold, declining to admit additional transactions into the queue; andwhen the number reaches a lower threshold, admitting additional transactions into the queue.2. The method of claim 1 , wherein declining to admit additional transactions into the queue further comprises returning the additional transactions for retry at a later time.3. The method of claim 1 , wherein the queue resides outside of the ...
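
A minimal sketch of the queue-and-threshold behavior; the maximum in-flight count and the upper/lower queue thresholds are illustrative values.

```python
from collections import deque

MAX_IN_FLIGHT, UPPER, LOWER = 100, 500, 200

class Throttle:
    def __init__(self):
        self.in_flight = 0
        self.queue = deque()
        self.admitting = True

    def submit(self, txn):
        if self.in_flight < MAX_IN_FLIGHT:
            self.in_flight += 1                     # process immediately
            return "processing"
        if not self.admitting:
            return "retry-later"                    # declined; client retries later
        self.queue.append(txn)
        if len(self.queue) >= UPPER:
            self.admitting = False                  # stop admitting into the queue
        return "queued"

    def release_one(self):                          # called at the optimal transaction rate
        if self.queue and self.in_flight < MAX_IN_FLIGHT:
            self.queue.popleft()
            self.in_flight += 1
        if len(self.queue) <= LOWER:
            self.admitting = True                   # resume admitting into the queue
```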

Подробнее
10-03-2022 дата публикации

USING A MACHINE LEARNING MODULE TO PERFORM PREEMPTIVE IDENTIFICATION AND REDUCTION OF RISK OF FAILURE IN COMPUTATIONAL SYSTEMS

Номер: US20220075676A1
Принадлежит:

Input on a plurality of attributes of a computing environment is provided to a machine learning module to produce an output value that comprises a risk score that indicates a likelihood of a potential malfunctioning occurring within the computing environment. A determination is made as to whether the risk score exceeds a predetermined threshold. In response to determining that the risk score exceeds a predetermined threshold, an indication is transmitted to indicate that potential malfunctioning is likely to occur within the computing environment. A modification is made to the computing environment to prevent the potential malfunctioning from occurring. 124-. (canceled)25. A method , comprising:providing input on a plurality of attributes of a computing environment comprising one or more devices to a machine learning module to produce an output value that comprises a risk score that indicates a likelihood of a potential malfunctioning occurring within a computing environment; andin response to determining that the risk score exceeds a predetermined threshold, transmitting an indication to indicate that the potential malfunctioning is likely to occur within the computing environment, wherein the indication additionally indicates a level of severity of the potential malfunctioning.26. The method of claim 25 , wherein the computing environment is modified to prevent the potential malfunctioning from occurring.27. The method of claim 25 , wherein the computing environment comprises one or more devices comprising one or more storage controllers claim 25 , one or more storage drives claim 25 , and one or more host computing systems claim 25 , wherein the one or more storage controllers manage the storage drives to allow input/output (I/O) access to the one or more host computing systems.28. The method of claim 27 , wherein an attribute of the plurality of attributes is a measure of a firmware or software level of a device in comparison to a minimum or recommended firmware ...

Подробнее
10-03-2022 дата публикации

PERFORM PREEMPTIVE IDENTIFICATION AND REDUCTION OF RISK OF FAILURE IN COMPUTATIONAL SYSTEMS BY TRAINING A MACHINE LEARNING MODULE

Номер: US20220075704A1
Принадлежит:

A machine learning module is trained by receiving inputs comprising attributes of a computing environment, where the attributes affect a likelihood of failure in the computing environment. In response to an event occurring in the computing environment, a risk score that indicates a predicted likelihood of failure in the computing environment is generated via forward propagation through a plurality of layers of the machine learning module. A margin of error is calculated based on comparing the generated risk score to an expected risk score, where the expected risk score indicates an expected likelihood of failure in the computing environment corresponding to the event. An adjustment is made of weights of links that interconnect nodes of the plurality of layers via back propagation to reduce the margin of error, to improve the predicted likelihood of failure in the computing environment. 124-. (canceled)25. A method for training a machine learning module , the method comprising:receiving, by the machine learning module executing in a computational device, inputs comprising attributes of a computing environment comprising one or more devices, wherein the attributes affect a likelihood of failure in the computing environment;in response to an event occurring in the computing environment, generating, via forward propagation through a plurality of layers of the machine learning module, a risk score that indicates a predicted likelihood of failure in the computing environment;calculating a margin of error based on comparing the generated risk score to an expected risk score, wherein the expected risk score indicates an expected likelihood of failure in the computing environment corresponding to the event, and wherein the expected risk score is higher in response to a data loss or data integrity loss in comparison to a loss of access to data for a period of time; andadjusting weights of links that interconnect nodes of the plurality of layers via back propagation to reduce ...

Подробнее
21-02-2019 дата публикации

DESTAGING PINNED RETRYABLE DATA IN CACHE

Номер: US20190057042A1
Принадлежит:

Provided are techniques for destaging pinned retryable data in cache. A ranks scan structure is created with an indicator for each rank of multiple ranks that indicates whether pinned retryable data in a cache for that rank is destageable. A cache directory is partitioned into chunks, wherein each of the chunks includes one or more tracks from the cache. A number of tasks are determined for the scan of the cache. The number of tasks are executed to scan the cache to destage pinned retryable data that is indicated as ready to be destaged by the ranks scan structure, wherein each of the tasks selects an unprocessed chunk of the cache directory for processing until the chunks of the cache directory have been processed. 1. A computer program product , the computer program product comprising a computer readable storage medium having program code embodied therewith , the program code executable by at least one processor to perform:creating a ranks scan structure with an indicator for each rank of multiple ranks that indicates whether pinned retryable data in a cache for that rank is destageable;partitioning a cache directory into chunks, wherein each of the chunks includes one or more tracks from the cache;determining a number of tasks for a scan of the cache; andexecuting the number of tasks to scan the cache to destage pinned retryable data that is indicated as ready to be destaged by the ranks scan structure, wherein each of the tasks selects an unprocessed chunk of the cache directory for processing until the chunks of the cache directory have been processed.2. The computer program product of claim 1 , wherein the program code is executable by the at least one processor to perform:creating a ranks destageable structure with an indicator for each rank of multiple ranks that indicates whether pinned retryable data for that rank is destageable, wherein the ranks scan structure is a copy of the ranks destageable structure.3. The computer program product of claim 2 , ...
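
A minimal sketch of the scan structure and chunked, multi-task scan; the data shapes (per-rank flag map, (track, rank) directory entries, chunk size, worker count) are assumptions, and the destage itself is left as a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue, Empty

ranks_scan = {0: True, 1: False, 2: True}               # rank -> pinned retryable data destageable?
directory = [("trk%d" % i, i % 3) for i in range(30)]   # cache directory as (track, rank) entries

CHUNK = 10
chunks = Queue()
for i in range(0, len(directory), CHUNK):                # partition the directory into chunks
    chunks.put(directory[i:i + CHUNK])

def scan_task():
    while True:
        try:
            chunk = chunks.get_nowait()                  # claim an unprocessed chunk
        except Empty:
            return
        for track, rank in chunk:
            if ranks_scan.get(rank):
                pass                                     # placeholder: destage pinned retryable track

with ThreadPoolExecutor(max_workers=3) as pool:          # number of tasks chosen for the scan
    for _ in range(3):
        pool.submit(scan_task)
```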

Подробнее
01-03-2018 дата публикации

INTERACTING WITH A REMOTE SERVER OVER A NETWORK TO DETERMINE WHETHER TO ALLOW DATA EXCHANGE WITH A RESOURCE AT THE REMOTE SERVER

Номер: US20180063183A1
Принадлежит:

Provided are a computer program product, system, and method for interacting with a remote server over a network to determine whether to allow data exchange with a resource at the remote server. Detection is made of an attempt to exchange data with the remote resource over the network. At least one computer instruction is executed to perform at least one interaction with the server over the network to request requested server information for each of the at least one interaction. At least one instance of received server information is received. A determination is made whether the at least one instance of the received server information satisfies at least one security requirement. A determination is made of whether to prevent the exchanging of data with the remote resource based on whether the at least one instance of the received server information satisfies the at least one security requirement. 131-. (canceled)32. A computer program product for managing a computational device access to a remote resource at a remote server over a network , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:detecting an attempt by the computational device to initiate an exchange of data with the remote resource over the network;determining a security policy for a type of the remote resource;determining, for the security policy, at least one computer instruction to perform at least one interaction with the remote server to request server information;executing the at least one instruction to perform the at least one interaction with the remote server;determining whether server information is received for the at least one interaction with the remote server; anddetermining whether to allow or prevent the exchange of data with the remote server based on whether server information is received and, if server information is received, based on the ...

Подробнее
12-03-2015 дата публикации

INJECTING CONGESTION IN A LINK BETWEEN ADAPTORS IN A NETWORK

Номер: US20150071069A1

Provided are a computer program product, system, and method for injecting congestion in a link between adaptors in a network. A congestion request is sent to a selected adaptor in a containing network component comprising one of a plurality of network components. The selected adaptor is in communication with a linked adaptor in a linked network component comprising one of the network components. The congestion request causes a delay in servicing the selected adaptor to introduce congestion on a link between the selected adaptor and the linked adaptor. 1. A computer program product for testing a network comprised of network components including hosts and at least one switch , wherein the network components include adaptors to enable network communication among the network components , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:sending a congestion request to a selected adaptor in a containing network component comprising one of the network components, wherein the selected adaptor is in communication with a linked adaptor in a linked network component comprising one of the network components, wherein the congestion request causes a delay in servicing the selected adaptor to introduce congestion on a link between the selected adaptor and the linked adaptor.2. The computer program product of claim 1 , wherein the congestion request identifies a selected port in the selected adaptor claim 1 , wherein the congestion request causes the delay in servicing for the selected port.3. The computer program product of claim 1 , wherein the congestion request does not cause delay in servicing other ports in the selected adaptor or another adaptor in the containing network component.4. The computer program product of claim 1 , wherein the congestion request specifies a delay duration claim 1 , wherein the delay in servicing the ...

Подробнее
12-03-2015 дата публикации

INJECTING CONGESTION IN A LINK BETWEEN ADAPTORS IN A NETWORK

Номер: US20150071070A1

Provided are a computer program product, system, and method for injecting congestion in a link between adaptors in a network. A congestion request is received for the selected adaptor at a containing network component comprising one of the network components. In response to the received congestion request, servicing the selected adaptor is delayed to introduce congestion on a link between the selected adaptor and the linked adaptor. 1. A computer program product for testing a link between a selected adaptor and a linked adaptor each comprising one of a plurality of adaptors in network components linked in a network including a containing network component including the selected adaptor and a linked network component including the linked adaptor , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:receiving a congestion request for the selected adaptor at the containing network component; andin response to the received congestion request, delaying servicing the selected adaptor to introduce congestion on a link between the selected adaptor and the linked adaptor.2. The computer program product of claim 1 , wherein the congestion request identifies a selected port in the selected adaptor claim 1 , wherein the delay in servicing is with respect to the selected port.3. The computer program product of claim 2 , wherein the operations further comprise: initiating a service routine with respect to the selected port;', 'determining whether the flag indicates to delay the servicing in response to determining to perform the service operation;', 'performing the servicing of the selected port in response to determining the flag does not indicate to delay the servicing; and', 'completing the service routine without performing the servicing of the selected port in response to determining the flag indicates to delay the servicing., 'setting ...

Publication date: 28-02-2019

Restore current version of a track from a non-volatile storage into a new location in cache

Номер: US20190065325A1
Assignee: International Business Machines Corp

Provided are a computer program product, system, and method for restoring tracks in cache. A restore operation is initiated to restore a track in the cache from a non-volatile storage to which tracks in the cache are backed-up. The non-volatile storage includes a current version of the track and wherein a previous version of the track subject to the restore operation is stored in a first location in the cache. A second location in the cache is allocated for the current version of the track to restore from the non-volatile storage. The data for the current version of the track is transferred from the non-volatile storage to the second location in the cache. Data for the track is merged from the second location into the first location in the cache to complete restoring to the current version of the track in the first location from the non-volatile storage.

Publication date: 28-02-2019

MAINTAINING TRACK FORMAT METADATA FOR TARGET TRACKS IN A TARGET STORAGE IN A COPY RELATIONSHIP WITH SOURCE TRACKS IN A SOURCE STORAGE

Номер: US20190065381A1
Assignee:

Provided are a computer program product, system, and method for maintaining track format metadata for target tracks in a target storage in a copy relationship with source tracks in a source storage. Upon receiving a request to a requested target track in the target storage, the source track for the requested target track is staged from the source storage to a cache to be used as the requested target track in response to determining that the copy relationship information indicates that a source track needs to be copied to the requested target track. A determination is made of track format metadata for the requested target track, comprising the staged source track, indicating a format and layout of data in the requested target track and a track format code identifying the track format metadata. The track format code is included in a cache control block for the requested target track.

1. A computer program product for managing read and write requests to a target storage in a point-in-time copy relationship with a source storage, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to communicate with a cache and to perform operations, the operations comprising: receiving a request to a target track in the target storage; determining whether copy relationship information for the point-in-time copy relationship indicates that a source track needs to be copied to the requested target track in the target storage; staging a source track for the requested target track from the source storage to the cache to be used as the requested target track in response to determining that the copy relationship information indicates that a source track needs to be copied to the requested target track; determining track format metadata for the requested target track, comprising the staged source track, indicating a format and layout of data in the requested target track; determining a track format ...

Publication date: 08-03-2018

Raid data loss prevention

Номер: US20180067809A1
Assignee: International Business Machines Corp

A method for preventing data loss in a RAID includes monitoring storage drives making up a RAID. The method individually tests a storage drive of the RAID by subjecting the storage drive to a stress workload test. This stress workload test may be designed to place additional stress on the storage drive while refraining from adding stress to other storage drives in the RAID. In the event the storage drive fails the stress workload test (e.g., the storage drive cannot adequately handle the additional workload or generates errors in response to the additional workload), the method replaces the storage drive with a spare storage drive and rebuilds the RAID. In certain embodiments, the method tests the storage drive with greater frequency as the age of the storage drive increases. A corresponding system and computer program product are also disclosed.
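
As a rough illustration of the flow just described, the sketch below stress-tests one member drive at a time, tests older drives more often, and swaps in a spare when a drive fails the test. Drive, run_stress_workload, test_interval_days and the rebuild step are hypothetical stand-ins, not the actual product logic.

    import random

    class Drive:
        def __init__(self, name, age_years):
            self.name, self.age_years = name, age_years

    def run_stress_workload(drive):
        """Placeholder: push extra I/O at this drive only and watch for errors or timeouts."""
        return random.random() > 0.05 * drive.age_years     # True means the drive passed

    def test_interval_days(drive, base_days=30):
        # Older drives are tested with greater frequency.
        return max(1, base_days - 2 * drive.age_years)

    def check_raid(drives, spares):
        for i, drive in enumerate(list(drives)):
            if not run_stress_workload(drive) and spares:
                drives[i] = spares.pop()                     # replace the failing member
                print(f"{drive.name} failed the stress test; rebuilding onto a spare")
                # a rebuild of the replaced member's data onto the spare would start here

    check_raid([Drive("d0", 1), Drive("d1", 6)], spares=[Drive("spare0", 0)])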

Publication date: 11-03-2021

ADJUSTMENT OF SAFE DATA COMMIT SCAN BASED ON OPERATIONAL VERIFICATION OF NON-VOLATILE MEMORY

Номер: US20210073090A1
Assignee:

A first non-volatile dual in-line memory module (NVDIMM) of a first server and a second NVDIMM of a second server are armed during initial program load in a dual-server based storage system to configure the first NVDIMM and the second NVDIMM to retain data on power loss. Prior to initiating a safe data commit scan to destage modified data from the first server to a secondary storage, a determination is made as to whether the first NVDIMM is armed. In response to determining that the first NVDIMM is not armed, a failover is initiated to the second server. 1. A method comprising:arming a first non-volatile dual in-line memory module (NVDIMM) of a first server and a second NVDIMM of a second server during initial program load in a dual-server based storage system to configure the first NVDIMM and the second NVDIMM to retain data on power loss;prior to initiating a safe data commit scan to destage modified data from the first server to a secondary storage, determining whether the first NVDIMM is armed; andin response to determining that the first NVDIMM is not armed, initiating a failover to the second server.2. The method of claim 1 , the method further comprising:in response to determining that the second NVDIMM is not armed, decreasing a time interval between successive safe data commit scans in the second server.3. The method of claim 2 , the method further comprising:in response to determining that the first NVDIMM has become armed once again in the first server and the first server has become operational, changing the time interval between successive safe data commit scans to a predetermined time that is a standard time between successive safe data commit scans.4. The method of claim 2 , the method further comprising:in response to completion of a safe data commit scan in the second server, and in response to determining that NVDIMM usage in the second server is greater than a predetermined threshold or a predetermined time that is a standard time between ...
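
A compact sketch of the decision points in that flow is shown below: the armed state is checked before a scan, an unarmed NVDIMM triggers failover, and the surviving server shortens the interval between scans if its own NVDIMM is also unarmed. The Server class, the interval constants and the helper names are assumptions for illustration.

    STANDARD_INTERVAL_S = 3600     # assumed standard time between safe data commit scans
    REDUCED_INTERVAL_S = 600       # assumed shortened interval while protection is degraded

    class Server:
        def __init__(self, name, nvdimm_armed):
            self.name, self.nvdimm_armed = name, nvdimm_armed
        def take_over(self, failed):
            print(f"{self.name} takes over from {failed.name}")

    def pre_scan_check(server, peer):
        """Run before a safe data commit scan destages modified data."""
        if not server.nvdimm_armed:        # would lose data on power loss
            peer.take_over(server)         # initiate failover to the second server
            return None                    # no scan interval for the failed server
        return STANDARD_INTERVAL_S

    def surviving_scan_interval(surviving_server):
        # Shrink the window of exposed modified data when the survivor is also unarmed.
        return STANDARD_INTERVAL_S if surviving_server.nvdimm_armed else REDUCED_INTERVAL_S

    a, b = Server("server-a", nvdimm_armed=False), Server("server-b", nvdimm_armed=True)
    print(pre_scan_check(a, b), surviving_scan_interval(b))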

Publication date: 15-03-2018

DETERMINING MEMORY ACCESS CATEGORIES TO USE TO ASSIGN TASKS TO PROCESSOR CORES TO EXECUTE

Номер: US20180074851A1
Assignee:

Provided are a computer program product, system, and method for determining memory access categories to use to assign tasks to processor cores to execute. A computer system has a plurality of cores; each core is comprised of a plurality of processing units and at least one cache memory shared by the processing units on the core to cache data from a memory. A task is processed to determine one of the cores on which to dispatch the task. A memory access category of a plurality of memory access categories is determined to which the processed task is assigned. The processed task is dispatched to the core assigned the determined memory access category.

1. A computer program product for dispatching tasks in a computer system having a plurality of cores, wherein each core is comprised of a plurality of processing units and at least one cache memory shared by the processing units on the core to cache data from a memory, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising: processing a task to determine one of the cores on which to dispatch the task; determining a memory access category of a plurality of memory access categories to which the processed task is assigned; and dispatching the processed task to the core assigned the determined memory access category. 2. The computer program product of claim 1, wherein the operations further comprise: providing an assignment of memory access categories to tasks to execute, wherein the memory access category of the processed tasks is determined from the assignment. 3. The computer program product of claim 2, wherein the operations further comprise: processing computer program code in which the tasks are coded to determine memory address ranges in the memory the tasks execute; determining a memory address range in the memory accessed by the tasks; determining a memory access category ...
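
The dispatch step lends itself to a small sketch: look up the task's memory access category and queue it on the least-loaded core assigned to that category, so tasks touching the same address ranges share a cache. The category and core tables below are invented examples rather than anything prescribed by the abstract.

    from collections import defaultdict

    category_of_task = {"destage_A": "cat0", "prefetch_B": "cat1"}   # e.g. derived from analyzing the task code
    cores_for_category = {"cat0": [0, 1], "cat1": [2, 3]}            # cores that share a cache

    run_queues = defaultdict(list)

    def dispatch(task_name):
        category = category_of_task.get(task_name, "cat0")
        core = min(cores_for_category[category], key=lambda c: len(run_queues[c]))
        run_queues[core].append(task_name)
        return core

    print(dispatch("destage_A"))   # lands on core 0 or 1, the cores assigned to cat0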

Publication date: 15-03-2018

DETERMINING CORES TO ASSIGN TO CACHE HOSTILE TASKS

Номер: US20180074974A1
Assignee:

Provided are a computer program product, system, and method for determining cores to assign to cache hostile tasks. A computer system has a plurality of cores. Each core is comprised of a plurality of processing units and at least one cache memory shared by the processing units on the core to cache data from a memory. A task is processed to determine one of the cores on which to dispatch the task. A determination is made as to whether the processed task is classified as cache hostile. A task is classified as cache hostile when the task accesses more than a threshold number of memory address ranges in the memory. The processed task is dispatched to at least one of the cores assigned to process cache hostile tasks. 1. A computer program product for dispatching tasks in a computer system having a plurality of cores , wherein each core is comprised of a plurality of processing units and at least one cache memory shared by the processing units on the core to cache data from a memory , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:processing a task to determine one of the cores on which to dispatch the task;determining whether the processed task is classified as cache hostile, wherein a task is classified as cache hostile when the task accesses more than a threshold number of memory address ranges in the memory; anddispatching the processed task to at least one of the cores assigned to process cache hostile tasks.2. The computer program product of claim 1 , wherein the threshold number of memory address ranges comprises one memory address range.3. The computer program product of claim 1 , wherein the operations further comprise:processing computer program code in which the tasks are coded to determine memory address ranges in the memory the tasks executes; determining that the task accesses more than the threshold number of ...

Publication date: 07-03-2019

Distributed safe data commit in a data storage system

Номер: US20190073311A1
Assignee: International Business Machines Corp

In one embodiment, a safe data commit process manages the allocation of task control blocks (TCBs) as a function of the type of task control block (TCB) to be allocated for destaging and as a function of the identity of the RAID storage rank to which the data is being destaged. For example, the allocation of background TCBs is prioritized over the allocation of foreground TCBs for destage operations. In addition, the number of background TCBs allocated to any one RAID storage rank is limited. Once the limit of background TCBs for a particular RAID storage rank is reached, the distributed safe data commit logic switches to allocating foreground TCBs. Further, the number of foreground TCBs allocated to any one RAID storage rank is also limited. Other features and aspects may be realized, depending upon the particular application.
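
The allocation policy described above can be pictured as two capped counters per RAID rank, with background TCBs preferred and foreground TCBs handed out once the background cap is hit. The limits and counter names below are illustrative assumptions.

    from collections import Counter

    BACKGROUND_LIMIT_PER_RANK = 40    # assumed cap on background TCBs per rank
    FOREGROUND_LIMIT_PER_RANK = 10    # assumed cap on foreground TCBs per rank

    background_in_use = Counter()
    foreground_in_use = Counter()

    def allocate_destage_tcb(rank):
        if background_in_use[rank] < BACKGROUND_LIMIT_PER_RANK:
            background_in_use[rank] += 1
            return ("background", rank)
        if foreground_in_use[rank] < FOREGROUND_LIMIT_PER_RANK:
            foreground_in_use[rank] += 1
            return ("foreground", rank)
        return None    # destages to this rank wait until a TCB is freed

    print(allocate_destage_tcb("rank3"))   # ('background', 'rank3')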

Publication date: 07-03-2019

METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR PROVIDING SECURITY AND RESPONSIVENESS IN CLOUD BASED DATA STORAGE AND APPLICATION EXECUTION

Номер: US20190075119A1
Assignee:

A storage controller that is coupled to a plurality of storage clouds is maintained. The storage controller determines security requirements for performing a selected operation in the plurality of storage cloud. A subset of storage clouds of the plurality of storage clouds that are able to satisfy the security requirements are determined. A determination is made as to which storage cloud of the subset of storage clouds is most responsive for performing the selected operation. The selected operation is performed in the determined storage cloud that is most responsive. 120-. (canceled)21. A method , comprising:maintaining a storage controller that controls a plurality of storage clouds;determining a minimum level of security certification required for performing a selected operation in the plurality of storage clouds;determining a subset of storage clouds of the plurality of storage clouds that are able to satisfy the minimum level of security certification;determining which storage cloud of the subset of storage clouds has a fastest responsiveness among the subset of storage clouds; andperforming the selected operation in the determined storage cloud that has the fastest responsiveness among the subset of storage clouds.22. The method of claim 21 , wherein the selected operation comprises storing a dataset.23. The method of claim 22 , wherein the selected operation further comprises executing an application.24. The method of claim 21 , the method further comprising:maintaining, in the storage controller, a security level indicator that indicates a level of security certification of each of the plurality of storage clouds; anddetermining from the security level indicator the subset of storage clouds of the plurality of storage clouds that are able to provide the minimum level of security certification.25. The method of claim 24 , wherein the level of security certification comprises Evaluation Assurance Levels (EAL) ranging from 1 to 7 in a Common Criteria standard.26 ...

Publication date: 05-03-2020

DETECTION AND PREVENTION OF DEADLOCK IN A STORAGE CONTROLLER FOR CACHE ACCESS

Номер: US20200073807A1
Assignee:

A computational device determines whether one or more tasks are waiting for accessing a cache for more than a predetermined amount of time while least recently used (LRU) based replacement of tracks are being performed for the cache via demotion of tracks from a LRU list of tracks corresponding to the cache. In response to determining that one or more tasks are waiting for accessing the cache for more than the predetermined amount of time, in addition to continuing to demote tracks from the LRU list, a deadlock prevention application demotes tracks from at least one region of a cache directory that identifies all tracks in the cache. 1. A method , comprising:determining, by a computational device, whether one or more tasks are waiting for accessing a cache for more than a predetermined amount of time while least recently used (LRU) based replacement of tracks are being performed for the cache via demotion of tracks from a LRU list of tracks corresponding to the cache; andin response to determining that one or more tasks are waiting for accessing the cache for more than the predetermined amount of time, in addition to continuing to demote tracks from the LRU list, demoting, by a deadlock prevention application, tracks from at least one region of a cache directory that identifies all tracks in the cache, wherein the deadlock prevention application increases a rate at which tracks are demoted by executing in parallel with the demotion of tracks from the LRU list.2. The method of claim 1 , wherein the deadlock prevention application claim 1 , attempts to demote a predetermined number of tracks from the at least one region of the cache directory by:discarding unmodified tracks in the at least one region; andin response to determining that the discarded unmodified tracks in the at least one region are fewer than the predetermined number of tracks, destaging and then discarding modified tracks from the at least one region of the cache directory.3. The method of claim 2 , ...
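
A simplified sketch of the extra demotion path is given below: when tasks have waited too long, a region of the cache directory is walked, unmodified tracks are discarded first, and modified tracks are destaged and then discarded until a target count is reached. Track, the region list and the target count are toy stand-ins.

    class Track:
        def __init__(self, tid, modified):
            self.tid, self.modified = tid, modified

    def demote_from_region(region, target_count):
        demoted = []
        for track in list(region):                    # pass 1: discard unmodified tracks
            if not track.modified and len(demoted) < target_count:
                region.remove(track)
                demoted.append(track.tid)
        for track in list(region):                    # pass 2: destage, then discard, modified tracks
            if len(demoted) >= target_count:
                break
            # writing the modified data back to storage would happen here
            region.remove(track)
            demoted.append(track.tid)
        return demoted

    region = [Track(1, False), Track(2, True), Track(3, False)]
    print(demote_from_region(region, target_count=2))   # [1, 3]: unmodified tracks go first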

Publication date: 05-03-2020

DETECTION AND PREVENTION OF DEADLOCK IN A STORAGE CONTROLLER FOR CACHE ACCESS VIA A PLURALITY OF DEMOTE MECHANISMS

Номер: US20200073808A1
Assignee:

A computational device determines whether one or more tasks are waiting for accessing a cache for more than a predetermined amount of time while least recently used (LRU) based replacement of tracks are being performed for the cache via demotion of tracks from a LRU list of tracks corresponding to the cache. In response to determining that one or more tasks are waiting for accessing the cache for more than the predetermined amount of time, in addition to continuing to demote tracks from the LRU list, a plurality of deadlock prevention demotion tasks demote tracks from the cache. 1. A method , comprising:determining, by a computational device, whether one or more tasks are waiting for accessing a cache for more than a predetermined amount of time while least recently used (LRU) based replacement of tracks are being performed for the cache via demotion of tracks from a LRU list of tracks corresponding to the cache; andin response to determining that one or more tasks are waiting for accessing the cache for more than the predetermined amount of time, in addition to continuing to demote tracks from the LRU list, demoting, by a plurality of deadlock prevention demotion tasks, tracks from the cache.2. The method of claim 1 , wherein the plurality of deadlock prevention demotion tasks execute in a round robin manner.3. The method of claim 1 , wherein the plurality of deadlock prevention demotion tasks execute in parallel.4. The method of claim 1 , wherein a first deadlock prevention demotion task of the plurality of deadlock prevention demotion tasks demotes tracks indicated in a cache directory that is divided into a plurality of regions claim 1 , by selecting a region from which to demote tracks via a round robin mechanism.5. The method of claim 4 , wherein a second deadlock prevention demotion task of the plurality of deadlock prevention demotion tasks attempts to demote tracks of the cache in a first in first out (FIFO) order.6. The method of claim 5 , wherein a third ...

Publication date: 18-03-2021

DYNAMIC COMPRESSION WITH DYNAMIC MULTI-STAGE ENCRYPTION FOR A DATA STORAGE SYSTEM

Номер: US20210081544A1
Assignee:

Dynamic compression with dynamic multi-stage encryption for a data storage system in accordance with the present description includes, in one aspect of the present description, preserves end-to-end encryption between a host and a storage controller while compressing data which was received from the host in encrypted but uncompressed form, using MIPs and other processing resources of the storage controller instead of the host. In one embodiment, the storage controller decrypts encrypted but uncompressed data received from the host to unencrypted data and compresses the unencrypted data to compressed data. The storage controller then encrypts the compressed data to encrypted, compressed data and stores the encrypted, compressed data in a storage device controlled by the storage controller. Other aspects and advantages may be realized, depending upon the particular application. 1. A computer program product configured for use with a computer system having a host , and a data storage system having a storage controller and at least one storage unit controlled by the storage controller and configured to store data , wherein the computer system has at least one processor , and wherein the computer program product comprises a computer readable storage medium having program instructions embodied therewith , the program instructions executable by a processor of the computer system to cause computer system operations , the computer system operations comprising:a host transferring encrypted data to a storage controller controlling a storage device; and decrypting the encrypted data to unencrypted data;', 'compressing the unencrypted data to compressed data;', 'encrypting the compressed data to encrypted, compressed data; and', 'storing the encrypted, compressed data in the storage device., 'the storage controller2. The computer program product of wherein the decrypting the encrypted data to unencrypted data includes decrypting the encrypted data to unencrypted data in a ...

Publication date: 14-03-2019

THIN PROVISIONING USING CLOUD BASED RANKS

Номер: US20190079686A1
Assignee:

A computer-implemented method for thin provisioning using cloud based ranks comprises determining a total amount of unused physical storage space for all of a plurality of local ranks associated with a storage controller; comparing the total amount of unused physical storage space to a first threshold; in response to determining that the total amount of unused physical storage space is less than the first threshold, creating one or more cloud based ranks. Creating each of the one or more cloud based ranks comprises allocating storage space on one or more corresponding cloud storage devices via a cloud interface; mapping the allocated storage space to corresponding virtual local addresses; and grouping the virtual local addresses as a virtual local rank associated with the storage controller. 1. A computer-implemented method comprising:determining a total amount of unused physical storage space for all of a plurality of local ranks associated with a storage controller;comparing the total amount of unused physical storage space to a first threshold;in response to determining that the total amount of unused physical storage space is less than the first threshold, creating one or more cloud based ranks;wherein creating each of the one or more cloud based ranks comprises:allocating storage space on one or more corresponding cloud storage devices via a cloud interface;mapping the allocated storage space to corresponding virtual local addresses; andgrouping the virtual local addresses as a virtual local rank associated with the storage controller.2. The computer-implemented method of claim 1 , further comprising storing claim 1 , on the one or more cloud based ranks claim 1 , new data written after creating the one or more cloud based ranks.3. The computer-implemented method of claim 1 , further comprising:moving, to the one or more cloud based ranks, data stored on the plurality of local ranks prior to creating the one or more cloud based ranks; andstoring new data ...
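
The space check and rank creation can be sketched in a few lines: if unused physical capacity drops below a threshold, allocate cloud capacity, map it to virtual local addresses and group those addresses as a new rank. The threshold, the CloudStub class and the address layout are assumptions made for the example.

    LOW_SPACE_THRESHOLD = 10 * 2**40        # e.g. 10 TiB of unused physical space

    class CloudStub:
        def allocate(self, size):
            return {"bucket": "rank-pool", "size": size}

    def unused_space(local_ranks):
        return sum(r["capacity"] - r["used"] for r in local_ranks)

    def maybe_create_cloud_rank(local_ranks, cloud):
        if unused_space(local_ranks) >= LOW_SPACE_THRESHOLD:
            return None
        extent = cloud.allocate(size=1 * 2**40)                  # 1) allocate via the cloud interface
        virtual_addresses = list(range(0x1000, 0x1010))          # 2) map to virtual local addresses
        return {"type": "cloud_rank", "extent": extent,          # 3) group them as a virtual rank
                "virtual_addresses": virtual_addresses}

    ranks = [{"capacity": 100 * 2**40, "used": 95 * 2**40}]
    print(maybe_create_cloud_rank(ranks, CloudStub()) is not None)   # True: only 5 TiB left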

Publication date: 14-03-2019

DYNAMIC DATA RELOCATION USING CLOUD BASED RANKS

Номер: US20190079693A1
Assignee:

An example method for dynamic data relocation using cloud based ranks comprises monitoring accesses to data stored on a plurality of local ranks of an enterprise storage system; identifying data which has not been accessed for a predetermined amount of time based on the monitored accesses; and moving the data which has not been accessed for the predetermined amount of time to one or more cloud based ranks of the enterprise storage system, wherein each cloud based rank comprises storage space on one or more cloud storage devices, the storage space on the one or more cloud storage devices mapped to corresponding virtual local addresses that are grouped as a virtual local rank. 1. A computer-implemented method for dynamic data relocation , the method comprising:monitoring accesses to data stored on a plurality of local ranks of an enterprise storage system;based on the monitored accesses, identifying data which has not been accessed for a predetermined amount of time; andmoving the data which has not been accessed for the predetermined amount of time to one or more cloud based ranks of the enterprise storage system, wherein each cloud based rank comprises storage space on one or more cloud storage devices, the storage space on the one or more cloud storage devices mapped to corresponding virtual local addresses that are grouped as a virtual local rank;wherein the virtual local addresses are memory addresses which appear as addresses of a storage device coupled to a storage controller via a local connection.2. The computer-implemented method of claim 1 , further comprising:in response to an access request directed to the corresponding virtual local addresses for at least part of the moved data, converting the access request to a cloud data access request configured for an application programming interface (API) corresponding to the one or more cloud storage devices of the one or more cloud based ranks; andrelocating the at least part of the moved data from the one or ...

Publication date: 22-03-2018

Determination of memory access patterns of tasks in a multi-core processor

Номер: US20180081727A1
Assignee: International Business Machines Corp

A plurality of processing entities in which a plurality of tasks are executed are maintained. Memory access patterns are determined for each of the plurality of tasks by dividing a memory associated with the plurality of processing entities into a plurality of memory regions, and for each of the plurality of tasks, determining how many memory accesses take place in each of the memory regions, by incrementing a counter associated with each memory region in response to a memory access. Each of the plurality of tasks are allocated among the plurality of processing entities, based on the determined memory access patterns for each of the plurality of tasks.
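
The counting scheme is easy to picture: memory is split into fixed-size regions and each task keeps one counter per region, bumped on every access; tasks whose counters cluster in the same regions can then be placed on the same processing entities. The region size below is an arbitrary choice for the example.

    from collections import defaultdict

    REGION_SIZE = 1 << 20                                    # 1 MiB regions, illustrative
    access_counts = defaultdict(lambda: defaultdict(int))    # task -> region index -> count

    def record_access(task, address):
        access_counts[task][address // REGION_SIZE] += 1

    for addr in (0x100, 0x180, 0x200000, 0x200400):
        record_access("taskA", addr)

    print(dict(access_counts["taskA"]))    # {0: 2, 2: 2}: taskA touches regions 0 and 2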

Publication date: 14-03-2019

STORAGE SYSTEM USING CLOUD STORAGE AS A RANK

Номер: US20190082008A1
Assignee:

A computer-implemented method for utilizing cloud storage as a rank comprises allocating storage space on one or more cloud storage devices via a cloud interface; mapping the allocated storage space to corresponding virtual local addresses; grouping the virtual local addresses to create one or more virtual local ranks from the allocated storage space on the one or more cloud storage devices; converting local data access requests for the one or more virtual local ranks to cloud data access requests configured for the cloud interface; and communicating the cloud data access requests to the one or more cloud storage devices via the cloud interface. 1. A computer-implemented method for utilizing cloud storage as a rank , the method comprising:allocating storage space on one or more cloud storage devices via a cloud interface;mapping the allocated storage space to corresponding virtual local addresses;grouping the virtual local addresses to create one or more virtual local ranks from the allocated storage space on the one or more cloud storage devices;converting local data access requests for the one or more virtual local ranks to cloud data access requests configured for the cloud interface; andcommunicating the cloud data access requests to the one or more cloud storage devices via the cloud interface.2. The computer-implemented method of claim 1 , wherein communicating the cloud data access request further comprises assigning a first service level claim 1 , a second service level or a third service level to the cloud data access request;wherein the first service level has higher latencies and lower throughput than the second and third service levels, and the second service level has higher latencies and lower throughput than the third service level.3. The computer-implemented method of claim 2 , further comprising assigning the first service level to the cloud data access request in response to determining that a service level agreement indicates a quality of service ...

Publication date: 14-03-2019

STORAGE SYSTEM USING CLOUD BASED RANKS AS REPLICA STORAGE

Номер: US20190082009A1
Assignee:

A computer-implemented method for using cloud based ranks as replica storage comprises allocating storage space on cloud storage devices via a cloud interface; mapping the allocated storage space on the cloud storage devices to corresponding virtual local addresses; grouping the virtual local addresses to create at least one cloud based rank from the allocated storage space on the cloud storage devices; designating a cloud based rank as cloud based replica storage for a corresponding primary storage; assigning a service level to the cloud based replica storage based, at least in part, on characteristics of data being mirrored to the cloud based replica storage and a rate at which the data is mirrored to the cloud based replica storage; and dynamically adjusting the service level assigned to the cloud based replica storage in response to a command to swap the cloud based replica storage with the corresponding primary storage. 1. A computer-implemented method for utilizing cloud storage as replica storage , the method comprising:allocating storage space on one or more cloud storage devices via a cloud interface;mapping the allocated storage space on the one or more cloud storage devices to corresponding virtual local addresses;grouping the virtual local addresses to create at least one cloud based rank from the allocated storage space on the one or more cloud storage devices;designating one or more of the at least one cloud based rank as cloud based replica storage for a corresponding primary storage;assigning a service level to the cloud based replica storage based, at least in part, on one or more characteristics of data being mirrored to the cloud based replica storage and a rate at which the data is mirrored to the cloud based replica storage; anddynamically adjusting the service level assigned to the cloud based replica storage in response to a command to swap the cloud based replica storage with the corresponding primary storage.2. The computer-implemented ...

Publication date: 12-03-2020

VALIDATION OF CLOCK TO PROVIDE SECURITY FOR TIME LOCKED DATA

Номер: US20200081480A1
Assignee:

A computational device receives an input/output (I/O) operation directed to a data set. In response to determining that there is a time lock on the data set, a determination is made as to whether a clock of the computational device is providing a correct time. In response to determining that the clock of the computational device is not providing the correct time, the I/O operation is restricted from accessing the data set. In response to determining that the clock of the computational device is providing the correct time, a determination is made from one or more time entries of the time lock whether to provide the I/O operation with access to the data set. 125-. (canceled)26. A method , comprising:in response to determining that there is a time lock on a data set and in response to determining that a time provided by a clock of a computational device and a time determined via a log file maintained by the computational device are same or do not differ by more than a predetermined amount of time, determining that the clock of the computational device is providing a correct time; andin response to determining that the clock of the computational device is providing the correct time, determining from one or more time entries of the time lock whether to provide an Input/Output (I/O) operation with access to the data set.27. The method of claim 26 , wherein the log file records times of previous access or attempts to access the data set claim 26 , and a passage of time in the computational device.28. The method of claim 26 , wherein determining from one or more time entries of the time lock whether to provide the I/O operation with access to the data set comprises:determining whether the I/O operation meets a criteria provided by the one or more time entries of the time lock.29. The method of claim 28 , wherein determining from one or more time entries of the time lock whether to provide the I/O operation with access to the data set further comprises:in response to ...
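
The clock cross-check can be sketched as follows: the device clock is trusted only if it agrees, within a tolerance, with a time reconstructed from the log of earlier accesses, and I/O inside the lock window is refused. The tolerance, the field names and the simple window test are illustrative assumptions.

    TOLERANCE_S = 300

    def clock_is_valid(device_clock_s, last_logged_time_s, elapsed_since_log_s):
        expected = last_logged_time_s + elapsed_since_log_s
        return abs(device_clock_s - expected) <= TOLERANCE_S

    def allow_io(device_clock_s, log_time_s, elapsed_s, lock_start_s, lock_end_s):
        if not clock_is_valid(device_clock_s, log_time_s, elapsed_s):
            return False                          # restrict I/O if the clock looks tampered with
        in_lock_window = lock_start_s <= device_clock_s <= lock_end_s
        return not in_lock_window                 # otherwise the time lock entries decide

    print(allow_io(2_000_000, 1_990_000, 10_000, 2_100_000, 2_200_000))   # True: clock valid, outside the lock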

Publication date: 12-03-2020

POINT IN TIME COPY OF TIME LOCKED DATA IN A STORAGE CONTROLLER

Номер: US20200081624A1
Assignee:

A computational device receives a command to activate a time lock for a data set. In response to receiving the command to activate the time lock for the data set, a point in time copy of the data set is generated to allow write operations to be performed even if the time lock is activated. 125-. (canceled)26. A method , comprising:setting an alarm at a time at which a time lock is to start for a data set; andin response to an occurrence of the alarm, generating a point in time copy of the data set to allow write operations to be performed even if the time lock is activated.27. The method of claim 26 , the method further comprising:in response to stopping a write operation for a first time as a result of the time lock, generating the point in time copy.28. The method of claim 26 , wherein on expiry of the time lock the point in time copy is deleted claim 26 , but the data set is maintained.29. The method of claim 26 , wherein on expiry of the time lock the data set is deleted claim 26 , but the point in time copy is maintained.30. The method of claim 26 , wherein on expiry of the time lock both the data set and the point in time copy are maintained.31. A system claim 26 , comprising:a memory; anda processor coupled to the memory, wherein the processor performs operations, the operations comprising:setting an alarm at a time at which a time lock is to start for a data set; andin response to an occurrence of the alarm, generating a point in time copy of the data set to allow write operations to be performed even if the time lock is activated.32. The system of claim 31 , the operations further comprising:in response to stopping a write operation for a first time as a result of the time lock, generating the point in time copy.33. The system of claim 31 , wherein on expiry of the time lock the point in time copy is deleted claim 31 , but the data set is maintained.34. The system of claim 31 , wherein on expiry of the time lock the data set is deleted claim 31 , but the ...

Publication date: 19-03-2020

REDUCTION OF PROCESSING OVERHEAD FOR POINT IN TIME COPY TO ALLOW ACCESS TO TIME LOCKED DATA

Номер: US20200089411A1
Assignee:

A computational device generates a point in time copy of one or more regions of a time locked data set, in response to receiving one or more I/O operations directed to the time locked data set. The one or more I/O operations are performed on the point in time copy of the one or more regions of the time locked data set, in response to generating the point in time copy of the one or more regions of the time locked data set. 125-. (canceled)26. A method , comprising:generating, by a computational device, a point in time copy of one or more regions of a data set, in response to receiving one or more Input/Output (I/O) operations directed to the data set; andperforming the one or more I/O operations on the point in time copy of the one or more regions of the data set, in response to generating the point in time copy of the one or more regions of the data set.27. The method of claim 26 , wherein the one or more regions comprise an entirety of the data set.28. The method of claim 26 , wherein the one or more regions comprise those regions of the data set to which the I/O operations are directed.29. The method of claim 26 , wherein the generating of the point in time copy of the one or more regions of the data set is performed subsequent to the one or more I/O operations exceeding a predetermined threshold number.30. The method of claim 26 , the method further comprising:preventing the I/O operations from being performed on the data set.31. The method of claim 26 , wherein the one or more regions include one or more volumes or parts of volumes in which the data set is stored.32. The method of claim 26 , wherein on expiry of a time lock on the data set claim 26 , additional I/O operations are directed to the data set and not directed to the point in time copy.33. A system claim 26 , comprising:a memory; anda processor coupled to the memory, wherein the processor performs operations, the operations comprising:generating a point in time copy of one or more regions of a data ...

Publication date: 12-04-2018

PROCESSOR THREAD MANAGEMENT

Номер: US20180101414A1
Assignee:

Provided are a computer program product, system, and method for managing processor threads of a plurality of processors. In one embodiment, a parameter of performance of the computing system is measured, and the configurations of one or more processor nodes are dynamically adjusted as a function of the measured parameter of performance. In this manner, the number of processor threads being concurrently executed by the plurality of processor nodes of the computing system may be dynamically adjusted in real time as the system operates to improve the performance of the system as it operates under various operating conditions. It is appreciated that systems employing processor thread management in accordance with the present description may provide other features in addition to or instead of those described herein, depending upon the particular application. 1. A method , comprising:measuring a parameter of performance of a computing system having a plurality of processor nodes, at least one processor node being configured to execute at least one processor thread of program code; anddynamically adjusting configurations of at least one of the plurality of processor nodes as a function of the measured parameter of performance, to adjust the number of processor threads being concurrently executed by the plurality of processor nodes of the computing system.2. The method of wherein the measuring a parameter of performance includes measuring a lock spin time parameter of performance as a function of lock spin time of a spin lock for controlling access to a shared resource of the computing system by processor threads of the computing system.3. The method of wherein the dynamically adjusting configurations of the plurality of processor nodes as a function of the measured parameter of performance claim 1 , to adjust the number of processor threads being concurrently executed by the plurality of processor nodes of the computing system claim 1 , includes comparing the measured ...
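
One concrete performance parameter named in the claim excerpt is lock spin time, which suggests a simple feedback loop: measure the fraction of time threads spend spinning on a shared lock and scale the number of worker threads accordingly. The thresholds and step size below are assumptions, not values from the patent.

    SPIN_HIGH = 0.20    # above this fraction, threads are mostly contending
    SPIN_LOW = 0.05     # below this, there is headroom for more threads

    def adjust_thread_count(current_threads, spin_fraction, min_t=4, max_t=64):
        if spin_fraction > SPIN_HIGH:
            return max(min_t, current_threads - 2)
        if spin_fraction < SPIN_LOW:
            return min(max_t, current_threads + 2)
        return current_threads

    print(adjust_thread_count(32, 0.30))   # 30: back off under heavy lock contention
    print(adjust_thread_count(32, 0.02))   # 34: add threads while the lock is mostly free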

Publication date: 04-04-2019

MAINTAINING TRACK FORMAT METADATA FOR TARGET TRACKS IN A TARGET STORAGE IN A COPY RELATIONSHIP WITH SOURCE TRACKS IN A SOURCE STORAGE

Номер: US20190102306A1
Assignee:

Provided are a computer program product, system, and method for maintaining track format metadata for target tracks in a target storage in a copy relationship with source tracks in a source storage. Upon receiving a request to a requested target track in the target storage, the source track for the requested target track is staged from the source storage to a cache to be used as the requested target track in response to determining that the copy relationship information indicates that a source track needs to be copied to the requested target track. A determination is made of track format metadata for the requested target track, comprising the staged source track, indicating a format and layout of data in the requested target track and a track format code identifying the track format metadata. The track format code is included in a cache control block for the requested target track.

1.-23. (canceled) 24. A computer program product for managing read and write requests to a target storage and configured to communicate with a source storage, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to communicate with a cache and to perform operations, the operations comprising: receiving a request to a requested target track in the target storage in a point-in-time copy relationship with a source track in the source storage; staging the source track from the source storage to the cache to be used as the requested target track; including a track format code, identifying track format metadata for the staged source track, in a cache control block for the staged source track; and using the track format code in the cache control block to determine the track format metadata to process subsequent requests to the requested target track in the cache. 25. The computer program product of claim 24, wherein the operations further comprise: maintaining a track format table associating track format ...

Publication date: 19-04-2018

PERFORMANCE-BASED MULTI-MODE TASK DISPATCHING IN A MULTI-PROCESSOR CORE SYSTEM FOR HIGH TEMPERATURE AVOIDANCE

Номер: US20180107512A1
Assignee:

In one embodiment, performance-based multi-mode task dispatching for high temperature avoidance in accordance with the present description, includes selecting processor cores as available to receive a dispatched task. Tasks are dispatched to a set of available processor cores for processing in a performance-based dispatching mode. If monitored temperature rises above a threshold temperature value, task dispatching logic switches to a thermal-based dispatching mode. If a monitored temperature falls below another threshold temperature value, dispatching logic switches back to the performance-based dispatching mode. If a monitored temperature of an individual processor core rises above a threshold temperature value, the processor core is redesignated as unavailable to receive a dispatched task. If the temperature of an individual processor core falls below another threshold temperature value, the processor core is redesignated as available to receive a dispatched task. Other features and aspects may be realized, depending upon the particular application. 1. A system , comprising:a plurality of processor cores; and temperature monitoring logic configured to monitor a multi-processor core temperature which is a function of temperatures of at least a portion of the plurality of the processor cores of the system;', 'comparator logic configured to be responsive to the temperature monitoring logic, and to compare a multi-processor core temperature to a first threshold temperature value; and', 'mode selection logic configured to be responsive to the comparator and to select a mode of the multi-mode task dispatching logic as a function of temperature of processor cores, wherein the mode selection logic is further configured to, if the multi-processor core temperature rises above the first threshold temperature value, select the thermal-based dispatching mode and switch the mode of the multi-mode task dispatching logic to the thermal-based dispatching mode so that tasks are ...
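
The two temperature thresholds per mode and per core amount to a hysteresis check, sketched below; the numeric values are placeholders rather than anything specified above.

    T_ENTER_THERMAL = 85.0    # switch to thermal-based dispatching above this package temperature
    T_EXIT_THERMAL = 75.0     # switch back to performance-based dispatching below this
    T_CORE_HOT = 90.0         # a core this hot stops receiving dispatched tasks
    T_CORE_COOL = 80.0        # a cooled core becomes available again

    def next_mode(mode, package_temp):
        if mode == "performance" and package_temp > T_ENTER_THERMAL:
            return "thermal"
        if mode == "thermal" and package_temp < T_EXIT_THERMAL:
            return "performance"
        return mode

    def core_available(core_temp, currently_available):
        if currently_available and core_temp > T_CORE_HOT:
            return False
        if not currently_available and core_temp < T_CORE_COOL:
            return True
        return currently_available

    print(next_mode("performance", 88.0), core_available(92.0, True))   # thermal False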

Publication date: 11-04-2019

PROVIDING ADDITIONAL MEMORY AND CACHE FOR THE EXECUTION OF CRITICAL TASKS BY FOLDING PROCESSING UNITS OF A PROCESSOR COMPLEX

Номер: US20190108065A1
Assignee:

A plurality of processing entities of a processor complex is maintained, wherein each processing entity has a local cache and the processor complex has a shared cache and a shared memory. One of the plurality of processing entities is allocated for execution of a critical task. In response to the allocating of one of the plurality of processing entities for the execution of the critical task, other processing entities of the plurality of processing entities are folded. The critical task utilizes the local cache of the other processing entities that are folded, the shared memory, and the shared cache, in addition to the local cache of the processing entity allocated for the execution of the critical task. 125-. (canceled)26. A method , comprising:in response to allocating of one of a plurality of processing entities for execution of a critical task, folding other processing entities of the plurality of processing entities by stopping processing operations in the other processing entities and releasing a local cache of the other processing entities for use by the processing entity allocated for execution of the critical task, wherein prior to folding the other processing entities, currently scheduled tasks on the other processing entities are temporarily suspended;utilizing, by the critical task, the local cache of the other processing entities that are folded, a shared memory, and a shared cache, in addition to a local cache of the processing entity allocated for the execution of the critical task; andin response to completion of the critical task in the processing entity that is allocated, performing an unfolding of the other processing entities to make the other processing entities operational.27. The method of claim 26 , the method further comprising:in response to performing the unfolding of the other processing entities, resuming any suspended tasks and dispatch queued tasks.28. The method of claim 26 , wherein it is preferable to execute the critical task on a ...

Publication date: 26-04-2018

EXECUTION OF CRITICAL TASKS BASED ON THE NUMBER OF AVAILABLE PROCESSING ENTITIES

Номер: US20180113737A1
Assignee:

A determination is made as to whether a plurality of processing entities in a processor complex exceeds a predetermined threshold number. In response to determining that the plurality of processing entities exceeds the predetermined threshold number, a processing entity of the plurality of processing entities is reserved for exclusive execution of critical tasks. In response to determining that the plurality of processing entities does not exceed the predetermined threshold number, and in response to receiving a task that is a critical task for execution, a determination is made as to which processing entity of the plurality of processing entities has a least amount of processing remaining to be performed for currently scheduled tasks. In response to moving tasks queued on the determined processing entity to other processing entities, the critical task is scheduled for execution on the determined processing entity. 1. A method comprising ,determining whether a plurality of processing entities in a processor complex exceeds a predetermined threshold number;in response to determining that the plurality of processing entities exceeds the predetermined threshold number, reserving a processing entity of the plurality of processing entities for exclusive execution of critical tasks; and in response to receiving a task that is a critical task for execution, determining which processing entity of the plurality of processing entities has a least amount of processing remaining to be performed for currently scheduled tasks; and', 'in response to moving tasks queued on the determined processing entity to other processing entities, scheduling the critical task for execution on the determined processing entity., 'in response to determining that the plurality of processing entities does not exceed the predetermined threshold number, performing2. The method of claim 1 ,wherein it is preferable to prioritize execution of critical tasks over non-critical tasks,wherein in response to ...
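
The fallback path, used when no processing entity can be reserved outright, is to find the entity with the least outstanding work, move its queued tasks elsewhere, and run the critical task there. The sketch below uses arbitrary work units and dictionary-based queues to show the shape of that selection.

    def schedule_critical(task, queues):
        target = min(queues, key=lambda q: sum(t["work"] for t in queues[q]))
        displaced = queues[target]
        queues[target] = [task]                          # the critical task runs here next
        others = [q for q in queues if q != target]
        for i, moved in enumerate(displaced):            # requeue displaced tasks on other entities
            queues[others[i % len(others)]].append(moved)
        return target

    queues = {"cpu0": [{"name": "a", "work": 5}], "cpu1": [{"name": "b", "work": 1}]}
    print(schedule_critical({"name": "crit", "work": 3}, queues))   # cpu1 had the least work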

Publication date: 26-04-2018

PROVIDING ADDITIONAL MEMORY AND CACHE FOR THE EXECUTION OF CRITICAL TASKS BY FOLDING PROCESSING UNITS OF A PROCESSOR COMPLEX

Номер: US20180113744A1
Assignee:

A plurality of processing entities of a processor complex is maintained, wherein each processing entity has a local cache and the processor complex has a shared cache and a shared memory. One of the plurality of processing entities is allocated for execution of a critical task. In response to the allocating of one of the plurality of processing entities for the execution of the critical task, other processing entities of the plurality of processing entities are folded. The critical task utilizes the local cache of the other processing entities that are folded, the shared memory, and the shared cache, in addition to the local cache of the processing entity allocated for the execution of the critical task. 1. A method , comprising:maintaining a plurality of processing entities of a processor complex, wherein each processing entity has a local cache and the processor complex has a shared cache and a shared memory;allocating one of the plurality of processing entities for execution of a critical task;in response to the allocating of one of the plurality of processing entities for the execution of the critical task, folding other processing entities of the plurality of processing entities; andutilizing, by the critical task, the local cache of the other processing entities that are folded, the shared memory, and the shared cache, in addition to the local cache of the processing entity allocated for the execution of the critical task.2. The method of claim 1 , wherein additional resources that are freed by folding the other processing entities are also utilized by the critical task claim 1 , and wherein folding of the other processing entities comprises:stopping processing operations in the other processing entities; andreleasing the local cache of the other processing entities for use by the processing entity allocated for execution of the critical task.3. The method of claim 2 , the method further comprising:prior to folding the other processing entities, temporarily ...

Publication date: 28-04-2016

INTERACTING WITH A REMOTE SERVER OVER A NETWORK TO DETERMINE WHETHER TO ALLOW DATA EXCHANGE WITH A RESOURCE AT THE REMOTE SERVER

Номер: US20160119372A1
Assignee:

Provided are a computer program product, system, and method for interacting with a remote server over a network to determine whether to allow data exchange with a resource at the remote server. Detection is made of an attempt to exchange data with the remote resource over the network. At least one computer instruction is executed to perform at least one interaction with the server over the network to request requested server information for each of the at least one interaction. At least one instance of received server information is received. A determination is made whether the at least one instance of the received server information satisfies at least one security requirement. A determination is made of whether to prevent the exchanging of data with the remote resource based on whether the at least one instance of the received server information satisfies the at least one security requirement. 1. A computer program product for managing access to a remote resource at a server over a network , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that executes to perform operations , the operations comprising:detecting an attempt to exchange data with the remote resource over the network;executing at least one computer instruction to perform at least one interaction with the server over the network to request requested server information for each of the at least one interaction;receiving at least one instance of received server information in response to each of the at least one interaction for the requested server information;determining from the at least one instance of the received server information whether the at least one instance of the received server information satisfies at least one security requirement; anddetermining whether to prevent the exchanging of data with the remote resource based on whether the at least one instance of the received server information satisfies the at ...

Publication date: 09-06-2022

CACHE MANAGEMENT USING FAVORED VOLUMES AND A MULTIPLE TIERED CACHE MEMORY

Номер: US20220179801A1
Assignee:

A method for demoting a selected storage element from a cache memory includes storing favored and non-favored storage elements within a higher performance portion and lower performance portion of the cache memory. The favored storage elements are retained in the cache memory longer than the non-favored storage elements. The method maintains a first favored LRU list and a first non-favored LRU list, associated with the favored and non-favored storage elements stored within the higher performance portion of the cache. The method selects a favored or non-favored storage element to be demoted from the higher performance portion of the cache memory according to life expectancy and residency of the oldest favored and non-favored storage elements in the first LRU lists. The method demotes the selected from the higher performance portion of the cache to the lower performance portion of the cache, or to the data storage devices, according to a cache demotion policy. A corresponding storage controller and computer program product are also disclosed. 1. A computer program product for demoting selected storage elements within a storage system , the computer program product comprising a non-transitory computer-readable storage medium having computer-usable program code embodied therein , the computer-usable program code configured to perform operations when executed by at least one processor , the operations comprising:storing favored storage elements and non-favored storage elements within a cache memory, the cache memory comprising a higher performance portion and a lower performance portion;maintaining a first favored LRU list and a first non-favored LRU list within the higher performance portion, wherein the first favored LRU list includes entries associated with the favored storage elements stored within the higher performance portion and ordered according to when the favored storage element was recently accessed, and wherein the first non-favored LRU list includes entries ...

Publication date: 09-06-2022

CACHE MANAGEMENT USING MULTIPLE CACHE MEMORIES AND FAVORED VOLUMES WITH MULTIPLE RESIDENCY TIME MULTIPLIERS

Номер: US20220179802A1
Assignee:

A method for demoting a selected storage element from a cache memory includes storing favored and non-favored storage elements within a higher performance portion and lower performance portion of the cache memory. The method maintains a plurality of favored LRU lists and a non-favored LRU list for the higher and lower performance portions of the cache memory. Each favored LRU list contains entries associated with the favored storage elements that have the same unique residency multiplier. The non-favored LRU list includes entries associated with the non-favored storage elements. The method demotes a selected favored or non-favored storage element from the higher and lower performance portions of the cache memory according to a cache demotion policy that provides a preference to favored storage elements over non-favored storage elements based on a computed cache life expectancy, residency time, and the unique residency multiplier. A corresponding storage controller and computer program product are also disclosed. 1. A computer program product for demoting selected storage elements within a storage system , the computer program product comprising a non-transitory computer-readable storage medium having computer-usable program code embodied therein , the computer-usable program code configured to perform operations when executed by at least one processor , the operations comprising:storing favored storage elements and non-favored storage elements within a cache memory, the cache memory comprising a higher performance portion and a lower performance portion;maintaining a plurality of first favored LRU lists within the higher performance portion, wherein each of the first favored LRU lists in the plurality of the first favored LRU lists is associated with a unique cache residency multiplier and includes entries associated with the favored storage elements stored within the higher performance portion and is ordered according to when the favored storage element was ...

Publication date: 26-04-2018

COMMUNICATING HEALTH STATUS WHEN A MANAGEMENT CONSOLE IS UNAVAILABLE FOR A SERVER IN A MIRROR STORAGE ENVIRONMENT

Номер: US20180115476A1
Assignee:

Provided are a computer program product, system, and method for communicating health status when a management console is unavailable for a server in a mirror storage environment. A determination at a first server is made that a management console is unavailable over the console network. The first server determines a health status at the first server and the first storage in response to determining that the management console cannot be reached over the console network. The health status indicates whether there are errors or no errors at the first server and the first storage. The first server transmits the determined health status to the second server over a mirroring network mirroring data between the first storage and a second storage managed by the second server. The determined health status is forwarded to an administrator. 1. A computer program product for monitoring health status of components in a mirror copy storage environment mirroring data between a first storage , managed by a first server , and a second storage , managed by a second server , over a mirroring network , wherein a management console is connected to the first server over a console network , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations , the operations comprising:determining that the management console is unavailable over the console network;determining a health status at the first server and the first storage in response to determining that the management console cannot be reached over the console network, wherein the health status indicates whether there are errors or no errors at the first server and the first storage; andtransmitting, by the first server, the health status to the second server over the mirroring network, wherein the health status is forwarded to an administrator.2. The computer program product of claim 1 , wherein the management console comprises a ...

09-04-2020 publication date

DYNAMIC I/O LOAD BALANCING FOR zHYPERLINK

Number: US20200110542A1

A method for dynamically balancing I/O workload is disclosed. In one embodiment, such a method includes transmitting, from a host system to a storage system, read requests and write requests over a communication path, such as a zHyperLink communication path. The method further determines whether first and second sets of conditions (e.g., read cache hit ratio, read and write response times, read and write reject rates, etc.) are satisfied on one or more of the host system and storage system. In the event the first set of conditions is satisfied, the method increases a ratio of read requests to write requests that are transmitted over the communication path. In the event the second set of conditions is satisfied, the method decreases the ratio of read requests to write requests that are transmitted over the communication path. A corresponding system and computer program product are also disclosed. 1. A method for dynamically balancing I/O workload , the method comprising:transmitting, from a host system to a storage system, read requests and write requests over a communication path;determining whether a first set of conditions and a second set of conditions is satisfied on at least one of the host system and the storage system;in the event the first set of conditions is satisfied, increasing a ratio of read requests to write requests that are transmitted over the communication path; andin the event the second set of conditions is satisfied, decreasing the ratio of read requests to write requests that are transmitted over the communication path.2. The method of claim 1 , wherein the communication path is a zHyperLink communication path.3. The method of claim 1 , further comprising providing claim 1 , from the storage system to the host system claim 1 , a hint indicating whether to increase or decrease the ratio.4. The method of claim 1 , further comprising providing claim 1 , from the storage system to the host system claim 1 , information indicating whether the first ...
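
A hedged sketch of the ratio adjustment (the statistic names and threshold values below are illustrative assumptions; the patent only states that one set of conditions raises the read:write ratio on the communication path and another lowers it):

    def adjust_read_write_ratio(ratio, stats):
        """Return a new read:write ratio for the communication path.

        `stats` is assumed to hold read_hit_ratio and read_reject_rate sampled
        over the last interval; thresholds are illustrative only.
        """
        # First set of conditions: reads behave well on the fast path, send more.
        if stats["read_hit_ratio"] >= 0.85 and stats["read_reject_rate"] <= 0.02:
            return min(ratio * 2, 64)
        # Second set of conditions: reads miss cache or get rejected, send fewer.
        if stats["read_hit_ratio"] < 0.50 or stats["read_reject_rate"] > 0.10:
            return max(ratio / 2, 1 / 64)
        return ratio  # neither set satisfied: leave the ratio alone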

09-04-2020 publication date

PERFORMANCE EFFICIENT TIME LOCKS ON DATA IN A STORAGE CONTROLLER

Number: US20200110673A1
Assignee:

Provided are a method, system, and computer program product in which a computational device stores a data structure that includes identifications of a plurality of volumes and identifications of one or more time locks associated with each of the plurality of volumes. The data structure is indexed into, to determine whether an input/output (I/O) operation from a host with respect to a volume is to be permitted. 125-. (canceled)26. A method , comprising:storing, via a computational device, a data structure that includes identifications of a plurality of volumes and identifications of one or more time locks associated with each of the plurality of volumes; andindexing into the data structure to determine whether an input/output (I/O) operation from a host with respect to a volume is to be permitted, wherein the one or more time locks are destaged to a storage device to store as a metadata of data that is protected by the one or more time locks.27. The method of claim 26 , wherein the metadata is used to recover the one or more time locks and the data structure claim 26 , in response to a power failure of the computational device.28. The method of claim 26 , wherein the data structure and the one or more time locks are pinned to a cache of the computational device.29. The method of claim 26 , the method further comprising:in response to determining that an identification of the volume is not present in the data structure, performing the I/O operation on the volume.30. The method of claim 26 , wherein a time lock indicates at least:one or more volumes or parts of volumes protected by the time lock; andduration of time for which the time lock is in effect.31. The method of claim 26 , wherein a time lock indicates at least: types of I/O operation disallowed by the time lock.32. A system claim 26 , comprising:a memory; anda processor coupled to the memory, wherein the processor performs operations, the operations comprising:storing a data structure that includes ...
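
A small sketch of the kind of indexed structure the abstract describes (class and field names are assumptions): volumes that do not appear in the index allow I/O immediately, while listed volumes are checked against their active time locks.

    from collections import defaultdict

    class TimeLockIndex:
        """Maps a volume id to its time locks: (start, end, disallowed operations)."""

        def __init__(self):
            self.locks = defaultdict(list)

        def add_lock(self, volume_id, start, end, disallowed_ops):
            self.locks[volume_id].append((start, end, frozenset(disallowed_ops)))

        def is_permitted(self, volume_id, op, now):
            # Volume not present in the data structure: perform the I/O.
            for start, end, disallowed in self.locks.get(volume_id, ()):
                if start <= now <= end and op in disallowed:
                    return False        # an active time lock disallows this operation
            return True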

13-05-2021 publication date

USING A MACHINE LEARNING MODULE TO DETERMINE WHEN TO PERFORM ERROR CHECKING OF A STORAGE UNIT

Number: US20210141688A1
Assignee:

Provided are a computer program product, system, and method for using a machine learning module to determine when to perform error checking of a storage unit. Input on attributes of at least one storage device comprising the storage unit are provided to a machine learning module to produce an output value. An error check frequency is determined from the output value. A determination is made as to whether the error check frequency indicates to perform an error checking operation with respect to the storage unit. The error checking operation is performed in response to determining that the error checking frequency indicates to perform the error checking operation. 126-. (canceled)27. A computer program product for error checking data in a storage , the computer program product comprising a computer readable storage medium storing computer readable program code that when executed performs operations , the operations comprising:providing input on attributes of a storage to a machine learning module to produce an output value indicating a likelihood that the storage is experiencing an error;performing an error checking operation in response to determining that the output value indicates to perform the error checking operation; andusing the output value to adjust an error check frequency in response to the output value indicating to adjust the error check frequency, wherein the error check frequency indicates a number of writes that occur before an error checking operation is performed at the storage.28. The computer program product of claim 27 , wherein the output value comprises a number from zero to 1 claim 27 , not performing the error checking operation in response to the output value being less than a lower bound; and', 'performing the error checking operation in response to the output value being greater than an upper bound., 'wherein the determining whether the error check frequency indicates to perform the error checking operation comprises29. The computer ...
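
A sketch of how the output value could gate error checking (the bounds and the halving/doubling policy are assumptions; the claims only fix an output in [0, 1], a lower bound, an upper bound, and a write-count frequency):

    def error_check_decision(output_value, writes_since_check, frequency,
                             lower_bound=0.3, upper_bound=0.7):
        """Return (check_now, new_frequency); frequency is writes between checks."""
        if output_value > upper_bound:
            return True, max(frequency // 2, 1)     # likely error: check now, check sooner
        if output_value < lower_bound:
            return False, frequency * 2             # likely healthy: relax the frequency
        return writes_since_check >= frequency, frequency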

13-05-2021 publication date

INTEGRATION OF APPLICATION INDICATED MINIMUM TIME TO CACHE AND MAXIMUM TIME TO CACHE TO LEAST RECENTLY USED TRACK DEMOTING SCHEMES IN A CACHE MANAGEMENT SYSTEM OF A STORAGE CONTROLLER

Number: US20210141739A1
Assignee:

A computational device receives indications of a minimum retention time and a maximum retention time in cache for a first plurality of tracks, wherein no indications of a minimum retention time or a maximum retention time in the cache are received for a second plurality of tracks. A cache management application demotes a track of the first plurality of tracks from the cache, in response to determining that the track is a least recently used (LRU) track in a LRU list of tracks in the cache and the track has been in the cache for a time that exceeds the minimum retention time. The cache management application demotes the track of the first plurality of tracks, in response to determining that the track has been in the cache for a time that exceeds the maximum retention time. 124-. (canceled)25. A method , comprising:receiving indications of a minimum retention time and a maximum retention time in a cache for a first plurality of tracks, wherein no indications of the minimum retention time or the maximum retention time in the cache are received for a second plurality of tracks; anddemoting, by a cache management application, a track of the first plurality of tracks from the cache even if the track has not been in the cache for a time that exceeds the minimum retention time, in response to predetermined conditions being satisfied.26. The method of claim 25 , the method further comprising:demoting a track of the second plurality of tracks from the cache, in response to determining that the track of the second plurality of tracks is a least recently used (LRU) track in a LRU list.27. The method of claim 25 , wherein the minimum retention time is indicative of a preference of a host application to maintain the first plurality of tracks in the cache for at least the minimum retention time claim 25 , and wherein the maximum retention time is indicative of a preference of the host application to maintain the first plurality of tracks in the cache for no more than the maximum ...
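
The retention-hint logic can be sketched as below (field names such as min_retention are assumptions): a track past its maximum retention time is demotable outright, a track still under its minimum retention time is normally kept unless predetermined conditions apply, and everything else falls back to ordinary LRU order.

    import time

    def may_demote(track, cache_under_pressure=False, now=None):
        """`track` is assumed to be a dict with 'cached_at', 'is_lru_head' and
        optional 'min_retention' / 'max_retention' values in seconds."""
        now = time.monotonic() if now is None else now
        age = now - track["cached_at"]
        if track.get("max_retention") is not None and age > track["max_retention"]:
            return True                  # exceeded maximum retention time
        if track.get("min_retention") is not None and age < track["min_retention"]:
            return cache_under_pressure  # kept, unless predetermined conditions are satisfied
        return track.get("is_lru_head", False)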

03-05-2018 publication date

Validation of write data subsequent to destaging to auxiliary storage for completion of peer to peer remote copy

Number: US20180121107A1
Assignee: International Business Machines Corp

A primary storage controller receives a write command from a host, to write data that is to be controlled by the primary storage controller. The data is written to local storage of the primary storage controller and subsequently the data is destaged from the local storage of the primary storage controller to store the data in an auxiliary storage of the primary storage controller. The data is transmitted to a secondary storage controller for writing the data to local storage of the secondary storage controller and for subsequently destaging the data from the local storage of the secondary storage controller to store the data in an auxiliary storage of the secondary storage controller. The data stored in the auxiliary storage of the primary storage controller is compared to the data stored in the auxiliary storage of the secondary storage controller to determine whether the write command is successfully executed.

03-05-2018 publication date

VALIDATION OF STORAGE VOLUMES THAT ARE IN A PEER TO PEER REMOTE COPY RELATIONSHIP

Number: US20180121114A1
Assignee:

A peer to peer remote copy operation is performed between a primary storage controller and a secondary storage controller, to establish a peer to peer remote copy relationship between a primary storage volume and a secondary storage volume. Subsequent to indicating completion of the peer to peer remote copy operation to a host, a determination is made as to whether the primary storage volume and the secondary storage volume have identical data, by performing operations of staging data of the primary storage volume from auxiliary storage of the primary storage controller to local storage of the primary storage controller, and transmitting the data of the primary storage volume that is staged, to the secondary storage controller for comparison with data of the secondary storage volume stored in an auxiliary storage of the secondary storage controller. 120-. (canceled)21. A method , comprising:in response to indicating to a host a completion of a peer to peer remote copy operation between a primary storage controller and a secondary storage controller, staging data of a primary storage volume from an auxiliary storage of the primary storage controller to a local storage of the primary storage controller; andtransmitting the data of the primary storage volume that is staged, to the secondary storage controller, for comparison with data of a secondary storage volume stored in an auxiliary storage of the secondary storage controller to determine whether the primary storage volume and the secondary storage volume have identical data.22. The method of claim 21 , wherein the peer to peer remote copy operation establishes a peer to peer remote copy relationship between the primary storage volume and the secondary storage volume.23. The method of claim 21 , wherein completion of the peer to peer remote copy operation is indicated by the primary storage controller to the host prior to completion of data destages to the auxiliary storage of the primary storage controller from ...

23-04-2020 publication date

RECLAIMING STORAGE SPACE IN RAIDS MADE UP OF HETEROGENEOUS STORAGE DRIVES

Number: US20200125284A1

A method for reclaiming storage space in RAID arrays made up of heterogeneous storage drives is disclosed. In one embodiment, such a method includes determining a most common storage capacity for a set of storage drives utilized in a storage system. The method further identifies physical storage drives from the set that contain unused storage space. The method pools the unused storage space of the physical storage drives to create virtual storage drives with storage capacities substantially equal to the most common storage capacity. The method then utilizes the virtual storage drives in existing or new RAID arrays. A corresponding system and computer program product are also disclosed. 1. A method for reclaiming storage space in RAID arrays made up of heterogeneous storage drives:determining a most common storage capacity for a plurality of physical storage drives in a storage system;identifying physical storage drives from the plurality that contain unused storage space;pooling the unused storage space to create virtual storage drives with storage capacities substantially equal to the most common storage capacity; andutilizing the virtual storage drives in the storage system to create RAID arrays.2. The method of claim 1 , wherein pooling the unused storage space to create virtual storage drives comprises creating a single virtual storage drive by using the storage space from multiple physical storage drives.3. The method of claim 1 , wherein utilizing the virtual storage drives in the storage system comprises mixing virtual storage drives with physical storage drives to create selected RAID arrays.4. The method of claim 1 , wherein utilizing the virtual storage drives in the storage system comprises not mixing virtual storage drives with physical storage drives to create selected RAID arrays.5. The method of claim 1 , wherein utilizing the virtual storage drives in the storage system comprises ensuring that performance and/or redundancy requirements associated ...
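
A toy planner for the pooling step (input format and function name are assumptions): the most common physical capacity becomes the virtual drive size, and leftover space across drives is pooled to see how many such virtual drives can be carved out.

    from collections import Counter

    def plan_virtual_drives(drives):
        """`drives` is a list of dicts with 'capacity' and 'used' in GiB."""
        target = Counter(d["capacity"] for d in drives).most_common(1)[0][0]
        unused = sum(max(d["capacity"] - d["used"], 0) for d in drives)
        return target, unused // target   # virtual drive size, how many whole drives fit

    # plan_virtual_drives([{"capacity": 8, "used": 6}, {"capacity": 8, "used": 3},
    #                      {"capacity": 4, "used": 1}])  ->  (8, 1)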

23-04-2020 publication date

EFFICIENT METADATA DESTAGE DURING SAFE DATA COMMIT OPERATION

Number: US20200125684A1

A method for reducing I/O performance impacts associated with a data commit operation is disclosed. In one embodiment, such a method includes periodically performing a data commit operation wherein modified data is destaged from cache to persistent storage drives. Upon performing a particular instance of the data commit operation, the method determines whether modified data in the cache is a metadata track. In the event the modified data is a metadata track, the method attempts to acquire an exclusive lock on the metadata track. In the event the exclusive lock cannot be acquired, the method skips over the metadata track without destaging the metadata track for the particular instance of the data commit operation. A corresponding system and computer program product are also disclosed. 1. A method for reducing I/O performance impacts associated with a data commit operation , the method comprising:periodically performing a data commit operation wherein modified data is destaged from cache to persistent storage drives;upon performing a particular instance of the data commit operation, determining whether modified data encountered in the cache is a metadata track;in the event the modified data is a metadata track, attempting to acquire an exclusive lock on the metadata track; andin the event the exclusive lock cannot be acquired, skipping over the metadata track without destaging the metadata track for the particular instance of the data commit operation.2. The method of claim 1 , further comprising claim 1 , in the event the exclusive lock cannot be acquired claim 1 , incrementing a count of unsuccessful metadata track destages.3. The method of claim 1 , further comprising claim 1 , in the event the exclusive lock can be acquired claim 1 , destaging the metadata track from the cache to the persistent storage drives.4. The method of claim 3 , further comprising claim 3 , in the event the metadata track is destaged from the cache to the persistent storage drives claim 3 , ...
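
The skip-on-contention behaviour might look like this in outline (the track interface and the statistics counter are assumptions; each track's lock is assumed to support a non-blocking acquire, as a threading.Lock would):

    def commit_modified_tracks(tracks, stats):
        """One pass of the periodic data commit described above."""
        for track in tracks:
            if track.is_metadata:
                if not track.lock.acquire(blocking=False):
                    stats["metadata_skipped"] += 1   # lock busy: skip this commit cycle
                    continue
                try:
                    track.destage()
                finally:
                    track.lock.release()
            else:
                track.destage()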

30-04-2020 publication date

Using a machine learning module to perform preemptive identification and reduction of risk of failure in computational systems

Number: US20200133753A1
Assignee: International Business Machines Corp

Input on a plurality of attributes of a computing environment is provided to a machine learning module to produce an output value that comprises a risk score that indicates a likelihood of a potential malfunctioning occurring within the computing environment. A determination is made as to whether the risk score exceeds a predetermined threshold. In response to determining that the risk score exceeds a predetermined threshold, an indication is transmitted to indicate that potential malfunctioning is likely to occur within the computing environment. A modification is made to the computing environment to prevent the potential malfunctioning from occurring.

30-04-2020 publication date

Perform preemptive identification and reduction of risk of failure in computational systems by training a machine learning module

Number: US20200133820A1
Assignee: International Business Machines Corp

A machine learning module is trained by receiving inputs comprising attributes of a computing environment, where the attributes affect a likelihood of failure in the computing environment. In response to an event occurring in the computing environment, a risk score that indicates a predicted likelihood of failure in the computing environment is generated via forward propagation through a plurality of layers of the machine learning module. A margin of error is calculated based on comparing the generated risk score to an expected risk score, where the expected risk score indicates an expected likelihood of failure in the computing environment corresponding to the event. An adjustment is made of weights of links that interconnect nodes of the plurality of layers via back propagation to reduce the margin of error, to improve the predicted likelihood of failure in the computing environment.
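
A minimal single-unit stand-in for the forward/backward pass (the real module has multiple layers; the logistic unit and learning rate here are assumptions) showing only the shape of the computation: forward propagate, compare against the expected risk score for the event, back propagate to reduce the margin of error.

    import math

    def train_step(weights, attributes, expected_risk, learning_rate=0.01):
        """One update of a single logistic unit standing in for the ML module."""
        # Forward propagation: weighted sum of attributes squashed into (0, 1).
        z = sum(w * x for w, x in zip(weights, attributes))
        predicted_risk = 1.0 / (1.0 + math.exp(-z))
        # Margin of error against the expected risk score for this event.
        error = predicted_risk - expected_risk
        # Back propagation for the logistic unit (gradient of the squared error).
        grad = error * predicted_risk * (1.0 - predicted_risk)
        new_weights = [w - learning_rate * grad * x for w, x in zip(weights, attributes)]
        return new_weights, error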

30-04-2020 publication date

USING A MACHINE LEARNING MODULE TO PERFORM DESTAGES OF TRACKS WITH HOLES IN A STORAGE SYSTEM

Number: US20200133856A1
Assignee:

In response to an end of track access for a track in a cache, a determination is made as to whether the track has modified data and whether the track has one or more holes. In response to determining that the track has modified data and the track has one or more holes, an input on a plurality of attributes of a computing environment in which the track is processed is provided to a machine learning module to produce an output value. A determination is made as to whether the output value indicates whether one or more holes are to be filled in the track. In response to determining that the output value indicates that one or more holes are to be filled in the track, the track is staged to the cache from a storage drive. 1. A method , comprising:in response to an end of track access for a track in a cache, determining whether the track has modified data and whether the track has one or more holes; andin response to determining that the track has modified data and the track has one or more holes, providing input on a plurality of attributes of a computing environment in which the track is processed to a machine learning module to produce an output value;determining whether the output value indicates whether one or more holes are to be filled in the track; andin response to determining that the output value indicates that one or more holes are to be filled in the track, staging the track to the cache from a storage drive.2. The method of claim 1 , the method further comprising:in response to completion of staging of the track to the cache from the storage drive, destaging the track from the cache.3. The method of claim 2 , wherein the computing environment comprises a storage controller having the cache claim 2 , wherein the storage controller is coupled to one or more storage drives in a RAID configuration that stores parity information claim 2 , wherein the storage controller manages the one or more storage drives to allow input/output (I/O) access to one or more host ...
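
In outline (the track interface, attribute collection and the 0.5 threshold are assumptions), the decision at end of track access is:

    def on_end_of_track_access(track, model, threshold=0.5):
        """Stage the track to fill holes before destage only when the ML output says so."""
        if not (track.modified and track.holes):
            return
        output = model(track.environment_attributes())   # attributes of the computing environment
        if output >= threshold:
            track.stage_from_drive()                     # fill the holes from the storage drive
        track.destage()                                  # destage once staging (if any) completes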

30-04-2020 publication date

PERFORM DESTAGES OF TRACKS WITH HOLES IN A STORAGE SYSTEM BY TRAINING A MACHINE LEARNING MODULE

Number: US20200134462A1
Assignee:

A machine learning module receives inputs comprising attributes of a storage controller, where the attributes affect performance parameters for performing stages and destages in the storage controller. In response to an event, the machine learning module generates, via forward propagation, an output value that indicates whether to fill holes in a track of a cache by staging data to the cache prior to destage of the track. A margin of error is calculated based on comparing the generated output value to an expected output value, where the expected output value is generated from an indication of whether it is correct to fill holes in a track of the cache by staging data to the cache prior to destage of the track. An adjustment is made of weights of links that interconnect nodes of the plurality of layers via back propagation to reduce the margin of error. 1. A method for training a machine learning module , the method comprising:receiving, by the machine learning module, inputs comprising attributes of a storage controller, wherein the attributes affect performance parameters for performing stages and destages in the storage controller;in response to an event, generating, via forward propagation through a plurality of layers of the machine learning module, an output value that indicates whether to fill holes in a track of a cache by staging data to the cache prior to destage of the track;calculating a margin of error based on comparing the generated output value to an expected output value, wherein the expected output value is generated from an indication of whether it is correct to fill holes in a track of the cache by staging data to the cache prior to destage of the track; andadjusting weights of links that interconnect nodes of the plurality of layers via back propagation to reduce the margin of error, to improve a prediction of whether or not to fill holes in tracks prior to destage of the track.2. The method of claim 1 , wherein a margin of error for training the ...

10-06-2021 publication date

RECOVERING STORAGE DEVICES IN A STORAGE ARRAY HAVING ERRORS

Number: US20210173752A1
Assignee:

Provided are a computer program product, system, and method for recovering storage devices in a storage array having errors. A determination is made to replace a first storage device in a storage array with a second storage device. The storage array is rebuilt by including the second storage device in the storage array and removing the first storage device from the storage array resulting in a rebuilt storage array. The first storage device is recovered from errors that resulted in the determination to replace. Data is copied from the second storage device included in the rebuilt storage array to the first storage device. The recovered first storage device is swapped into the storage array to replace the second storage device in response to copying the data from the second storage device to the first storage device. 124-. (canceled)25. A computer program product for managing storage devices in a storage array , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations , the operations comprising:rebuilding the storage array by replacing a first storage device in the storage array with a spare storage device from a pool of spare storage devices, resulting in a rebuilt storage array;validating operability of the first storage device;swapping the first storage device into the storage array to replace the spare storage device in response to the validating the operability of the first storage device; andreturning the replaced spare storage device into the pool of spare storage devices.26. The computer program product of claim 25 , wherein the operations further comprise:determining whether a detected error at the first storage device is a fatal error or non-fatal error, wherein the rebuilding the storage array, the validating operability of the first storage device, the swapping the first storage device into the storage array to replace the spare storage ...
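
A sketch of the rebuild-recover-swap-back sequence (the device and array methods below are assumptions, not the product API):

    def recover_array_member(array, failed, spare_pool):
        """Rebuild onto a spare, recover the original drive, then swap it back in."""
        spare = spare_pool.pop()
        array.rebuild(add=spare, remove=failed)   # rebuilt array now runs on the spare
        failed.recover()                          # clear the errors that triggered replacement
        if failed.is_operational():
            failed.copy_from(spare)               # copy the current data back to the drive
            array.swap(out=spare, into=failed)    # recovered drive rejoins the array
            spare_pool.append(spare)              # spare returns to the pool
        return array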

31-05-2018 publication date

USING CACHE LISTS FOR PROCESSORS TO DETERMINE TRACKS TO DEMOTE FROM A CACHE

Number: US20180150402A1
Assignee:

Provided are a computer program product, system, and method for using cache lists for processors to determine tracks in a storage to demote from a cache. Tracks in the storage stored in the cache are indicated in lists. There is one list for each of a plurality of processors. Each of the processors processes the list for that processor to process the tracks in the cache indicated on the list. There is a timestamp for each of the tracks indicated in the lists indicating a time at which the track was added to the cache. Tracks indicated in each of the lists having timestamps that fall within a range of timestamps are demoted 121-. (canceled)22. A computer program product for managing tracks in a storage in a cache accessed by a plurality of processors , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:maintaining a plurality of cache lists, wherein each cache list of the cache lists is assigned to one of the plurality of processors, and wherein each of the processors processes the cache list to which it is assigned to demote tracks indicated in the cache list; andinvoking, by each processor of the processors, a demotion task to separately process the cache list assigned to the processor to demote the tracks indicated in the cache list assigned to the processor.23. The computer program product of claim 22 , wherein the operations further comprise:maintaining locks for the cache lists, wherein there is one lock for each of the cache lists,obtaining a lock, by each invoked demotion task of invoked demotion tasks, on the cache list assigned to the processor invoking the demotion task, wherein the demotion task processes the cache list in response to obtaining the lock.24. The computer program product of claim 22 , wherein the operations further comprise:in response to the invoked demotion tasks completing demotion operations, ...
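
A sketch of the per-processor lists and the timestamp-range demotion (the data layout is an assumption): each processor owns one list, and any entry whose add-to-cache timestamp falls within the chosen range is demoted.

    def demote_in_range(cache_lists, oldest, newest):
        """`cache_lists` maps a processor id to a list of (track, timestamp) tuples."""
        demoted = []
        for cpu, entries in cache_lists.items():
            keep = []
            for track, ts in entries:
                if oldest <= ts <= newest:
                    demoted.append(track)          # this processor demotes from its own list
                else:
                    keep.append((track, ts))
            cache_lists[cpu] = keep
        return demoted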

07-05-2020 publication date

REDUCING A RATE AT WHICH DATA IS MIRRORED FROM A PRIMARY SERVER TO A SECONDARY SERVER

Number: US20200142598A1
Assignee:

Provided are a computer program product, system, and method for reducing a rate at which data is mirrored from a primary server to a secondary server. A determination is made as to whether a processor utilization at a processor managing access to the secondary storage exceeds a utilization threshold. If so, a determination is made as to whether a specified operation at the processor is in progress. A message is sent to the primary server to cause the primary server to reduce a rate at which data is mirrored from the primary server to the secondary server in response to determining that the specified operation is in progress. 123-. (canceled)24. A computer program product for managing mirroring data from a primary server to a secondary server having a secondary storage , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executed by a processor to perform operations , the operations comprising:determining whether a processor utilization at a processor managing access to the secondary storage exceeds a utilization threshold;determining whether the processor is performing a critical task with respect to a storage array that if delayed increases a risk of data loss; andsending a message to the primary server to cause the primary server to reduce a rate at which data is mirrored from the primary server to the secondary server in response to determining that the critical task is being performed and that the processor utilization at the processor exceeds the utilization threshold.25. The computer program product of claim 24 , wherein the critical task comprises rebuilding the storage array to recover from a failed storage device at the secondary storage.26. The computer program product of claim 25 , wherein the operations further comprise:determining a remaining fault tolerance comprising a number of remaining operational storage devices that can fail, excluding the failed storage device ...
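
The secondary-side check could be sketched as follows (the threshold value and the fault-tolerance weighting are assumptions):

    def should_request_slower_mirroring(cpu_utilization, critical_task_running,
                                        remaining_fault_tolerance=None,
                                        utilization_threshold=0.90):
        """True when the secondary should ask the primary to reduce the mirroring rate."""
        if cpu_utilization <= utilization_threshold or not critical_task_running:
            return False
        # Optionally weight by how exposed the rebuilding array is: with at most
        # one more drive failure tolerated, the rebuild must not be delayed.
        return remaining_fault_tolerance is None or remaining_fault_tolerance <= 1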

07-05-2020 publication date

PROVIDING TRACK FORMAT INFORMATION WHEN MIRRORING UPDATED TRACKS FROM A PRIMARY STORAGE SYSTEM TO A SECONDARY STORAGE SYSTEM

Number: US20200142599A1
Assignee:

Provided are a computer program product, system, and method for providing track format information when mirroring updated tracks from a primary storage system to a secondary storage system. The primary storage system determines a track to mirror to the secondary storage system and determines whether there is track format information for the track to mirror. The track format information indicates a format and layout of data in the track, indicated in track metadata for the track. The primary storage system sends the track format information to the secondary storage system, in response to determining there is the track format information and mirrors the track to mirror to the secondary storage system. The secondary storage system uses the track format information for the track in the secondary cache when processing a read or write request to the mirrored track. 1. A computer program product for mirroring data from a primary storage system having a primary cache and a primary storage to a secondary storage system having a secondary cache and a secondary storage , the computer program product comprising a computer readable storage medium having computer readable program code executed in the primary and the secondary storage systems to perform operations , the operations comprising:determining, by the primary storage system, a track to mirror from the primary storage system to the secondary storage system;determining, by the primary storage system, whether there is track format information for the track to mirror that the primary storage system maintains for caching the track to mirror in the primary cache, wherein the track format information indicates a format and layout of data in the track, indicated in track metadata for the track;sending, by the primary storage system, the track format information to the secondary storage system, in response to determining there is the track format information;mirroring, by the primary storage system, the track to mirror to the ...

07-05-2020 publication date

DETERMINATION OF MEMORY ACCESS PATTERNS OF TASKS IN A MULTI-CORE PROCESSOR

Number: US20200142742A1
Assignee:

A plurality of processing entities in which a plurality of tasks are executed are maintained. Memory access patterns are determined for each of the plurality of tasks by dividing a memory associated with the plurality of processing entities into a plurality of memory regions, and for each of the plurality of tasks, determining how many memory accesses take place in each of the memory regions, by incrementing a counter associated with each memory region in response to a memory access. Each of the plurality of tasks are allocated among the plurality of processing entities, based on the determined memory access patterns for each of the plurality of tasks. 120-. (canceled)21. A method , comprising:determining memory access patterns for each of a plurality of tasks by dividing a memory associated with a plurality of processing entities into a plurality of memory regions;determining a set of tasks that share access to a same region of memory; andallocating the set of tasks to a same processing entity for execution, wherein each task of the set of tasks accesses the same region of memory more than a threshold percentage of times.22. The method of claim 21 , wherein the memory access patterns are determined by a profiler tool that performs operations comprising:in response to a task accessing a new memory region, allocating a new counter for the task to keep track of accesses to the new memory region.23. The method of claim 21 , wherein the allocating of each of the plurality of tasks among the plurality of processing entities is performed subsequent to collection of a set of access statistics related to a workload that has been completed for the plurality of processing entities.24. The method of claim 21 , wherein each task of the plurality of tasks has a counter associated with each memory region accessed by each task claim 21 , wherein no counter is maintained for a memory region that has not been accessed by the task claim 21 , and wherein:the plurality of processing ...
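
A toy version of the per-region counters and the co-scheduling rule (the region size, threshold and class name are assumptions): counters are created lazily on first access to a region, and tasks spending most of their accesses in the same region become candidates for the same processing entity.

    from collections import defaultdict

    class AccessProfiler:
        """Counts, per task, accesses to each memory region; counters appear lazily."""

        def __init__(self, region_size):
            self.region_size = region_size
            self.counts = defaultdict(lambda: defaultdict(int))   # task -> region -> hits

        def record(self, task, address):
            self.counts[task][address // self.region_size] += 1

        def co_schedule_groups(self, threshold=0.5):
            """Group tasks spending more than `threshold` of accesses in one region."""
            groups = defaultdict(list)
            for task, regions in self.counts.items():
                total = sum(regions.values())
                region, hits = max(regions.items(), key=lambda kv: kv[1])
                if total and hits / total > threshold:
                    groups[region].append(task)    # same processing entity candidates
            return groups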

07-05-2020 publication date

Wait classified cache writes in a data storage system

Number: US20200142834A1
Assignee: International Business Machines Corp

In one embodiment, a task control block (TCB) for allocating cache storage such as cache segments in a multi-track cache write operation may be enqueued in a wait queue for a relatively long wait period, the first time the task control block is used, and may be re-enqueued on the wait queue for a relatively short wait period, each time the task control block is used for allocating cache segments for subsequent cache writes of the remaining tracks of the multi-track cache write operation. As a result, time-out suspensions caused by throttling of host input-output operations to facilitate cache draining, may be reduced or eliminated. It is appreciated that wait classification of task control blocks in accordance with the present description may be applied to applications other than draining a cache. Other features and aspects may be realized, depending upon the particular application.
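
The wait classification can be sketched as below (the queue shape and the wait durations are assumptions): a TCB's first enqueue uses the long wait period, and every re-enqueue for the remaining tracks of the same multi-track write uses the short one.

    LONG_WAIT, SHORT_WAIT = 1.0, 0.05    # seconds; illustrative values only

    def enqueue_tcb(wait_queue, tcb, now):
        """Append (wake_time, tcb) to the wait queue with a classified wait period."""
        wait = SHORT_WAIT if tcb.get("reused") else LONG_WAIT
        tcb["reused"] = True              # later tracks of this write wait the short period
        wait_queue.append((now + wait, tcb))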

07-05-2020 publication date

USING A MACHINE LEARNING MODULE TO SELECT A PRIORITY QUEUE FROM WHICH TO PROCESS AN INPUT/OUTPUT (I/O) REQUEST

Number: US20200142846A1
Assignee:

Provided are a computer program product, system, and method for using at least one machine learning module to select a priority queue from which to process an Input/Output (I/O) request. Input I/O statistics are provided on processing of I/O requests at the queues to at least one machine learning module. Output is received from the at least one machine learning module for each of the queues. The output for each queue indicates a likelihood that selection of an I/O request from the queue will maintain desired response time ratios between the queues. The received output for each of the queues is used to select a queue of the queues. An I/O request from the selected queue is processed. 127-. (canceled)28. A computer program product for selecting one of a plurality queues having Input/Output (I/O) requests for a storage to process , comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:providing at least one machine learning module that receives as input I/O statistics for the queues based on I/O activity at the queues and produces output indicating which queue to select;providing the output to use to select a queue of the queues from which to service an I/O request from the selected queue;determining an adjusted output for at least one of the queues based on a desired ratio of response times between queues;retraining the at least one machine learning module with the input I/O statistics to produce the adjusted output for the at least one of the queues; andusing the retrained at least one machine learning module to select one of the queues from which to process an I/O request.29. The computer program product of claim 28 , wherein the output for each queue indicates a likelihood that selection of an I/O request from the queue will maintain expected response time ratios between the queues.30. The computer program product of claim 28 , wherein the determining ...
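
In outline (the statistics and model interface are assumptions), the selection step scores each queue and services the one most likely to keep the desired response-time ratios:

    def pick_queue(queues, model):
        """`queues` maps a queue name to its recent I/O statistics; `model` returns,
        per queue, the likelihood that servicing it next preserves the desired ratios."""
        scores = {name: model(stats) for name, stats in queues.items()}
        return max(scores, key=scores.get)   # queue whose selection best keeps the ratios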

17-06-2021 publication date

SPECULATIVELY EXECUTING CONDITIONAL BRANCHES OF CODE WHEN DETECTING POTENTIALLY MALICIOUS ACTIVITY

Number: US20210182396A1
Assignee:

Provided are a computer program product, system, and method for determining a frequency at which to execute trap code in an execution path of a process executing a program to generate a trap address range to detect potential malicious code. Trap code is executed in response to processing a specified type of command in application code to allocate a trap address range used to detect potentially malicious code. A determination is whether to modify a frequency of executing the trap code in response to processing a specified type of command. The frequency of executing the trap code is modified in response to processing the specified type of command in response to determining to determining to modify the frequency of executing the trap code. 19-. (canceled)10. A computer program product for detecting potentially malicious code accessing data from a storage , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed by a processor performs operations , the operations comprising:executing, by the processor, application code;speculatively executing, by the processor, conditional branches of the application code in advance of a location at which the application code is being executed, wherein a result of one of the speculatively executed conditional branches is maintained depending on a condition used to determine which of the conditional branches to traverse;detecting potentially malicious activity; andin response to detecting the potentially malicious activity, disabling the speculatively executing of the application code.11. The computer program product of claim 10 , wherein the operations further comprise:executing trap code in response to processing a specified type of command in application code to allocate a trap address range used to detect potentially malicious code; andexecuting the specified type of command in the application code, wherein the detecting the potentially ...

07-06-2018 publication date

MANAGEMENT OF FOREGROUND AND BACKGROUND PROCESSES IN A STORAGE CONTROLLER

Number: US20180157498A1
Assignee:

A background process is configured to periodically scrub a boot storage of a storage controller to ensure operational correctness of the boot storage. One or more foreground processes store a system configuration data of the storage controller in the boot storage of the storage controller. The background process and the one or more foreground processes are executed to meet predetermined performance requirements for the background process and the one or more foreground processes. 1. A method , comprising:configuring a background process to periodically scrub a boot storage of a storage controller to ensure operational correctness of the boot storage;storing, via one or more foreground processes, a system configuration data of the storage controller in the boot storage of the storage controller; andexecuting the background process and the one or more foreground processes to meet predetermined performance requirements for the background process and the one or more foreground processes.2. The method of claim 1 , wherein the predetermined performance requirements include executing the background process at least once in a predetermined interval of time claim 1 , and completing execution of each of the one or more foreground processes within a predetermined amount of time.3. The method of claim 1 , the method further comprising:starting execution of the background process;in response to completion of the execution of the background process, reducing a priority of the background process; andin response to determining that the execution of the background process has not completed in a predetermined amount of time, increasing the priority of the background process.4. The method of claim 3 , wherein the background process is configurable to have at least two priorities including a low priority and a high priority claim 3 , wherein the high priority is a higher priority than the low priority claim 3 , the method further comprising: in response to determining that the ...
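
The priority handling for the scrub can be sketched like this (the deadline value and callback shapes are assumptions): the background scrub starts at low priority and is promoted only if it has not finished within its predetermined interval.

    import time

    def run_boot_scrub(scrub_step, deadline_seconds, set_priority):
        """`scrub_step` does one unit of scrubbing and returns True when complete;
        `set_priority` accepts "low" or "high"."""
        start = time.monotonic()
        set_priority("low")                      # stay out of the way of foreground work
        while not scrub_step():
            if time.monotonic() - start > deadline_seconds:
                set_priority("high")             # behind schedule: finish the scrub promptly
        set_priority("low")                      # completed: drop back down for the next cycle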

07-06-2018 publication date

USING CACHE LISTS FOR PROCESSORS TO DETERMINE TRACKS TO DEMOTE FROM A CACHE

Number: US20180157594A1
Assignee:

Provided are a computer program product, system, and method for using cache lists for processors to determine tracks in a storage to demote from a cache. Tracks in the storage stored in the cache are indicated in lists. There is one list for each of a plurality of processors. Each of the processors processes the list for that processor to process the tracks in the cache indicated on the list. There is a timestamp for each of the tracks indicated in the lists indicating a time at which the track was added to the cache. Tracks indicated in each of the lists having timestamps that fall within a range of timestamps are demoted 121-. (canceled)22. A computer program product for managing tracks in a storage in a cache accessed by a plurality of processors , the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations , the operations comprising:allocating a cache control block for a track to add to the cache;determining a cache list of a plurality of cache lists for the track to add to the cache, wherein each of the cache lists is assigned to one of the plurality of processors, wherein each of the processors processes the cache list to which it is assigned to demote tracks indicated in the cache list; andindicating the track to add to the cache in the determined cache list.23. The computer program product of claim 22 , wherein the operations further comprise:maintaining an assignment of free queues of available cache control blocks to the processors, wherein the allocating the cache control block comprises allocating the cache control block from a free queue of the free queues assigned to a processor initiating a request to add the track to the cache.24. The computer program product of claim 22 , wherein the allocating the cache control block comprises allocating the cache control block from a free queue claim 22 , wherein the determining the cache list assigned to ...

14-05-2020 publication date

VALIDATION OF STORAGE VOLUMES THAT ARE IN A PEER TO PEER REMOTE COPY RELATIONSHIP

Number: US20200150881A1
Assignee:

A peer to peer remote copy operation is performed between a primary storage controller and a secondary storage controller, to establish a peer to peer remote copy relationship between a primary storage volume and a secondary storage volume. Subsequent to indicating completion of the peer to peer remote copy operation to a host, a determination is made as to whether the primary storage volume and the secondary storage volume have identical data, by performing operations of staging data of the primary storage volume from auxiliary storage of the primary storage controller to local storage of the primary storage controller, and transmitting the data of the primary storage volume that is staged, to the secondary storage controller for comparison with data of the secondary storage volume stored in an auxiliary storage of the secondary storage controller. 120-. (canceled)21. A method , comprising:in response to indicating to a host a completion of a copy operation between a primary storage controller and a secondary storage controller, staging data of a primary storage volume from an auxiliary storage of the primary storage controller to a local storage of the primary storage controller;transmitting the data of the primary storage volume that is staged, to the secondary storage controller, for comparison with data of a secondary storage volume stored in an auxiliary storage of the secondary storage controller to determine whether the primary storage volume and the secondary storage volume have identical data; andsuspending, by the primary storage controller, a copy relationship between the primary storage volume and the secondary storage volume, in response to receiving by the primary storage controller an error condition from the secondary storage volume, wherein the error condition is generated in response to determining that the primary storage volume and the secondary storage volume do not have identical data.22. The method of claim 21 , wherein the peer to peer ...

23-05-2019 publication date

Processing of a set of pending operations for a switchover from a first storage resource to a second storage resource

Number: US20190155509A1
Assignee: International Business Machines Corp

A determination is made that data stored in an extent of a first storage resource is to be moved to an extent of a second storage resource. Operations that are still awaiting to start execution in the first storage resource after the data stored in the extent of the first storage resource has been moved to the extent of the second storage resource, are configured for execution in the second storage resource.
