Total found: 169. Displayed: 97.

Publication date: 03-10-2017

Methods and systems for autonomous memory searching

Number: US0009779138B2

Methods and systems operate to receive a plurality of search requests for searching a database in a memory system. The search requests can be stored in a FIFO queue and searches can be subsequently generated for each search request. The resulting plurality of searches can be executed substantially in parallel on the database. A respective indication is transmitted to a requesting host when either each respective search is complete or each respective search has generated search results.
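The queued-search flow described in the abstract can be sketched in plain Python. The thread pool, the toy in-memory "database", and all names here are illustrative stand-ins, not details from the patent:

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

# Toy "database" standing in for the data held in the memory system.
DATABASE = ["alpha", "beta", "gamma", "alphabet"]

def run_searches(requests):
    """Store search requests in a FIFO queue, run the generated
    searches in parallel, and return a completion indication
    (with any results) for each request."""
    fifo = Queue()
    for req in requests:                 # requests enter a FIFO queue
        fifo.put(req)

    def search(key):
        results = [row for row in DATABASE if key in row]
        # Indication back to the requesting host: done + results.
        return {"key": key, "complete": True, "results": results}

    keys = []
    while not fifo.empty():
        keys.append(fifo.get())
    with ThreadPoolExecutor() as pool:   # searches execute in parallel
        return list(pool.map(search, keys))

indications = run_searches(["alpha", "zeta"])
```

Each entry mimics the per-search indication the patent describes: the host learns when a search completed and whether it produced results.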

Publication date: 03-01-2019

METHODS AND SYSTEMS FOR AUTONOMOUS MEMORY

Number: US20190007529A1
Assignee:

A method, an apparatus, and a system have been disclosed. An embodiment of the method includes an autonomous memory device receiving a set of instructions, the memory device executing the set of instructions, combining the set of instructions with any data recovered from the memory device in response to the set of instructions into a packet, and transmitting the packet from the memory device.

1. (canceled)
2. A method performed by a memory device, comprising: receiving and parsing a set of instructions at a memory processing apparatus of the memory device, using a packet parser; executing the set of instructions, using at least one execution unit of the memory processing apparatus, to retrieve data from a storage memory of the memory device; combining, into a packet using a packet generator of the memory processing apparatus, the set of instructions with the data retrieved from the storage memory; and communicating the packet from the memory processing apparatus to a memory controller connected to the memory device.
3. The method of claim 2, wherein the receiving of the set of instructions comprises receiving the set of instructions via a network coupled to the memory device, and wherein the communicating of the packet comprises transmitting the packet via the network.
4. The method of claim 2, wherein the parsing of the set of instructions comprises parsing a received packet that includes the set of instructions, by: loading a program counter with an initial program counter value associated with the received set of instructions; loading an instruction memory with the set of instructions; and loading a register file with a set of initial conditions associated with the set of instructions.
5. The method of claim 4, wherein executing the set of instructions comprises: calculating a new program counter value after executing a first instruction of the set of instructions; and storing the new program counter value in the program counter.
6. The method of claim 2 ...
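The parse → execute → repack round trip in these claims can be sketched as follows. The packet layout, the JSON encoding, and the tiny instruction set are invented for illustration only:

```python
import json

def handle_packet(packet, storage):
    """Autonomous-memory sketch: parse a packet of instructions,
    execute them against local storage, and repack the instructions
    together with any retrieved data for the memory controller."""
    instructions = json.loads(packet)          # packet parser
    retrieved = []
    for op, addr in instructions:              # execution unit
        if op == "READ":
            retrieved.append(storage[addr])
    # Packet generator: instructions + recovered data travel together.
    return json.dumps({"instructions": instructions, "data": retrieved})

storage = {0: "a", 1: "b"}
request = json.dumps([["READ", 0], ["READ", 1]])
out = json.loads(handle_packet(request, storage))
```

The point of the design, as the claims describe it, is that the instructions and the data they recovered leave the memory device in a single packet.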

Publication date: 17-02-2022

PROGRAMMABLE ENGINE FOR DATA MOVEMENT

Number: US20220050639A1
Assignee:

A memory chip having a predefined memory region configured to store program data transmitted from a microchip. The memory chip also having a programmable engine configured to facilitate access to a second memory chip to read data from the second memory chip and write data to the second memory chip according to stored program data in the predefined memory region. The predefined memory region can include a portion configured as a command queue for the programmable engine, and the programmable engine can be configured to facilitate access to the second memory chip according to the command queue.

1. A system, comprising: a first memory; and a second memory, wherein the first memory comprises: a predefined memory region configured to store program data; and a programmable engine configured to facilitate access to the second memory to read data from the second memory and write data to the second memory according to the program data stored in the predefined memory region.
2. The system of claim 1, wherein the predefined memory region comprises a portion configured as a command queue for the programmable engine, and wherein the programmable engine is configured to facilitate access to the second memory according to the command queue.
3. The system of claim 2, wherein a part of the program data stored in the predefined memory region is configured to control the command queue.
4. The system of claim 3, comprising a portion of memory configured to store data to be moved to the second memory, and wherein data stored in the portion of memory is moved according to the command queue.
5. The system of claim 4, wherein the first memory further comprises: a first set of pins configured to allow the first memory to be coupled to a microchip via first wiring; a second set of pins configured to allow the first memory to be coupled to the second memory via second wiring that is separate from the first wiring; and wherein the programmable engine is configured to ...
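A minimal sketch of the command-queue idea: a queue held in a "predefined memory region" drives reads and writes against a second memory. The class and method names are assumptions made for illustration:

```python
class ProgrammableEngine:
    """Sketch: a command queue in the predefined memory region
    drives data movement to and from a second memory."""

    def __init__(self, second_memory):
        self.second_memory = second_memory
        self.command_queue = []      # lives in the predefined region

    def enqueue(self, op, addr, value=None):
        self.command_queue.append((op, addr, value))

    def run(self):
        """Drain the queue, performing each access on the second memory."""
        results = []
        for op, addr, value in self.command_queue:
            if op == "WRITE":
                self.second_memory[addr] = value
            elif op == "READ":
                results.append(self.second_memory.get(addr))
        self.command_queue.clear()
        return results

engine = ProgrammableEngine(second_memory={})
engine.enqueue("WRITE", 0x10, 42)
engine.enqueue("READ", 0x10)
reads = engine.run()
```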

Publication date: 31-01-2019

MEMORY DEVICES WITH SELECTIVE PAGE-BASED REFRESH

Number: US20190035453A1
Author: Akel Ameen D.
Assignee:

Several embodiments of memory devices and systems with selective page-based refresh are disclosed herein. In one embodiment, a memory device includes a controller operably coupled to a main memory having at least one memory region comprising a plurality of memory pages. The controller is configured to track, in one or more refresh schedule tables stored on the memory device and/or on a host device, a subset of memory pages in the plurality of memory pages having a refresh schedule. In some embodiments, the controller is further configured to refresh the subset of memory pages in accordance with the refresh schedule.

1. A memory device comprising: a main memory including a memory region having a plurality of memory pages; and a controller operably coupled to the main memory, wherein the controller is configured to: track a first subset of the plurality of memory pages having a first refresh schedule and a second subset of the plurality of memory pages having a second refresh schedule that is different than the first refresh schedule; refresh the first subset of memory pages according to the first refresh schedule; and refresh the second subset of memory pages according to the second refresh schedule.
2. The memory device of claim 1, wherein the first subset is a contiguous range of memory pages, and the controller is configured to track the first subset using an identifier of a first page of the range and an identifier of a last page of the range.
3. The memory device of claim 1, wherein the first subset is a contiguous range of memory pages, and the controller is configured to track the first subset using an identifier of a first page of the range and a length of the range.
4. The memory device of claim 1, wherein the controller is further configured to remove an imprint from a first memory page in the first subset by repeatedly refreshing the first memory page.
5. The memory device of claim 1, wherein the controller is further configured to ...
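The refresh-schedule table can be pictured with a small sketch: pages are grouped per schedule (here modeled as a refresh interval in abstract "ticks"), and only the pages whose interval divides the current tick are refreshed. The interval encoding is an assumption for illustration, not the patent's representation:

```python
class RefreshController:
    """Sketch of a refresh-schedule table: each page maps to a
    refresh interval; pages_due() returns pages to refresh now."""

    def __init__(self):
        self.schedule_table = {}     # page -> refresh interval (ticks)

    def assign(self, pages, interval):
        for page in pages:
            self.schedule_table[page] = interval

    def pages_due(self, tick):
        return sorted(page for page, interval in self.schedule_table.items()
                      if tick % interval == 0)

ctrl = RefreshController()
ctrl.assign(range(0, 4), interval=2)    # first subset: faster schedule
ctrl.assign(range(4, 8), interval=4)    # second subset: slower schedule
```

At tick 2 only the fast subset is due; at tick 4 both subsets are due, which is the selective behavior the claims describe.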

Publication date: 19-02-2015

METHODS AND SYSTEMS FOR AUTONOMOUS MEMORY SEARCHING

Number: US20150052114A1
Assignee: MICRON TECHNOLOGY, INC.

Methods and systems operate to receive a plurality of search requests for searching a database in a memory system. The search requests can be stored in a FIFO queue and searches can be subsequently generated for each search request. The resulting plurality of searches can be executed substantially in parallel on the database. A respective indication is transmitted to a requesting host when either each respective search is complete or each respective search has generated search results.

1. A method comprising: issuing a search request to a memory; and receiving an indication from the memory that a search responsive to the search request has been completed or that search results have been found; and when the search results have been found, retrieving the search results from the memory in response to the indication.
2. The method of wherein the search request comprises a write command to the memory.
3. The method of wherein the write command comprises an indication of a search criteria and a search key.
4. The method of and further comprising receiving an acknowledgement from the memory that the search request has been received.
5. The method of wherein receiving the indication from the memory that the search responsive to the search request has been completed or that the search results have been found comprises receiving an indication that the search has reached an end of a database stored in the memory.
6. The method of wherein receiving the indication from the memory that the search responsive to the request has been completed or that the search results have been found comprises receiving an indication that buffers in the memory are full.
7. The method of wherein retrieving the search results from the memory in response to the indication comprises issuing a read command to the memory that causes the memory to transmit back the search results.
8. The method of and further comprising issuing a read command to the memory to indicate to the memory that a host is ready to receive ...

Publication date: 03-03-2022

Systems and methods for reducing latency in cloud services

Number: US20220068264A1
Author: Ameen D. Akel
Assignee: Micron Technology Inc

Systems and methods for distributing cloud-based language processing services to partially execute in a local device to reduce latency perceived by the user. For example, a local device may receive a request via audio input that requires a cloud-based service to process the request and generate a response. A partial response may be generated locally and played back while a more complete response is generated remotely.
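The local-partial/remote-complete split can be sketched with a thread pool: the remote call is started, the canned local response is produced immediately, and the full answer arrives afterward. The function names and the simulated network delay are assumptions for illustration:

```python
import concurrent.futures
import time

def local_partial_response(request):
    # Fast, locally generated acknowledgement played back right away.
    return "Sure, one moment..."

def cloud_full_response(request):
    time.sleep(0.05)                 # stand-in for network + inference
    return f"Here is the answer to: {request}"

def answer(request):
    """Kick off the remote response, return the local partial
    immediately, then the complete remote response when ready."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        future = pool.submit(cloud_full_response, request)
        partial = local_partial_response(request)   # no waiting here
        return partial, future.result()

partial, full = answer("weather today")
```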

Publication date: 25-02-2021

FEATURE DICTIONARY FOR BANDWIDTH ENHANCEMENT

Number: US20210056350A1
Assignee:

A system having multiple devices that can host different versions of an artificial neural network (ANN) as well as different versions of a feature dictionary. In the system, encoded inputs for the ANN can be decoded by the feature dictionary, which allows for encoded input to be sent to a master version of the ANN over a network instead of an original version of the input which usually includes more data than the encoded input. Thus, by using the feature dictionary for training of a master ANN there can be reduction of data transmission.

1. A method, comprising: hosting, by a first computing device, a master version of an artificial neural network (ANN); hosting, by the first computing device, a master version of a feature dictionary; receiving, by the first computing device, encoded features from a second computing device, wherein the received encoded features are encoded by the second computing device according to a local version of the feature dictionary hosted by the second computing device; decoding, by the first computing device, the received encoded features according to the master version of the feature dictionary; and training, by the first computing device, the master version of the ANN based on the decoded features using machine learning.
2. The method of claim 1, comprising transmitting, by the first computing device, the trained master version of the ANN to the second computing device.
3. The method of claim 1, comprising receiving, by the first computing device, the local version of the feature dictionary from the second computing device.
4. The method of claim 3, comprising changing, by the first computing device, the master version of the feature dictionary based on the received local version of the feature dictionary.
5. The method of claim 4, wherein the decoding comprises decoding the encoded features according to the changed master version of the feature dictionary.
6. The method of claim 1, ...
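The bandwidth saving comes from sending small codes instead of full features, with both sides sharing the dictionary. A minimal encode/decode sketch (the dictionary construction here is an illustrative assumption):

```python
def build_dictionary(features):
    """Feature dictionary: map each distinct feature to a small code."""
    return {feat: code for code, feat in enumerate(sorted(set(features)))}

def encode(features, dictionary):
    """Device side: only these codes are transmitted over the network."""
    return [dictionary[f] for f in features]

def decode(codes, dictionary):
    """Master side: recover the features for training the master ANN."""
    reverse = {code: feat for feat, code in dictionary.items()}
    return [reverse[c] for c in codes]

features = ["edge", "corner", "edge", "blob"]
dictionary = build_dictionary(features)   # shared by both devices
codes = encode(features, dictionary)
recovered = decode(codes, dictionary)
```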

Publication date: 25-02-2021

DISTRIBUTED MACHINE LEARNING WITH PRIVACY PROTECTION

Number: US20210056387A1
Assignee:

A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, changes to local versions of the ANN can be combined with a master version of the ANN. In the system, a first device can include memory that can store the master version, a second device can include memory that can store a local version of the ANN, and there can be many devices that store local versions of the ANN. The second device (or any other device of the system hosting a local version) can include a processor that can train the local version, and a transceiver that can transmit changes to the local version generated from the training. The first device can include a transceiver that can receive the changes to a local version, and a processing device that can combine the received changes with the master version.

1. A method, comprising: hosting, by a first computing device, a master version of an artificial neural network (ANN); receiving, by the first computing device, changes to a local version of the ANN from training of the local version of the ANN hosted by a second computing device; and combining, by the first computing device, the received changes to the local version of the ANN with the master version of the ANN to generate an updated master version of the ANN.
2. The method of claim 1, wherein the first computing device is part of a distributed network of computers forming a cloud computing environment, and wherein the second computing device is a mobile device.
3. The method of claim 1, wherein the training of the local version of the ANN comprises inputting user data locally stored in the second computing device, and wherein the user data locally stored in the second computing device is not accessible by the first computing device.
4. The method of claim 3, wherein at least some of the user data locally stored in the second computing device is only accessible by the second computing device.
5. The method of claim 1, ...
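The privacy-preserving flow, where only training deltas (never user data) leave the device, can be sketched with a toy update rule. The "training" below is a deliberately trivial stand-in, not the patent's learning algorithm:

```python
def train_local(local_weights, user_data):
    """Toy local 'training': nudge each weight toward the data mean.
    Only the resulting deltas leave the device; user_data never does."""
    target = sum(user_data) / len(user_data)
    return [0.1 * (target - w) for w in local_weights]

def combine(master_weights, deltas):
    """Master side: fold received changes into the master version."""
    return [w + d for w, d in zip(master_weights, deltas)]

master = [0.0, 1.0]
deltas = train_local(master, user_data=[2.0, 4.0])   # data stays local
master = combine(master, deltas)
```

With many devices, `combine` would be applied to (or average over) changes from each local version, which is the essence of federated-style training.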

Publication date: 25-02-2021

Machine learning with feature obfuscation

Number: US20210056405A1
Assignee: Micron Technology Inc

A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, inputs for the ANN can be obfuscated for centralized training of a master version of the ANN at a first computing device. A second computing device in the system includes memory that stores a local version of the ANN and user data for inputting into the local version. The second computing device includes a processor that extracts features from the user data and obfuscates the extracted features to generate obfuscated user data. The second device includes a transceiver that transmits the obfuscated user data. The first computing device includes a memory that stores the master version of the ANN, a transceiver that receives obfuscated user data transmitted from the second computing device, and a processor that trains the master version based on the received obfuscated user data using machine learning.
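One simple way to obfuscate extracted features is additive random noise; the abstract does not commit to a specific scheme, so the noise approach below is an assumption for illustration:

```python
import random

def obfuscate(features, noise_scale=0.01, seed=0):
    """Sketch: perturb extracted features with small bounded noise so
    the raw user data cannot be reconstructed by the training server,
    while the values remain useful for training."""
    rng = random.Random(seed)
    return [f + rng.uniform(-noise_scale, noise_scale) for f in features]

extracted = [0.5, -1.25, 3.0]
obfuscated = obfuscate(extracted)
# Only `obfuscated` would be transmitted to the master device.
```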

Publication date: 04-03-2021

Error correction for content-addressable memory

Number: US20210064455A1
Assignee:

Methods, systems, and devices for error correction for content-addressable memory (CAM) are described. A CAM may store bit vectors as a set of subvectors, with each subvector stored in an independent aspect of the CAM, such as in a separate column or array of memory cells within the CAM. The CAM may similarly segment a queried input bit vector and identify, for each resulting input subvector, whether a matching subvector is stored by the CAM. The CAM may identify a match for the input bit vector when the number of matching subvectors satisfies a threshold. The CAM may validate a match based on comparing a stored bit vector corresponding to the identified match to the input bit vector. The stored bit vector may undergo error correction and may be stored in the CAM or another memory array, such as a dynamic random access memory (DRAM) array.

1. A method, comprising: receiving a bit vector at a content-addressable memory (CAM) that stores a plurality of bit vectors and comprises a plurality of row lines that extend in a first direction and a plurality of column lines that extend in a second direction; segmenting the received bit vector to obtain a plurality of received subvectors; activating, for each received subvector of the plurality, the plurality of row lines using information included in the received subvector of the plurality; determining whether the received bit vector matches a stored bit vector based at least in part on the activating; and outputting an indication of whether the received bit vector matches the stored bit vector.
2. The method of claim 1, wherein each stored bit vector corresponds to a plurality of stored subvectors, further comprising: writing the plurality of stored subvectors to memory cells coupled with a corresponding plurality of column lines, wherein each stored subvector of the plurality is written to memory cells coupled with a respective column line of the plurality.
3. The method of claim 1, wherein each stored bit vector ...
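The segment-match-threshold-validate sequence can be modeled directly. The vector width, subvector count, and threshold below are illustrative choices, not values from the patent:

```python
def cam_lookup(stored_vectors, query, num_subvectors=4, threshold=3):
    """Sketch: split the query into subvectors, count per-subvector
    hits against each stored vector, flag a candidate when enough
    subvectors agree, then validate against the full stored vector."""
    def split(bits):
        size = len(bits) // num_subvectors
        return [bits[i * size:(i + 1) * size] for i in range(num_subvectors)]

    q_subs = split(query)
    for stored in stored_vectors:
        matches = sum(s == q for s, q in zip(split(stored), q_subs))
        if matches >= threshold:           # threshold satisfied: candidate
            if stored == query:            # validation step
                return stored
    return None

stored = ["10110010", "01011100"]
hit = cam_lookup(stored, "10110010")
miss = cam_lookup(stored, "00000000")
```

Allowing `matches < num_subvectors` to still reach validation is what lets this scheme tolerate (and then correct) bit errors in individual subvectors.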

Publication date: 17-03-2022

MEMORY DEVICES WITH SELECTIVE PAGE-BASED REFRESH

Number: US20220084582A1
Author: Akel Ameen D.
Assignee:

Several embodiments of memory devices and systems with selective page-based refresh are disclosed herein. In one embodiment, a memory device includes a controller operably coupled to a main memory having at least one memory region comprising a plurality of memory pages. The controller is configured to track, in one or more refresh schedule tables stored on the memory device and/or on a host device, a subset of memory pages in the plurality of memory pages configured to be refreshed according to a refresh schedule. In some embodiments, the controller is further configured to refresh the subset of memory pages in accordance with the refresh schedule.

1. A method of managing a memory device, the method comprising: receiving instructions to assign a first memory region a first refresh schedule, wherein the first memory region is currently assigned a second refresh schedule different from the first refresh schedule, and wherein the first refresh schedule corresponds to never refreshing corresponding memory regions; determining that a second memory region physically contiguous to the first memory region is assigned a third refresh schedule that is not the first refresh schedule; determining that a third memory region is assigned the first refresh schedule; writing data currently stored in first memory cells of the third memory region to second memory cells of the second memory region; and assigning the first memory region and the second memory region the first refresh schedule.
2. The method of claim 1, wherein writing the data includes moving the data from the first memory cells to the second memory cells.
3. The method of claim 1, wherein: the data is first data; and the method further comprises moving second data from the second memory cells of the second memory region to the first memory cells of the third memory region.
4. The method of claim 1, further comprising assigning the third memory region the third refresh schedule.
5. The method of claim 4, wherein the third ...

Publication date: 11-03-2021

SPATIOTEMPORAL FUSED-MULTIPLY-ADD, AND RELATED SYSTEMS, METHODS AND DEVICES

Number: US20210072957A1
Assignee:

Systems, apparatuses, and methods of operating memory systems are described. Processing-in-memory capable memory devices are also described, and methods of performing fused-multiply-add operations within the same. Bit positions of bits stored at one or more portions of one or more memory arrays may be accessed via data lines by activating the same or different access lines. A sensing circuit operatively coupled to a data line may be temporarily formed and measured to determine a state (e.g., a count of the number of bits that are a logic "1") of accessed bit positions of a data line, and state information may be used to determine a computational result.

1. A method, comprising: selecting for access first bit positions of first bits of an operand stored in a first portion of a memory array, the first bits accessible via a first data line; activating first access lines associated with the selected first bit positions; accessing the first bits of the operand; and receiving at least a portion of a computational result responsive to the accessed first bits.
2. The method of claim 1, further comprising: selecting a first bit position of the first bit positions of the operand responsive to a value of a bit at a first bit position of an operator; and selecting a second bit position of the first bit positions of the operand responsive to a value of a bit at a second bit position of the operator.
3. The method of claim 1, further comprising: generating a partial computational result based at least in part on a number of accessed first bits that have a specified bit value.
4. The method of claim 3, wherein the generating the partial computational result comprises: generating a summation result and a carry-over.
5. The method of claim 1, further comprising: selecting for access second bit positions of second bits of the operand; activating second access lines associated with the selected second bit positions; and accessing the second bits of the operand responsive to the activated ...

Publication date: 11-03-2021

METHODS FOR PERFORMING PROCESSING-IN-MEMORY OPERATIONS ON SERIALLY ALLOCATED DATA, AND RELATED MEMORY DEVICES AND SYSTEMS

Number: US20210072986A1
Assignee:

Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-serial way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass, as bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of the memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.

1. A method, comprising: fetching, to circuitry adjacent a memory array, a first bit from a first group of bits; fetching, to the circuitry, a second bit from a second group of bits located on a first bit line, the second bit fetched based at least in part on a logic value of the first bit; fetching, to the circuitry, a third bit from the second group of bits; performing, at the circuitry, one or more operations on the second bit or the third bit, or both, based at least in part on a logic value of the first bit from the first group of bits; and writing a first result of the one or more operations to one or more registers of the memory array.
2. The method of claim 1, wherein performing one or more operations comprises performing an XOR-carry-accumulate operation via a fuse-multiply-add (FMA) array or a sense amplifier array.
3. The method of claim 1, further comprising: fetching the first group of bits one bit at a time and only once; storing each bit of the first group of bits in the one or more registers; fetching the second group of bits one bit at a time ...

Publication date: 11-03-2021

METHODS FOR PERFORMING PROCESSING-IN-MEMORY OPERATIONS, AND RELATED MEMORY DEVICES AND SYSTEMS

Number: US20210072987A1
Assignee:

Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-parallel way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass, as bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of a memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.

1. A method, comprising: loading a first number of bits into circuitry of a memory device, each bit of the first number of bits having a first state or a second state; loading a second number of groups of bits into the circuitry, each bit of the second number of groups of bits having the first state or the second state; multiplying each group of bits of the second number of groups of bits by each bit of the first number of bits to generate a number of scaled rows; and summing, along associated bit positions, the number of scaled rows to generate an output row.
2. The method of claim 1, further comprising shifting each scaled row of the number of scaled rows by a number of column positions equal to a bit position of the bit of the first number of bits relative to a first scaled row to align the number of scaled rows along the associated bit positions.
3. The method of claim 1, further comprising generating an intermediate matrix comprising the number of scaled rows, each scaled row of the intermediate matrix shifted at least one column position ...
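The scale-shift-sum scheme in these claims is classic shift-and-add multiplication: each multiplier bit scales the multiplicand row, rows are shifted by their bit position, and the shifted rows are summed. A software sketch (the in-memory circuitry is, of course, abstracted away):

```python
def multiply_shift_and_add(multiplicand_bits, multiplier_bits):
    """Shift-and-add sketch of the scaled-rows scheme: each multiplier
    bit scales the multiplicand row; rows are shifted by their bit
    position, then summed along aligned positions."""
    scaled_rows = []
    for position, bit in enumerate(reversed(multiplier_bits)):
        row = [b * bit for b in multiplicand_bits]      # scale the row
        scaled_rows.append((row, position))             # remember shift

    total = 0
    for row, position in scaled_rows:
        row_value = int("".join(map(str, row)), 2)
        total += row_value << position                  # shifted sum
    return total

# 6 (binary 110) * 5 (binary 101) = 30
product = multiply_shift_and_add([1, 1, 0], [1, 0, 1])
```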

Publication date: 11-03-2021

Performing processing-in-memory operations related to spiking events, and related methods, systems and devices

Number: US20210073622A1
Assignee: Micron Technology Inc

Methods, apparatuses, and systems for in- or near-memory processing are described. Spiking events in a spiking neural network may be processed via a memory system. A memory system may store a group of destination neurons, and at each time interval in a series of time intervals of a spiking neural network (SNN), pass through a group of pre-synaptic spike events from respective source neurons, wherein the group of pre-synaptic spike events are subsequently stored in memory.

Publication date: 11-03-2021

PERFORMING PROCESSING-IN-MEMORY OPERATIONS RELATED TO PRE-SYNAPTIC SPIKE SIGNALS, AND RELATED METHODS AND SYSTEMS

Number: US20210073623A1
Assignee:

Methods, apparatuses, and systems for in- or near-memory processing are described. Spiking events in a spiking neural network may be processed via a memory system. A memory system may store data corresponding to a group of destination neurons. The memory system may, at each time interval of a SNN, pass through data corresponding to a group of pre-synaptic spike events from respective source neurons. The data corresponding to the group of pre-synaptic spike events may be subsequently stored in the memory system.

1. A system, comprising: a memory array comprising a number of memory cells at intersections of a number of word lines and a number of bit lines, wherein data written to the number of memory cells corresponds to synaptic weight values; a driver configured to drive the number of word lines; and circuitry comprising a sense amplifier coupled to the number of bit lines, the circuitry configured to: receive output signals from the number of bit lines; in response to a first signal driven on a word line of the number of word lines before generation of a spike signal of a neuron, generate a second signal with a voltage, current, or timing characteristic, or combination thereof, that increases a conductance of a first memory cell of the number of memory cells according to a spike timing dependent plasticity (STDP) characteristic of the first memory cell; and in response to the first signal driven on the word line after the generation of the spike signal of the neuron, generate a third signal with a different voltage, current, or timing characteristic, or combination thereof, that decreases the conductance of the first memory cell according to the STDP characteristic.
2. The system of claim 1, wherein the circuitry is further configured to, in response to the generation of the spike signal of the neuron, transmit a feedback signal to a bit line of the number of bit lines, wherein the feedback signal is a decaying bias.
3. The system of ...
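The before/after asymmetry in claim 1 is the standard STDP rule: pre-synaptic activity that precedes the spike strengthens the cell, activity that follows it weakens the cell. A toy numeric sketch, with exponential decay constants chosen purely for illustration (the patent specifies signal characteristics, not this formula):

```python
import math

def stdp_update(conductance, dt, a_plus=0.05, a_minus=0.06, tau=20.0):
    """Sketch of a spike-timing-dependent plasticity rule: a word-line
    pulse before the neuron's spike (dt > 0) increases the cell's
    conductance; a pulse after the spike (dt < 0) decreases it."""
    if dt > 0:       # pre-synaptic activity preceded the spike
        return conductance + a_plus * math.exp(-dt / tau)
    else:            # pre-synaptic activity followed the spike
        return conductance - a_minus * math.exp(dt / tau)

g = 1.0
potentiated = stdp_update(g, dt=5.0)
depressed = stdp_update(g, dt=-5.0)
```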

Publication date: 18-03-2021

ACCESSING STORED METADATA TO IDENTIFY MEMORY DEVICES IN WHICH DATA IS STORED

Number: US20210081121A1
Assignee:

A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device.

1. A method comprising: accessing, by a processing device of a computer system, memory in an address space, wherein memory devices of the computer system are accessed by the processing device using addresses in the address space; storing metadata that associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device, wherein a first latency of the first memory device is different than a second latency of the second memory device; and allocating, based on the stored metadata, the first address range to an application executing on the computer system.
2. The method of claim 1, wherein allocating the first address range to the application is performed in response to a request by the application.
3. The method of claim 1, further comprising: in response to a first request by the application, providing an indication that the first latency is greater than the second latency; receiving a second request made by the application based on the indication; and in response to receiving the second request, allocating the second address range to the application.
4. The method of claim 1, wherein the first latency is less than the second latency ...
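The metadata-driven allocation can be sketched as a table of (range, device, latency) entries that the allocator consults. The device names, address ranges, and latency figures are illustrative assumptions:

```python
class MemoryMetadata:
    """Sketch: OS-held metadata mapping address ranges to devices
    with different latencies; allocation consults the metadata."""

    def __init__(self):
        # (start, end, device, latency_ns) -- illustrative values.
        self.ranges = [
            (0x0000, 0x7FFF, "DRAM", 50),
            (0x8000, 0xFFFF, "NVM", 300),
        ]

    def allocate(self, prefer_low_latency=True):
        """Pick an address range by latency, per the stored metadata."""
        ordered = sorted(self.ranges, key=lambda r: r[3],
                         reverse=not prefer_low_latency)
        start, end, device, _ = ordered[0]
        return device, (start, end)

meta = MemoryMetadata()
fast_dev, fast_range = meta.allocate()
slow_dev, _ = meta.allocate(prefer_low_latency=False)
```

This mirrors claim 3's negotiation: an application told that one range is slower can request the other.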

Publication date: 18-03-2021

PROGRAMMABLE ENGINE FOR DATA MOVEMENT

Number: US20210081141A1
Assignee:

A memory chip having a predefined memory region configured to store program data transmitted from a microchip. The memory chip also having a programmable engine configured to facilitate access to a second memory chip to read data from the second memory chip and write data to the second memory chip according to stored program data in the predefined memory region. The predefined memory region can include a portion configured as a command queue for the programmable engine, and the programmable engine can be configured to facilitate access to the second memory chip according to the command queue. 1. A memory chip , comprising;a predefined memory region configured to store program data transmitted from a microchip; anda programmable engine configured to facilitate access to a second memory chip to read data from the second memory chip and write data to the second memory chip according to program data stored in the predefined memory region.2. The memory chip of claim 1 , wherein the predefined memory region comprises a portion configured as a command queue for the programmable engine claim 1 , and wherein the programmable engine is configured to facilitate access to the second memory chip according to the command queue.3. The memory chip of claim 2 , wherein a part of the program data stored in the predefined memory region is configured to control the command queue.4. The memory chip of claim 3 , comprising a portion of memory configured to store data to be moved to the second memory chip claim 3 , and wherein data stored in the portion of memory is moved according to the command queue.5. 
The memory chip of claim 4 , comprising:a first set of pins configured to allow the memory chip to be coupled to the microchip via first wiring;a second set of pins configured to allow the memory chip to be coupled to the second memory chip via second wiring that is separate from the first wiring; andwherein the programmable engine is configured to facilitate access to the second memory ...
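The command-queue mechanism this entry describes — a predefined memory region holding commands that a programmable engine drains to access a second memory chip — can be modeled in software. The class and operation names below are hypothetical:

```python
from collections import deque

# Hypothetical model: a predefined memory region holds a command queue;
# a programmable engine drains it to read from and write to a second
# memory chip (modeled here as a dict).
class ProgrammableEngine:
    def __init__(self, second_chip):
        self.command_queue = deque()    # lives in the predefined region
        self.second_chip = second_chip

    def enqueue(self, op, addr, value=None):
        self.command_queue.append((op, addr, value))

    def run(self):
        results = []
        while self.command_queue:
            op, addr, value = self.command_queue.popleft()
            if op == "write":
                self.second_chip[addr] = value
            elif op == "read":
                results.append(self.second_chip.get(addr))
        return results

engine = ProgrammableEngine(second_chip={})
engine.enqueue("write", 0x10, 42)
engine.enqueue("read", 0x10)
out = engine.run()
```

Queuing the commands in the memory region lets the microchip post work and continue, with the engine performing the actual data movement asynchronously.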

18-03-2021 publication date

FLEXIBLE PROVISIONING OF MULTI-TIER MEMORY

Number: US20210081318A1
Assignee:

A system having a string of memory chips that can implement flexible provisioning of a multi-tier memory. In some examples, the system can include a first memory chip in a string of memory chips of a memory, a second memory chip in the string, and a third memory chip in the string. The first memory chip can be directly wired to the second memory chip and can be configured to interact directly with the second memory chip. The second memory chip can be directly wired to the third memory chip and can be configured to interact directly with the third memory chip. As part of implementing the flexible provisioning of a multi-tier memory, the first memory chip can include a cache for the second memory chip, and the second memory chip can include a buffer for the third memory chip.
1. A system, comprising: a first memory chip in a string of memory chips of a memory; a second memory chip in the string of memory chips; and a third memory chip in the string of memory chips, wherein the first memory chip is directly wired to the second memory chip and is configured to interact directly with the second memory chip, wherein the second memory chip is directly wired to the third memory chip and is configured to interact directly with the third memory chip, wherein the first memory chip comprises a cache for the second memory chip, and wherein the second memory chip comprises a buffer for the third memory chip.
2. The system of claim 1, wherein the second memory chip comprises logical-to-physical mapping for the third memory chip.
3. The system of claim 2, further comprising a processor chip, wherein the processor chip is directly wired to the first memory chip and is configured to interact directly with the first memory chip.
4. The system of claim 3, wherein the processor chip is a system on a chip (SoC).
5. The system of claim 3, wherein the processor chip is configured to configure the cache for the second memory chip.
6. The system of claim 5, wherein the ...

18-03-2021 publication date

MEMORY CHIP HAVING AN INTEGRATED DATA MOVER

Number: US20210081336A1
Assignee:

A memory chip having a first set of pins configured to allow the memory chip to be coupled to a first microchip or device via first wiring. The memory chip also having a second set of pins configured to allow the memory chip to be coupled to a second microchip or device via second wiring that is separate from the first wiring. The memory chip also having a data mover configured to facilitate access to the second microchip or device, via the second set of pins, to read data from the second microchip or device and write data to the second microchip or device. Also, a system having the memory chip, the first microchip or device, and the second microchip or device. 1. A memory chip , comprising:a first set of pins configured to allow the memory chip to be coupled to a first microchip or device via first wiring;a second set of pins configured to allow the memory chip to be coupled to a second microchip or device via second wiring that is separate from the first wiring; anda data mover configured to facilitate access to the second microchip or device, via the second set of pins, to read data from the second microchip or device and write data to the second microchip or device.2. The memory chip of claim 1 , wherein data stored in a portion of the memory chip is accessible by or through the first microchip or device via the first set of pins.3. The memory chip of claim 2 , wherein the data mover is configured to combine the data stored in the portion of the memory chip by moving the data in blocks to the second microchip or device.4. The memory chip of claim 3 , wherein the blocks are at a granularity that is coarser than the data stored in the portion of the memory chip.5. The memory chip of claim 4 , wherein the data mover is configured to:buffer movement of changes to the data stored in the portion of the memory chip; andsend write requests to the second microchip or device in a suitable size due to the buffering by the data mover.6. 
The memory chip of claim 5 , wherein ...

25-03-2021 publication date

Exclusive or engine on random access memory

Number: US20210089663A1
Assignee: Micron Technology Inc

Methods and apparatus of an Exclusive OR (XOR) engine in a random access memory device to accelerate cryptographic operations in processors. For example, an integrated circuit memory device enclosed within a single integrated circuit package can include an XOR engine that is coupled with memory units in the random access memory device (e.g., having dynamic random access memory (DRAM) or non-volatile random access memory (NVRAM)). A processor (e.g., System-on-Chip (SoC) or Central Processing Unit (CPU)) can have encryption logic that performs cryptographic operations using XOR operations that are performed by the XOR engine in the random access memory device using the data in the random access memory device.
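The cryptographic role of an XOR engine is easy to illustrate: XOR is the mixing primitive of stream-cipher and one-time-pad style encryption, and applying it twice with the same keystream restores the original data. The function name and values below are illustrative only, not from the patent:

```python
# Illustrative sketch: XOR as the core primitive an in-memory engine
# could offload. XOR-ing data with a keystream encrypts it; XOR-ing the
# result with the same keystream decrypts it.
def xor_engine(data: bytes, keystream: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream))

plaintext = b"secret"
keystream = bytes([0x5A] * len(plaintext))
ciphertext = xor_engine(plaintext, keystream)
recovered = xor_engine(ciphertext, keystream)  # XOR twice restores data
```

Performing this operation inside the memory device, as the abstract proposes, avoids moving the data across the bus to the CPU just to XOR it.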

05-05-2022 publication date

Intelligent Content Migration with Borrowed Memory

Number: US20220138102A1
Assignee:

Systems, methods and apparatuses to intelligently migrate content involving borrowed memory are described. For example, after predicting a time period during which the network connection between computing devices sharing borrowed memory will degrade, the computing devices can make a migration decision for the content of a virtual memory address region, based at least in part on predicted usage of the content, a scheduled operation, a predicted operation, a battery level, etc. The migration decision can be made from a memory usage history, a battery usage history, a location history, etc. using an artificial neural network, and the content migration can be performed by remapping virtual memory regions in the memory maps of the computing devices.

03-06-2021 publication date

WRITING AND QUERYING OPERATIONS IN CONTENT ADDRESSABLE MEMORY SYSTEMS WITH CONTENT ADDRESSABLE MEMORY BUFFERS

Number: US20210165609A1
Assignee:

An apparatus (e.g., a content addressable memory system) can have a controller, a first content addressable memory coupled to the controller, and a second content addressable memory coupled to the controller. The controller can be configured to cause the first content addressable memory to write data in the first content addressable memory, cause the second content addressable memory to write the data in the second content addressable memory, and cause the second content addressable memory to query the data written in the second content addressable memory while the first content addressable memory continues to write the data in the first content addressable memory.
1.-20. (canceled)
21. An apparatus, comprising: a controller; a first content addressable memory coupled to the controller; and a second content addressable memory coupled to the controller; wherein the controller is configured to: cause the first content addressable memory to write data in the first content addressable memory for a first period of time; cause the second content addressable memory to write the data in the second content addressable memory for a second period of time that overlaps a first initial portion of the first period of time; and cause the second content addressable memory to query the data written in the second content addressable memory during a remaining portion of the first period of time.
22. The apparatus of claim 21, wherein the controller is further configured to cause the second content addressable memory to refrain from querying the data written in the second content addressable memory after the remaining portion of the first period of time.
23. The apparatus of claim 21, wherein the controller is further configured to cause the second content addressable memory to query the data written in the second content addressable memory while the first content addressable memory continues to write the data in the first content addressable memory.
24. The apparatus of claim 21, ...

04-06-2015 publication date

METHODS AND SYSTEMS FOR AUTONOMOUS MEMORY

Number: US20150153963A1
Assignee: MICRON TECHNOLOGY, INC.

A method, an apparatus, and a system have been disclosed. An embodiment of the method includes an autonomous memory device receiving a set of instructions, the memory device executing the set of instructions, combining the set of instructions with any data recovered from the memory device in response to the set of instructions into a packet, and transmitting the packet from the memory device. 1. A method comprising:receiving a set of instructions at an autonomous memory device;executing the set of instructions in the memory device;combining, into a packet, the set of instructions with any data recovered from the memory device in response to the set of instructions; andtransmitting the packet from the memory device.2. The method of wherein receiving the set of instructions at the memory device and transmitting the packet from the memory device respectively comprise receiving the set of instructions from a network coupled to the memory device and transmitting the packet to the network.3. The method of wherein receiving the set of instructions comprises receiving a packet comprising the set of instructions.4. The method of further comprising parsing the received packet.5. The method of wherein parsing the received packet comprises:loading a program counter with an initial program counter value associated with the received set of instructions;loading an instruction memory with the set of instructions; andloading a register file with a set of initial conditions associated with the set of instructions.6. The method of wherein executing the set of instructions comprises:calculating a new program counter value after executing a first instruction of the set of instructions; andstoring the new program counter value in the program counter.7. The method of wherein executing the set of instructions comprises incrementing the initial program counter value after executing a first instruction of the set of instructions.8. The method of wherein executing the set of instructions ...
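The packet flow this entry claims — execute a received instruction set against the device's storage, then combine the instructions with any recovered data into one outgoing packet — can be sketched as follows. The JSON packet layout and function name are hypothetical choices for illustration:

```python
import json

# Hypothetical sketch: an autonomous memory device executes a small
# instruction set against its storage, then combines the instructions
# with any recovered data into a single packet for transmission.
def execute_and_packetize(instructions, storage):
    recovered = []
    for op, key in instructions:
        if op == "read" and key in storage:
            recovered.append(storage[key])
    return json.dumps({"instructions": instructions, "data": recovered})

storage = {"a": 1, "b": 2}
packet = execute_and_packetize([["read", "a"], ["read", "missing"]], storage)
decoded = json.loads(packet)
```

Carrying the instructions alongside the results lets the receiving host (or the next device in a chain) know exactly which request produced the data.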

24-06-2021 publication date

MEMORY ACCESSING WITH AUTO-PRECHARGE

Number: US20210193209A1
Assignee:

Methods, systems, and devices for memory accessing with auto-precharge are described. For example, a memory system may be configured to support an activate with auto-precharge command, which may be associated with a memory device opening a page of memory cells, latching respective logic states stored by the memory cells at a row buffer, writing logic states back to the page of memory cells, and maintaining the latched logic states at the row buffer (e.g., while maintaining power to latches of the row buffer, after closing the page of memory cells, while the page of memory cells is closed).
1. A method, comprising: receiving, at a memory device, an access command from a host device; and accessing, at the memory device, a page of memory cells of the memory device based at least in part on the access command, wherein accessing the page of memory cells comprises: selecting each of the memory cells of the page; sensing a respective logic state of each of the memory cells; storing the respective logic state of each of the memory cells at a respective sense amplifier latch; rewriting the respective logic state of each of the memory cells to each of the respective memory cells; and deselecting each of the memory cells while the respective logic state is stored at the respective sense amplifier latch.
2. The method of claim 1, further comprising: maintaining the respective logic state of each of the memory cells at the respective sense amplifier latch until receiving another access command.
3. The method of claim 1, further comprising: receiving, at the memory device, a second access command; and transmitting, to the host device, the respective logic state of each of the memory cells based at least in part on the second access command.
4. The method of claim 1, further comprising: applying power to the plurality of sense amplifier latches based at least in part on the access command; and maintaining the power at the plurality of sense amplifier latches until ...

29-09-2022 publication date

FEATURE DICTIONARY FOR BANDWIDTH ENHANCEMENT

Number: US20220309291A1
Assignee:

A system having multiple devices that can host different versions of an artificial neural network (ANN) as well as different versions of a feature dictionary. In the system, encoded inputs for the ANN can be decoded by the feature dictionary, which allows for encoded input to be sent to a master version of the ANN over a network instead of an original version of the input which usually includes more data than the encoded input. Thus, by using the feature dictionary for training of a master ANN there can be reduction of data transmission. 1. A method , comprising:hosting, by a first computing device, a master version of an artificial neural network (ANN);hosting, by the first computing device, a master version of a feature dictionary;receiving, by the first computing device, encoded features from a second computing device, wherein the received encoded features are encoded by the second computing device according to a local version of the feature dictionary hosted by the second computing device;decoding, by the first computing device, the received encoded features according to the master version of the feature dictionary; andtraining, by the first computing device, the master version of the ANN based on the decoded features using machine learning.2. The method of claim 1 , comprising transmitting claim 1 , by the first computing device claim 1 , the trained master version of the ANN to the second computing device.3. The method of claim 1 , comprising receiving claim 1 , by the first computing device claim 1 , the local version of the feature dictionary from the second computing device.4. The method of claim 3 , comprising changing claim 3 , by the first computing device claim 3 , the master version of the feature dictionary based on the received local version of the feature dictionary.5. The method of claim 4 , wherein the decoding comprises decoding the encoded features according to the changed master version of the feature dictionary.6. The method of claim 1 , ...
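The bandwidth saving comes from sending short dictionary codes instead of full feature vectors; the receiver decodes them with its own copy of the dictionary before training. The dictionary contents and function names below are hypothetical:

```python
# Hypothetical sketch: a shared feature dictionary lets a device send
# compact integer codes instead of full feature vectors; the master
# device decodes them with its copy of the dictionary before training.
feature_dictionary = {
    0: [0.1, 0.2, 0.3],
    1: [0.9, 0.8, 0.7],
}

def encode(features, dictionary):
    inverse = {tuple(v): k for k, v in dictionary.items()}
    return [inverse[tuple(f)] for f in features]

def decode(codes, dictionary):
    return [dictionary[c] for c in codes]

raw = [[0.9, 0.8, 0.7], [0.1, 0.2, 0.3]]
codes = encode(raw, feature_dictionary)       # far smaller than raw
restored = decode(codes, feature_dictionary)
```

Each transmitted code is a single integer standing in for an entire vector, which is the data-transmission reduction the abstract describes.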

22-07-2021 publication date

Content addressable memory systems with content addressable memory buffers

Number: US20210225447A1
Assignee: Micron Technology Inc

An apparatus (e.g., a content addressable memory system) can have a controller; a first content addressable memory coupled to the controller; and a second content addressable memory coupled to the controller. The controller can be configured to cause the first content addressable memory to compare input data to first data stored in the first content addressable memory and cause the second content addressable memory to compare the input data to second data stored in the second content addressable memory, such that the input data is compared to the first and second data concurrently, and to replace a result of the comparison of the input data to the first data with a result of the comparison of the input data to the second data in response to determining that the first data is invalid and that the second data corresponds to the first data.
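The dual-CAM scheme — search both memories concurrently and let the second memory's result replace the first when the first entry is invalid — can be modeled in software. The dict-based CAM model and validity flags below are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical model of the dual-CAM scheme: both CAMs are searched
# concurrently; if the first CAM's matching entry is marked invalid and
# the second CAM holds corresponding data, the second result wins.
def cam_search(cam, key):
    return cam.get(key)  # (data, valid) tuple or None

def dual_cam_search(cam1, cam2, key):
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(cam_search, cam1, key)
        f2 = pool.submit(cam_search, cam2, key)
        res1, res2 = f1.result(), f2.result()
    if res1 and not res1[1] and res2:   # first invalid, second matches
        return res2[0]
    return res1[0] if res1 else None

cam1 = {"k": ("stale", False)}   # entry present but marked invalid
cam2 = {"k": ("fresh", True)}    # buffer holds the corresponding data
result = dual_cam_search(cam1, cam2, "k")
```

The second CAM thus acts as a buffer that masks invalid entries in the first without stalling the lookup.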

05-08-2021 publication date

Time to Live for Load Commands

Number: US20210240398A1
Assignee:

A memory sub-system configured to be responsive to a time to live requirement for load commands from a processor. For example, a load command issued by the processor (e.g., SoC) can include, or be associated with, an optional time to live parameter. The parameter requires that the data at the memory address be available within the time specified by the time to live parameter. When the requested data is currently in the lower speed memory (e.g., NAND flash) and not available in the higher speed memory (e.g., DRAM, NVRAM), the memory sub-system can determine that the data cannot be made available within the specified time and optionally skip the operations and return an error response immediately.
1. An apparatus, comprising: a host interface; a plurality of memory devices having different latencies in accessing data stored in the memory devices; and a controller coupled to the host interface and the memory devices, wherein when the host interface receives a command from a processor to load an item from a memory address, the controller is configured to at least: determine a requested time duration for responding to the command; identify a memory device, among the plurality of memory devices, that currently stores the item at the memory address; and determine whether a latency of the memory device is sufficient to retrieve the item from the memory device as a response to the command within the requested time duration.
2. The apparatus of claim 1, wherein the controller comprises one or more processors.
3. The apparatus of claim 2, wherein the controller is configured via one or more sequences of instructions executable by the one or more processors.
4. The apparatus of claim 1, wherein in response to a determination that the latency of the memory device is insufficient to retrieve the item from the memory device within the requested time duration, the controller is configured to provide the response to the command within the requested time duration; and ...
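The fail-fast check the abstract describes — compare the holding device's latency against the command's deadline and return an error immediately when it cannot be met — can be sketched as follows. The latency figures and function name are hypothetical:

```python
# Hypothetical sketch of the time-to-live check: the controller compares
# the known latency of the device holding the data against the command's
# deadline, and fails fast when the data cannot arrive in time.
DEVICE_LATENCY_US = {"DRAM": 0.1, "NVRAM": 1.0, "NAND": 100.0}

def load_with_ttl(device, ttl_us):
    latency = DEVICE_LATENCY_US[device]
    if latency > ttl_us:
        # Skip the read and report immediately, per the abstract.
        return ("error", "cannot satisfy time-to-live")
    return ("ok", latency)

fast = load_with_ttl("DRAM", ttl_us=10.0)   # meets the deadline
slow = load_with_ttl("NAND", ttl_us=10.0)   # error, no wasted read
```

Returning the error within the deadline lets the processor fall back (e.g., reissue without a TTL) instead of blocking on a slow medium.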

26-08-2021 publication date

Distributed Computing based on Memory as a Service

Number: US20210263856A1
Assignee:

Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time. 1. A method , comprising:executing first instructions in a computing device to process data, identified using virtual memory addresses in a virtual memory address region, based on mapping of the virtual memory address region into local memory of the computing device;changing from the mapping of the virtual memory address region into the local memory of the computing device into mapping of the virtual memory address region into local memory of a remote device that is connected to the computing device via a wired or wireless network connection;transmitting, from the computing device to the remote device over the wired or wireless network connection, at least a portion of content in the virtual memory address region; andtransmitting, from the computing device to the remote device, a request to execute, in the remote device, second instructions configured to process data in the virtual memory address region mapped to the local memory of the remote device.2. The method of claim 1 , further comprising:executing an application in the computing device, the application comprising the first instructions and the second instructions; andstoring data into the local memory of the ...

04-11-2021 publication date

Memory Management Unit (MMU) for Accessing Borrowed Memory

Number: US20210342274A1
Assignee:

Systems, methods and apparatuses to accelerate accessing of borrowed memory over network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to the random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory.
1. A device, comprising: a translation lookaside buffer; and a logic circuit coupled to the translation lookaside buffer and configured to: translate a virtual memory address into a physical memory address according to a virtual to physical memory map in the translation lookaside buffer; and access a memory, via a memory bus or a network communication interface, in accordance with the physical memory address; wherein in response to a remote computing apparatus lending a first random access memory of the remote computing apparatus to the device, the logic circuit is configured to change from mapping a first virtual memory region to a local memory accessible via the memory bus to mapping the first virtual memory region to a remote memory within the first random access memory of the remote computing apparatus accessible via the network communication interface.
2. The device of claim 1, further comprising: a microprocessor having a memory management unit, the memory management unit including the translation lookaside buffer and the logic circuit.
3. The device of claim 1, further comprising: the network communication interface; and the memory bus coupled to the local memory, wherein the local memory includes a second random ...

17-10-2019 publication date

MEMORY DEVICES WITH SELECTIVE PAGE-BASED REFRESH

Number: US20190318779A1
Author: Akel Ameen D.
Assignee:

Several embodiments of memory devices and systems with selective page-based refresh are disclosed herein. In one embodiment, a memory device includes a controller operably coupled to a main memory having at least one memory region comprising a plurality of memory pages. The controller is configured to track, in one or more refresh schedule tables stored on the memory device and/or on a host device, a subset of memory pages in the plurality of memory pages having a refresh schedule. In some embodiments, the controller is further configured to refresh the subset of memory pages in accordance with the refresh schedule.
1. A memory device comprising: a main memory including a memory region having a plurality of memory pages; and a controller operably coupled to the main memory, wherein the controller is configured to: track a first subset of the plurality of memory pages having a first refresh schedule and a second subset of the plurality of memory pages having a second refresh schedule that is different than the first refresh schedule, refresh the first subset of memory pages according to the first refresh schedule, and refresh the second subset of memory pages according to the second refresh schedule.
2. The memory device of claim 1, wherein the first subset is a contiguous range of memory pages, and the controller is configured to track the first subset using an identifier of a first page of the range and an identifier of a last page of the range.
3. The memory device of claim 1, wherein the first subset is a contiguous range of memory pages, and the controller is configured to track the first subset using an identifier of a first page of the range and a length of the range.
4. The memory device of claim 1, wherein the controller is further configured to remove an imprint from a first memory page in the first subset by repeatedly refreshing the first memory page.
5. The memory device of claim 1, wherein the controller is further configured to ...
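The refresh-schedule table the claims describe — contiguous page ranges tracked by first/last page and per-range intervals — can be modeled in a few lines. The table layout and interval values below are hypothetical:

```python
# Hypothetical sketch: a refresh-schedule table maps contiguous page
# ranges (first page, last page) to refresh intervals, so at each tick
# only the subsets that are due get refreshed.
refresh_table = [
    {"first": 0, "last": 3, "interval_ms": 64},    # frequently refreshed
    {"first": 4, "last": 7, "interval_ms": 256},   # infrequently refreshed
]

def pages_due(table, tick_ms):
    due = []
    for entry in table:
        if tick_ms % entry["interval_ms"] == 0:
            due.extend(range(entry["first"], entry["last"] + 1))
    return due

due_64 = pages_due(refresh_table, 64)     # only the 64 ms subset
due_256 = pages_due(refresh_table, 256)   # both subsets align here
```

Pages that tolerate longer retention are refreshed less often, which is the power saving selective page-based refresh targets.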

15-10-2020 publication date

Content addressable memory systems with content addressable memory buffers

Number: US20200327942A1
Assignee: Micron Technology Inc

An apparatus (e.g., a content addressable memory system) can have a controller; a first content addressable memory coupled to the controller; and a second content addressable memory coupled to the controller. The controller can be configured to cause the first content addressable memory to compare input data to first data stored in the first content addressable memory and cause the second content addressable memory to compare the input data to second data stored in the second content addressable memory, such that the input data is compared to the first and second data concurrently, and to replace a result of the comparison of the input data to the first data with a result of the comparison of the input data to the second data in response to determining that the first data is invalid and that the second data corresponds to the first data.

07-12-2017 publication date

METHODS AND SYSTEMS FOR AUTONOMOUS MEMORY SEARCHING

Number: US20170351737A1
Assignee:

Methods and systems operate to receive a plurality of search requests for searching a database in a memory system. The search requests can be stored in a FIFO queue and searches can be subsequently generated for each search request. The resulting plurality of searches can be executed substantially in parallel on the database. A respective indication is transmitted to a requesting host when either each respective search is complete or each respective search has generated search results.
1. A method of autonomous memory searching performed by a memory controller of a memory device, for searching a non-volatile memory of the memory device, the method comprising: processing a write command provided from a host to the memory device, wherein the write command is addressed to the memory device and includes a search request indicated by a predetermined bit in a logical block address field of the write command; initiating a search of the memory in response to the write command, wherein the logical block address field of the write command further includes bits indicating information to initiate the search, such that the logical block address field in the write command includes the predetermined bit and additional bits in place of a logical block address; and obtaining results of the search of the memory.
2. The method of claim 1, wherein the write command includes an indication of a search criteria and a search key.
3. The method of claim 2, wherein the search criteria includes one or more of: "equal to", "less than", "greater than", "not equal to", "less than or equal to", "greater than or equal to", "AND", "OR", or "NOT".
4. The method of claim 1, wherein the predetermined bit in the logical block address field further indicates search information is available to the memory device to perform the search request.
5. The method of claim 1, wherein the results include an indication that the search has been ...

29-10-2020 publication date

MEMORY DEVICES WITH SELECTIVE PAGE-BASED REFRESH

Number: US20200342933A1
Author: Akel Ameen D.
Assignee:

Several embodiments of memory devices and systems with selective page-based refresh are disclosed herein. In one embodiment, a memory device includes a controller operably coupled to a main memory having at least one memory region comprising a plurality of memory pages. The controller is configured to track, in one or more refresh schedule tables stored on the memory device and/or on a host device, a subset of memory pages in the plurality of memory pages configured to be refreshed according to a refresh schedule. In some embodiments, the controller is further configured to refresh the subset of memory pages in accordance with the refresh schedule. 1. A memory device having a plurality of memory pages , the memory device configured to:store a table indicating refresh schedules of the plurality of memory pages; andrefresh the plurality of memory pages according to the refresh schedules indicated by the table,wherein the refresh schedules include a first refresh schedule and a second refresh schedule different than the first refresh schedule,wherein the plurality of memory pages include a first subset of memory pages configured to be refreshed according to the first refresh schedule, andwherein the plurality of memory pages include a second subset of memory pages configured to be refreshed according to the second refresh schedule.2. The memory device of claim 1 , wherein:the first subset is a contiguous range of memory pages;the memory device is further configured to store an indicator of the range in the table; andthe indicator includes an identifier of a first page of the range and an identifier of a last page of the range.3. The memory device of claim 1 , wherein:the first subset is a contiguous range of memory pages;the memory device is further configured to store an indicator of the range in the table; andthe indicator includes an identifier of a first page of the range and a length of the range.4. The memory device of claim 1 , wherein:the refresh schedules ...

03-12-2020 publication date

Throttle Memory as a Service based on Connectivity Bandwidth

Number: US20200379808A1
Assignee:

Systems, methods and apparatuses to throttle network communications for memory as a service are described. For example, a computing device can borrow an amount of random access memory of the lender device over a communication connection between the lender device and the computing device. The computing device can allocate virtual memory to applications running in the computing device, and configure at least a portion of the virtual memory to be hosted on the amount of memory loaned by the lender device to the computing device. The computing device can throttle data communications used by memory regions in accessing the amount of memory over the communication connection according to the criticality levels of the contents stored in the memory regions. 1. A method , comprising:establishing a communication connection between a first device and a second device;obtaining permission for the first device to use an amount of memory at the second device over the communication connection;allocating virtual memory to applications running in the first device;configuring at least a portion of the virtual memory to be hosted on the amount of memory at the second device;identifying priority levels of contents in memory regions used by the applications; andallocating network bandwidth of the communication connection, based on the priority levels, to data communications used by the memory regions in accessing the amount of memory over the communication connection.2. The method of claim 1 , wherein the priority levels are identified based at least in part on categories of the contents.3. The method of claim 2 , wherein the priority levels are identified further based on priorities of the applications controlling the contents.4. The method of claim 3 , wherein the priority levels are identified further based on priorities requested by the applications controlling the contents.5. The method of claim 4 , wherein the priorities requested by the applications controlling the contents are ...
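Allocating connection bandwidth by priority level, as the claims describe, can be sketched as a simple proportional-share split. The proportional scheme, region names, and priority values are hypothetical, one of several plausible policies:

```python
# Hypothetical sketch: network bandwidth for borrowed-memory traffic is
# divided among memory regions in proportion to their priority levels.
def allocate_bandwidth(total_mbps, regions):
    total_priority = sum(r["priority"] for r in regions)
    return {r["name"]: total_mbps * r["priority"] / total_priority
            for r in regions}

regions = [
    {"name": "critical", "priority": 3},
    {"name": "normal", "priority": 2},
    {"name": "background", "priority": 1},
]
shares = allocate_bandwidth(600, regions)
```

Regions holding critical content keep responsive access to the lender device, while background regions absorb the slowdown when the connection is constrained.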

03-12-2020 publication date

Memory as a Service for Artificial Neural Network (ANN) Applications

Number: US20200379809A1
Assignee:

Systems, methods and apparatuses of Artificial Neural Network (ANN) applications implemented via Memory as a Service (MaaS) are described. For example, a computing system can include a computing device and a remote device. The computing device can borrow memory from the remote device over a wired or wireless network. Through the borrowed memory, the computing device and the remote device can collaborate with each other in storing an artificial neural network and in processing based on the artificial neural network. Some layers of the artificial neural network can be stored in the memory loaned by the remote device to the computing device. The remote device can perform the computation of the layers stored in the borrowed memory on behalf of the computing device. When the network connection degrades, the computing device can use an alternative module to function as a substitute of the layers stored in the borrowed memory. 1. A method implemented in a computing device , the method comprising:storing a first portion of an artificial neural network in local memory of the computing device, wherein a second portion of the artificial neural network in the computing device is stored in memory of a remote device, and the remote device and the computing device are connected via a wired or wireless network connection;executing an application in the computing device to generate an output of the first portion of the artificial neural network, wherein the second portion of the artificial neural network is configured to receive the output of the first portion of the artificial neural network as input to generate an output of the second portion of the artificial neural network;accessing, by the computing device, at least a portion of the memory of the remote device; andgenerating, in the computing device, a result corresponding to the output of the second portion of the artificial neural network.2. The method of claim 1 , wherein the accessing of the portion of the memory of the ...

Publication date: 03-12-2020

Intelligent Content Migration with Borrowed Memory

Number: US20200379908A1
Assignee: Micron Technology Inc

Systems, methods and apparatuses to intelligently migrate content involving borrowed memory are described. For example, after the prediction of a time period during which a network connection between computing devices having borrowed memory degrades, the computing devices can make a migration decision for content of a virtual memory address region, based at least in part on a predicted usage of content, a scheduled operation, a predicted operation, a battery level, etc. The migration decision can be made based on a memory usage history, a battery usage history, a location history, etc. using an artificial neural network; and the content migration can be performed by remapping virtual memory regions in the memory maps of the computing devices.

Publication date: 03-12-2020

Distributed Computing based on Memory as a Service

Number: US20200379913A1
Assignee:

Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time. 1. A method implemented in a computing device , the method comprising:executing a first application in the computing device;allocating a virtual memory address region to the first application, wherein the first application executed in the computing device stores data using virtual memory addresses in the virtual memory address region;generating a memory map that maps the virtual memory address region into local memory of the computing device;storing the data in the local memory of the computing device based on the memory map and in accordance with the virtual memory addresses;updating the memory map to map the virtual memory address region to local memory of a remote device that is connected to the computing device via a wired or wireless network connection;transmitting, over the wired or wireless network connection, at least a portion of content of the virtual memory address region to the remote device, in connection with the updating of the memory map; andtransmitting, from the computing device to the remote device, a request to execute a second application in the remote device, wherein the second application executed in the remote device processes the data in the ...

Publication date: 03-12-2020

Fine Grain Data Migration to or from Borrowed Memory

Number: US20200379914A1
Assignee:

Systems, methods and apparatuses of fine grain data migration in using Memory as a Service (MaaS) are described. For example, a memory status map can be used to identify the cache availability of sub-regions (e.g., cache lines) of a borrowed memory region (e.g., a borrowed remote memory page). Before accessing a virtual memory address in a sub-region, the memory status map is checked. If the sub-region has cache availability in the local memory, the memory management unit uses a physical memory address converted from the virtual memory address to make memory access. Otherwise, the sub-region is cached from the borrowed memory region to the local memory, before the physical memory address is used. 1. A method implemented in a computing device , the method comprising:accessing an amount of memory from a remote device that is connected through a wired or wireless network to the computing device;allocating a virtual memory address region to address a portion of the amount of memory loaned by the remote device to the computing device;configuring a physical memory region in the computing device as a cache of the portion of the amount of memory at the remote device;storing, in the computing device, a virtual to physical memory map that identifies a mapping between the virtual memory address region and a physical memory address region corresponding to the physical memory region;storing, in the computing device, a memory status map that identifies cache availability statuses of sub-regions of the virtual memory address region, each of the cache availability statuses indicating whether content of a corresponding sub-region is available in the physical memory region; and converting the virtual memory address into a physical memory address using the virtual to physical memory map;', 'identifying, in the sub-regions of the virtual memory address region, a corresponding sub-region that contains the virtual memory address;', 'determining, from the memory status map, that the ...
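The status-map check described above can be sketched in a few lines. This is a hedged toy model, not the patented mechanism: the 4 KiB page size, 64-byte sub-region size, and the `BorrowedPage` class are illustrative assumptions.

```python
# Toy model of fine-grain caching of a borrowed page: before a virtual
# address is accessed, a memory status map says whether its sub-region
# (cache line) is already present locally; if not, the sub-region is
# fetched from the lender first. All names and sizes are illustrative.
PAGE_SIZE = 4096
LINE_SIZE = 64

class BorrowedPage:
    def __init__(self, remote):
        self.remote = remote                              # content at the lender
        self.local = bytearray(PAGE_SIZE)                 # local cache of the page
        self.cached = [False] * (PAGE_SIZE // LINE_SIZE)  # memory status map
        self.fetches = 0

    def read(self, offset):
        line = offset // LINE_SIZE
        if not self.cached[line]:                         # miss: pull the sub-region
            start = line * LINE_SIZE
            self.local[start:start + LINE_SIZE] = self.remote[start:start + LINE_SIZE]
            self.cached[line] = True
            self.fetches += 1
        return self.local[offset]

page = BorrowedPage(bytes(range(256)) * (PAGE_SIZE // 256))
v1 = page.read(70)   # first access to line 1: fetched from the lender
v2 = page.read(71)   # same sub-region: served locally, no new fetch
```

Only the first access triggers a transfer; the second hit in the same sub-region is served from local memory, which is the point of tracking availability per sub-region rather than per page.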

Publication date: 03-12-2020

Memory Management Unit (MMU) for Accessing Borrowed Memory

Number: US20200379919A1
Assignee:

Systems, methods and apparatuses to accelerate accessing of borrowed memory over network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to the random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instruct the communication device to access the borrowed memory. 1. A computing device , comprising:a communication device;random access memory; andat least one microprocessor having a memory management unit, registers, and execution units, the memory management unit coupled to the random access memory and the communication device;wherein the computing device is configured to access an amount of memory at a remote device over a network connection through the communication device;wherein the execution units are configured to execute instructions using at least virtual memory addresses mapped to the amount of memory at the remote device; and retrieve a first virtual memory address from the registers for execution of an instruction in the execution units;', 'translate the first virtual memory address into a first physical address, the first physical address identifying the remote device over the network connection and a second virtual memory address; and', 'instruct the communication device to access the memory at the remote device over the network connection using the second virtual memory address., 'wherein the memory management unit is configured to2. The computer device of claim 1 , wherein the first physical address includes a computer network address of the remote device.3. The ...
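The translation step can be illustrated with a loose sketch: a virtual page maps either to local RAM or to a (remote device, remote address) pair that the communication device can use. The page-table layout, device name, and helper below are invented for illustration and are not the claimed MMU design.

```python
# Illustrative page table: a virtual page is backed either by a local
# physical frame or by a page of borrowed memory on a remote lender.
PAGE = 4096

page_table = {
    0: ("local", 7),                 # virtual page 0 -> local physical frame 7
    1: ("remote", "lender-A", 42),   # virtual page 1 -> page 42 on "lender-A"
}

def translate(vaddr):
    entry = page_table[vaddr // PAGE]
    offset = vaddr % PAGE
    if entry[0] == "local":
        return ("local", entry[1] * PAGE + offset)
    _, device, rpage = entry
    # remote case: identify the lender and the address to use over the network
    return ("remote", device, rpage * PAGE + offset)

local = translate(0x10)          # backed by local RAM
remote = translate(PAGE + 0x10)  # backed by borrowed memory on lender-A
```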

Publication date: 03-12-2020

Inter Operating System Memory Services over Communication Network Connections

Number: US20200382590A1
Assignee:

Systems, methods and apparatuses to provide memory as a service are described. For example, a borrower device is configured to: communicate with a lender device; borrow an amount of memory from the lender device; expand memory capacity of the borrower device for applications running on the borrower device, using at least the local memory of the borrower device and the amount of memory borrowed from the lender device; and service accesses by the applications to memory via communication link between the borrower device and the lender device. 1. A computing device , the computing device being a borrower device , comprising:a communication device configured to communicate over a wired or wireless network connection;local memory; andat least one microprocessor coupled to the local memory, the microprocessor having a memory management unit configured to convert virtual memory addresses used by the microprocessor into physical memory addresses of the local memory and access the local memory according to the physical memory addresses converted from the virtual memory address; communicate, using the communication device, with a lender device;', 'access an amount of memory at the lender device;', 'expand memory capacity of the borrower device for applications running on the at least one microprocessor, using an amount of local memory on the borrower device and an amount of memory on the lender device; and', 'service accesses made by the applications to virtual memory by communication between the borrower device and the lender device via communication device., 'wherein the borrower device is configured to2. 
The borrower device of claim 1 , wherein the borrower device is further configured to:allocate a page of virtual memory to an application and map this page to a page in the loaned memory of the lender device, the loaned memory being a part of memory of the lender device loaned or provisioned by the lender device to the borrower device for memory allocations and accesses; ...

Publication date: 12-01-2023

PROGRAMMABLE METADATA

Number: US20230009642A1
Assignee:

Methods, systems, and devices for programmable metadata and related operations are described. A method may include receiving signaling that indicates a set of rules for transitions of states of metadata at a memory device storing the metadata. The memory device may receive a command from a host device associated with a set of data after receiving the set of rules. The memory device may transition metadata associated with the set of data stored at the memory device from a first state to a second state based in part on the set of rules and the command. The memory device may execute the command received from the host device.

Publication date: 13-07-2021

Distributed computing based on memory as a service

Number: US11061819B2
Assignee: Micron Technology Inc

Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.

Publication date: 08-02-2023

Memory accessing with auto-precharge

Number: EP4059017A4
Assignee: Micron Technology Inc

Publication date: 02-03-2023

In-memory associative processing for vectors

Number: US20230065783A1
Assignee: Micron Technology Inc

Methods, systems, and devices for in-memory associative processing for vectors are described. A device may perform a computational operation on a first set of contiguous bits of a first vector and a first set of contiguous bits of a second vector. The first sets of contiguous bits may be stored in a first plane of a memory die, and the computational operation may be based on a truth table for the computational operation. The device may perform a second computational operation on a second set of contiguous bits of the first vector and a second set of contiguous bits of the second vector. The second sets of contiguous bits may be stored in a second plane of the memory die, and the second computational operation may be based on the truth table for the computational operation.
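The plane-by-plane, truth-table-driven evaluation can be shown with a toy model. The `AND_TABLE`, the `associative_op` helper, and the 4-bit plane width are assumptions for illustration; the actual device resolves these matches in content-addressable memory, not in Python.

```python
# Each bit-pair result is looked up in a truth table rather than computed
# by an ALU; each plane processes its own contiguous slice of the vectors.
AND_TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def associative_op(bits_a, bits_b, table):
    # resolve every bit pair by matching against the truth table
    return [table[(a, b)] for a, b in zip(bits_a, bits_b)]

# operand vectors split across two planes of four contiguous bits each
a = [1, 0, 1, 1, 0, 1, 0, 0]
b = [1, 1, 0, 1, 0, 0, 1, 0]
plane0 = associative_op(a[:4], b[:4], AND_TABLE)
plane1 = associative_op(a[4:], b[4:], AND_TABLE)
result = plane0 + plane1
```

Swapping in a different table (e.g., for XOR) changes the operation without changing the evaluation machinery, which is the appeal of truth-table-driven associative processing.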

Publication date: 03-12-2020

Distributed computing based on memory as a service

Number: WO2020242681A1
Assignee: MICRON TECHNOLOGY, INC.

Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.

Publication date: 03-12-2020

Intelligent content migration with borrowed memory

Number: WO2020242682A1
Assignee: MICRON TECHNOLOGY, INC.

Systems, methods and apparatuses to intelligently migrate content involving borrowed memory are described. For example, after the prediction of a time period during which a network connection between computing devices having borrowed memory degrades, the computing devices can make a migration decision for content of a virtual memory address region, based at least in part on a predicted usage of content, a scheduled operation, a predicted operation, a battery level, etc. The migration decision can be made based on a memory usage history, a battery usage history, a location history, etc. using an artificial neural network; and the content migration can be performed by remapping virtual memory regions in the memory maps of the computing devices.

Publication date: 02-08-2023

Feature dictionary for bandwidth enhancement

Number: EP4018386A4
Assignee: Micron Technology Inc

Publication date: 19-03-2024

Methods for performing processing-in-memory operations, and related memory devices and systems

Number: US11934824B2
Assignee: Micron Technology Inc

Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-parallel way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass, as the bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of a memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.

Publication date: 02-04-2024

Memory device with on-die cache

Number: US11947453B2
Assignee: Micron Technology Inc

An example memory sub-system includes: a plurality of bank groups, wherein each bank group comprises a plurality of memory banks; a plurality of row buffers, wherein two or more row buffers of the plurality of row buffers are associated with each memory bank; a cache comprising a plurality of cache lines; a processing logic communicatively coupled to the plurality of bank groups and the plurality of row buffers, the processing logic to perform operations comprising: receiving an activate command specifying a row of a memory bank of the plurality of memory banks; fetching data from the specified row to a row buffer of the plurality of row buffers; and copying the data to a cache line of the plurality of cache lines.
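The activate/fetch/copy flow can be modeled roughly as below. The `Bank` class, two-buffer count, and row contents are illustrative; the abstract specifies two or more row buffers per bank and a shared cache, and this sketch only mirrors that shape.

```python
# Rough model of the on-die cache flow: an ACTIVATE command fetches the
# addressed row into a row buffer, and the data is then copied into a
# cache line keyed by the row number. Names and sizes are illustrative.
class Bank:
    def __init__(self, rows):
        self.rows = rows
        self.row_buffers = [None, None]   # two row buffers for this bank
        self.cache = {}                   # cache lines, keyed by row number

    def activate(self, row, buf=0):
        self.row_buffers[buf] = self.rows[row][:]   # fetch row to row buffer
        self.cache[row] = self.row_buffers[buf][:]  # copy data to a cache line

bank = Bank({3: [5, 6, 7, 8]})
bank.activate(3)   # row 3 now sits in both the row buffer and the cache
```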

Publication date: 02-03-2023

In-memory associative processing system

Number: DE102022121773A1
Assignee: Micron Technology Inc

Methods, systems, and devices for in-memory associative processing are described. A device may receive a set of instructions indicating a first vector and a second vector as operands for a computational operation. The device may select, from a set of vector mapping schemes, a vector mapping scheme for performing the computational operation using associative processing. The device may write the first vector and the second vector to a set of planes, each comprising an array of content-addressable memory cells, based on the selected vector mapping scheme.

Publication date: 28-07-2022

Throttle Memory as a Service based on Connectivity Bandwidth

Number: US20220237039A1
Assignee: Micron Technology Inc

Systems, methods and apparatuses to throttle network communications for memory as a service are described. For example, a computing device can borrow an amount of random access memory of the lender device over a communication connection between the lender device and the computing device. The computing device can allocate virtual memory to applications running in the computing device, and configure at least a portion of the virtual memory to be hosted on the amount of memory loaned by the lender device to the computing device. The computing device can throttle data communications used by memory regions in accessing the amount of memory over the communication connection according to the criticality levels of the contents stored in the memory regions.

Publication date: 13-02-2024

Redundant computing across planes

Number: US11899961B2
Assignee: Micron Technology Inc

Methods, systems, and devices for redundant computing across planes are described. A device may perform a computational operation on first data that is stored in a first plane that includes content-addressable memory cells. The first data may be representative of a set of contiguous bits of a vector. The device may perform, concurrent with performing the computational operation on the first data, the computational operation on second data that is stored in a second plane. The second data may be representative of the set of contiguous bits of the vector. The device may read from the first plane and write to the second plane, third data representative of a result of the computational operation on the first data.

Publication date: 27-09-2023

Exclusive or engine on random access memory

Number: EP4035052A4
Assignee: Micron Technology Inc

Publication date: 01-04-2021

Exclusive or engine on random access memory

Number: WO2021061596A1
Assignee: MICRON TECHNOLOGY, INC.

Methods and apparatus of an Exclusive OR (XOR) engine in a random access memory device to accelerate cryptographic operations in processors. For example, an integrated circuit memory device enclosed within a single integrated circuit package can include an XOR engine that is coupled with memory units in the random access memory device (e.g., having dynamic random access memory (DRAM) or non-volatile random access memory (NVRAM)). A processor (e.g., System-on-Chip (SoC) or Central Processing Unit (CPU)) can have encryption logic that performs cryptographic operations using XOR operations that are performed by the XOR engine in the random access memory device using the data in the random access memory device.
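A minimal sketch of the offload idea: the processor hands the memory device an operand, and the device XORs it against data it already holds, so the data never has to make a round trip for XOR-heavy constructions. The `XorEngine` class is an illustrative stand-in, not the patented circuit.

```python
# The "engine" XORs an operand into a buffer held inside the memory
# device and returns the result; XORing the same operand twice restores
# the original data, the property cryptographic mixing relies on.
class XorEngine:
    def __init__(self, memory):
        self.memory = bytearray(memory)

    def xor_in_place(self, addr, operand):
        for i, b in enumerate(operand):
            self.memory[addr + i] ^= b
        return bytes(self.memory[addr:addr + len(operand)])

engine = XorEngine(b"\x00" * 8)
ct = engine.xor_in_place(0, b"\x0f\xf0")   # mix the operand in
pt = engine.xor_in_place(0, b"\x0f\xf0")   # same XOR again restores zeros
```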

Publication date: 18-10-2023

Flexible provisioning of multi-tier memory

Number: EP4031982A4
Assignee: Micron Technology Inc

Publication date: 25-03-2021

Flexible provisioning of multi-tier memory

Number: WO2021055209A1
Assignee: MICRON TECHNOLOGY, INC.

A system having a string of memory chips that can implement flexible provisioning of a multi-tier memory. In some examples, the system can include a first memory chip in a string of memory chips of a memory, a second memory chip in the string, and a third memory chip in the string. The first memory chip can be directly wired to the second memory chip and can be configured to interact directly with the second memory chip. The second memory chip can be directly wired to the third memory chip and can be configured to interact directly with the third memory chip. As part of implementing the flexible provisioning of a multi-tier memory, the first memory chip can include a cache for the second memory chip, and the second memory chip can include a buffer for the third memory chip.
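The cache/buffer roles in the three-chip string can be approximated with a tiered lookup. This is a loose functional model only: the `TieredString` class, dict-based tiers, and write-through behavior are assumptions, not the wiring or protocol the patent describes.

```python
# Loose model of the string: the first chip caches reads for the second
# chip, and the second chip buffers data on its way to the third chip.
class TieredString:
    def __init__(self):
        self.tier1_cache = {}   # first chip: cache for the second chip
        self.tier2 = {}         # second chip: store + buffer for chip 3
        self.tier3 = {}         # third chip: slowest, largest tier

    def write(self, addr, data):
        self.tier2[addr] = data          # buffered in the second chip
        self.tier3[addr] = data          # eventually reaches the third chip

    def read(self, addr):
        if addr in self.tier1_cache:     # fast path: chip-1 cache hit
            return self.tier1_cache[addr]
        data = self.tier2.get(addr, self.tier3.get(addr))
        self.tier1_cache[addr] = data    # fill the chip-1 cache
        return data

s = TieredString()
s.write(0x100, b"payload")
first = s.read(0x100)    # served from tier 2, then cached on chip 1
second = s.read(0x100)   # now served from the chip-1 cache
```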

Publication date: 23-08-2022

Writing and querying operations in content addressable memory systems with content addressable memory buffers

Number: US11422748B2
Assignee: Micron Technology Inc

An apparatus (e.g., a content addressable memory system) can have a controller, a first content addressable memory coupled to the controller, and a second content addressable memory coupled to the controller. The controller can be configured to cause the first content addressable memory to write data in the first content addressable memory, cause the second content addressable memory to write the data in the second content addressable memory, and cause the second content addressable memory to query the data written in the second content addressable memory while the first content addressable memory continues to write the data in the first content addressable memory.

Publication date: 16-02-2022

Content addressable memory systems with content addressable memory buffers

Number: EP3953936A1
Assignee: Micron Technology Inc

An apparatus (e.g., a content addressable memory system) can have a controller; a first content addressable memory coupled to the controller; and a second content addressable memory coupled to the controller. The controller can be configured to cause the first content addressable memory to compare input data to first data stored in the first content addressable memory and cause the second content addressable memory to compare the input data to second data stored in the second content addressable memory, such that the input data is compared to the first and second data concurrently, and replace a result of the comparison of the input data to the first data with a result of the comparison of the input data to the second data in response to determining that the first data is invalid and that the second data corresponds to the first data.

Publication date: 15-10-2020

Content addressable memory systems with content addressable memory buffers

Number: WO2020210097A1
Assignee: MICRON TECHNOLOGY, INC.

An apparatus (e.g., a content addressable memory system) can have a controller; a first content addressable memory coupled to the controller; and a second content addressable memory coupled to the controller. The controller can be configured to cause the first content addressable memory to compare input data to first data stored in the first content addressable memory and cause the second content addressable memory to compare the input data to second data stored in the second content addressable memory, such that the input data is compared to the first and second data concurrently, and replace a result of the comparison of the input data to the first data with a result of the comparison of the input data to the second data in response to determining that the first data is invalid and that the second data corresponds to the first data.

Publication date: 28-12-2022

Content addressable memory systems with content addressable memory buffers

Number: EP3953936A4
Assignee: Micron Technology Inc

Publication date: 09-01-2024

Content addressable memory systems with content addressable memory buffers

Number: US11869589B2
Assignee: Micron Technology Inc

An apparatus (e.g., a content addressable memory system) can have a controller; a first content addressable memory coupled to the controller; and a second content addressable memory coupled to the controller. The controller can be configured to cause the first content addressable memory to compare input data to first data stored in the first content addressable memory and cause the second content addressable memory to compare the input data to second data stored in the second content addressable memory, such that the input data is compared to the first and second data concurrently, and replace a result of the comparison of the input data to the first data with a result of the comparison of the input data to the second data in response to determining that the first data is invalid and that the second data corresponds to the first data.

Publication date: 08-12-2022

Memory chip having an integrated data mover

Number: US20220391330A1
Assignee: Micron Technology Inc

A memory chip having a first set of pins configured to allow the memory chip to be coupled to a first microchip or device via first wiring. The memory chip also having a second set of pins configured to allow the memory chip to be coupled to a second microchip or device via second wiring that is separate from the first wiring. The memory chip also having a data mover configured to facilitate access to the second microchip or device, via the second set of pins, to read data from the second microchip or device and write data to the second microchip or device. Also, a system having the memory chip, the first microchip or device, and the second microchip or device.

Publication date: 13-07-2022

Performing processing-in-memory operations related to spiking events, and related methods, systems and devices

Number: EP4026061A1
Assignee: Micron Technology Inc

Methods, apparatuses, and systems for in- or near-memory processing are described. Spiking events in a spiking neural network may be processed via a memory system. A memory system may store a group of destination neurons and, at each time interval in a series of time intervals of a spiking neural network (SNN), pass through a group of pre-synaptic spike events from respective source neurons, wherein the group of pre-synaptic spike events is subsequently stored in memory.

Publication date: 11-03-2021

Performing processing-in-memory operations related to spiking events, and related methods, systems and devices

Number: WO2021046569A1
Assignee: MICRON TECHNOLOGY, INC.

Methods, apparatuses, and systems for in- or near-memory processing are described. Spiking events in a spiking neural network may be processed via a memory system. A memory system may store a group of destination neurons and, at each time interval in a series of time intervals of a spiking neural network (SNN), pass through a group of pre-synaptic spike events from respective source neurons, wherein the group of pre-synaptic spike events is subsequently stored in memory.

Publication date: 01-12-2022

Error control for content-addressable memory

Number: US20220382609A1
Assignee: Micron Technology Inc

Methods, systems, and devices for error control for content-addressable memory (CAM) are described. A CAM may store bit vectors as a set of subvectors, which each subvector stored in an independent aspect of the CAM, such as in a separate column or array of memory cells within the CAM. The CAM may similarly segment a queried input bit vector and identify, for each resulting input subvector, whether a matching subvector is stored by the CAM. The CAM may identify a match for the input bit vector when the number of matching subvectors satisfies a threshold. The CAM may validate a match based on comparing a stored bit vector corresponding to the identified match to the input bit vector. The stored bit vector may undergo error correction and may be stored in the CAM or another memory array, such as a dynamic random access memory (DRAM) array.
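The thresholded subvector match can be sketched as follows. The subvector count (4), threshold (3 of 4), and helper names are illustrative assumptions; the abstract's validation step against an error-corrected stored copy is omitted here.

```python
# A stored vector and the query are split into subvectors; a stored
# vector is a candidate match when enough of its subvectors match the
# query, which tolerates a bounded number of bit errors per vector.
def split(bits, k):
    n = len(bits) // k
    return [bits[i * n:(i + 1) * n] for i in range(k)]

def cam_match(stored, query, k=4, threshold=3):
    hits = sum(s == q for s, q in zip(split(stored, k), split(query, k)))
    return hits >= threshold

stored = "1011" "0010" "1110" "0001"
noisy  = "1011" "0010" "1111" "0001"   # one flipped bit in subvector 2

is_match = cam_match(stored, noisy)     # 3 of 4 subvectors match
no_match = cam_match(stored, "0000" * 4)
```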

Publication date: 07-12-2023

Memory device security and row hammer mitigation

Number: US20230393770A1
Assignee: Micron Technology Inc

Systems, methods, and apparatus for memory device security and row hammer mitigation are described. A control mechanism may be implemented in a front-end and/or a back-end of a memory sub-system to refresh rows of the memory. A row activation command having a row address may be received at control circuitry of a memory sub-system, and a first count of a row counter corresponding to the row address, stored in a content addressable memory (CAM) of the memory sub-system, may be incremented. Control circuitry may determine whether the first count is greater than a row hammer threshold (RHT) minus a second count of a CAM decrease counter (CDC); the second count may be incremented each time the CAM is full. A refresh command to the row address may be issued when a determination is made that the first count is greater than the RHT minus the second count.
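The counter arithmetic in the abstract can be modeled in a few lines. The CAM capacity, the tiny RHT value, and the FIFO-style eviction are illustrative assumptions; real thresholds are orders of magnitude larger.

```python
# Per-row activation counts live in a CAM; each time the CAM fills, the
# CAM-decrease counter (CDC) grows, effectively lowering the refresh
# trigger point; a refresh fires when count > RHT - CDC.
RHT = 5          # row hammer threshold (illustratively small)
CAM_CAPACITY = 2

row_counts = {}  # CAM: row address -> activation count
cdc = 0
refreshed = []

def on_activate(row):
    global cdc
    if row not in row_counts:
        if len(row_counts) >= CAM_CAPACITY:   # CAM full: evict, bump CDC
            cdc += 1
            row_counts.pop(next(iter(row_counts)))
        row_counts[row] = 0
    row_counts[row] += 1
    if row_counts[row] > RHT - cdc:           # issue refresh to this row
        refreshed.append(row)
        row_counts[row] = 0

for _ in range(6):       # hammer one row repeatedly
    on_activate(0x1A)
```

With an empty CAM the CDC stays at zero, so the sixth activation of row `0x1A` is the first to exceed `RHT - CDC` and triggers a refresh.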

Publication date: 07-12-2023

Row hammer mitigation using a victim cache

Number: US20230393992A1
Assignee: Micron Technology Inc

Row hammer attacks take advantage of an unintended and undesirable side effect of memory devices: memory cells interact electrically by leaking charge, possibly changing the contents of nearby memory rows that were not addressed in the original memory access. Such attacks are mitigated by using a victim cache: data is written to cache lines of a cache, and the least recently used cache line of the cache is written to the victim cache.
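The eviction path can be sketched as below: instead of dropping the least recently used line (and forcing a row reactivation on the next access), it moves into a small victim cache. The class and sizes are illustrative assumptions, not the patented design.

```python
# Main cache with an attached victim cache: LRU evictions from the main
# cache land in the victim cache rather than being discarded.
from collections import OrderedDict

class CacheWithVictim:
    def __init__(self, size, victim_size):
        self.main = OrderedDict()     # insertion order tracks recency
        self.victim = OrderedDict()
        self.size, self.victim_size = size, victim_size

    def write(self, addr, data):
        if addr in self.main:
            self.main.move_to_end(addr)            # refresh recency
        elif len(self.main) >= self.size:
            old_addr, old_data = self.main.popitem(last=False)  # LRU line
            if len(self.victim) >= self.victim_size:
                self.victim.popitem(last=False)
            self.victim[old_addr] = old_data       # spill into victim cache
        self.main[addr] = data

c = CacheWithVictim(size=2, victim_size=2)
c.write(0xA, "a"); c.write(0xB, "b")
c.write(0xC, "c")   # 0xA is the LRU line: evicted into the victim cache
```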

Publication date: 14-11-2023

Systems and methods for reducing latency in cloud services

Number: US11817087B2
Author: Ameen D. Akel
Assignee: Micron Technology Inc

Systems and methods for distributing cloud-based language processing services to partially execute in a local device to reduce latency perceived by the user. For example, a local device may receive a request via audio input that requires a cloud-based service to process the request and generate a response. A partial response may be generated locally and played back while a more complete response is generated remotely.

Publication date: 03-12-2020

Inter operating system memory services over communication network connections

Number: WO2020242663A1
Assignee: MICRON TECHNOLOGY, INC.

Systems, methods and apparatuses to provide memory as a service are described. For example, a borrower device is configured to: communicate with a lender device; borrow an amount of memory from the lender device; expand the memory capacity of the borrower device for applications running on the borrower device, using at least the local memory of the borrower device and the amount of memory borrowed from the lender device; and service accesses by the applications to memory via a communication link between the borrower device and the lender device.

Publication date: 14-03-2024

Sequence alignment with memory arrays

Number: US20240086100A1
Assignee: Micron Technology Inc

A memory device may be used to implement a Bloom filter. In some examples, the memory device may include a memory array to perform a multiply-accumulate operation to implement the Bloom filter. The memory device may store multiple portions of a reference genetic sequence in the memory array and compare the portions of the reference genetic sequence to a read sequence in parallel by performing the multiply-accumulate operation. The results of the multiply-accumulate operation between the read sequence and the portions of the reference genetic sequence may be used to determine where the read sequence aligns to the reference sequence.
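The multiply-accumulate matching idea can be illustrated in plain Python: one-hot encode the read and each reference window, and the dot product counts matching bases. This is only a software analogy for the in-array operation, not the patented implementation.

```python
BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA sequence as a flat 0/1 list (4 bits per base)."""
    bits = []
    for b in seq:
        bits.extend(1 if b == base else 0 for base in BASES)
    return bits

def align_scores(reference, read):
    """Score every reference window against the read.

    Each score is a multiply-accumulate between the one-hot read and a
    one-hot reference window: it counts matching bases, mirroring how a
    memory array could score many windows in parallel.
    """
    r = one_hot(read)
    n = len(read)
    scores = []
    for i in range(len(reference) - n + 1):
        w = one_hot(reference[i:i + n])
        # Multiply-accumulate: counts positions where read and window agree
        scores.append(sum(a * b for a, b in zip(r, w)))
    return scores
```

The window with the highest score is the most plausible alignment location; `align_scores("ACGTACGTTG", "ACGT")` scores the perfect matches at offsets 0 and 4 with the full read length, 4.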

Publication date: 14-03-2024

Sequence alignment with memory arrays

Number: US20240087643A1
Assignee: Micron Technology Inc

A memory device may be used to implement a Bloom filter. In some examples, the memory device may include a memory array to perform a multiply-accumulate operation to implement the Bloom filter. The memory device may store multiple portions of a reference genetic sequence in the memory array and compare the portions of the reference genetic sequence to a read sequence in parallel by performing the multiply-accumulate operation. The results of the multiply-accumulate operation between the read sequence and the portions of the reference genetic sequence may be used to determine where the read sequence aligns to the reference sequence.

Publication date: 24-08-2023

Parity-based error management for a processing system

Number: US20230267043A1
Assignee: Micron Technology Inc

Methods, systems, and devices for parity-based error management are described. A processing system that performs a computational operation on a set of operands may also perform a computational operation (e.g., the same computational operation) on the parity bits for those operands. The processing system may then use the parity bits that result from the computational operation on the parity bits to detect, and optionally correct, one or more errors in the output that results from the computational operation on the operands.
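For a bitwise operation such as XOR, the parity of the result equals the XOR of the operand parities, which is what lets a parity datapath predict the output parity and flag mismatches. A small sketch; the `fault` parameter, which flips result bits to model a datapath error, is purely illustrative.

```python
def parity(x):
    """Parity bit of an integer: 1 if its popcount is odd, else 0."""
    return bin(x).count("1") & 1

def xor_with_parity_check(a, b, fault=0):
    """Compute a ^ b and cross-check the result via parity bits.

    For XOR, parity(a ^ b) == parity(a) ^ parity(b), so performing the
    same operation on the parity bits predicts the result's parity.
    """
    result = (a ^ b) ^ fault           # main datapath (possibly faulty)
    predicted = parity(a) ^ parity(b)  # parity datapath
    error_detected = parity(result) != predicted
    return result, error_detected
```

Any odd number of flipped bits changes the result's parity and is detected; an even number of flips would escape this single-parity check.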

Publication date: 28-11-2023

Spatiotemporal fused-multiply-add, and related systems, methods and devices

Number: US11829729B2
Assignee: Micron Technology Inc

Systems, apparatuses, and methods of operating memory systems are described, including processing-in-memory capable memory devices and methods of performing fused-multiply-add operations within them. Bit positions of bits stored at one or more portions of one or more memory arrays may be accessed via data lines by activating the same or different access lines. A sensing circuit operatively coupled to a data line may be temporarily formed and measured to determine a state (e.g., a count of the number of bits that are a logic “1”) of accessed bit positions of a data line, and the state information may be used to determine a computational result.

Publication date: 29-12-2022

Inter Operating System Memory Services over Communication Network Connections

Number: US20220417326A1
Assignee: Micron Technology Inc

Systems, methods and apparatuses to provide memory as a service are described. For example, a borrower device is configured to: communicate with a lender device; borrow an amount of memory from the lender device; expand the memory capacity of the borrower device for applications running on the borrower device, using at least the local memory of the borrower device and the amount of memory borrowed from the lender device; and service accesses by the applications to memory via a communication link between the borrower device and the lender device.

Publication date: 29-06-2022

Machine learning with feature obfuscation

Number: EP4018391A1
Assignee: Micron Technology Inc

A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, inputs for the ANN can be obfuscated for centralized training of a master version of the ANN at a first computing device. A second computing device in the system includes memory that stores a local version of the ANN and user data for inputting into the local version. The second computing device includes a processor that extracts features from the user data and obfuscates the extracted features to generate obfuscated user data. The second device includes a transceiver that transmits the obfuscated user data. The first computing device includes a memory that stores the master version of the ANN, a transceiver that receives obfuscated user data transmitted from the second computing device, and a processor that trains the master version based on the received obfuscated user data using machine learning.

Publication date: 25-02-2021

Machine learning with feature obfuscation

Number: WO2021034602A1
Assignee: MICRON TECHNOLOGY, INC.

A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, inputs for the ANN can be obfuscated for centralized training of a master version of the ANN at a first computing device. A second computing device in the system includes memory that stores a local version of the ANN and user data for inputting into the local version. The second computing device includes a processor that extracts features from the user data and obfuscates the extracted features to generate obfuscated user data. The second device includes a transceiver that transmits the obfuscated user data. The first computing device includes a memory that stores the master version of the ANN, a transceiver that receives obfuscated user data transmitted from the second computing device, and a processor that trains the master version based on the received obfuscated user data using machine learning.

Publication date: 17-10-2023

Error control for content-addressable memory

Number: US11789797B2
Assignee: Micron Technology Inc

Methods, systems, and devices for error control for content-addressable memory (CAM) are described. A CAM may store bit vectors as a set of subvectors, with each subvector stored in an independent aspect of the CAM, such as in a separate column or array of memory cells within the CAM. The CAM may similarly segment a queried input bit vector and identify, for each resulting input subvector, whether a matching subvector is stored by the CAM. The CAM may identify a match for the input bit vector when the number of matching subvectors satisfies a threshold. The CAM may validate a match by comparing a stored bit vector corresponding to the identified match to the input bit vector. The stored bit vector may undergo error correction and may be stored in the CAM or another memory array, such as a dynamic random access memory (DRAM) array.
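A rough model of the subvector matching and validation steps, with made-up sizes and a simple list of stored bit strings standing in for the CAM arrays:

```python
def split(bits, k):
    """Split a bit string into k equal-length subvectors."""
    n = len(bits) // k
    return [bits[i * n:(i + 1) * n] for i in range(k)]

def cam_lookup(stored_vectors, query, k, threshold):
    """Match a query against stored vectors, subvector by subvector.

    Each stored vector is segmented into k subvectors (modeling separate
    CAM columns/arrays). A candidate matches when at least `threshold`
    of its subvectors equal the corresponding query subvectors; the full
    stored vector is then compared against the query to validate the hit.
    """
    q_parts = split(query, k)
    for vec in stored_vectors:
        matches = sum(a == b for a, b in zip(split(vec, k), q_parts))
        if matches >= threshold:
            # Validation: confirm against the (error-corrected) stored copy
            return vec, vec == query
    return None, False
```

The threshold lets a query with a corrupted subvector still locate its candidate, while the validation step distinguishes a true match from a near-miss.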

Publication date: 27-02-2024

Performing processing-in-memory operations related to spiking events, and related methods, systems and devices

Number: US11915124B2
Assignee: Micron Technology Inc

Methods, apparatuses, and systems for in- or near-memory processing are described. Spiking events in a spiking neural network may be processed via a memory system. A memory system may store a group of destination neurons, and at each time interval in a series of time intervals of a spiking neural network (SNN), pass through a group of pre-synaptic spike events from respective source neurons, wherein the group of pre-synaptic spike events are subsequently stored in memory.

Publication date: 14-03-2024

Tightly-coupled random access memory interface shim die

Number: US20240088084A1
Author: Ameen D. Akel, Brent Keeth
Assignee: Micron Technology Inc

An interface shim layer for a tightly-coupled random access memory device is disclosed. The interface shim layer redirects and coalesces integrated channels and connections between a stacked plurality of memory die and an application-specific integrated circuit, and connects directly to both the memory die and the application-specific integrated circuit. A passive version of the interface shim layer incorporates a plurality of routing layers to facilitate routing of signals to and from the stacked plurality of memory die and the application-specific integrated circuit. An active version of the interface shim layer incorporates separate physical interfaces for both the stacked plurality of memory die and the application-specific integrated circuit to facilitate routing. The active version may further incorporate memory controller functions and built-in self-test circuits, among other capabilities that are migratable into the active interface shim layer.

Publication date: 01-02-2024

Bloom filter integration into a controller

Number: US20240036762A1
Assignee: Micron Technology Inc

Systems, apparatuses, and methods related to bloom filter implementation into a controller are described. A memory device is coupled to a memory controller. The memory controller is configured to implement a counting bloom filter, increment the counting bloom filter in response to a row activate command of the memory device, determine whether a value of the counting bloom filter exceeds a threshold value, and perform an action in response to the value exceeding the threshold value.
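A counting Bloom filter of the kind the controller could implement can be sketched as follows; the hash construction and the mitigation hook are assumptions made for illustration, not the claimed design.

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter tracking approximate per-row activation counts."""

    def __init__(self, size, num_hashes):
        self.counters = [0] * size
        self.num_hashes = num_hashes

    def _indexes(self, key):
        # Derive num_hashes counter indexes from independent salted hashes
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % len(self.counters)

    def increment(self, key):
        for idx in self._indexes(key):
            self.counters[idx] += 1

    def estimate(self, key):
        # The minimum counter is an upper bound on the true count
        return min(self.counters[idx] for idx in self._indexes(key))

def on_row_activate(filt, row, threshold):
    """Controller hook: increment on activate, mitigate when over threshold."""
    filt.increment(row)
    return filt.estimate(row) > threshold  # True => perform the action
```

Because hash collisions only inflate counters, the filter may trigger a mitigation early for a cold row but never misses a genuinely hot one.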

Publication date: 27-07-2023

Accessing stored metadata to identify memory devices in which data is stored

Number: US20230236747A1
Assignee: Micron Technology Inc

A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device.
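A toy version of such metadata, mapping address ranges to device names so an allocator could consult it; the class and the range layout are invented for illustration.

```python
import bisect

class MemoryMetadata:
    """Maps address ranges of one address space onto physical devices."""

    def __init__(self):
        self.starts, self.ends, self.devices = [], [], []

    def add_range(self, start, end, device):
        """Associate the half-open range [start, end) with a device."""
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.ends.insert(i, end)
        self.devices.insert(i, device)

    def device_for(self, addr):
        """Return the device whose range contains addr, or None."""
        i = bisect.bisect_right(self.starts, addr) - 1
        if i >= 0 and addr < self.ends[i]:
            return self.devices[i]
        return None
```

An operating system allocator could call `device_for` when placing a process's pages, so that one process's data lands on the first device and another's on the second.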

Publication date: 09-04-2024

Distributed computing based on memory as a service

Number: US11954042B2
Assignee: Micron Technology Inc

Systems, methods and apparatuses of distributed computing based on memory as a service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.

Publication date: 09-05-2024

Redundant computing across planes

Number: US20240152292A1
Assignee: Micron Technology Inc

Methods, systems, and devices for redundant computing across planes are described. A device may perform a computational operation on first data that is stored in a first plane that includes content-addressable memory cells. The first data may be representative of a set of contiguous bits of a vector. The device may perform, concurrent with performing the computational operation on the first data, the computational operation on second data that is stored in a second plane. The second data may be representative of the set of contiguous bits of the vector. The device may read from the first plane and write to the second plane, third data representative of a result of the computational operation on the first data.

Publication date: 03-12-2020

Memory as a service for artificial neural network (ANN) applications

Number: WO2020242665A1
Assignee: MICRON TECHNOLOGY, INC.

Systems, methods and apparatuses to throttle network communications for memory as a service are described. For example, a computing device can borrow an amount of random access memory from a lender device over a communication connection between the lender device and the computing device. The computing device can allocate virtual memory to applications running in the computing device, and configure at least a portion of the virtual memory to be hosted on the amount of memory loaned by the lender device to the computing device. The computing device can throttle the data communications used by memory regions in accessing the amount of memory over the communication connection according to the criticality levels of the contents stored in the memory regions.

Publication date: 27-07-2022

Flexible provisioning of multi-tier memory

Number: EP4031982A1
Assignee: Micron Technology Inc

A system having a string of memory chips that can implement flexible provisioning of a multi-tier memory. In some examples, the system can include a first memory chip in a string of memory chips of a memory, a second memory chip in the string, and a third memory chip in the string. The first memory chip can be directly wired to the second memory chip and can be configured to interact directly with the second memory chip. The second memory chip can be directly wired to the third memory chip and can be configured to interact directly with the third memory chip. As part of implementing the flexible provisioning of a multi-tier memory, the first memory chip can include a cache for the second memory chip, and the second memory chip can include a buffer for the third memory chip.

Publication date: 25-03-2021

Programmable engine for data movement

Number: WO2021055207A1
Assignee: MICRON TECHNOLOGY, INC.

A memory chip having a predefined memory region configured to store program data transmitted from a microchip. The memory chip also having a programmable engine configured to facilitate access to a second memory chip to read data from the second memory chip and write data to the second memory chip according to stored program data in the predefined memory region. The predefined memory region can include a portion configured as a command queue for the programmable engine, and the programmable engine can be configured to facilitate access to the second memory chip according to the command queue.

Publication date: 03-08-2022

Exclusive OR engine on random access memory

Number: EP4035052A1
Assignee: Micron Technology Inc

Methods and apparatus of an Exclusive OR (XOR) engine in a random access memory device to accelerate cryptographic operations in processors. For example, an integrated circuit memory device enclosed within a single integrated circuit package can include an XOR engine that is coupled with memory units in the random access memory device (e.g., having dynamic random access memory (DRAM) or non-volatile random access memory (NVRAM)). A processor (e.g., System-on-Chip (SoC) or Central Processing Unit (CPU)) can have encryption logic that performs cryptographic operations using XOR operations that are performed by the XOR engine in the random access memory device using the data in the random access memory device.
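The offload can be illustrated with a software stand-in for the XOR engine; the one-time-pad usage below is an example of the kind of cryptographic operation that reduces to XOR, not the device's specified protocol.

```python
import os

def xor_engine(buf_a, buf_b):
    """Model of an in-memory XOR engine: XOR two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(buf_a, buf_b))

def encrypt(data, keystream):
    """One-time-pad-style transform offloaded to the XOR engine.

    The processor's encryption logic supplies a keystream; the XOR with
    the in-memory data happens inside the memory device.
    """
    return xor_engine(data, keystream)

plaintext = b"attack at dawn"
keystream = os.urandom(len(plaintext))
ciphertext = encrypt(plaintext, keystream)
recovered = encrypt(ciphertext, keystream)  # XOR is its own inverse
```

Offloading the XOR means the bulk data never has to cross the memory bus to the CPU just to be combined with the keystream.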

Publication date: 25-04-2024

Associative processing memory sequence alignment

Number: US20240136016A1
Assignee: Micron Technology Inc

Associative processing memory (APM) may be used to align reads to a reference sequence. The APM may store shifted permutations and/or other permutations of the reference sequence. A read may be compared to some or all of the permutations of the reference sequence, and the APM may provide an output for each comparison. In some examples, the APM may compare the read to many permutations of the reference sequence in parallel. Inferences may be made based on the comparisons between the read and the portions and/or permutations of the reference sequence. Based on the inferences, a candidate alignment location in the reference sequence may be determined for the read.
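A simplified software model of storing shifted copies of the reference and matching a read against all of them; real APM hardware would perform these comparisons in parallel rather than in a Python loop.

```python
def shifted_permutations(reference, n):
    """All length-n windows of the reference.

    Models an APM that stores one shifted copy of the reference per row.
    """
    return [reference[i:i + n] for i in range(len(reference) - n + 1)]

def candidate_locations(reference, read):
    """Compare the read against every stored permutation and return the
    offsets whose stored window matches the read exactly."""
    rows = shifted_permutations(reference, len(read))
    return [i for i, row in enumerate(rows) if row == read]
```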

Publication date: 04-06-2024

In-memory associative processing for vectors

Number: US12001708B2
Assignee: Micron Technology Inc

Methods, systems, and devices for in-memory associative processing for vectors are described. A device may perform a computational operation on a first set of contiguous bits of a first vector and a first set of contiguous bits of a second vector. The first sets of contiguous bits may be stored in a first plane of a memory die and the computational operation may be based on a truth table for the computational operation. The device may perform a second computational operation on a second set of contiguous bits of the first vector and a second set of contiguous bits of the second vector. The second sets of contiguous bits may be stored in a second plane of the memory die and the computational operation based on the truth table for the computational operation.
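The truth-table-driven, plane-by-plane processing can be mimicked in Python; the dictionary lookup stands in for the associative operation, and the plane size is an arbitrary illustration.

```python
# Truth table for a two-input bitwise operation, indexed by (bit_a, bit_b)
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def apply_truth_table(table, bits_a, bits_b):
    """Apply a two-input truth table element-wise to two bit vectors,
    as an associative-processing array would within one memory plane."""
    return [table[(a, b)] for a, b in zip(bits_a, bits_b)]

def vector_op(table, vec_a, vec_b, plane_size):
    """Process two vectors plane by plane (modeling two memory planes
    operating on different sets of contiguous bits concurrently)."""
    out = []
    for i in range(0, len(vec_a), plane_size):
        out.extend(apply_truth_table(table,
                                     vec_a[i:i + plane_size],
                                     vec_b[i:i + plane_size]))
    return out
```

Swapping in a different truth table (e.g., AND or OR) changes the computational operation without changing the processing loop, which is the appeal of the table-driven approach.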

Publication date: 03-12-2020

Memory management unit (MMU) for accessing borrowed memory

Number: WO2020242683A1
Assignee: MICRON TECHNOLOGY, INC.

Systems, methods and apparatuses to accelerate accessing of borrowed memory over a network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to connect both to random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device, and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory.
