METHOD AND APPARATUS FOR CONTROLLING MEMORY
Disclosed are a method and an apparatus for controlling memory. With the development of semiconductor technology, TSV (Through-Silicon Via) technology, in which silicon chips are vertically stacked and connected through holes passing between upper and lower chips in a package, has been developed. For example, 3D-stacked memories with TSV interfaces include HBM (High Bandwidth Memory), HMC (Hybrid Memory Cube), and Wide I/O. Such 3D-stacked memory can be integrated into a processor package to provide wide bandwidth. Meanwhile, the memory contained in the package has wide bandwidth but relatively small storage capacity, so it can be utilized as a cache for memory located outside the package. However, when data is read from or written to memory, unnecessary requests are in many cases transmitted to the in-package memory or the external memory. In addition, the bandwidths of the in-package memory and the external memory need to be utilized efficiently. The present disclosure provides a method and an apparatus for controlling memory that address these problems. The technical objects to be achieved by the embodiments are not limited to those described above, and further technical objects can be inferred from the embodiments described hereinafter. An apparatus according to one aspect includes: a dirty group detector which, when a memory write request is received, converts the address of said memory associated with the request into a physical address of a cache group, increases a counter corresponding to the converted cache group address, and thereby detects a cache group determined to be in a dirty state; and a dirty list manager which manages a dirty list containing the dirty bits of the cache group in the dirty state, wherein said dirty bits can indicate whether the cache sets included in said cache group are dirty.
A computing device according to another aspect includes: at least one core; a cache which stores part of the information stored in an external memory in units of cache sets, each composed of one tag region and one data region; and a memory controller which, upon receiving a request from said core, tracks whether said cache is dirty, predicts whether said cache will hit, and, based on the tracking result or the prediction result, transmits said request to said cache or said external memory. A method according to another aspect includes: receiving a memory write request, converting the address of said memory associated with the request into a physical address of a cache group, increasing a counter corresponding to the converted cache group address, and thereby detecting a cache group determined to be in a dirty state; and managing a dirty list containing the dirty bits of the cache group in the dirty state, wherein said dirty bits can indicate whether the cache sets included in said cache group are dirty. According to the above, a cache group in a dirty state can be detected without reading the cache, and a data read request or a data write request can be transmitted to the external memory or the cache accordingly, so that the number of cache read operations is reduced and the performance of the cache can be improved. Figure 1 is a diagram for explaining a computing device. Figure 2 is a diagram for explaining a set-associative cache structure. Figure 3 is a diagram for explaining a direct-mapped cache structure. Figure 4 is a diagram for explaining a method of mapping external memory data to a cache of a direct-mapped structure according to one embodiment. Figure 5 is a diagram for explaining a memory control apparatus. Figure 6 is a diagram for explaining a memory control apparatus according to one embodiment.
Figure 7 is a diagram for explaining a memory control apparatus according to one embodiment. Figure 8 is a block diagram for explaining a computing device according to one embodiment. Figure 9 is a flowchart showing a memory control method according to one embodiment. Figure 10 is a flowchart showing in detail a method of transmitting a data read request to a cache and an external memory according to one embodiment. Figure 11 is a flowchart showing in detail a method of transmitting a data write request to a cache and an external memory according to one embodiment. The terms used in the embodiments have been selected, as far as possible, from general terms currently in wide use while considering their functions in the embodiments, but they may vary depending on the intention of those skilled in the art, precedents, or the appearance of new technology. In addition, certain terms have been arbitrarily selected by the applicant, in which case their meanings are described in detail in the corresponding portions of the description. Therefore, the terms used in the embodiments should be defined based on their meanings and the overall content of the embodiments, not simply by their names. When a portion is described as being "connected" to another portion, this includes not only the case of being directly connected but also the case of being electrically connected with other components interposed therebetween. In addition, when a portion is described as "including" a component, this means that it can further include other components, not that it excludes other components, unless specifically stated otherwise. Further, the terms "...unit" and "...module" described in the embodiments mean a unit that processes at least one function or operation, which can be implemented as software, hardware, or a combination of software and hardware.
The terms "consists of" or "includes" used in the specification should not be interpreted as necessarily including all of the various elements or steps described, and should be interpreted to mean that some elements or steps may not be included, or that additional elements or steps may be further included. The description of the embodiments is not intended to limit the scope of rights, and what those skilled in the art can easily derive therefrom should be interpreted as belonging to the scope of rights of the embodiments. Hereinafter, embodiments are described in detail, purely as examples, with reference to the attached drawings. Figure 1 is a diagram for explaining a computing device. Referring to Figure 1, a computing device (110) such as a desktop computer or a mobile device can include a core (120), a cache (130), and a memory controller (140). Meanwhile, the computing device (110) can be implemented in the form of a package in which these components are mounted. In addition, the computing device (110) can be connected to an external memory (150). The core (120) is a hardware configuration that controls the overall operation of the computing device (110), and at least one core (120) can be included in the computing device (110). In addition, the core (120) can read data stored in the cache (130) or the external memory (150), and can write data to the cache (130) or the external memory (150). The memory controller (140) is included in the computing device (110) and can control the cache (130). In addition, it can receive a data read request or a data write request from the core (120) and transmit it to the cache (130) or the external memory (150). The cache (130) is a high-speed memory device that serves as a buffer; its storage capacity is smaller than that of the external memory (150), but its bandwidth can be wider.
Accordingly, the computing device (110) stores frequently accessed data of the external memory (150) in the cache (130), so that the core can read the data directly from the cache (130) without repeatedly retrieving it from the external memory (150). For example, the cache (130) can include at least one of a flash memory type, SRAM (Static Random Access Memory), and DRAM (Dynamic Random Access Memory) storage medium, but is not limited thereto. In addition, the cache (130) can be integrated into the same package as the core (120), in which case it can be referred to as on-package memory. The external memory (150) is a memory device located outside the package of the computing device (110), and can have a larger storage capacity than the cache (130). For example, the external memory (150) can include at least one of a flash memory type, DRAM (Dynamic Random Access Memory), hard disk type, ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, magnetic disk, and optical disk storage medium, but is not limited thereto. Referring to Figure 1, the external memory (150) can be located outside the package containing the core (120), in which case it can be referred to as off-package memory. Meanwhile, when the cache (130) is used as a cache for the external memory (150), a problem can arise because the tags become large. Here, a tag is information for mapping the external memory (150) onto the cache (130). That is, a tag indicates in which region of the external memory (150) the data stored in the cache (130) resides, and can be address information of the external memory (150). In addition, as the storage capacity of the external memory (150) increases, the storage capacity required for the tags also increases.
The tags could be stored in a cache of a higher level than the cache (130), but since a higher-level cache has less storage space than the cache (130), large tags can lower the performance of the higher-level cache. Meanwhile, a cache hit and a cache miss can be determined through the tags. Here, a cache hit is when the tag corresponding to the external memory address associated with a request is found in the cache (130), and a cache miss is when the tag corresponding to the external memory address is not found in the cache (130). Figure 2 is a diagram for explaining a set-associative cache structure. As a method for overcoming the problem caused by large cache tags, there is a method of caching the tags not in a higher-level cache but in the cache of the computing device itself. In this case, the cache can store tags and data based on a set-associative structure. According to the set-associative structure, a cache set (200) can comprise a plurality of tag blocks (210) and a plurality of data blocks (220). When data stored in the external memory (150) is cached in the cache (130), the data can be stored in one data block (250) among the plurality of data blocks (220) included in the cache set (200). Referring to Figure 2, one cache row can store one cache set (200). Meanwhile, a tag block (230) can be composed of a plurality of tag regions (240). To determine a cache hit or miss in a set-associative cache, the memory controller (140) must first read the tag blocks (210) included in the cache set (200). When the result of reading the tag blocks (210) is a cache hit, the core reads the data blocks (220) included in the cache set (200). That is, since a set-associative cache includes a plurality of tag blocks (210) per cache set (200), a high cache hit rate can be achieved. However, even on a cache hit, a single data read request requires at least two reads, so cache read operations can be repeated several times.
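The two-read cost described above can be sketched as follows. This is a minimal illustrative model, not the disclosed hardware: the set structure, way count, and read accounting are assumptions made purely to show why a set-associative hit needs one read for the tag blocks and a second read for the matching data block.

```python
# Illustrative model of one set of a set-associative cache: even on a hit,
# the tag blocks (210) must be read before the data block (220) can be read.

class SetAssociativeSet:
    def __init__(self, ways):
        self.tags = [None] * ways   # tag blocks (210)
        self.data = [None] * ways   # data blocks (220)

    def lookup(self, tag, read_log):
        read_log.append("tags")       # first read: all tag blocks of the set
        if tag in self.tags:
            way = self.tags.index(tag)
            read_log.append("data")   # second read: the matching data block
            return self.data[way]
        return None                   # miss: no data block is read

cache_set = SetAssociativeSet(ways=4)
cache_set.tags[2], cache_set.data[2] = 0x111, "A1"
reads = []
value = cache_set.lookup(0x111, reads)  # a hit, but two reads were required
```

Even this single hit performs two cache reads, which is the overhead that the direct-mapped structure of Figure 3 avoids.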
Figure 3 is a diagram for explaining a direct-mapped cache structure. Alternatively, tags and data can be stored in the cache based on a direct-mapped structure. According to the direct-mapped structure, one cache set can be composed of one tag region and one data region. Thus, when data stored in the external memory is received, the data can only be stored in a specific region of the cache. That is, a direct-mapped cache is the same as a set-associative cache in which each set has exactly one tag region and one data region. The data region can be the same as a data block of Figure 2. Referring to Figure 3, a cache set (320) can be composed of one tag region (330) and one data region (340). In addition, a cache row (310) can store a plurality of cache sets. Accordingly, the core can read a tag and its corresponding data in a single read. In a direct-mapped cache structure, since only one tag corresponds to each data region, the cache hit rate can be relatively low compared to a set-associative cache. However, on a cache hit, the number of cache read operations can be reduced compared to a set-associative cache. Depending on the kind of cache (130), either the set-associative structure or the direct-mapped structure can be applied. For example, when the cache (130) is an SRAM cache, applying the set-associative structure can improve performance by raising the hit rate. On the other hand, when the cache (130) is a DRAM cache, applying the direct-mapped structure allows each data request to be processed in a short time, so the overall program execution speed can be increased even if the cache hit rate is lower. Figure 4 is a diagram for explaining a method of mapping external memory data to a cache of a direct-mapped structure according to one embodiment. A direct-mapped cache (410) can comprise a plurality of cache groups (430), and an external memory (420) can comprise a plurality of pages (440, 445).
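The single-read property of the direct-mapped structure can be sketched as follows. This is an illustrative model under assumed parameters (set count, index/tag split), not the disclosed hardware: the tag and data of a set arrive together in one read, so a hit costs one read instead of two.

```python
# Illustrative model of a direct-mapped cache: each set holds one tag region
# and one data region (Figure 3), so tag and data are read in a single access.

class DirectMappedCache:
    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.sets = [{"tag": None, "data": None} for _ in range(num_sets)]
        self.reads = 0  # number of cache read operations performed

    def lookup(self, address):
        # Set index and tag are derived from the external-memory address.
        index = address % self.num_sets
        tag = address // self.num_sets
        entry = self.sets[index]
        self.reads += 1  # tag and data arrive in this single read
        if entry["tag"] == tag:
            return True, entry["data"]  # cache hit
        return False, None              # cache miss

    def fill(self, address, data):
        index = address % self.num_sets
        self.sets[index] = {"tag": address // self.num_sets, "data": data}

cache = DirectMappedCache(num_sets=4)
cache.fill(0x111, "A1")
hit, data = cache.lookup(0x111)  # hit in exactly one read
miss, _ = cache.lookup(0x211)    # maps to the same set, different tag: miss
```

The trade-off in the text is visible here: the miss above occurs because two external addresses compete for the same single set, which a set-associative cache with more ways could have absorbed.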
That is, a cache group (430) is a management unit of the cache (410), and a page (440, 445) is a management unit of the external memory (420). In addition, the data of the pages (440, 445) can be stored in certain blocks of a cache group (430). Further, different data of the external memory (420) can share one block of the cache (410). Meanwhile, each block of the cache (410) can further include a corresponding dirty bit indicating whether the block is dirty. When the core (120) issues a request to read data from the external memory (420), the memory controller (140) can determine a cache hit or a cache miss by comparing the tag stored in the cache (410) with the address of the requested page of the external memory (420). In addition, the memory controller (140) can determine through the dirty bits whether the cache (410) is dirty. When the memory controller (140) receives a data write request from the core (120) and writes the data to the cache (410), the data stored in the cache (410) becomes different from the data stored in the external memory (420), so the dirty bit can be set to a value of 1 to indicate this. A dirty bit value of 0 means that the data stored in the cache (410) and the data stored in the external memory (420) are in the same state, and a value of 1 means that they are in different states. Referring to Figure 4, block number 11 of the cache (410) can store either the data of address 111 of page number 1 (440) or the data of address 211 of page number 2 (445). Here, when the tag corresponding to block number 11 of the cache (410) indicates address 111 of the external memory (420), the memory controller (140) can determine a cache hit. In addition, when the value of the dirty bit corresponding to block number 11 of the cache (410) is 1, the memory controller (140) can determine that the data (A') stored in block number 11 of the cache (410) is different from the data (A1) stored at address 111 of the external memory (420).
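The dirty-bit semantics of Figure 4 can be sketched as follows. The class and field names are illustrative assumptions; the example mirrors the block-11 scenario: a fill leaves the dirty bit at 0, and a cache-only write flips it to 1 because the cached copy (A') now differs from the external copy (A1).

```python
# Illustrative sketch of the dirty bit of Figure 4: 0 means the cached copy
# matches external memory, 1 means the copies have diverged.

class CacheBlock:
    def __init__(self):
        self.tag = None    # external-memory address this block currently maps
        self.data = None
        self.dirty = 0     # 0: same as external memory, 1: differs

external_memory = {111: "A1", 211: "B1"}
block_11 = CacheBlock()

# A read miss fills the block from external memory; the copies match (dirty = 0).
block_11.tag, block_11.data, block_11.dirty = 111, external_memory[111], 0

# A write that lands only in the cache makes the copies diverge (dirty = 1).
block_11.data = "A'"
block_11.dirty = 1
```

At this point the controller can conclude, exactly as in the text, that the data (A') in block 11 differs from the data (A1) at address 111 without comparing the values themselves.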
Figure 5 is a diagram for explaining a memory control apparatus. Meanwhile, to fully utilize the bandwidths of the cache (130) and the external memory (150), there is a self-balancing dispatch method. Self-balancing dispatch is a method in which, when a data read request is received, the predicted processing times of the external memory (150) and the cache (130) are calculated, and the data read request is transmitted to the external memory (150) or the cache (130) based on the calculation result. This balanced dispatch can be applied only when the cache (130) is in a clean state. If the cache (130) is in a dirty state, the memory controller (140) must read the data only from the cache (130), which holds the latest values, without balanced dispatch. Therefore, before balanced dispatch, whether the cache (130) is dirty must first be identified. Referring to Figure 5, a memory control apparatus (500) can be included in the memory controller (140) of Figure 1 and can track the dirty state of the cache (130). The memory control apparatus (500) can include a dirty region detector (510) and a dirty list manager (520). The dirty region detector (510) can detect pages of the external memory (150) that receive many data write requests. Specifically, the dirty region detector (510) can input the address of the external memory (150) into a plurality of hash functions. The values computed by the hash functions can then index a plurality of tables. That is, each time a data write request is received, the dirty region detector (510) can increase, in each table, the counter corresponding to the address of the external memory (150). As the counters are increased with each data write request, when the counters stored in all of the plurality of tables reach or exceed a threshold, the dirty region detector (510) can determine that the corresponding page is in a dirty state. In this way, the dirty region detector (510) can detect the dirty state in units of page addresses of the external memory (150).
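The multi-hash counter tables of the dirty region detector (510) can be sketched as follows. This is a minimal model under stated assumptions: two hash functions and two counter tables (a counting-Bloom-filter-style structure), with illustrative table size, hash constants, and threshold that are not taken from the disclosure. The counter halving from the next paragraph is included at the point of detection.

```python
# Sketch of the dirty region detector (510): each write request increments
# one counter per table; a page is reported dirty only when every indexed
# counter reaches the threshold. Sizes and hashes are illustrative.

TABLE_SIZE = 16
THRESHOLD = 3

class DirtyRegionDetector:
    def __init__(self):
        # One counter table per hash function.
        self.tables = [[0] * TABLE_SIZE, [0] * TABLE_SIZE]

    def _indexes(self, page_address):
        # Two simple, independent hash functions (illustrative choices).
        h0 = page_address % TABLE_SIZE
        h1 = (page_address * 2654435761 >> 4) % TABLE_SIZE
        return [h0, h1]

    def on_write(self, page_address):
        """Count one write request; return True when the page turns dirty."""
        idxs = self._indexes(page_address)
        for table, i in zip(self.tables, idxs):
            table[i] += 1
        if all(t[i] >= THRESHOLD for t, i in zip(self.tables, idxs)):
            # After detection, halve the counters so stale history decays.
            for table, i in zip(self.tables, idxs):
                table[i] //= 2
            return True
        return False

det = DirtyRegionDetector()
results = [det.on_write(page_address=5) for _ in range(THRESHOLD)]
# Only the write that pushes every counter to the threshold reports dirty.
```

Requiring all tables to agree is what keeps false positives low: a single counter inflated by hash collisions with other hot pages is not, by itself, enough to mark a page dirty.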
In addition, when a dirty region is detected, the dirty region detector (510) can halve the values of the corresponding counters. The dirty list manager (520) can manage a dirty list containing the addresses of the pages of the external memory (150) determined to be dirty by the dirty region detector (510). The dirty list can include NRU (Not Recently Used) bits (540) and page tags (550) of the pages in the dirty state. The NRU bits (540) indicate whether an area has been recently used, and can serve as the basis for selecting a victim. If the page address corresponding to a memory write request is already included in the dirty list, the data write request is transmitted to the cache (130) in a write-back manner. In the write-back scheme, on a cache hit only the data stored in the cache (130) is modified while the external memory (150) keeps the previous data, and the modified data must be stored in the external memory (150) when it is evicted from the cache. When the dirty region detector (510) detects a new dirty region, the dirty list manager (520) can select a victim page from among the pages previously stored in the dirty list, evict it, and add the page determined to be in the dirty state. This is because the victim page must be evicted for the dirty list to store the new page. Criteria for selecting the victim page can include the page not recently used (NRU), the page referenced the fewest times (Least Frequently Used, LFU), the page not used for the longest time (Least Recently Used, LRU), and the page that entered first (First-In First-Out, FIFO), but are not limited thereto. Meanwhile, it must be ensured that the victim page is in a clean state. Therefore, in order to identify whether data contained in the victim page is dirty, data read requests for all cache sets corresponding to the victim page are transmitted to the cache (130).
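The dirty list with NRU-based victim selection can be sketched as follows. This is an illustrative model: the capacity, the dictionary representation, and the fallback rule (oldest entry when every NRU bit is set) are assumptions, not the disclosed hardware.

```python
# Sketch of the dirty list manager (520): a fixed-capacity list of dirty
# page tags, each with an NRU bit. When the list is full, the victim is an
# entry whose NRU bit is 0 (not recently used).

class DirtyListManager:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # page tag -> NRU bit (1 = recently used)

    def touch(self, page):
        """Mark a tracked page as recently used."""
        if page in self.entries:
            self.entries[page] = 1

    def add(self, page):
        """Add a newly dirty page; return the evicted victim page, if any."""
        victim = None
        if page not in self.entries and len(self.entries) >= self.capacity:
            # Prefer an entry not recently used; fall back to the oldest.
            victim = next((p for p, nru in self.entries.items() if nru == 0),
                          next(iter(self.entries)))
            del self.entries[victim]
        self.entries[page] = 1
        return victim

mgr = DirtyListManager(capacity=2)
mgr.add(0x10)
mgr.add(0x20)
mgr.entries[0x10] = 0        # 0x10 ages out of "recently used"
victim = mgr.add(0x30)       # list is full, so the NRU entry is evicted
```

In the disclosed flow, the returned victim is exactly the page that must then be verified (and if necessary written back) so that it leaves the list in a clean state.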
Then, if the cache (130) is dirty, the memory controller (140) updates the external memory (150) with the cache data corresponding to the victim page, so that the victim page becomes clean. That is, in the memory control apparatus (500), whenever a data write request is received, the memory controller (140) must read the dirty bits of the cache to determine whether it is dirty. Thus the memory controller (140) must perform at least one data read request per write. Since this must be performed for every data write request, the efficiency of the cache (130) can be degraded. Figure 6 is a diagram for explaining a memory control apparatus according to one embodiment. Referring to Figure 6, a memory control apparatus (600) can be included in the memory controller (140) of Figure 1 and can track the dirty state of the cache (130). The memory control apparatus (600) can include a dirty group detector (610) and a dirty list manager (620). The dirty group detector (610) can detect a cache group in a dirty state. Specifically, when a memory write request is received, the dirty group detector (610) can convert the address of the external memory (150) associated with the request into a physical address of a cache group, increase the counter corresponding to the converted cache group address, and thereby detect a cache group determined to be in a dirty state. Here, a cache group can be a unit including a plurality of cache sets. In addition, a cache set can be composed of one tag region and one data region. That is, the cache (130) can be a direct-mapped cache. Further, the dirty group detector (610) can convert the cache group address by using a plurality of hash functions, and can detect as a dirty-state cache group a cache group for which the counters corresponding to the plurality of converted cache group addresses are all greater than a threshold value. In addition, after detecting the dirty-state cache group, the dirty group detector (610) can halve the counters.
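The cache-group-indexed detector of Figure 6 can be sketched as follows. The model makes the key change from Figure 5 explicit: the external-memory address is first folded onto a cache group (a unit of several direct-mapped cache sets), and only then fed to the multi-hash counter tables. Group sizes, table sizes, hash constants, and the threshold are illustrative assumptions.

```python
# Sketch of the dirty group detector (610): write addresses are mapped to
# cache groups, and the counter tables are indexed by group address, so
# detection happens in cache units rather than external-memory page units.

SETS_PER_GROUP = 8
NUM_GROUPS = 4
TABLE_SIZE = 8
THRESHOLD = 2

class DirtyGroupDetector:
    def __init__(self):
        self.tables = [[0] * TABLE_SIZE, [0] * TABLE_SIZE]

    @staticmethod
    def to_cache_group(ext_address):
        # Direct mapping: derive the set index, then the group it belongs to.
        set_index = ext_address % (NUM_GROUPS * SETS_PER_GROUP)
        return set_index // SETS_PER_GROUP

    def on_write(self, ext_address):
        """Count one write; return the group number once it turns dirty."""
        group = self.to_cache_group(ext_address)
        idxs = [group % TABLE_SIZE, (group * 31 + 7) % TABLE_SIZE]
        for table, i in zip(self.tables, idxs):
            table[i] += 1
        if all(t[i] >= THRESHOLD for t, i in zip(self.tables, idxs)):
            for table, i in zip(self.tables, idxs):
                table[i] //= 2  # halve the counters after detection
            return group  # this cache group is now considered dirty
        return None

det = DirtyGroupDetector()
first = det.on_write(0x03)   # maps to group 0; counters below threshold
second = det.on_write(0x23)  # 0x23 folds onto the same group; threshold hit
```

Because distinct external addresses (0x03 and 0x23 here) fold onto the same cache group, write pressure is accumulated per cache group, which is what lets the controller later answer "is this cache region dirty?" without reading the cache.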
The dirty list manager (620) can manage a dirty list containing the dirty bits of the cache group in the dirty state according to the detection result of the dirty group detector (610). Here, the dirty bits can represent whether the cache sets included in the cache group are in a dirty state. Meanwhile, the dirty list manager (620) can include a storage unit for the dirty list. In addition, when the dirty group detector (610) detects a dirty-state cache group, the dirty list manager (620) can include the address of that cache group in the dirty list if it is not already included. If the address of the cache group is not yet included in the dirty list and the list is full, the dirty list manager (620) can remove the address of a different cache group from the dirty list. The removed cache group can be referred to as a victim group. The criteria for selecting the victim group are the same as the criteria for selecting the victim page of Figure 5, so the detailed description is omitted. In addition, when the cache sets included in the victim group are in a dirty state, the dirty list manager (620) can output the address of the victim group in order to restore the victim group to a clean state. Figure 7 is a diagram for explaining a memory control apparatus according to one embodiment. Referring to Figure 7, a memory control apparatus (700) can be included in the memory controller (140) of Figure 1 and can track the dirty state of the cache (130). The memory control apparatus (700) can include a dirty group detector (710) and a dirty list manager (720). Compared to Figure 5, the dirty group detector (710) can use as its index unit not a page but a cache group including a plurality of cache sets. That is, the dirty group detector (710) can use the address of a cache group as the index, without mapping in units of the physical memory of the external memory (150). Thus, the memory control apparatus (700) can know the dirty state of a cache group directly, and can reduce the number of read operations on the cache (130).
Meanwhile, the dirty list manager (720) can manage a dirty list in which NRU bits (740), cache group tags (750), and dirty bits (760) are included. The NRU bits (740) are the same as the NRU bits (540) of Figure 5, so the detailed description is omitted. While the page tags (550) of Figure 5 are in units of page addresses of the external memory (150), the cache group tags (750) of Figure 7 are in units of cache groups. In addition, the dirty list can include a dirty bit (760) for each cache set of each cache group. Therefore, the memory controller (140) can easily ascertain which cache sets of a dirty-state cache group are dirty. Figure 8 is a block diagram for explaining a computing device according to one embodiment. Before accessing the cache (130), the memory controller (140) can first confirm whether there is a cache hit or a cache miss, and access the cache (130) only if there is a cache hit. As a method of identifying a cache hit, there is a method of utilizing a miss map stored in a cache of a higher level than the cache (130). However, storing the miss map can burden the higher-level cache. Alternatively, there is a method of utilizing a predictor, which does not require storing such data. Referring to Figure 8, a computing device (810) includes a core (840), a cache (850), and a memory controller (830), and can be implemented in the form of a package in which these components are mounted. In addition, the computing device (810) can be connected to an external memory (820). The core (840) and the external memory (820) are the same as the core (120) and the external memory (150) of Figure 1, respectively, so the detailed description is omitted. The cache (850) can store part of the information of the external memory (820) in units of cache sets, each composed of one tag region and one data region. In addition, the cache (850) can be a direct-mapped DRAM cache, but is not limited thereto.
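A dirty-list entry of Figure 7 can be sketched as follows. The field names and the number of sets per group are illustrative assumptions; the point is that each tracked cache group carries its NRU bit (740), group tag (750), and one dirty bit (760) per cache set, so the controller can read off exactly which sets are dirty without touching the cache.

```python
# Sketch of one dirty-list entry of Figure 7: NRU bit, cache group tag, and
# a per-cache-set dirty bit vector.

SETS_PER_GROUP = 4

class DirtyListEntry:
    def __init__(self, group_tag):
        self.nru = 1                           # recently used
        self.group_tag = group_tag             # cache group tag (750)
        self.dirty_bits = [0] * SETS_PER_GROUP # one dirty bit (760) per set

    def mark_dirty(self, set_index):
        self.dirty_bits[set_index] = 1

    def dirty_sets(self):
        """Which cache sets of this group hold modified data."""
        return [i for i, bit in enumerate(self.dirty_bits) if bit]

entry = DirtyListEntry(group_tag=0x2)
entry.mark_dirty(1)
entry.mark_dirty(3)
# The controller can enumerate the dirty sets directly from the list entry.
```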
The memory controller (830), upon receiving a request directed to the external memory (820) from the core (840), can track whether the cache (850) is dirty, predict whether the cache (850) will hit, and, based on the tracking result or the prediction result, transmit the request to the external memory (820) or the cache (850). In addition, the memory controller (830) can include a tracker (831), a predictor (833), and a memory interface (832). The tracker (831) can correspond to the memory control apparatuses (600, 700) of Figures 6 and 7. When a request directed to the external memory (820) is received from the core (840), the tracker (831) can convert the address of the external memory (820) associated with the request into a physical address of a cache group, increase the counter corresponding to the converted cache group address, detect a cache group determined to be in a dirty state, and manage a dirty list containing the dirty bits of the dirty-state cache group, thereby tracking the dirty state of the cache (850). The predictor (833) can predict whether the cache (850) will hit. Here, the predictor (833) can use instruction-based prediction or region-based prediction, but is not limited thereto. The memory interface (832) can transmit the request to the external memory (820) or the cache (850) based on the tracking result or the prediction result. Specifically, when the received request contains a read request for first data stored in the external memory (820), the tracker (831) detects a clean-state cache group, and the predictor (833) predicts a cache hit, the memory interface (832) can transmit the read request to the external memory (820) or the cache (850) based on the bandwidth utilization rates of the external memory (820) and the cache (850).
In addition, when the received request contains a read request for the first data stored in the external memory (820), the tracker (831) detects a clean-state cache group, and the predictor (833) predicts a cache miss, the memory interface (832) can transmit the read request for the first data to the external memory (820). If the received request contains a write request for second data directed to the external memory (820) and the tracker (831) detects a dirty-state cache group, the memory controller (830) can write the second data to the cache (850). On the other hand, if the tracker (831) detects a clean-state cache group, the memory controller (830) can write the second data to both the cache (850) and the external memory (820). Figure 9 is a flowchart showing a memory control method according to one embodiment. In step 910, when a memory write request is received, the tracker (831) can convert the address of the external memory (820) associated with the request into a physical address of a cache group, increase the counter corresponding to the converted cache group address, and thereby detect a cache group determined to be in a dirty state. Here, the tracker (831) can convert the cache group address by using a plurality of hash functions, and can detect as a dirty-state cache group a cache group for which the counters corresponding to the plurality of converted cache group addresses are all greater than a threshold value. In addition, after detecting the dirty-state cache group, the tracker (831) can halve the counters. A cache set can be composed of one tag region and one data region; that is, the cache (850) can be a direct-mapped cache. In step 920, the tracker (831) can manage a dirty list containing the dirty bits of the cache group in the dirty state. Here, the dirty bits can indicate whether the cache sets included in the cache group are in a dirty state. In addition, when a dirty-state cache group is detected, the tracker (831) can include the address of the cache group in the dirty list if it is not already included.
At this time, if the address of the cache group is not yet in the dirty list and the list is full, the tracker (831) can remove the address of a different cache group from the dirty list. The removed cache group can be referred to as a victim group. In addition, when the cache sets included in the victim group are in a dirty state, the tracker (831) can output the address of the victim group in order to restore the victim group to a clean state. Figure 10 is a flowchart showing in detail a method of transmitting a data read request to a cache and an external memory according to one embodiment. When a read request for the external memory (820) is received from the core (840), the memory controller (830) can determine whether the cache (850) is in a dirty state, whether there is a cache hit, and the available bandwidths, and can thereby determine whether to transmit the read request to the cache (850) or to the external memory (820). In step 1010, the tracker (831) can detect whether the cache group corresponding to the read-requested address of the external memory (820) is in a dirty state. Here, the memory controller (830) can first convert the physical address of the external memory (820) associated with the read request into a cache group address. Then, when the converted cache group address is in the dirty list and the dirty bit of the corresponding cache set within that cache group is 1, the memory controller (830) can determine that the cache set related to the read request is in a dirty state. If the cache set related to the read request is in a dirty state, the memory interface (832) must transmit the read request to the cache (850) without considering balanced dispatch. However, if the cache set related to the read request is in a clean state, the memory controller (830) can determine whether there is a cache hit. That is, if the cache set corresponding to the read-requested external memory (820) address is in a dirty state, step 1020 can be performed; if it is in a clean state, step 1030 can be performed. In step 1020, the memory interface (832) can transmit the read request to the cache (850).
In step 1030, the predictor (833) can determine whether the cache set corresponding to the read-requested external memory (820) address will be a hit. Specifically, when the cache set related to the read request is in a clean state, the predictor (833) can predict, based on the tag included in the cache (850), whether the data in the cache (850) corresponds to the read-requested external memory (820) address. If a cache hit is predicted, the memory interface (832) can perform balanced dispatch. However, if a cache miss is predicted, the memory interface (832) transmits the read request to the external memory (820). That is, when a cache hit is predicted, step 1040 can be performed; otherwise, step 1050 can be performed. In step 1040, the memory interface (832) can compare the available bandwidth of the cache (850) with the available bandwidth of the external memory (820). Specifically, the memory interface (832) can calculate the available bandwidths by identifying the number of requests contained in the request queues of the cache (850) and the external memory (820). Through such balanced dispatch, the memory interface (832) can utilize the bandwidths of the cache (850) and the external memory (820) more efficiently. If the available bandwidth of the cache (850) is wider, step 1020 can be performed. However, if the available bandwidth of the external memory (820) is wider, step 1050 can be performed. In step 1050, the memory interface (832) can transmit the read request to the external memory (820). Here, since the data stored in the external memory (820) is the same as the data in the cache (850), the cache (850) can remain in a clean state. Figure 11 is a flowchart showing in detail a method of transmitting a data write request to a cache and an external memory according to one embodiment. When a data write request is received from the core (840), the memory controller (830) can transmit a data read request or a data write request to the cache (850) or the external memory (820) based on whether the cache (850) is dirty.
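The read-path decision of Figure 10 (steps 1010 to 1050) can be sketched as a single routing function. This is only a flow illustration under assumptions: the tracker's dirty-state answer, the predictor's hit prediction, and the request-queue depths are modeled as plain inputs, and "fewer queued requests" stands in for "more available bandwidth".

```python
# Sketch of the Figure 10 read path: dirty sets must be read from the cache;
# predicted misses go to external memory; predicted hits on clean sets are
# balanced-dispatched to whichever side has the shorter request queue.

def route_read_request(set_is_dirty, predicted_hit,
                       cache_queue_len, ext_queue_len):
    """Return 'cache' or 'external' for a data read request."""
    if set_is_dirty:
        return "cache"        # step 1020: only the cache has the latest value
    if not predicted_hit:
        return "external"     # step 1050: predicted miss, skip the cache
    # Step 1040: balanced dispatch. A shorter request queue is taken here as
    # a proxy for wider available bandwidth.
    if cache_queue_len <= ext_queue_len:
        return "cache"        # step 1020
    return "external"         # step 1050

# Usage: a dirty set always goes to the cache, regardless of queue state.
dirty_case = route_read_request(True, False, 9, 0)
miss_case = route_read_request(False, False, 0, 9)
balanced_to_cache = route_read_request(False, True, 2, 5)
balanced_to_external = route_read_request(False, True, 5, 2)
```

Note the ordering of the checks mirrors the flowchart: correctness (dirty data) is decided before performance (miss avoidance, then bandwidth balancing).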
In step 1110, the tracker (831) may detect a cache group in a dirty state. Specifically, the tracker (831) may translate the physical address of the external memory (820) associated with the write request into an address of a cache group, increase the counters corresponding to the translated cache group address, and detect a cache group in a dirty state based on the counters. If a cache group in a dirty state is detected, the tracker (831) decreases the counters corresponding to the detected cache group, and step 1140 may be performed. If a cache group in a clean state is detected, step 1130 may be performed.

In step 1140, the memory interface (832) may transmit the write data to the cache (850) in a write-back manner. In addition, the tracker (831) may change the dirty bit of the cache set receiving the data to 1.

In step 1130, the tracker (831) may determine whether the detected cache group is stored in the dirty list. If the cache group is stored in the dirty list, step 1120 may be performed; otherwise, step 1150 may be performed.

In step 1120, the memory interface (832) may transmit the write request to both the cache (850) and the external memory (820) in a write-through manner. In this case, the cache (850) in a dirty state may be changed to a clean state.

In step 1150, the tracker (831) may remove a victim group stored in the dirty list and add the cache group in the dirty state to the dirty list. Here, the criteria for selecting the victim group are the same as the criteria for selecting the victim page described with reference to Figure 5, so a detailed description is omitted. In addition, when a new cache group is added to the dirty list, the tracker (831) may additionally decrease the counters of the added cache group. Meanwhile, the tracker (831) may determine whether the victim group is in a dirty state based on the dirty bits of the victim group. If the victim group is dirty, the tracker (831) may write the data of the dirty cache sets of the victim group back to the external memory (820), thereby changing the victim group to a clean state. After step 1150, step 1140 may be performed.
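The write path of Figure 11 (steps 1110 through 1150) admits a similar sketch. The reading of the step 1130 branch follows the text as given (a clean group already in the dirty list is written through; one not in the list is added, evicting a victim if needed), and every name here (`detector`, `pick_victim`, the target strings) is hypothetical.

```python
# Sketch of the Figure 11 write-dispatch decision (one possible reading).
def dispatch_write(group, detector, dirty_list, capacity, pick_victim):
    """Return the write targets for one request; mutates dirty_list."""
    detector.record_write(group)        # translate address + increase counters
    if detector.is_dirty(group):
        # Step 1110 -> 1140: counters say dirty; decrease them and
        # write back to the cache only.
        detector.on_clean(group)
        return ['cache']
    if group in dirty_list:
        # Step 1130 -> 1120: clean group already listed; write through
        # to both sides so the cache stays clean.
        return ['cache', 'memory']
    # Step 1150: make room by evicting a victim, then add the new group.
    if len(dirty_list) >= capacity:
        victim = pick_victim(dirty_list)
        dirty_list.remove(victim)
        # (If the victim held dirty sets, they would be written back to
        # external memory here so the victim returns to a clean state.)
    dirty_list.append(group)
    return ['cache']                    # step 1150 -> 1140: cache only
```

As the description notes after Figure 11, once the new dirty-state group is added to the list, subsequent writes to it hit the first branch and go to the cache alone.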
That is, after the new dirty-state cache group is added to the dirty list, the write request may be transmitted only to the cache. In addition, the tracker (831) may change the dirty bit of the cache set receiving the data to 1.

The scope of the invention is defined by the appended claims rather than by the embodiments described above, and all changes or modified forms derived from the meaning and scope of the claims and their equivalents should be interpreted as falling within the scope of the claims.

The present invention relates to a method and an apparatus for controlling a memory. The apparatus can detect a cache group in a dirty state without reading a cache and determine whether to transmit a data read request or a data write request to a cache or an external memory depending on a detection result. Therefore, read requests for the cache are reduced and thereby the performance of the cache is improved. COPYRIGHT KIPO 2017

A memory control device comprising: a dirty group detector configured to, when a write request for a memory is received, translate a physical address of the memory associated with the write request into an address of a cache group, increase a counter corresponding to the translated address of the cache group, and detect a cache group in a dirty state based on the counter; and a dirty list manager configured to manage a dirty list containing the cache group in the dirty state, wherein a dirty bit indicates whether a cache set included in the cache group is dirty.

The memory control device of claim 1, wherein the dirty group detector transforms the address of the cache group by using a plurality of hash functions, and detects the cache group as a cache group in a dirty state when the counters corresponding to the plurality of transformed addresses are all greater than a threshold value.

The memory control device of claim 2, wherein the dirty group detector decreases the counters after detecting the cache group in the dirty state.

The memory control device of claim 1, wherein the cache set includes one tag region and one data region.
The memory control device of claim 1, wherein, when the dirty group detector detects the cache group in the dirty state, the dirty list manager stores the address of the cache group in the dirty list.

The memory control device of claim 5, wherein, when the address of the cache group is not included in the dirty list, the dirty list manager manages, as a victim group, at least one other cache group whose address is included in the dirty list.

The memory control device of claim 6, wherein, when a cache set included in the victim group is in a dirty state, the dirty list manager outputs the address of the victim group.

A computing device comprising: at least one core; a cache comprising cache sets, each including one tag region and one data region, configured to store some of the data stored in a memory; and a memory control unit configured to, upon receiving a request for the memory from the core, track whether the cache is dirty, predict whether a cache hit occurs, and transmit the request to the cache or the memory based on a result of the tracking or a result of the prediction.

The computing device of claim 8, wherein the memory control unit comprises: a tracker configured to, upon receiving the request for the memory from the core, translate a physical address of the memory associated with the request into an address of a cache group, increase a counter corresponding to the translated address of the cache group, detect a cache group in a dirty state based on the counter, and track whether the cache is dirty by using a dirty list containing the cache group in the dirty state; a predictor configured to predict whether a cache hit occurs; and a memory interface configured to transmit the request to the cache or the memory based on the result of the tracking or the result of the prediction, wherein a dirty bit indicates whether a cache set included in the cache group is dirty.
The computing device of claim 9, wherein, when the request is a read request for first data stored in the memory, the tracker detects a cache group in a clean state, and the predictor predicts a cache hit, the memory interface transmits the read request to the cache or the memory based on bandwidth utilization of the cache and the memory.

The computing device of claim 9, wherein, when the request is a read request for first data stored in the memory, the tracker detects a cache group in a clean state, and the predictor predicts a cache miss, the memory interface transmits the read request to the memory and then stores the first data in the cache.

The computing device of claim 9, wherein, when the request is a write request for second data, the memory control unit transmits the second data to the cache when the tracker detects a cache group in a dirty state, and transmits the second data to the memory and the cache when the tracker detects a cache group in a clean state.

The computing device of claim 9, wherein the memory is located outside a package, and the prediction is an instruction-based prediction.

A memory control method comprising: when a write request for a memory is received, translating a physical address of the memory associated with the write request into an address of a cache group; increasing a counter corresponding to the translated address of the cache group; detecting a cache group in a dirty state based on the counter; and storing the cache group in the dirty state in a dirty list, wherein a dirty bit indicates whether a cache set included in the cache group is dirty.
The memory control method of claim 14, wherein the detecting comprises transforming the address of the cache group by using a plurality of hash functions, and detecting the cache group as a cache group in a dirty state when the counters corresponding to the plurality of transformed addresses are all greater than a threshold value.

The memory control method of claim 15, further comprising decreasing the counters after the cache group in the dirty state is detected.

The memory control method of claim 14, wherein the cache set includes one tag region and one data region.

The memory control method of claim 14, further comprising, when the cache group in the dirty state is detected, storing the address of the cache group in the dirty list.

The memory control method of claim 18, further comprising, when the address of the cache group is not included in the dirty list, managing, as a victim group, at least one other cache group whose address is included in the dirty list.

The memory control method of claim 19, further comprising, when a cache set included in the victim group is in a dirty state, outputting the address of the victim group.