ELECTRONIC CALCULATING DEVICE ARRANGED TO CALCULATE THE PRODUCT OF INTEGERS

Publication date: 26-03-2020
Number: US20200097257A1
Application number: 24-81-1646
Application date: 07-12-2017

FIELD OF THE INVENTION

[0001]

The invention relates to an electronic calculating device, a calculating method, and a computer readable storage.

BACKGROUND

[0002]

In computing, integers may be encoded in the Residue Number System (RNS) representation. In a Residue Number System (RNS), a modulus m is a product m = m1 · . . . · mk of relatively prime smaller moduli mi, and integers y ∈ [0, m) are uniquely represented by their list of residues (y1, . . . , yk), where yi = |y|mi for all i; the latter notation denotes the unique integer yi ∈ [0, mi) that satisfies y ≡ yi mod mi. As a consequence of the Chinese Remainder Theorem (CRT) for integers, the RNS representation is unique for nonnegative integers smaller than the product of the moduli, also called the dynamical range of the RNS.
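As a concrete illustration (not part of the claimed device), the RNS encoding and its CRT-based decoding can be sketched in Python; the moduli chosen here are arbitrary small examples:

```python
from math import prod

def to_rns(y, moduli):
    """Represent y in [0, m) by its list of residues modulo each RNS modulus."""
    return [y % mi for mi in moduli]

def from_rns(residues, moduli):
    """Reconstruct y from its residues via the Chinese Remainder Theorem."""
    m = prod(moduli)  # the dynamical range of the RNS
    y = 0
    for yi, mi in zip(residues, moduli):
        ni = m // mi
        # pow(ni, -1, mi) is the modular inverse of ni modulo mi
        # (it exists because the moduli are pairwise relatively prime)
        y += yi * ni * pow(ni, -1, mi)
    return y % m

moduli = [5, 7, 9, 11]        # pairwise relatively prime; dynamical range 3465
r = to_rns(1234, moduli)      # [4, 2, 1, 2]
assert from_rns(r, moduli) == 1234
```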

[0003]

An advantage of an RNS is that computations can be done component-wise, that is, in terms of the residues. By employing an RNS, computations on large integers can be performed by a number of small computations for each of the components that can be done independently and in parallel. RNS's are widely employed, for example in Digital Signal Processing (DSP), e.g. for filtering, and Fourier transforms, and in cryptography.
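The component-wise addition and multiplication described above might be sketched as follows; the moduli and operands are illustrative, and the product is chosen to fit within the dynamical range:

```python
def rns_add(x, y, moduli):
    """Add two RNS-represented integers component-wise, modulo each modulus."""
    return [(xi + yi) % mi for xi, yi, mi in zip(x, y, moduli)]

def rns_mul(x, y, moduli):
    """Multiply two RNS-represented integers component-wise, modulo each modulus."""
    return [(xi * yi) % mi for xi, yi, mi in zip(x, y, moduli)]

# 123 * 28 = 3444, which fits in the dynamical range 5*7*9*11 = 3465.
moduli = [5, 7, 9, 11]
a = [123 % mi for mi in moduli]
b = [28 % mi for mi in moduli]
p = rns_mul(a, b, moduli)
assert p == [3444 % mi for mi in moduli]
```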

[0004]

Especially in white-box cryptography the RNS representation is advantageous. In white-box, computations are done on encoded data, using tables that represent the result of the computations. Arithmetic on RNS represented integers can often be done separately on the RNS digits. For example, to add or multiply two integers in RNS representation it suffices to add or multiply the corresponding components modulo the corresponding moduli. The arithmetic modulo the moduli of the RNS can be done by table look-up. In white-box cryptography the table lookup may be encoded. Using an RNS to a large extent eliminates the problem of carry. Although even in white-box it is possible to correctly take carry into account, using RNS can simplify computations considerably. Moreover, the presence or absence of a carry is hard to hide and can be a side-channel through which a white-box implementation can be attacked, e.g., a white-box implementation of a cryptographic algorithm depending on a secret key, such as a block cipher, etc.

[0005]

Since the dynamical range of an RNS is the product of the moduli, a large dynamical range can only be realized by increasing the number of moduli and/or by increasing the size of the moduli. This can be undesirable, especially in the case where the arithmetic is implemented by table lookup, in which case the tables become too big, or too many tables are required (or both). So, a very large dynamical range of the RNS requires either very large tables or a very large number of tables.

SUMMARY OF THE INVENTION

[0006]

An electronic calculating device arranged to calculate the product of integers is provided as defined in the claims. The device comprises a storage configured to store integers in a multi-layer residue number system representation, the multi-layer RNS representation having at least an upper layer RNS and a lower layer RNS, the upper layer RNS being a residue number system for a sequence of multiple upper moduli, the lower layer RNS being a residue number system for a sequence of multiple lower moduli, an integer being represented in the storage by a sequence of multiple upper residues modulo the sequence of upper moduli, upper residues for at least one particular upper modulus being further-represented in the storage by a sequence of multiple lower residues of the upper residue modulo the sequence of lower moduli.

[0007]

The calculating device allows realizing a dynamical range that is as large as desired while employing a fixed, small set of RNS moduli, so that computations, such as additions, subtractions, multiplications, with very large integers or computations modulo a very large modulus can be done with a small set of small tables for the modular arithmetic for the RNS moduli.

[0008]

In an embodiment, the upper multiplication routine is further configured to compute the product of the first (x) and second integer (y) modulo a further modulus (N). For example, in an embodiment, the calculating device computes the Montgomery product x·y·M^−1 mod N.

[0009]

The calculating device is an electronic device, and may be a mobile electronic device, e.g., a mobile phone. Other examples include a set-top box, smart-card, computer, etc. The calculating device and method described herein may be applied in a wide range of practical applications. Such practical applications include cryptography, in particular cryptography requiring arithmetic on large numbers, e.g., RSA, Diffie-Hellman, elliptic-curve cryptography, etc.

[0010]

A method according to the invention may be implemented on a computer as a computer implemented method, or in dedicated hardware, or in a combination of both. Executable code for a method according to the invention may be stored on a computer program product. Examples of computer program products include memory devices, optical storage devices, integrated circuits, servers, online software, etc. Preferably, the computer program product comprises non-transitory program code stored on a computer readable medium for performing a method according to the invention when said program product is executed on a computer.

[0011]

In a preferred embodiment, the computer program comprises computer program code adapted to perform all the steps of a method according to the invention when the computer program is run on a computer. Preferably, the computer program is embodied on a computer readable medium.

[0012]

Another aspect of the invention provides a method of making the computer program available for downloading. This aspect is used when the computer program is uploaded into, e.g., Apple's App Store, Google's Play Store, or Microsoft's Windows Store, and when the computer program is available for downloading from such a store.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013]

Further details, aspects, and embodiments of the invention will be described, by way of example only, with reference to the drawings. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. In the Figures, elements which correspond to elements already described may have the same reference numerals. In the drawings,

[0014]

FIG. 1 schematically shows an example of an embodiment of an electronic calculating device,

[0015]

FIG. 2a schematically shows an example of an embodiment of an electronic calculating device,

[0016]

FIG. 2b schematically shows an example of an embodiment of representing integers in a multi-layer RNS,

[0017]

FIG. 3 schematically shows an example of an embodiment of representing integers in a multi-layer RNS,

[0018]

FIG. 4 schematically shows an example of an embodiment of a calculating method,

[0019]

FIG. 5a schematically shows a computer readable medium having a writable part comprising a computer program according to an embodiment,

[0020]

FIG. 5b schematically shows a representation of a processor system according to an embodiment.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

[0021]

While this invention is susceptible of embodiment in many different forms, there are shown in the drawings and will herein be described in detail one or more specific embodiments, with the understanding that the present disclosure is to be considered as exemplary of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.

[0022]

In the following, for the sake of understanding, elements of embodiments are described in operation. However, it will be apparent that the respective elements are arranged to perform the functions being described as performed by them.

[0023]

Further, the invention is not limited to the embodiments, and the invention lies in each and every novel feature or combination of features described herein or recited in mutually different dependent claims.

[0024]

Embodiments of the invention enable modular arithmetic for arbitrarily large moduli using arithmetic modulo fixed, small moduli, in particular using a fixed, small number of lookup tables. Modular multiplication is a difficult operation, but various methods, e.g., Montgomery, Barrett, Quisquater, etc., have been devised to approximate this operation, in the following sense: if r = xy mod N with 0 ≤ r < N is the exact result of the multiplication modulo N, then these methods deliver a result z of the form z = r + qN for a small non-negative integer q. We will refer to such a result as a pseudo-residue. See, e.g., Jean-François Dhem, Design of an efficient public-key cryptographic library for RISC-based smart cards, PhD thesis, Université Catholique de Louvain, 1998, for a discussion of a number of modular arithmetic algorithms, in particular modular multiplication, more in particular Montgomery multiplication.

[0025]

We will speak of a pseudo-residue r + qN with expansion bound φ if the pseudo-residue satisfies 0 ≤ q < φ, so that pseudo-residues remain bounded by the fixed multiple φN of the modulus N. An integer p is a pseudo-residue of the integer x modulo m if p ≡ x mod m and 0 ≤ p < φm, for some predetermined integer φ. The integer φ is called the expansion bound, and limits the growth of the pseudo-residues. If φ = 1, the pseudo-residue is a regular residue. It is possible to further loosen the restriction on pseudo-residues, e.g., by merely requiring that −φm < p < φm; pseudo-residues of this type are termed symmetric pseudo-residues. For convenience of presentation we will not make this loosened assumption, but it is understood that the discussion below can easily be adapted to take the less restrictive bound into account.

[0026]

In yet a further generalization, upper and lower expansion bounds may be used, e.g., by requiring that φL·m < p < φU·m for a lower expansion factor φL and an upper expansion factor φU. The lower and upper expansion factors may be positive or negative, although φL < φU. For example, the pseudo-residue may satisfy φL ≤ q < φU with φ = φU − φL. Other, more complicated methods exist to compute the exact residue r, for example by doing extra subtractions of the modulus, by doing an extra multiplication or reduction, or by doing an exact division. Interestingly, modular arithmetic methods typically deliver the result as a pseudo-residue; extra effort is required to obtain the exact residue. For example, the Montgomery algorithm in Dhem (section 2.2.6) has as the final two steps that "if Un>N then U=Un−N else U=Un"; omitting this extra reduction step would give a modular reduction algorithm in which the output is a pseudo-residue with expansion factor 2. Modular multiplication algorithms with a larger expansion factor, even as high as a few hundred, may be used in the algorithm. This is not a problem, e.g., as long as conversion is only needed after a long sequence of operations within the system. In general, when referring to a residue, it may be a pseudo-residue or an exact residue.
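A minimal sketch of Montgomery reduction may help illustrate how omitting the final conditional subtraction yields a pseudo-residue with expansion factor 2. The parameters below are small, hypothetical values chosen for illustration only (the algorithm in Dhem's thesis operates on multi-word integers):

```python
def redc(T, N, R, Nprime):
    """Montgomery reduction: return T * R^-1 mod N as a pseudo-residue.

    R is a power of two with gcd(R, N) = 1 and R > N; Nprime satisfies
    N * Nprime = -1 (mod R). Skipping the final conditional subtraction
    yields a pseudo-residue with expansion factor 2 (result < 2N)."""
    m = (T * Nprime) % R
    t = (T + m * N) // R
    return t  # 0 <= t < 2N whenever 0 <= T < R*N

# Hypothetical small parameters for illustration.
N = 97
R = 128
Nprime = (-pow(N, -1, R)) % R
x, y = 50, 60
xb, yb = (x * R) % N, (y * R) % N   # Montgomery forms of x and y
zb = redc(xb * yb, N, R, Nprime)    # Montgomery product (possibly >= N)
z = redc(zb, N, R, Nprime)          # convert back out of Montgomery form
assert z % N == (x * y) % N
```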

[0027]

In an embodiment of the calculating device, an upper multiplication routine is configured to receive upper residues (xi, yi) that are smaller than a predefined expansion factor times the corresponding modulus (xi, yi < φU·Mi) and is configured to produce upper residues (zi) of the product of the received upper residues (z) that are smaller than the predefined expansion factor times the corresponding modulus (zi < φU·Mi). In addition, the upper multiplication routine may be configured to receive upper residues (xi, yi) that are greater than or equal to a further predefined expansion factor times the corresponding modulus (xi, yi ≥ φL·Mi) and is configured to produce upper residues (zi) of the product of the received upper residues (z) that are greater than or equal to the further predefined expansion factor times the corresponding modulus (zi ≥ φL·Mi). In case φL > 0, we will refer to φ = φU − φL as the expansion factor.

[0028]

An important observation underlying embodiments of the invention is the following. Given a method to do modular arithmetic using an RNS, we can use that method with a small RNS with moduli mi, say, to implement the modular arithmetic for each of the moduli Mi of a big RNS that implements the modular arithmetic for a big modulus N. In other words, we can use a method for modular arithmetic with an RNS to build a "hierarchical" RNS with two or more layers of RNS's built on top of each other. We will refer to such hierarchical RNS systems as Multi-Layer Residue Number Systems (multi-layer RNS). In this way, we can use a small RNS, with a small dynamical range, to implement a bigger RNS, with a bigger dynamical range.

[0029]

We will refer to the RNS with the largest dynamic range as the first layer, or the top layer, and to the RNS with the smallest dynamic range as the lowest layer, or the bottom layer. In an embodiment with two layers, the bottom layer would be the second layer.

[0030]

In an embodiment, such a hierarchical system is built by implementing a method to do modular arithmetic using an RNS that works with pseudo-residues instead of exact residues. Provided that the pseudo-residues remain bounded, that is, provided that they have a guaranteed expansion bound, this allows constructing very efficient systems. We stress that in such a hierarchical RNS system, all the RNS's in the different layers except the bottom layer are "virtual", in the sense that only the bottom RNS actually does the arithmetic; all (or almost all) of the arithmetic in higher layers is delegated to the bottom RNS.

[0031]

In a typical application of a multi-layer RNS, the modular arithmetic in the bottom RNS is done by lookup tables; in that case, the multi-layer RNS system can be devised in such a way that no further arithmetic is needed beyond that of the bottom level. This makes such multi-layer RNS systems particularly attractive for use in white-box applications. In addition, hardware implementations of these multi-layer RNS systems are highly parallelizable and thus offer great promise in terms of speed.

[0032]

The method has been implemented to do modular exponentiation, such as required in, e.g., RSA and Diffie-Hellman, with moduli of size around 2048 bits. In a preferred embodiment of our method, we use a two-layer RNS, employing 8-bit moduli in the bottom RNS and 66-bit moduli in the first RNS layer. The resulting system took approximately 140000 table lookups to do a 2048-bit modular multiplication; as a consequence, a modular exponentiation with a 2048-bit modulus and a 500-bit exponent can be realized on a normal laptop in less than half a second.

[0033]

FIG. 1 schematically shows an example of an embodiment of an electronic calculating device 100.

[0034]

Calculating device 100 comprises a storage 110. Storage 110 is configured to store integers in a multi-layered RNS. The multi-layered RNS has at least two layers. The first (top, upmost) layer is defined by a sequence of multiple upper moduli Mi. A second (lower) layer is defined by a sequence of multiple lower moduli mi. An integer in storage 110 can be represented as a sequence of upper pseudo-residues modulo the sequence of multiple upper moduli Mi. At least one of the upper residues is in turn expressed as a sequence of lower residues modulo the sequence of multiple lower moduli mi, i.e., it is 'further-represented'. It is not needed that each of the upper residues is expressed in this way, but this is a possible embodiment. Note that the same lower RNS can be used to express more than one of the upper residues; in fact, in an embodiment the same lower RNS is used for each of the upper residues. In case each of the upper residues is expressed in the lower RNS, the integer is ultimately expressed as multiple residues modulo m1, multiple residues modulo m2, etc., as many as there are residues in the upper layer. In this case, the upper residues are stored in storage 110, but only in the form of sequences of lower residues. Calculating device 100 may comprise an input interface to receive the integers for storage in storage 110, and for calculating thereon. The result of a multiplication may be stored in storage 110, where it may be used as input for further computations. Integers stored in a multi-layer RNS, like integers stored in a single-layer RNS, can be added as well; this is not further expanded upon below.
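The storage representation described above can be sketched as follows; the moduli are hypothetical small values chosen for illustration (a real embodiment would use, e.g., 66-bit upper and 8-bit lower moduli):

```python
UPPER_MODULI = [61, 59]   # hypothetical small upper moduli Mi
LOWER_MODULI = [5, 7, 9]  # lower moduli mi; their range 315 exceeds each Mi

def to_lower(r):
    """Further-represent an upper residue as a sequence of lower residues."""
    return [r % mi for mi in LOWER_MODULI]

def to_multilayer(x):
    """Store x as upper residues, each given only in the form of lower residues."""
    return [to_lower(x % Mi) for Mi in UPPER_MODULI]

# 1000 mod 61 = 24 -> [4, 3, 6]; 1000 mod 59 = 56 -> [1, 0, 2]
rep = to_multilayer(1000)
```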

[0035]

Calculating device 100 comprises a processor circuit 120 and a further storage 130. Further storage 130 comprises computer instructions executable by processor circuit 120. The processor circuit may be implemented in a distributed fashion, e.g., as multiple sub-processor circuits. Further storage 130 comprises a lower multiplication routine 131 and an upper multiplication routine 132. In case there are more than two layers in the multi-layer RNS, there may also be multiple multiplication routines, e.g., a first layer multiplication routine, a second layer multiplication routine, a third layer multiplication routine, and so on. Note that the multiplication routines may perform additional functionality, e.g., other modular operations, e.g., modular addition etc.

[0036]

Lower multiplication routine 131 is configured to compute the product of two integers that are represented in the lower RNS. In particular, lower multiplication routine 131 may be used to multiply two further-represented upper pseudo-residues (xj, yj) corresponding to the same upper modulus (Mj), modulo said upper modulus (Mj). Note that lower multiplication routine 131 produces the result modulo the appropriate upper modulus (Mj). Moreover, the result of the modulo operation is a pseudo-residue that satisfies an expansion bound. The expansion bound may be small, say 2, or even 1, or may be larger, say a few hundred, but it allows the system to stay in RNS representation.

[0037]

Upper multiplication routine 132 is configured to compute the product of a first integer x and second integer y represented in the upper layer by component-wise multiplication of upper residues of the first integer (xi) and corresponding upper residues of the second integer (yi) modulo the corresponding modulus (Mi), wherein the upper multiplication routine calls upon the lower multiplication routine to multiply the upper residues that are further-represented. Note that the dynamic range of the upper layer RNS is determined by the upper moduli Mi, whereas that of the lower layer RNS is determined by the lower moduli mi. Thus, lower moduli may be used multiple times to build a larger dynamic range. Note that normally, in a single-layer RNS, this would not work: repeating a modulus would not increase the dynamic range at all.
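The delegation from the upper to the lower routine might be sketched as follows. For illustration only, the reduction modulo Mj is done here via CRT reconstruction; a real device would instead keep that reduction inside the lower RNS (e.g., using RNS Montgomery multiplication with pseudo-residues), and all names below are hypothetical:

```python
from math import prod

def crt(res, mods):
    """CRT reconstruction (illustration only; a device avoids this step)."""
    m = prod(mods)
    return sum(r * (m // mi) * pow(m // mi, -1, mi)
               for r, mi in zip(res, mods)) % m

def lower_mul_mod(xj, yj, Mj, lower):
    """Multiply two further-represented upper residues modulo Mj and
    return the result again as a sequence of lower residues."""
    z = (crt(xj, lower) * crt(yj, lower)) % Mj
    return [z % mi for mi in lower]

def upper_mul(x_rep, y_rep, upper, lower):
    """Component-wise product in the upper RNS; each component product
    modulo Mi is delegated to the lower multiplication routine."""
    return [lower_mul_mod(xi, yi, Mi, lower)
            for xi, yi, Mi in zip(x_rep, y_rep, upper)]

upper = [61, 59]   # hypothetical upper moduli Mi
lower = [5, 7, 9]  # lower moduli mi
x, y = 100, 30
x_rep = [[(x % Mi) % mi for mi in lower] for Mi in upper]
y_rep = [[(y % Mi) % mi for mi in lower] for Mi in upper]
z_rep = upper_mul(x_rep, y_rep, upper, lower)  # encodes x*y mod each Mi
```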

[0038]

Typically, the upper and lower moduli are chosen relatively prime. The inventors have realized, however, that this condition, although convenient, is not strictly necessary. A multi-layer RNS would also work if the moduli are not all chosen to be relatively prime; in this case, one may take the dynamic range of the lower layer as the least common multiple of the moduli m1, . . . , mk, and the dynamic range of the upper layer as the least common multiple of the moduli M1, . . . , Mk. In an embodiment, at least two of the upper or at least two of the lower moduli have a greatest common divisor larger than 1. This may be helpful as an additional source of obfuscation. See, e.g., "The General Chinese Remainder Theorem" by Oystein Ore (included herein by reference).

[0039]

Typically, the calculating device 100 will not be a stand-alone device, but will be used as part of a larger calculating device 150, that uses calculating device 100 to perform modular arithmetic. For example, larger device 150 may comprise calculating device 100. For example, a larger device 150 may compute modular exponents, e.g. for cryptographic purposes, etc.

[0040]

Further details on various embodiments, e.g., how processor circuit 120 may be configured to multiply two integers, or how the integers may be represented in storage, are explained below.

[0041]

FIG. 2a schematically shows an example of an embodiment of an electronic calculating device 200. Embodiments according to FIG. 2a may be implemented in a number of ways, including hardware of the type illustrated with FIG. 1.

[0042]

Calculating device 200 comprises a storage 230. Storage 230 stores integers in the form of the multi-layer RNS system. Shown are integers 210 and 220; more integers are possible. FIG. 2b illustrates the form integers 210 and 220 may have.

[0043]

As shown in FIG. 2b, integer 210 is represented by a sequence of multiple upper residues 211 modulo a sequence of multiple upper moduli. If the integer is x and the upper moduli are Mi, then the sequence of residues may be xi = ⟨x⟩Mi. The notation ⟨x⟩Mi denotes a pseudo-residue modulo the modulus Mi. The pseudo-residue may be larger than Mi but satisfies an expansion bound, e.g., it is smaller than φMi for some expansion factor φ. In an embodiment, there is a single fixed expansion factor per layer. However, it is also possible to have a different expansion factor per modulus, per layer.

[0044]

Shown in FIG. 2b are three upper residues corresponding to three upper moduli. Two or more moduli are possible. For example, upper residue 210.1 may be x1 = ⟨x⟩M1, upper residue 210.2 may be x2 = ⟨x⟩M2, etc. At least one of the upper residues is further-represented in the storage by data representing a sequence of multiple lower residues (⟨xj⟩mi; 212, 222) of the upper residue (xj) modulo the sequence of lower moduli (mi).

[0045]

Shown in FIG. 2b are three lower residues corresponding to three lower moduli. Two or more lower moduli are possible; there is no need for the number of upper and lower moduli to be equal. For example, upper residue 210.2, e.g., x2 = ⟨x⟩M2, may be further-represented in the storage by a sequence 212 of multiple lower residues ⟨xj⟩mi, assuming that the modulus with index j is further-represented.

[0046]

For example, lower residue 210.2.1 may be ⟨x2⟩m1, and lower residue 210.2.2 may be ⟨x2⟩m2, etc.

[0047]

It is important to note that none of the upper moduli Mi needs to be a product of lower moduli mi. In particular, in an embodiment, the further-represented modulus Mj is both larger than each of the lower moduli, and not a product of any one of them. In yet a further embodiment, no upper modulus is a product of lower moduli, with the possible exception of the redundant modulus or moduli (if these are used).

[0048]

If upper residue 210.2 is the only upper residue that is further-represented, then storage 230 may store upper residues 210.1, 210.3, and the lower residues 210.2.1, 210.2.2 and 210.2.3. Note that upper residue 210.2 is stored, but in the form of a sequence of lower residues. In an embodiment, all of the upper residues are stored as sequences of lower residues. In other words, the number 210 is represented in a first RNS form 211 with a first set of moduli Mi, and each of these residues is represented in a second RNS form 212 with a second set of moduli mi. The moduli of the second RNS may be the same for each of the upper residues. Although this is not necessary, it significantly reduces the complexity of the system and the number of tables. Note that each of these residues may be a pseudo-residue. Furthermore, the residues may be represented in a form suitable for Montgomery multiplication, e.g., multiplied with a Montgomery constant. The residues may also be encoded.

[0049]

The second integer 220 may be represented in the same form as first integer 210. Shown is a sequence of multiple upper residues 221, of which upper residues 220.1-220.3 are shown. At least one of the upper residues, in this case upper residue 220.2, is further-represented as a sequence of multiple lower residues 222, of which lower residues 220.2.1-220.2.3 are shown.

[0050]

Returning to FIG. 2a, calculating device 200 further comprises an upper multiplication routine 244 and a lower multiplication routine 242. Lower multiplication routine 242 is configured to multiply two upper residues in the lower, e.g., second, RNS system. Note that in addition to multiplication, lower multiplication routine 242 may be configured with additional modular arithmetic, e.g., addition. Upper multiplication routine 244 is configured to multiply first integer 210 and second integer 220 represented in the upper RNS system. However, as the upper residues are themselves represented in the form of an RNS system, the arithmetic on them is delegated to lower multiplication routine 242. The upper multiplication routine 244 may also be configured with additional arithmetic, e.g., addition.

[0051]

Arithmetic in the bottom RNS may use look-up tables to perform modular arithmetic. Calculating device 200 may comprise a table storage 245 storing tables for this purpose. This makes the method well-suited for use in white-box applications, since it can work with small data elements only, so that all arithmetic can be done by table lookup. In an embodiment, table storage 245 comprises tables to add and to multiply for each of the lower moduli, or, in case of more than two layers, the lowest (bottom) moduli.

[0052]

Instead of table look-up, the calculations on the lowest layer may also be performed by other means, e.g., implemented using arithmetic instructions of a processor circuit, or using an arithmetic co-processor. In an embodiment, moduli of the form 2^m − c with small c can be used, for example with m = 16 and c < 8.
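For moduli of this special form, the reduction can be done without division, since 2^m ≡ c (mod 2^m − c). A hedged sketch, with an illustrative function name:

```python
def reduce_special(x, m, c):
    """Reduce x modulo 2^m - c using only shifts, masks, adds, and one
    small multiplication, exploiting 2^m = c (mod 2^m - c).
    Assumes 0 < c < 2^(m-1)."""
    mod = (1 << m) - c
    while x >= (1 << m):
        # split x = hi * 2^m + lo and fold: x = hi*c + lo (mod 2^m - c)
        x = (x >> m) * c + (x & ((1 << m) - 1))
    if x >= mod:   # at most one final correction, since x < 2^m < 2*mod
        x -= mod
    return x

assert reduce_special(123456789, 16, 7) == 123456789 % ((1 << 16) - 7)
```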

[0053]

See, for more information on white-box, the paper by Chow et al., "A White-Box DES Implementation for DRM Applications". See, for more information on white-box, and in particular on encoding using states, the application "Computing device configured with a table network", published under number WO2014096117. See also "Computing device comprising a table network", published under number WO2014095772, for information on how to represent computer programs in white-box form. These three references are included herein by reference.

[0054]

In an embodiment, the system is implemented using white-box cryptography. Data is represented in encoded form, possibly together with a state. States are redundant variables, so that the encoding is not unique. For example, a (possibly very large) integer y may be represented by its list of pseudo-residues (y1, . . . , yk), in encoded form (in particular the lower residues). That is, every residue yi is given in the form yi = E(yi, si), where si is a state variable and E is some encoding function (typically a permutation on the data-state space). Operations on encoded variables are typically performed using look-up tables. Larger operations are broken up into smaller operations if needed. As a result, the computation may take the form of a table network, comprising multiple look-up tables. Some tables take as input part of the input to the algorithm, e.g., the number to be converted. Some tables take as input the output of one or more other tables. Some tables produce part of the output. For example, the required arithmetic modulo the mi is typically implemented by some form of table look-up, at least if the mi are relatively small.

[0055]

White-box implementations prefer methods that do computations with relatively small (encoded) data. In the invention, this works particularly well, since due to the multiple layers the residues on which computations are done can be kept small. For example, the encoded data may be about byte size.

[0056]

The inventors found that the system is improved if the tables used to compute at the lowest level, e.g., the addition and multiplication tables, are the same size, even for different lower moduli. This avoids the use of conversion tables. For example, we implement for each small modulus (e.g., 8-bit at most) the addition and multiplication tables on numbers of byte size, instead of just for the proper residues. Furthermore, if tables have the same size, the size of a table does not reveal the size of the lower moduli.

[0057]

Furthermore, suppose that m = max mi is the maximum size of the moduli mi, and the lookup table for mi has entries of size Ti, with outputs of size smaller than mi. The maximum size of a residue coming out of any of the tables is m − 1, so as long as Ti ≥ m for all i we can use outputs from one table as entries for another table. Most efficient is Ti = m for all i. In an embodiment, the size of the lookup tables for the modular arithmetic operations is extended to at least accommodate entries of the size of the largest lower modulus.
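Generating such uniformly sized tables might be sketched as follows; the entry size T = 256 (byte-size operands) and the helper name are illustrative:

```python
def make_tables(mi, T=256):
    """Addition and multiplication lookup tables for modulus mi.

    Tables are T x T regardless of mi, so tables for different lower
    moduli have the same size, outputs (< mi <= T) can feed other tables
    directly, and the table size does not reveal mi."""
    add = [[(a + b) % mi for b in range(T)] for a in range(T)]
    mul = [[(a * b) % mi for b in range(T)] for a in range(T)]
    return add, mul

add7, mul7 = make_tables(7)
# Entries beyond the proper residues [0, 7) are valid too:
assert add7[250][10] == (250 + 10) % 7
assert mul7[13][5] == (13 * 5) % 7
```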

[0058]

Creating tables for table storage 245 may be done by selecting an arithmetic operation, say ƒ(x1, x2) in case of two inputs, and computing the function for all possible operands, in the example over all values of x1 and x2, and listing the results in a table. In case the table is to be encoded, an enumeration of Eƒ(ƒ(E1^−1(x1), E2^−1(x2))) is listed instead; in this formula, the functions E1, E2 and Eƒ are the encodings of the two inputs and of the output, respectively.
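The construction of an encoded table can be sketched as follows; the random-permutation encodings and all helper names are hypothetical choices for illustration:

```python
import random

def random_encoding(n, seed):
    """A random permutation of range(n), used as a white-box encoding E."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def invert(perm):
    """Inverse permutation, i.e., E^-1."""
    inv = [0] * len(perm)
    for x, y in enumerate(perm):
        inv[y] = x
    return inv

def make_encoded_table(f, E1, E2, Ef):
    """Tabulate Ef(f(E1^-1(x1), E2^-1(x2))) over all encoded operands."""
    inv1, inv2 = invert(E1), invert(E2)
    n = len(E1)
    return [[Ef[f(inv1[x1], inv2[x2])] for x2 in range(n)] for x1 in range(n)]

# Example: encoded multiplication modulo 13 on byte-size operands.
mi = 13
E1, E2, Ef = (random_encoding(256, s) for s in (1, 2, 3))
table = make_encoded_table(lambda a, b: (a * b) % mi, E1, E2, Ef)
# Looking up encoded operands yields the encoded product:
assert table[E1[10]][E2[5]] == Ef[(10 * 5) % mi]
```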

[0059]

Further detail of various possible embodiments of the first and second multiplication routine are given below.

[0060]

The multi-layer RNS representation may be extended to three or more layers; this is shown in FIG. 3. FIG. 3 shows an integer 310, e.g., as stored in storage 230. The integer is represented by a sequence of multiple first layer residues 311 of integer 310 modulo a first sequence of moduli. Of first sequence 311, three residues are shown: first layer residues 310.1, 310.2, and 310.3.

[0061]

At least one of the first layer residues, in the illustration residue 310.2, is represented as a sequence of multiple second layer residues 312 of that first layer residue. Second layer sequence 312 comprises the first layer residue modulo a second sequence of moduli. Of second sequence 312, three residues are shown: second layer residues 310.2.1, 310.2.2, and 310.2.3.

[0062]

At least one of the second layer residues, in the illustration residue 310.2.2, is represented as a sequence of multiple third layer residues 313 of that second layer residue. Third layer sequence 313 comprises the second layer residue modulo a third sequence of moduli. Of third sequence 313, three residues are shown: third layer residues 310.2.2.1, 310.2.2.2, and 310.2.2.3.

[0063]

The upshot is that integer 310 is at least partly represented by residues modulo a third sequence of moduli. The sizes of the moduli in the third sequence can be much smaller than the sizes of the moduli in the second sequence, and much smaller yet than those in the first sequence.

[0064]

If all of the first layer residues are represented as third layer residues, this representation makes it possible to compute with integers represented like integer 310 while only computing with small moduli.

[0065]

The three hierarchical layers, shown in the multi-layer RNS of FIG. 3 can be extended to more layers. For example, it is possible to regard the second and third layers as a multi-layer RNS, e.g., as shown in FIG. 2b, to which a hierarchical higher layer 311 is added.

[0066]

In an embodiment, modular arithmetic is implemented on the upper level, and as a consequence no overflow problems are suffered. If no modular arithmetic is implemented for most of the moduli, the representation system may suffer from overflow problems. Multi-layered RNS systems as described herein should not be confused with so-called two-level systems, which in fact do not have two levels of RNS, but use pairs of related moduli, typically of the form 2^n ± 1, or even 2^n ± a with small a. In these cases, larger moduli are formed as the product of moduli on the lower level and, as a consequence, there is actually just one RNS.

[0067]

An advantage of the Montgomery multiplication algorithm in RNS that we propose below is that it employs pseudo-residues and postponed Montgomery reduction to increase efficiency of the calculations.

[0068]

Residue Number Systems are very widely employed, for example in various digital signal processing algorithms and in cryptography. A difficulty is that in order to realize a very large dynamical range of the RNS, either very many or very big moduli are required. Modular arithmetic for big moduli quickly becomes difficult to implement directly. On the other hand, there simply are not enough small moduli to realize a very large dynamical range. For example, the largest dynamical range provided with moduli of size at most 256 is at most (2^8)^54, a 432-bit number, obtained by taking 54 prime powers of the 54 distinct primes below 256; in fact, the size can be at most 2^363. Any larger dynamical range is simply not possible. Also, if the modular arithmetic is implemented by lookup tables, a dynamical range of the maximal size would require quite a large number of tables. In contrast, embodiments allow one, for example, to realize any dynamical range up to a value slightly larger than 2048 bits while using only 18 moduli of size at most 256. The method also allows for heavy parallelization. The method, when well designed, does not suffer from overflow problems and can be applied as often as desired, for example for a modular exponentiation.
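This count can be verified directly; the following Python sketch computes the 54 maximal prime powers not exceeding 256 and the resulting maximal dynamical range (the prime sieve is an assumption of the sketch, not part of any embodiment).

```python
# Verify the dynamical-range claim: with moduli of size at most 2^8 = 256,
# the largest reachable range is the product of the maximal prime powers
# not exceeding 256, one per prime below 256.
def primes_below(n):
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(n) if sieve[p]]

ps = primes_below(256)
max_range = 1
for p in ps:
    pe = p
    while pe * p <= 256:     # largest power of p that still fits in a byte
        pe *= p
    max_range *= pe
# There are 54 usable primes, and max_range is a 363-bit number, far below
# the 2048-bit ranges reachable with the layered construction.
```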

[0069]

Interesting aspects of various embodiments include the following:

[0070]

The idea that we can use a generic method to do modular arithmetic using an RNS to build two or more RNSs on top of each other, thus enlarging the dynamical range of the bottom RNS to that of the top RNS. In an embodiment, a system of layered RNSs is provided, where each residue or pseudo-residue value is contained in the dynamical range of the RNS below, and is represented by the RNS below. Furthermore, modular arithmetic for these pseudo-residues is implemented in such a way that at all times the dynamical range of the representing RNS on the level below is respected. More than two layers are possible, e.g., three or more layers. In an embodiment, each layer contains residues for at least two moduli. In an embodiment, at least one modulus of the first layer is relatively prime to a modulus in the second layer, e.g., at least one modulus on each non-bottom layer is relatively prime to a modulus of the RNS of the level below. In an embodiment, the RNSs in successive layers have decreasing dynamical ranges, e.g., the first layer has a larger dynamical range than the second, and so on.

[0071]

The idea that it is sufficient to have a method for modular arithmetic employing RNSs, doing only addition and multiplication modulo the RNS moduli, and delivering results in the form of pseudo-residues instead of exact residues, provided that the pseudo-residues remain bounded (that is, that there is a known expansion bound). This, in combination with the derivation of precise expressions for the various expansion bounds. Many modular algorithms using an RNS can be adapted to work with pseudo-residues.

[0072]

The use of base extension with a redundant modulus using pseudo-residues, and of using Montgomery reduction combined with postponed modular reduction on the higher level RNS's, in combination with precise expressions for certain expansion bounds.

[0073]

The idea to do an “approximate” division-and-round-down operation for suitable divisors entirely within an RNS and working with pseudo-residues.

[0074]

The use of fixed-size lookup tables for the modular arithmetic on the bottom level (i.e., the use of 2^8×2^8 lookup tables when all small moduli are of size at most 2^8), to make base extension on higher levels more efficient.

[0075]

The use of redundant moduli on higher levels that are each a product of one or more of the moduli on the bottom level, so that exact modular arithmetic is possible for these moduli.

[0076]

The use of special representations of integers x, of the form x ≡ H_j x_j mod M_j with fixed constants H_j depending only on the modulus, for pseudo-residues x_j modulo M_j, in order to simplify the algorithm. This improvement generalizes Montgomery representations. For example, H_s = |1/m|_{M_s} gives the Montgomery representation. It gains about 20% of the operations. It is possible to make valid embodiments without this improvement, e.g., wherein all residues are in Montgomery representation.

[0077]

Below, several embodiments of the invention are disclosed, including with different underlying modular multiplication methods. At present, the preferred embodiment is based on Montgomery multiplication. We show how to implement the modular multiplication, which is the difficult operation (addition and subtraction will be discussed separately), in RNS. The system allows multiple layers, so we will describe how to add a new RNS layer on top of an existing one. Here, the bottom layer can simply be taken as an RNS with moduli m_i for which the required modular arithmetic is implemented, for example, by table lookup, by some direct method, or by any other method.

[0078]

The top layer on which to build a new RNS will consist of an RNS with (relatively prime) moduli M_i, and this top layer will meet the following bounded expansion requirement: There are positive integers m and φ_i with the following properties. Given integers 0 ≤ x, y < φ_i M_i, we can compute a pseudo-residue z with expansion bound φ_i (so with z < φ_i M_i) that represents the modular product |xy|_{M_i}, that is, for which mz ≡ xy mod M_i. We will write z = x ⊗_(M_i,m) y to denote such an integer. Thus, for every M_i, there will be some means of computing a pseudo-residue representing a modular product and satisfying a given expansion bound, provided that both operands are pseudo-residues satisfying the same expansion bound.

[0079]

Note that we might weaken the above requirement to the requirement that, given integers x, y with −φ_i^(0) M_i < x, y < φ_i^(1) M_i, we can compute a pseudo-residue z = x ⊗_(M_i,m) y with mz ≡ xy mod M_i and −φ_i^(0) M_i < z < φ_i^(1) M_i. The point here is that we need some constraint such that, if the constraint is satisfied by x and y, then it is also satisfied by the pseudo-residue z that represents the result of the modular multiplication |xy|_{M_i}.

[0080]

To implement a multi-layer RNS, we could take as the first layer an RNS formed by a number of moduli m_i for which we can directly implement the required modular arithmetic, for example by table lookup. In such a system, all expansion bounds φ_i are equal to 1. In an embodiment, the expansion bound for the lowest layer of the RNS equals 1, but for higher layers the expansion bound is larger than 1. The method now describes how to add a new modulus N as one of the moduli of the new RNS layer to be added. Thus, the multi-layer system is built up from the lowest layer to higher layers.

[0081]

The modular multiplication in the upper layer may be done with various methods. For example, in a first method the modular multiplication may be based on integer division with rounding down within the RNS, employing only modular addition/subtraction and modular multiplication for the RNS moduli, e.g., as in Hitz-Kaltofen. This method can then be employed to do modular reduction

[0000]

|h|_N = h − ⌊h/N⌋·N,

[0000]

and hence also modular multiplication entirely within an RNS. We briefly describe the idea. The method uses an extended RNS consisting of K+L moduli M_i, grouped into a base RNS M_1, …, M_K and an extension M_{K+1}, …, M_{K+L}. We write M = M_1⋯M_K and M′ = M_{K+1}⋯M_{K+L} to denote the dynamical ranges of the base RNS and the extension, respectively. We will use M < M′. Given an integer h and a modulus N, with 0 ≤ h, N < M, first employ an iterative Newton algorithm to compute

[0000]

R = ⌊M/N⌋;

[0082]

then given R, compute

[0000]

Q = ⌊hR/M⌋;

[0083]

then one of Q or Q+1 equals

[0000]

⌊h/N⌋.

[0000]

The iterative Newton algorithm takes z_0 = 0, z_1 = 2, and then

[0000]

z_{i+1} = ⌊z_i(2M − Nz_i)/M⌋

[0084]

until z_i = z_{i−1}. It can be shown that this algorithm always halts, with either z_i or z_i + 1 equal to

[0000]

⌊M/N⌋.

[0000]
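By way of illustration, the Newton iteration just described may be sketched as follows in Python (the values used are illustrative only):

```python
def floor_div_newton(M, N):
    """Iterate z <- floor(z(2M - Nz)/M) from z_0 = 0, z_1 = 2 until the
    value repeats; per the claim above, floor(M/N) is then z or z + 1."""
    z_prev, z = 0, 2
    while z != z_prev:
        z_prev, z = z, z * (2 * M - N * z) // M
    return z

z = floor_div_newton(4032, 499)
# floor(4032/499) = 8 is then either z or z + 1
```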

The basic step is to compute

[0000]

⌊u/M⌋,

[0000]

where u = z_i(2M − Nz_i) or u = hR. For example, we may use that u_i = |u|_{M_i} is maintained for all of the RNS moduli M_1, …, M_{K+L}. The number r = |u|_M < M is represented by the base residues u_i for 1 ≤ i ≤ K. The Mixed Radix representation

[0000]


r = r_0 + r_1·M_1 + … + r_{K−1}·M_1⋯M_{K−1}

[0085]

with 0 ≤ r_{i−1} < M_i for 1 ≤ i ≤ K may then be obtained from modular calculations modulo the M_i. Once this representation is obtained, we can do base extension: we can obtain the missing residues in the extended RNS by letting

[0000]

r_{K+j} = |Σ_{i=1}^{K} r_{i−1}·|M_1⋯M_{i−1}|_{M_{K+j}}|_{M_{K+j}}

[0086]

for j=1, . . . , L. Now to compute

[0000]

Q = ⌊u/M⌋,

[0000]

we first compute the full representation of r = |u|_M from the base residues u_i with 1 ≤ i ≤ K, by computing the MR representation followed by a base extension. Then we compute the representation of the division Q = (u − r)·M^{−1} in the extended moduli M_{K+1}, …, M_{K+L}, which is possible since M has an inverse modulo the M_{K+j} and M < M′. Finally, by a second base extension, now from the extended residues, we compute the full representation of Q. For example, we can indeed compute

[0000]

Q = ⌊h/N⌋,

[0000]

and hence the modular reduction

[0000]

|h|_N = h − ⌊h/N⌋·N,

[0000]

in the RNS with moduli M_1, …, M_{K+L}, using only modular additions, modular multiplications by precomputed constants, and modular multiplications modulo the RNS moduli M_i. So, provided that N² < M, we can compute the residue |xy|_N from h = xy < N² entirely within the RNS.

[0087]

This first method to do modular arithmetic as sketched above can be used to build a layered RNS system. Indeed, to build a new RNS layer on top of a layered RNS system, with top layer an extended RNS with moduli m_1, …, m_{k+l} as above, we construct a new extended RNS with moduli M_1, …, M_K, M_{K+1}, …, M_{K+L} that each satisfy M_i² < m = m_1⋯m_k. Now we can implement the modular arithmetic for each of the M_i, as needed in the RNS formed by M_1, …, M_{K+L}, in terms of modular additions and multiplications modulo the m_i. That is, we can delegate the modular arithmetic modulo each of the M_i to the layer below. The resulting system as disclosed above works entirely with exact residues, although we found that it is possible to build a more efficient system that works with pseudo-residues instead. Since the method as described here works with exact residues, we have an expansion bound φ = 1.

[0088]

For example, in a second method the modular multiplication may be based on Montgomery multiplication, and involves the modulus N and a Montgomery constant M (it is assumed that gcd(N, M) = 1). The operands X, Y and the result Z ≡ XY mod N of the modular multiplication are in Montgomery representation, that is, represented by numbers x ≡ XM, y ≡ YM, z ≡ ZM mod N, so that xy ≡ zM mod N. In terms of the Montgomery representations, we want to find an integer solution z, u of the equation

[0000]


h+uN=Mz,

[0089]

where h = xy is the ordinary (integer) product of x and y. The conventional form of the (single-digit) Montgomery multiplication method is the following. Pre-compute the constant N′ = |(−N)^{−1}|_M; then do

[0090]

1. h=xy;

[0091]

2. u = |hN′|_M;

[0092]

3. z=(h+uN)/M.
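By way of illustration, these three steps may be sketched as follows in Python, with M ≥ 4N so that the expansion bound φ = 2 is preserved (the values are illustrative only):

```python
# Conventional single-digit Montgomery multiplication with pseudo-residues.
N = 499
M = 2048                       # Montgomery constant, M >= 4N, gcd(N, M) = 1
N_inv = pow(-N, -1, M)         # precomputed constant |(-N)^{-1}|_M

def mont_mul(x, y):
    """Given pseudo-residues x, y < 2N, return z < 2N with M*z = x*y (mod N)."""
    h = x * y                  # step 1
    u = (h * N_inv) % M        # step 2
    return (h + u * N) // M    # step 3: exact division, since M | h + u*N

z = mont_mul(950, 700)         # operands may exceed N but stay below 2N
```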

[0093]

Since h + uN ≡ 0 mod M, the division in step 3 is exact; moreover, for the result z we have Mz ≡ h ≡ xy mod N; moreover, if x, y are in fact pseudo-residues with expansion bound φ, then 0 ≤ xy < φ²N², hence

[0000]

0 ≤ z = (xy + uN)/M < (φ²N² + MN)/M = (φ²N/M + 1)·N.  (1)

If φ²N/M + 1 ≤ φ,

[0094]

then the result z again meets the expansion bound 0 ≤ z < φN. For example, to have φ = 2, it is sufficient to require that M ≥ 4N. More generally, putting φ = 1/ε with 0 < ε < 1, the final result again meets the expansion bound φ provided that the modulus satisfies

[0000]


N≤ε(1−ε)M.

[0095]

There are various possible methods to adapt this algorithm for an implementation in an RNS. The computation of N′ in RNS is straightforward, e.g., it may be precomputed or otherwise. Interestingly, a representation of u in the right RNS is also needed. For example, u = |hN′|_M would determine u, but only gives the residues of u in the left RNS. Note that the z in step 3 uses division by M, so it can be computed directly only in the right RNS. However, by using base extension, either for u or for z, the rest may also be computed. We found that the latter choice worked slightly better.

[0096]

A better method seems to use an extended RNS consisting of K+L moduli M_i, grouped into a base or left RNS M_1, …, M_K, with dynamical range M = M_1⋯M_K, and an extension or right RNS M_{K+1}, …, M_{K+L}, with dynamical range M′ = M_{K+1}⋯M_{K+L}. These left and right RNS should not be confused with the layers of the multi-layer RNS; they are two parts of the same layer. For example, the following method may be used, from Jean-Claude Bajard, Sylvain Duquesne, Milos Ercegovac, Nicolas Meloni, "Residue systems efficiency for modular products summation: Application to Elliptic Curves Cryptography", Proceedings of SPIE: Advanced Signal Processing Algorithms, Architectures, and Implementations XVI, vol. 6313, August 2006.

[0097]

The Montgomery constant M may be taken as the product of the left moduli. Moreover, we will use an additional, redundant modulus M_0 in order to do base extension. Note that we use these methods with pseudo-residues instead of with exact residues. Note, in particular, the base extension for z instead of for u, and the novel postponed addition/multiplication steps 2 and 5 in the method below.

[0098]

Our method consists of finding a suitable solution (u, z) of the equation

[0000]


h+uN=zM  (2)

[0099]

with h = xy. We will write z = R_(N,M)(h) to denote the solution found by our algorithm, and we will refer to this solution as a Montgomery reduction of h. Note that Montgomery reduction provides an approximation to an integer division by the Montgomery constant M, and therefore provides a means to reduce the size of a number. The idea of the algorithm is the following. We can use equation (2) to compute a suitable u such that u ≡ h(−N)^{−1} mod M_i for left moduli M_i. A possible solution is to take u = Σ_{i=1}^{K} μ_i·(M/M_i) with μ_i = |h·|−N^{−1}(M/M_i)^{−1}|_{M_i}|_{M_i} for all i ≤ K. This is not necessarily the smallest possible u, but it surely satisfies h + uN ≡ 0 mod M. Then we can compute pseudo-residues z_j = |(h_j + uN)·M^{−1}|_{M_j} for right and redundant moduli M_j. Finally, we can do base extension to compute the residues of z modulo the left moduli M_i: writing z = Σ_{j=K+1}^{K+L} η_j·(M′/M_j) − qM′ with η_j = |z_j·|(M′/M_j)^{−1}|_{M_j}|_{M_j}, we can first use the redundant residue to compute q exactly, and then we can use this expression for z to determine pseudo-residues of z modulo the base moduli M_i.

[0100]

We now turn to the details of an embodiment of this method. We begin by listing the setup, inputs and result for the method. We use the following.

[0101]

1. Given are a modulus N, an extended RNS with base (left) RNS formed by base moduli M_1, …, M_K and extension (right) RNS formed by M_{K+1}, …, M_{K+L}, with dynamical ranges M = M_1⋯M_K and M′ = M_{K+1}⋯M_{K+L}, and a redundant modulus M_0. Preferably, all moduli are relatively prime in pairs, except possibly for (M_0, N). As noted above, it is not strictly necessary though that all moduli are relatively prime, although dropping this requirement may lead to a smaller dynamical range.

[0102]

2. An implementation of Montgomery multiplication and Montgomery reduction for the moduli of the extended RNS such that

    • if e_i = a_i ⊗_(M_i,m) b_i with 0 ≤ a_i, b_i < φ_1 M_i, then e_i is a pseudo-residue modulo M_i for which me_i ≡ a_i b_i mod M_i and 0 ≤ e_i < φ_1 M_i, for all i.
    • if e_i = a_i ⊗_(M_i,m) C_i with 0 ≤ a_i < φ_1 M_i and 0 ≤ C_i < M_i, then 0 ≤ e_i < ϕ_1 M_i, for all i (a possibly sharper expansion bound ϕ_1 holds for multiplication modulo M_i by a true residue, for example a constant).
    • If z_i = R_(M_i,m)(h_i) is the computed Montgomery reduction of h_i, then z_i is a pseudo-residue for which mz_i ≡ h_i mod M_i and 0 ≤ z_i < φ_1 M_i, provided that 0 ≤ h_i < φ_1²M_i².
    • Modular arithmetic for the redundant modulus is exact, that is, all pseudo-residues modulo M_0 are in fact true residues.

[0107]

So we implement the modular arithmetic modulo the M_i with expansion bound φ_1, and with expansion bound ϕ_1 for multiplication by a constant. For the redundant modulus, we require an expansion bound equal to 1. In fact, these expansion bounds may even depend on the modulus M_i; for simplicity, we have not included that case in the description below. Here m is a constant, which is the Montgomery constant for the RNS level below.

[0108]

3. Input for the Montgomery multiplication algorithm are pseudo-residues x, y modulo N for which x, y < φN, represented with respect to the entire moduli set M_0, …, M_{K+L}, in Montgomery representation with expansion factor φ_1, except for the redundant modulus. That is, x is represented by a = (a_0, a_1, …, a_{K+L}) with mx ≡ a_i mod M_i and 0 ≤ a_i < φ_1 M_i for 1 ≤ i ≤ K+L, and a_0 ≡ x mod M_0; and similarly y is represented by b = (b_0, …, b_{K+L}) with my ≡ b_i mod M_i and 0 ≤ b_i < φ_1 M_i for 1 ≤ i ≤ K+L, and b_0 ≡ y mod M_0. We will refer to such a representation as a residue Montgomery representation.

[0109]

4. The computed output of a Montgomery multiplication or reduction will be a pseudo-residue z for which 0 ≤ z < φN, represented with respect to the entire moduli set in Montgomery representation by c = (c_0, c_1, …, c_{K+L}) with 0 ≤ c_i < φ_1 M_i and mc_i ≡ z mod M_i for 1 ≤ i ≤ K+L, and c_0 ≡ z mod M_0; for the result z of a Montgomery multiplication by a constant less than N we will have z < ϕN, with ϕ possibly smaller than φ. Here, z satisfies (2), with h = xy in the case of a Montgomery multiplication of x and y.

[0110]

The modular arithmetic operations that are implemented are the following.

[0111]

1. Integer Multiplication in RNS

[0112]

Given inputs x, y as in point 3 above, we can compute the integer product h = xy, represented with respect to the entire moduli set in residue Montgomery representation e = (e_0, e_1, …, e_{K+L}), by computing

[0000]


e_i = a_i ⊗_(M_i,m) b_i

[0113]

for 1 ≤ i ≤ K+L, and e_0 = a_0 ⊗_(M_0,1) b_0 = |a_0 b_0|_{M_0}. In view of the above, notably point 2, this indeed produces a residue Montgomery representation for h.
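By way of illustration, the component-wise integer product may be sketched as follows in Python, simplified to exact residues, that is, with bottom-level Montgomery constant m = 1 (the moduli are illustrative only):

```python
# Component-wise integer multiplication in an RNS (sketch with exact
# residues, i.e. Montgomery constant m = 1; moduli are illustrative).
MODULI = [13, 11, 9, 7]                 # pairwise coprime, range 9009

def to_rns(x):
    return [x % mi for mi in MODULI]

def mul_rns(a, b):
    """Residues of the integer product: one table-sized operation per modulus."""
    return [(ai * bi) % mi for ai, bi, mi in zip(a, b, MODULI)]

h = mul_rns(to_rns(123), to_rns(45))    # represents 123 * 45 = 5535
```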

[0114]

2. Montgomery Reduction

[0115]

Assuming h to be represented in residue Montgomery representation as e = (e_0, e_1, …, e_{K+L}), the Montgomery reduction z = R_(N,M)(h) is computed by the following steps.

[0116]

1. Compute

[0000]


μ_i = e_i ⊗_(M_i,m) |−N^{−1}(M/M_i)^{−1}|_{M_i}

[0117]

for the lower moduli (that is, for i = 1, …, K). As a consequence, the integer u = Σ_{i=1}^{K} μ_i·(M/M_i) satisfies h + uN ≡ 0 mod M, that is, v = h + uN = zM for some integer z.

[0118]

2. Next, compute

[0000]

γ_j = e_j·|M^{−1}m|_{M_j} + Σ_{i=1}^{K} μ_i·|N·M_i^{−1}·m²|_{M_j}

[0119]

(using component-wise integer addition and integer multiplication to compute the products and the sum), followed by the Montgomery reduction

[0000]


c_j = R_(M_j,m)(γ_j)

[0120]

for the extension moduli (that is, for j=K+1, . . . , K+L). For the redundant modulus, we simply compute

[0000]

c_0 = z_0 = |(h + uN)/M|_{M_0} = |e_0·M^{−1} + Σ_{i=1}^{K} μ_i·M_i^{−1}·N|_{M_0}.

[0121]

Here, the c_j form the residue Montgomery representations for the extension and redundant residues of z = (h + uN)/M.

[0122]

Note that for the bottom level RNS, all modular arithmetic is direct, with Montgomery constant 1; so, on the bottom level, the additions and multiplications for γ_j would be implemented as modular operations, and no reduction would be required.

[0123]

3. Now, compute

[0000]


η_j = c_j ⊗_(M_j,m) |(M′/M_j)^{−1}|_{M_j}

[0124]

for extension moduli (that is, for K+1≤j≤K+L).

[0125]

4. Next, compute

[0000]

q = |c_0·(−M′)^{−1} + Σ_{j=K+1}^{K+L} η_j·M_j^{−1}|_{M_0},

[0126]

(sum over the extension moduli), with exact modular arithmetic. Now z = Σ_{j=K+1}^{K+L} η_j·(M′/M_j) − qM′.

[0127]

5. Finally, compute

[0000]

γ_i = q·|−M′·m²|_{M_i} + Σ_{j=K+1}^{K+L} η_j·|m²·(M′/M_j)|_{M_i}

[0128]

(using component-wise integer addition and integer multiplication to compute the products and the sum), followed by the Montgomery reduction

[0000]


c_i = R_(M_i,m)(γ_i)

[0129]

for the lower moduli (that is, for i=1, . . . , K).
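By way of illustration, the reduction steps 1–5 above may be sketched as follows in Python, simplified to exact residues (bottom-level Montgomery constant m = 1) and with small illustrative moduli; the names and values are assumptions of the sketch, not taken from any embodiment.

```python
# Sketch of steps 1-5 with exact residues (bottom-level Montgomery
# constant m = 1); all moduli and values are illustrative only.
ML = [64, 63]        # left (base) RNS; Montgomery constant M = 64 * 63 = 4032
MR = [61, 59]        # right (extension) RNS; M' = 61 * 59 = 3599
M0 = 5               # redundant modulus, M0 >= L = len(MR)
N = 499              # modulus to reduce by, coprime to all RNS moduli
M = ML[0] * ML[1]
Mp = MR[0] * MR[1]

def mont_reduce_rns(h):
    """Return z with z*M == h (mod N), using only residues of h."""
    # Step 1: mu_i = |h * |-N^-1 (M/M_i)^-1|_{M_i}|_{M_i} for left moduli.
    mu = [(h % Mi) * pow(-N * (M // Mi), -1, Mi) % Mi for Mi in ML]
    # Step 2: z_j = |(h + u*N)/M|_{M_j} for right and redundant moduli,
    # via the postponed sum  h*M^-1 + sum_i mu_i * N * M_i^-1  (mod M_j).
    def zmod(Mj):
        s = (h % Mj) * pow(M, -1, Mj)
        for Mi, mui in zip(ML, mu):
            s += mui * N * pow(Mi, -1, Mj)
        return s % Mj
    zR = [zmod(Mj) for Mj in MR]
    z0 = zmod(M0)
    # Step 3: eta_j = |z_j * |(M'/M_j)^-1|_{M_j}|_{M_j}.
    eta = [zj * pow(Mp // Mj, -1, Mj) % Mj for Mj, zj in zip(MR, zR)]
    # Step 4: recover q exactly from the redundant residue.
    q = (z0 * pow(-Mp, -1, M0)
         + sum(ej * pow(Mj, -1, M0) for Mj, ej in zip(MR, eta))) % M0
    # Step 5 (here carried out as one integer): z = sum eta_j*(M'/M_j) - q*M'.
    return sum(ej * (Mp // Mj) for Mj, ej in zip(MR, eta)) - q * Mp
```

For instance, mont_reduce_rns(444 * 321) yields a z with z·M ≡ 444·321 (mod N), without any residue exceeding the small-modulus range during the RNS part of the computation.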

[0130]

Modular Dot Products and Modular Sums with Postponed Reduction

[0131]

To compute a t-term dot product σ = |x^(1)c^(1) + … + x^(t)c^(t)|_N, where the c^(i) are constants, we compute

[0000]


h = x^(1)·|c^(1)M|_N + … + x^(t)·|c^(t)M|_N,

[0132]

in RNS, so by component-wise integer multiplication and addition, followed by

[0000]


σ=R(N,M)(h).  (3)

[0133]

Similarly, we can compute a t-term sum S = |x^(1) + … + x^(t)|_N either by the method above taking constants c^(i) = 1, or by computing instead the pseudo-residue σ′ = R_(N,M)(S), where Mσ′ ≡ S mod N, while incorporating the extra factor |M^{−1}|_N into subsequent calculations.
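By way of illustration, the postponed reduction of a dot product may be sketched as follows in Python, here with a single-level Montgomery reduction and illustrative values:

```python
# Postponed reduction: accumulate the whole dot product as one integer h,
# then apply a single Montgomery reduction (values are illustrative).
N, M = 499, 2 ** 20
N_inv = pow(-N, -1, M)             # |(-N)^{-1}|_M

def mont_reduce(h):
    u = (h * N_inv) % M
    return (h + u * N) // M        # z with M*z = h (mod N)

xs = [321, 17, 444]                # pseudo-residues x^(i)
cs = [5, 6, 7]                     # constants c^(i)
h = sum(x * ((c * M) % N) for x, c in zip(xs, cs))
sigma = mont_reduce(h)             # sigma = dot product of xs and cs mod N
```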

[0134]

Possible Bounds

[0135]

Note that all the required constants in the above algorithms can be precomputed. The above method can be immediately implemented, but it will only work correctly for all possible inputs provided that a number of conditions (bounds) hold to prevent overflow and to guarantee that the final results again satisfy the specified expansion bounds.

[0136]

First, we list possible requirements on the moduli. First of all, the moduli M_0 and M_1, …, M_{K+L} should form an RNS, so they should preferably be relatively prime in pairs. Moreover, all moduli, except possibly M_0, should be relatively prime to the modulus N. Note that if M and M′ are co-prime, then the left and right moduli are co-prime, and that if M_0 is co-prime with M′, then M_0 is co-prime with the right moduli; these properties are desired.

[0137]

Now, for Montgomery reduction z=R(N,M)(h) to work for h=xy, given that 0≤x,y<φN, that is, to produce a number z with 0≤z<φN again, it is required that

[0000]

φ²N/M + U ≤ φ,  (4)

[0138]

where UM is the maximum size of u = Σ_{i=1}^{K} μ_i·(M/M_i), that is, u < UM. If (4) holds, then the Montgomery reduction z = R_(N,M)(h) will produce a z with 0 ≤ z < φN whenever 0 ≤ h < φ²N². If the μ_i satisfy an expansion bound μ_i < ϕ_1 M_i, then U = Kϕ_1. A similar condition turns up again in other multiplication algorithms, and can be solved as follows. From the inequality, we see that φ > U > 0. Writing

[0000]


φ=U/ε

[0139]

with 0<ε<1, we conclude that we should have

[0000]


N ≤ ε(1−ε)M/U,  U = Kϕ_1.

[0140]

Note that in order to maximize the size of the modulus N that we still can handle, we should choose ε=½.

[0141]

If we reduce h = xC for some constant C < N, we obtain that the result satisfies z < (φN/M + U)·N; that is, Montgomery multiplication by a constant has expansion bound ϕ = φN/M + U. From φ = U/ε and N ≤ ε(1−ε)M/U, we see that we can guarantee that

[0000]


ϕ≤U+1−ε<U+1=εφ+1.

[0142]

The number h should always be representable without overflow in the RNS formed by the base, extension and redundant moduli; hence

[0000]


φ²N² ≤ M_0·M·M′;  (5)

[0143]

moreover, in order that z is represented without overflow in the RNS formed by the extension moduli, we require that

[0000]


φN≤M′.

[0144]

Since N ≤ ε(1−ε)M/U and φ = U/ε, we conclude that φN ≤ (U/ε)·ε(1−ε)M/U = (1−ε)M; if we combine that with φN ≤ M′, we find that φ²N² ≤ (1−ε)MM′ < M_0MM′, so condition (5) is implied by the other conditions. Since φN ≤ (1−ε)M, the bound φN ≤ M′ is certainly satisfied if

[0000]


(1−ε)M≤M′.

[0145]

In step 4, we have that z = Σ_{j=K+1}^{K+L} η_j·(M′/M_j) − qM′; since 0 ≤ z < φN ≤ M′ and 0 ≤ η_j < ϕ_1 M_j, so that Σ_{j=K+1}^{K+L} η_j·(M′/M_j) < ϕ_1·L·M′, we conclude that 0 ≤ q < ϕ_1 L. So q is determined from its residue modulo the redundant modulus M_0 provided that

[0000]


M_0 ≥ ϕ_1 L.

[0146]

Finally, in order that the two postponed reductions in step 2 and step 5 of the algorithm work (that is, produce a small enough result), we need that γ_j < φ_1²M_j² and γ_i < φ_1²M_i². Using the bounds μ_i < ϕ_1 M_i and η_j < ϕ_1 M_j for i = 1, …, K and j = K+1, …, K+L, we see that we could require

[0000]

φ_1·M_j + ϕ_1·Σ_{i=1}^{K} M_i ≤ φ_1²·M_j,  M_0 + ϕ_1·Σ_{j=K+1}^{K+L} M_j ≤ φ_1²·M_i.

[0147]

In order to understand these bounds, we offer the following. On a level above the bottom level, all ordinary moduli are very large and about equal, and much larger than the redundant modulus. Then, writing ϕ_1 ≈ U_1 and ε_1 ≈ ½, the desired value, we find that the bounds roughly state that K, L ≤ 4U_1. For example, for a two-level system, we have U_1 = k, the number of base moduli in the bottom RNS, so we approximately need that the numbers K and L of base and extension moduli in the first level, respectively, satisfy K, L ≤ 4k. In our two-level preferred embodiment, it turns out that these bounds come for free.

[0148]

In order to guarantee that the computed pseudo-residue σ satisfies the expansion bound 0≤σ<φN, we should guarantee that the number h in (3) is smaller than φ2N2; this leads to the bound

[0000]


t ≤ φ²/θ

[0000]

if the x^(i) satisfy 0 ≤ x^(i) < θN for all i, where in general θ = ϕ.

[0149]

Note that the postponed reductions in steps 2 and 5 of the algorithm are (K+1)-term and (L+1)-term dot products for the moduli M_i; they work under slightly less severe conditions, since we have better bounds for the μ_i and the η_j.

[0150]

A number of practical issues are addressed below.

[0151]

1. Table Sizes

[0152]

Consider the algorithm above, now implemented in the bottom RNS with moduli m_0, m_1, …, m_{k+l}, say. In steps 2 and 5 of the algorithm, the numbers μ_i (representing a residue modulo m_i) and η_j (representing a residue modulo m_j) are multiplied with a constant which is a residue modulo a different modulus m_s. On higher levels, this is no problem, since both numbers are represented in RNS with respect to the moduli one level lower; however, on the lowest level, such numbers are from the range [0, m_i) or [0, m_j), respectively, and are supposed to serve as an entry in the addition or multiplication table for modulus m_s. The resulting problem can be solved in two different ways.

[0153]

1. First, for every modulus m_s we may use a unary reduction table R_s that converts a number 0 ≤ a < max_t m_t to its residue R_s(a) = |a|_{m_s} modulo m_s. This allows having arithmetic tables of different, hence on average smaller, sizes, but requires an extra table access for arithmetic operations on the lowest level, and hence would make the program slower.

[0154]

2. A second solution is to extend all arithmetic tables to a fixed size s = max_s m_s; this allows arithmetic operations at the lowest level without any modular conversion, for increased speed and simplicity, at the cost of slightly larger tables.

[0155]

In our preferred embodiment, which emphasizes speed, we have chosen the second solution.
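By way of illustration, such fixed-size tables may be sketched as follows in Python (the moduli are illustrative and far smaller than 2^8):

```python
# Fixed-size lookup tables (second solution): all tables are s x s with
# s = largest modulus, so any bottom-level residue indexes any table
# directly, without a separate reduction table.
MODULI = [7, 11, 13]
s = max(MODULI)                                    # here 13

MUL = {m: [[(a * b) % m for b in range(s)] for a in range(s)] for m in MODULI}
ADD = {m: [[(a + b) % m for b in range(s)] for a in range(s)] for m in MODULI}

# A residue taken modulo 11 or 13 can be used directly as an index into
# the tables for modulus 7:
v = MUL[7][9][10]                                  # |9 * 10|_7
```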

[0156]

2. The Redundant Moduli

[0157]

On the bottom level, we may require m_0 ≥ k, which allows the redundant modulus m_0 to be very small. On the next level, we may require M_0 ≥ Lϕ_1, which requires the redundant modulus M_0 to be at least of size about L(k+1), which is typically slightly larger than the largest small modulus. Also, in step 2 of the algorithm in the previous section, we want to do this step for the redundant modulus in an easier way, by table lookup and not using Montgomery reduction. This requires that we can obtain, from the "big" μ_i (so in RNS notation with respect to the small moduli), the big-redundant residue in an easy way. Again, the resulting problems can be solved in two ways.

[0158]

1. Take M_0 = m_0 ≥ L(k+1). Then all tables must be of slightly larger size, but things are simple. Note that having extra reduction tables for all other small moduli would then help to decrease table sizes, at the expense of speed.

[0159]

2. Take M_0 to be the product m′_{r_1}⋯m′_{r_t} with m′_{r_i} | m_{r_i} for 1 ≤ i ≤ t (typically m′_{r_i} = m_{r_i}), for some suitable divisors and a suitable t, where r_i ∈ {1, …, k+l} for all i. Then suitable residues modulo the m′_{r_i} are always available from the corresponding residues modulo m_{r_i}, and all operations are easy, except at one place. We can represent big numbers by a list of big residues in Montgomery RNS representation with respect to the small moduli for each of the big moduli, and a final big-redundant residue in the form of a list of residues modulo the m′_{r_i} (or simply modulo the m_{r_i}). Then in step 4 of the algorithm, we obtain q as a list of residues modulo each of the m_{r_i}, taking 2lt operations instead of just 2l. Note that in step 4 of the algorithm for the "big" moduli, we need the residues modulo the redundant modulus M_0 of the numbers μ_i; these residues are immediately available if the "big" redundant modulus is a product of (divisors of) moduli m_i on the bottom level. Now in step 5, we have available q_i = q mod m′_{r_i}; to compute qh_i mod M_i as (pseudo-)residue, we need qh_{i,j} mod m_j for all j; this is immediate for the last t small moduli, but may use some form of base extension, or an additional table, for the other small moduli.

[0160]

Below, an advantageous embodiment is given based on this multiplication method. In that embodiment, we have taken k = l = 9, K = L = 32, so that we may take m_0 ≥ 10. For the big redundant modulus, we need that M_0 ≥ 320 to ensure that in step 4 of the algorithm, the size of M_0 is at least the maximum size 320 of the value of q. Therefore, we take t = 2, and hence M_0 = m_{k+l}·m_{k+l−1} = 253·233. Then q = q_0 if q_0 = q_1 or q_0 = q_1 + 233, and q = q_0 + 253 if q_1 = q_0 + 20. Since q_0 falls within the maximum entry size for the multiplication tables, we can implement the multiplication by q in step 5 of the algorithm as a multiplication by q_0, possibly followed by a multiplication by 253 and an addition. In this way, the total extra cost for the entire algorithm is limited to the cost of an if-statement and 2K table lookups.
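By way of illustration, the recovery of q from its two redundant residues may be sketched as follows in Python:

```python
# Recover q < 320 from its residues modulo 253 and 233 (the two largest
# bottom-level moduli in this embodiment), using the case rule above.
def recover_q(q0, q1):
    """q0 = q mod 253, q1 = q mod 233, valid for 0 <= q < 320."""
    if q0 == q1 or q0 == q1 + 233:
        return q0
    if q1 == q0 + 20:
        return q0 + 253
    raise ValueError("residues inconsistent with q < 320")

# The rule covers the whole range [0, 320) without ambiguity.
```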

[0161]

Pre- and post-processing, e.g., conversion to/from Montgomery form, or to/from RNS representation, may be required. These are standard operations, which are not further discussed. For example, before starting computations in the Montgomery representations, the data may still have to be put into Montgomery and RNS form. After the computations, the data may have to be reduced to residues by subtracting a suitable multiple of the modulus. The Montgomery constant may have to be removed too, and the data may have to be reconstructed from the RNS representation, etc.

[0162]

The algorithm in the previous section can be improved; in fact, we can do without one (and possibly two) of the steps in the algorithm. Here we will present the improvements. The idea is to change the way in which the residues are represented to better adapt to the base extension step. We will use the same notation and assumptions as before. For example, in an embodiment, a calculating device is presented wherein a sequence of constants H_s is defined for the moduli M_s, at least for the upper layer, so that a residue x_s is represented as a pseudo-residue y_s such that x_s = H_s y_s mod M_s, wherein at least one H_s differs from m^{−1} mod M_s. These representations are unique provided that H_s and M_s are co-prime. H_s may be different from the Montgomery constants used above or in the cited literature. An advantage is easy computation of h = xy, since we can find the representation of the residues h_s of h by Montgomery multiplication of the representations of the residues x_s and y_s, for every s.

[0163]

Our starting point is the assumption A(m, B_1, φ_1, ϕ_1) that, for all moduli n co-prime to m and satisfying a bound n ≤ B_1, we can build or construct a device that implements (in software, or in hardware) a Montgomery reduction z = R_(n,m)(h), a Montgomery multiplication z = x ⊗_(n,m) y, and "weighted sums", with expansion bound φ_1 and constant-expansion bound ϕ_1. That is, given integers x, y and h with 0 ≤ x, y < φ_1 n and 0 ≤ h < φ_1²n², and an integer constant c with 0 ≤ c < n, we have algorithms to compute an integer z satisfying z ≡ hm^{−1} mod n or z ≡ xym^{−1} mod n with 0 ≤ z < φ_1 n, and an algorithm to compute z ≡ cym^{−1} mod n with 0 ≤ z < ϕ_1 n. Moreover, we assume that we can also build a device that implements, for every such modulus n, the computation of a "weighted sum" S = c_1x_1 + … + c_tx_t for given integer constants c_1, …, c_t with 0 ≤ c_i < n for i = 1, …, t and integers x_1, …, x_t with 0 ≤ x_i < φ_1 n for all i, provided that 0 ≤ S < φ_1²n². Alternatively, the assumption may involve, for example, symmetric expansion bounds; that is, assuming |x|, |y| ≤ φ_1 n, |h| ≤ φ_1²n² and |c| ≤ n/2, the algorithm computes such a z with |z| ≤ φ_1 n or with |z| ≤ ϕ_1 n, and, assuming |c_i| ≤ n/2 and |x_i| ≤ φ_1 n for all i, the algorithm computes such an S provided that |S| < φ_1²n². Even more generally, the assumption may involve two-sided bounds (that is, bounds of the type −θ_L n < v < θ_R n for pseudo-residues v). A person skilled in the art will have no problem adapting the description below to suit these more general conditions: the methods remain the same; only, for example, the precise form of the intervals containing the constants, and the necessary conditions under which the method can be guaranteed to work, need to be adapted. For simplicity, we restrict the description to the simplest form of the assumption.

[0164]

We now describe our algorithm to implement (Montgomery) multiplication ⊗(N,M) modulo N with Montgomery constant M and Montgomery reduction R(N,M), for suitable moduli N and Montgomery constants M, given the assumption A(m, B1, φ1, ϕ1). First, we choose a left (G)RNS M1, . . . , Mk, a right (G)RNS Mk+1, . . . , Mk+l, and a redundant modulus M0. (Later, we will see that k and l have to satisfy an upper bound.) Here we take the moduli such that

    • gcd(Ms, m)=1 and Ms≤B1 for s=1, . . . , k+l;
    • gcd(Mi,Mj)=1 for i=1, . . . , k and j=k+1, . . . , k+l;
      • We will need that gcd(M0, Ms)=1 for s=1, . . . , k+l. Also, M0 needs to be large enough, e.g., M0≥lφ1 (for other forms of the assumption, this lower bound may have to be adapted). Moreover, we will need that the arithmetic modulo the redundant modulus M0 can be done exactly, that is, every residue modulo M0 is contained in the interval [0, M0) (or another interval of size M0). For example, the redundant modulus M0 can be the product of smaller moduli M0s, with the arithmetic modulo these smaller moduli, and hence the arithmetic modulo M0, being exact.

[0168]

We define

[0000]


M=lcm(M1, . . . , Mk); M′=lcm(Mk+1, . . . , Mk+l)

[0000]

so that M and M′ are the dynamical ranges of the left and right GRNS, respectively. For base extension, we will rely on the existence of constants L1, . . . , Lk (for the left GRNS) and Lk+1, . . . , Lk+l (for the right GRNS) with 0≤Ls<Ms for s=1, . . . , k+l such that for any integer v for which v≡vs mod Ms for all s we have that

[0000]

v≡v1L1(M/M1)+ . . . +vkLk(M/Mk) mod M,  v≡vk+1Lk+1(M′/Mk+1)+ . . . +vk+lLk+l(M′/Mk+l) mod M′.  (1)

[0169]

The existence of such constants Ls is guaranteed by the results from the paper (Ore—The General Chinese Remainder Theorem). Note that if the left and right GRNS are in fact both RNS (that is, if the moduli are in fact co-prime), then the Ls are uniquely determined modulo Ms, with

[0000]


Li≡(M/Mi)−1 mod Mi, Lj≡(M′/Mj)−1 mod Mj

[0000]

for i=1, . . . , k and j=k+1, . . . , k+l. In particular, in that case Ls and Ms are co-prime. Note that this last condition cannot be guaranteed in general for a GRNS.
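In the RNS case, the constants Ls and the recombination formula (1) are easy to check numerically. A small Python sketch, with toy co-prime moduli rather than the moduli of the embodiment:

```python
from math import prod

Ms = [7, 11, 13]                           # toy pairwise co-prime moduli
M = prod(Ms)
L = [pow(M // Mi, -1, Mi) for Mi in Ms]    # L_i = (M/M_i)^{-1} mod M_i
v = 1000
# equation (1): v ≡ v_1 L_1 (M/M_1) + ... + v_k L_k (M/M_k)  (mod M)
recombined = sum((v % Mi) * Li * (M // Mi) for Mi, Li in zip(Ms, L))
assert recombined % M == v % M
```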

[0170]

Next, choose ε with 0<ε<1. Let the modulus N be a positive integer satisfying gcd(N, M)=1 and N≤B, where B=ε(1−ε)M/U with U=kϕ1; put φ=U/ε and ϕ=φN/M+U≈U+1−ε; ensure that φN≤M′, for example by letting M′≥(1−ε)M. (Note that if we want to maximize B, we should take ε=½; later we will see that there can be other reasons to take ε<½.) Furthermore, set δ=maxi≤k<j Mi/Mj, δ+=maxi≤k<j Mj/Mi, and δ0=maxi≤k M0/Mi. (Note that δ≈δ+≈1 and δ0≈0.) Then we require in addition that k≤(φ1²−φ1)/(ϕ1δ) and l≤(φ1²−δ0)/(ϕ1δ+). (The above expressions apply for the “standard” expansion bounds; for other types of expansion bounds, they may have to be adapted.)

[0171]

We claim that now assumption A(M, B, φ, ϕ) holds. The algorithms that illustrate this claim are the following. We first choose constants Hs (used in the representation of inputs/outputs x, y, z) and Ks (used in the representation of inputs h to the Montgomery reduction) for s=1, . . . , k+l; we require that Hs and Ks are co-prime with Ms. Set H0=K0=1. Furthermore, we choose (small) constants S1, . . . , Sk with Si and Mi co-prime for all i, which we use to optimize the algorithm. For example, we can have Hs=Ks=m−1 for s=1, . . . , k+l, so that all residues are in Montgomery representation, and Si=1 for i=1, . . . , k. With this choice, the method below reduces to the earlier one. However, other choices may be more advantageous, as explained below. Then, pre-compute the constants

    • Ci=|−N−1KiLiSi−1m|Mi (i=1, . . . , k);
    • D0,0=|M−1|M0, D0,i=|SiMi−1N|M0 (i=1, . . . , k),
    • Dj,0=|Hk+jM−1m2|Mk+j, Dj,i=|SiNMi−1Hk+j−1m|Mk+j (i=1, . . . , k), (j=1, . . . , l);
    • Es=|Hk+sLk+sm|Mk+s, (s=1, . . . , l);
    • F0=|(−M′)−1|M0, Fs=|Mk+s−1|M0 (s=1, . . . , l);
    • Gi,0=|−M′Hi−1m|Mi, Gi,j=|M′Mk+j−1Hi−1m|Mi (j=1, . . . , l) (i=1, . . . , k).

[0178]

Now given x and y, represented as (α0, α1, . . . , αk+l) and (β0, β1, . . . , βk+l), respectively, with 0≤α0, β0<M0 (or, for example, with |α0|, |β0|≤M0/2) and with 0≤αs, βs<φ1Ms (or, for example, with |αs|, |βs|≤φ1Ms) for all s=1, . . . , k+l, so that x≡αsHs mod Ms and y≡βsHs mod Ms for s=0, 1, . . . , k+l, we compute z=x⊗(N,M)y as z=R(N,M)(h) with h=xy. First, we do

[0179]

1. χ0=|α0β0|M0, χs=αs⊗(Ms,m)βs (s=1, . . . , k+l);

[0000]

Then h=xy is represented by the χs for s=1, . . . , k+l with constants Ks=Hs²m and χ0=|h|M0, that is, h≡Hs²mχs mod Ms for s=1, . . . , k+l.

[0180]

Next, assume that h is represented by pseudo-residues χ0, χ1, . . . , χk+l with respect to constants K1, . . . , Kk+l, so that h≡Ksχs mod Ms for s=1, . . . , k+l and χ0=|h|M0. To compute z=R(N,M)(h), we do the following steps.

    • 2. μi=χi⊗(Mi,m)Ci (i=1, . . . , k);
    • 3. ξ0=|χ0D0,0+μ1D0,1+ . . . +μkD0,k|M0; sk+j=χk+jDj,0+μ1Dj,1+ . . . +μkDj,k and ξk+j=R(Mk+j,m)(sk+j) (j=1, . . . , l);
    • 4. ηk+j=ξk+j⊗(Mk+j,m)Ej (j=1, . . . , l);
    • 5. q=|ξ0F0+ηk+1F1+ . . . +ηk+lFl|M0;
    • 6. ti=qGi,0+ηk+1Gi,1+ . . . +ηk+lGi,l and ξi=R(Mi,m)(ti) (i=1, . . . , k).
      Now the number z represented by (ξ0, ξ1, . . . , ξk+l), that is, for which z≡ξsHs mod Ms for s=0, 1, . . . , k+l, satisfies z=x⊗(N,M)y, with z satisfying the expansion bound provided that x, y, h satisfy the required expansion bounds.
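The steps above can be checked on a toy instance. The Python sketch below takes the simplified setting of Remark 1.1: exact lower-layer arithmetic with Montgomery constant m=1, and trivial constants Hs=Ks=Si=1, so every ⊗ and R collapses to plain modular arithmetic. All moduli are small toy values, not those of the embodiment.

```python
from math import prod

# Simplified per Remark 1.1: m = 1 and H_s = K_s = S_i = 1, so each step is
# plain modular arithmetic; toy left/right RNS plus redundant modulus.
left, right, M0 = [13, 17, 19], [23, 29, 31], 37
M, Mp = prod(left), prod(right)
N = 1009                                   # modulus, co-prime with M
inv = lambda a, n: pow(a, -1, n)

h = 123456789 % (M * Mp)                   # input to the Montgomery reduction
chi = {Mi: h % Mi for Mi in left + right}  # chi_s = |h|_{M_s}
chi0 = h % M0

# step 2: mu_i = chi_i * C_i with C_i = |-N^{-1} L_i|_{M_i}, L_i = (M/M_i)^{-1}
mu = {Mi: chi[Mi] * -inv(N, Mi) * inv(M // Mi, Mi) % Mi for Mi in left}
# step 3: xi_0 = |z|_{M_0} and the right-RNS residues of z = (h + u*N)/M
xi0 = (chi0 * inv(M % M0, M0) + sum(mu[Mi] * inv(Mi, M0) * N for Mi in left)) % M0
xi = {Mj: (chi[Mj] * inv(M % Mj, Mj)
           + sum(mu[Mi] * inv(Mi, Mj) * N for Mi in left)) % Mj for Mj in right}
# step 4: eta_{k+j} = xi_{k+j} * L_{k+j}
eta = {Mj: xi[Mj] * inv(Mp // Mj, Mj) % Mj for Mj in right}
# step 5: q = overflow count of the right-hand CRT recombination
q = (xi0 * inv(-Mp % M0, M0) + sum(eta[Mj] * inv(Mj, M0) for Mj in right)) % M0
# step 6: extend z back to the left RNS
for Mi in left:
    xi[Mi] = (q * -Mp + sum(eta[Mj] * (Mp // Mj) for Mj in right)) % Mi

# cross-check against a direct computation of z = (h + u*N)/M ≡ h*M^{-1} (mod N)
u = sum(mu[Mi] * (M // Mi) for Mi in left)
z = (h + u * N) // M
assert (h + u * N) % M == 0
assert z % N == h * inv(M, N) % N
assert xi0 == z % M0 and all(xi[Ms] == z % Ms for Ms in left + right)
```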

[0187]

Remark 1.1 Note that if the arithmetic modulo all the ms is exact, then we can take Montgomery constant m=1. In that case, we can take R(Ms,m)(h)=h, so that steps 3 and 6 of the above algorithm can be simplified by leaving out the Montgomery reduction step.

[0188]

Remark 1.2 It may be advantageous to make certain special choices.

    • If we choose

[0000]


Hk+s=|Lk+s−1|Mk+s

[0190]

for s=1, . . . , l, then Es=m, hence ηk+s≡ξk+s mod Mk+s for all s=1, . . . , l; as a consequence, we may be able to skip step 4 of the above algorithm, see Remark 1.3.

    • Similarly, if we choose

[0000]


Ki=|−NSiLi−1|Mi,

[0192]

then Ci=m, and hence μi≡χi mod Mi; if this holds for every i=1, . . . , k, then we may be able to skip step 2 of the above algorithm, see Remark 1.3. In the full Montgomery multiplication algorithm, we would have Ki=Hi²m after step 1; as a consequence, for the simplification we would need that

[0000]


Hi²≡−NLi−1Sim−1 mod Mi.

[0000]

That choice is only available if Li and Mi are co-prime and if −NLi−1Sim−1 is a square modulo Mi. We need Si small in order to get a good a priori bound on u. One attractive choice is to take Mi prime with Mi≡3 mod 4, so that −1 is a non-square modulo Mi (such a restriction on the top-level moduli is almost for free); in that case, we can choose Si=1 or Si=−1 to make −NLi−1Sim−1 a square. These last choices are extra attractive in combination with the use of symmetric expansion bounds: indeed, in that case the upper bound on u will not be influenced by the choice of the Si.

    • Note also that if we succeed in skipping steps 2 and 4, then the entire algorithm for z=x⊗(N,M)y can be done in-place! In general, most of the algorithm can be done in-place, except that we require extra registers: the μi must be stored distinct from the χi, and the ηk+j distinct from the ξk+j.

[0194]

Remark 1.3 If we skip step 2 with μi≡χi and replace μi by χi in step 3, then the resulting sk+j may be larger. The reason is that the μi are bounded by ϕ1Mi while the χi are bounded by φ1Mi. Let us consider the bounds in more detail. We have seen earlier that, writing

[0000]


U=ϕ1k,

[0195]

we have that

[0000]


B=ε(1−ε)M/U, φ=U/ε, ϕ≈U+1−ε≈U.

[0196]

In an optimally designed system, we will have ε≈½, so that φ≈2ϕ. If the lower system is similarly designed, we would have that

[0000]


B1=ε1(1−ε1)m/U1, φ1=U1/ε1, ϕ1≈U1+1−ε1≈U1

[0197]

for some constant U1, with ε1=½ in an optimally designed system. We can handle a k-term weighted sum of the μi modulo some Mi roughly when k≤φ1²/ϕ1≈ε1−1φ1≈ε1−2U1, and we could handle a k-term weighted sum of the χi modulo some Mi roughly when k≤φ1²/φ1=φ1≈ε1−1U1, where U1 is independent of ε1. We can thus increase the number of (bigger) terms that we can handle by choosing a smaller value of ε1, for example, taking ε1=¼ instead of ε1=½. However, that means that the value of B1 decreases by a factor ¾. Since log2(3)≈1.6, we find that every modulus Ms in the top level will have about 0.4 bits less. For k=l≈30 as in our example, this would result in a value of M that has about 12 fewer bits. So, in this way we can handle values of the modulus N that are about 12 bits smaller, or we would have to increase k by 1. We see that by fine-tuning the system on a lower level we can optimize the performance on the top level. Note that on the top level, we must replace the bound U=ϕ1k by U=φ1k, which also lowers the upper bound B, but only by approximately a factor 2 if ε≈½.

[0198]

A similar remark applies when we want to skip step 4 by replacing ηk+j by ξk+j in steps 5 and 6. Indeed, both replacements require similar measures.

[0199]

Note that when implementing a Montgomery multiplication by a constant, ξk+j and ηk+j will both be upper bounded by the same bound ϕ1Mk+j; in that case, the improvement can be done without further adaptations. A similar remark applies to the possible improvement in the first part of the method.

[0200]

To complete the method, we will describe how to implement a weighted sum S=c(1)x(1)+ . . . +c(t)x(t) when 0≤S<φ²N² and 0≤c(i)<N and 0≤x(i)<φN for all i. Our bounds are such that numbers h=xy can be represented in the full GRNS, that is, we have that φ²N²≤MM′. As a consequence, the weighted sum S can also be represented in the full GRNS. Therefore, it is sufficient to compute (a representation of) the residues of S in the full GRNS, that is, to compute

[0000]


Ss≡Ks−1(cs(1)xs(1)+ . . . +cs(t)xs(t)) mod Ms

[0000]

for certain constants Ks, for every s. Suppose that the residues xs(r) are represented by pseudo-residues αs(r) for which xs(r)≡Hsαs(r) mod Ms, for every s. Then we should compute Ss≡ds(1)αs(1)+ . . . +ds(t)αs(t) mod Ms with ds(i)=|Ks−1Hsc(i)|Ms for all i. One method to do this computation is to set es(i)=|Ks−1Hsc(i)m|Ms, then compute Ts=es(1)αs(1)+ . . . +es(t)αs(t), so that Ss=R(Ms,m)(Ts) for all s. By our assumption A(m, B1, φ1, ϕ1), this works as long as we can guarantee that Ts≤φ1²Ms² for all s. On the other hand, if we cannot guarantee that the upper bound on Ts holds, then we can use constants es(i)=|Ks−1Hsc(i)m²|Ms, and compute Ss in the form Ss=R(Ms,m)(ΣjTs,j), where each Ts,j is of the form Ts,j=R(Ms,m)(Σi∈Ij es(i)αs(i)) (that is, we construct a “reduction tree”). A person skilled in the art will easily be able to adapt these ideas to more general forms. We remark that the method as described in the above algorithm (so with only one, postponed, reduction) will work in a two-level system, where it is enough to just require that Ts≤φ1²Ms² for all s.
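The postponed-reduction weighted sum can be sketched as follows. Toy values are used, with Hs=Ks=1 for simplicity, so ds(i)=c(i); the point is that the Montgomery factor m is folded into the constants es(i) and removed by a single final reduction.

```python
Ms, m = 101, 64                        # toy modulus and lower-layer Montgomery constant
c = [3, 5, 7]                          # weights c^(i), 0 <= c^(i) < Ms
a = [40, 55, 90]                       # pseudo-residues alpha_s^(i)
e = [ci * m % Ms for ci in c]          # e_s^(i) = |c^(i) * m|_{M_s}
T = sum(ei * ai for ei, ai in zip(e, a))
u = -T * pow(Ms, -1, m) % m            # single postponed reduction S_s = R_(Ms,m)(T_s)
S = (T + u * Ms) // m
assert S % Ms == sum(ci * ai for ci, ai in zip(c, a)) % Ms
```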

[0201]

Below a third variant of modular reduction is given, based on Barrett multiplication. The modular reduction |h|N of an integer h may be obtained as

[0000]

|h|N=h−⌊h/N⌋N.

[0202]

Barrett multiplication involves an operation called Barrett reduction, which estimates the quotient ⌊h/N⌋. In its most general form, Barrett reduction involves two additional positive integer parameters M, M′ and is defined as

[0000]

B(N,M,M′)(h)=h−⌊⌊h/M⌋C/M′⌋N, where C=⌊MM′/N⌋

[0203]

is a constant that can be precomputed. The usefulness of Barrett reduction is based on the following observation. We have that B(N,M,M′)(h)≡h mod N and |h|N≤B(N,M,M′)(h)≤|h|N+ΔhN, where

[0000]

Δh=h/(MM′)+(M−1)(1/N−1/(MM′)).
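The scalar Barrett reduction just defined can be sketched in Python with toy parameters N, M, M′ (illustrative values only); the quotient estimate never overshoots, so the result never undershoots |h|N.

```python
N, M, Mp = 1009, 2048, 4096            # toy parameters N, M, M'
C = M * Mp // N                        # precomputed constant C = floor(M*M'/N)

def barrett(h):
    # h - floor(floor(h/M) * C / M') * N, congruent to h mod N, may overshoot
    return h - (h // M) * C // Mp * N

h = 987654
b = barrett(h)
assert b % N == h % N                  # B(h) ≡ h (mod N)
assert b >= h % N                      # the quotient estimate is a lower bound
```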

[0204]

Barrett reduction B(N,M,M′) to do a modular multiplication can be implemented in an RNS by the following algorithm. We write c=a⊗(N,M,M′)b to denote that c is a pseudo-residue obtained by an RNS implementation of the Barrett multiplication c=B(N,M,M′)(ab)≡ab mod N. Again, we use an extended RNS with a base RNS formed by base moduli M1, . . . , MK with dynamical range M=M1 . . . MK, and an extension RNS formed by extension moduli MK+1, . . . , MK+L with dynamical range M′=MK+1 . . . MK+L.

[0205]

1. h=xy, done via hs=|xsys|Ms=xs⊗(Ms,m,m′)ys for s=0, . . . , K+L;

[0206]

2. μi=|hi(M/Mi)−1|Mi, done via μi=hi⊗(Mi,m,m′)|(M/Mi)−1|Mi for i=1, . . . , K; now u=Σi=1K μi(M/Mi)≤ϕ1KM and p=(h−u)/M is an integer.

[0207]

3. pj=|hjM−1−Σi=1K μi|1/Mi|Mj|Mj for j=K+1, . . . , K+L;

[0208]

4. Use base extension to find the pi for i=1, . . . , K;

[0209]

5. ηj=|pjC(M′/Mj)−1|Mj, done via ηj=pj⊗(Mj,m,m′)|C(M′/Mj)−1|Mj for j=0 and for j=K+1, . . . , K+L;

[0210]

now v=Σj=K+1K+L ηj(M′/Mj)≤Lϕ1M′ and q=(pC−v)/M′=(C(h−u)/M−v)/M′ is an integer.

[0211]

6. qi=|pi|C/M′|Mi+Σj=K+1K+L ηj|−1/Mj|Mi|Mi, and hence for z=h−qN we have zi=|hi+pi|(−NC)/M′|Mi+Σj=K+1K+L ηj|N/Mj|Mi|Mi for i=0, . . . , K;

[0212]

7. Use base extension to find the zj for j=K+1, . . . , K+L.

[0213]

We need a number of moduli comparable to the Montgomery algorithm, but this method will require some extra operations (two base extensions instead of one). Bounds may now be derived that have to hold to guarantee a correctly working algorithm. The same speed-ups that can be applied to the single-digit Montgomery multiplication algorithm with RNS (same-size tables, postponed reduction, suitable choice of redundant moduli) apply here, and similar techniques apply to derive the required bounds.

[0214]

As a fourth example, we now sketch a digit-based Montgomery multiplication algorithm with an RNS. Suppose we have an RNS M1, . . . , MK with dynamical range M=M1 . . . MK and redundant modulus M0, with expansion factor φ1, say. Here we may take M>>B^s and MK=B. To compute z such that B^sz≡xy mod N, first write y in approximate B-ary form as

[0000]

y=Σi=0s−1 eiB^i−δB^s=e−δB^s

[0215]

with 0≤et<φ1B and 0≤δ<φ1 for some expansion factor φ1. Then run the following algorithm.

[0216]

1. z(−1)=0;

[0217]

2. For t=0, . . . s−1, set

[0000]


h(t)=z(t−1)+xet

[0000]


and

[0000]


z(t)=R(N,B)(h(t))=(h(t)+utN)/B,

[0000]


where

[0000]


ut≡−h(t)N−1 mod B.

[0218]

3. z′=z(s−1);

[0219]

4. z=z′−xδ.
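With exact digits (δ=0) and toy parameters, the loop in steps 1 to 4 can be sketched as follows; the digit radix B, the modulus N, and the inputs are illustrative only.

```python
N, B, s = 1009, 16, 4                  # toy modulus, digit radix B = M_K, digit count
x, y = 123, 456                        # y < B^s, written with exact digits (delta = 0)
e = [(y >> (4 * t)) & 0xF for t in range(s)]   # B-ary digits e_t of y (B = 2^4)
z = 0                                  # step 1: z^(-1) = 0
for t in range(s):                     # step 2
    h = z + x * e[t]                   # h^(t) = z^(t-1) + x * e_t
    u = -h * pow(N, -1, B) % B         # u_t ≡ -h^(t) * N^{-1} (mod B)
    z = (h + u * N) // B               # z^(t) = R_(N,B)(h^(t))
# steps 3-4 with delta = 0: z = z' = z^(s-1), and B^s * z ≡ x*y (mod N)
assert pow(B, s, N) * z % N == x * y % N
```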

[0220]

It is easily shown that, writing u=u0+u1B+ . . . +us−1B^(s−1), we have B^sz′=xe+uN and B^sz=xy+uN. Assume we have a full RNS representation x=(x0; x1, . . . , xK) for x, with pseudo-residues 0≤xi<φ1Mi satisfying xi≡x mod Mi for all i=0, 1, . . . , K, and similarly for y. Since MK=B, we can compute the “digits” et of y with the RNS, and the pseudo-residues ut=|−h(t)N−1|MK with expansion factor φ1. Hence u<φ1B^s, so if x,y<φN, then z′<φN again provided that

[0000]

φ²N/B^s+φ1≤φ;  (6)

[0221]

setting φ=φ1/ε we see that we need the bound

[0000]


N≤ε(1−ε)B^s/φ1.

[0222]

Moreover, it is easily seen that in order that all intermediate results z(t) satisfy an expansion bound z(t)<θN, it is sufficient that

[0000]


θ≥(φ+1)φ1B/(B−1).

[0223]

So as long as we have a large enough dynamical range, this method delivers a correct result within expansion factor φ.

[0224]

The above should be enough to use this method to build a multi-layer RNS system.

[0225]

An advantageous embodiment of the invention is a two-layer multi-layer RNS based on the second modular multiplication method (Montgomery based) as described above, optimized for modular multiplication with 2048-bit moduli N. It can be shown that in such a system, with bottom zero-layer moduli m0; m1, . . . , mk+l with k≈l, with top first-layer moduli M0; M1, . . . , MK+L with K≈L, and with the arithmetic modulo the bottom moduli mi implemented with table lookup for modular addition and for modular multiplication, a modular multiplication modulo N takes about 24Kk²+8K²k table lookups. Moreover, it can also be shown that with bottom moduli of size at most 2^t and with N of size 2^b, the number of table lookups is minimized by taking k≈√(b/(3t)) and K≈b/(tk), giving approximately 16√3·(b/t)^(3/2) table lookups. Taking b=2048 and t=8 gives k≈9 and K≈28. In our preferred embodiment, we take k=l=9 and K=L=32, which turns out slightly better than the above estimates.
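The parameter choice in this paragraph can be reproduced from the stated heuristic; the cost model 24Kk²+8K²k and the constraint K≈b/(tk) are taken from the text above.

```python
from math import sqrt

b, t = 2048, 8                      # 2048-bit modulus N, 8-bit bottom moduli
k = round(sqrt(b / (3 * t)))        # k ≈ sqrt(b/(3t)) minimizes the lookup count
K = round(b / (t * k))              # K ≈ b/(t*k)
lookups = 24 * K * k**2 + 8 * K**2 * k
assert (k, K) == (9, 28)            # matches the estimates in the text
```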

[0226]

For the small moduli, we take the primes

    • 191,193,197,199,211,223,227,229,233,239,241,251,

[0228]

which are the largest primes less than 256, and the composite numbers

[0229]

256=2^8, 253=11·23, 249=3·83, 247=13·19, 235=5·47, 217=7·31,

[0230]

which are the largest numbers of the form pm with m>13 prime, and which produce the largest attainable product for any list of 18 relatively prime numbers of size at most 256. Note that 255=3·5·17 is a worse choice for both 3 and 5; similarly, 245=5·7² is a worse choice for both 5 and 7; the choices for 2, 11, and 13 are evidently optimal. Note that, as a consequence, the small moduli involve as prime factors all primes of size at least 191, and further the primes 2, 3, 5, 7, 11, 13, 19, 23, 31, 47, 83. So as redundant modulus, we can take m0=17>k=9=l.

[0231]

We take ε1=k/(2k+1); in fact, even taking ε1=½ works. Then the best partition of these 18 moduli such that m′≥(1−ε1)m with m maximal turns out to be to take as base moduli

    • 256,251,249,247,241,239,235,199,197

[0233]

and as extension moduli

    • 191,193,211,217,223,227,229,233,253,

[0235]

with

[0236]

m=2097065983013254306560, m′=1153388216560035715721.

[0237]

Now the choice of the large moduli for the top layer follows. We take ε2=½, which leads to the biggest possible upper bound for the Ms, so that we need to take the large moduli such that

[0000]


Ms≤Mmax=ε1(1−ε1)m/k=57669314532864493430.

[0238]

We want to build a system to handle RSA moduli N having up to b=2048 bits; so, we also require that

[0000]


Nmax=2^2048−1≤ε2(1−ε2)M/U,

[0239]

It turns out that we need to take the K=32 largest primes below Mmax, the smallest being

    • 57669314532864492373

[0241]

in order to have M large enough. Then, to have M′≥(1−ε2)M, we need another L=32 primes, starting with the prime

    • 57669314532864491189.

[0243]

The resulting multi-layer RNS has been implemented in a computer program, both in Sage and in C/C++. The C++ program uses approximately 137000 table lookups for a 2048-bit modular multiplication, and takes less than 0.5 seconds on a normal 3 GHz laptop to compute 500 Montgomery multiplications.

[0244]

As mentioned earlier, embodiments are very suitable for exponentiation as required, for example, in RSA and Diffie-Hellman, also and especially in a white-box context. Similarly, the invention can be used in Elliptic Curve Cryptography (ECC), such as the Elliptic Curve Digital Signature Algorithm (ECDSA), to implement the required arithmetic modulo a very large prime p. The method is very suitable to implement leak-resistant arithmetic: we can easily change the moduli at the higher level just by changing some of the constants in the algorithm. Note that at the size of the big moduli (e.g., around 66 bits), there is a very large number of primes available for the choice of moduli. Other applications are situations where large integer arithmetic is required and a common RNS would have too many moduli or too big moduli.

[0245]

In the various embodiments, the input interface may be selected from various alternatives. For example, the input interface may be a network interface to a local or wide area network, e.g., the Internet, a storage interface to an internal or external data storage, a keyboard, etc.

[0246]

Typically, the device 200 comprises a microprocessor (not separately shown) which executes appropriate software stored at the device 200; for example, that software may have been downloaded and/or stored in a corresponding memory, e.g., a volatile memory such as RAM or a non-volatile memory such as Flash (not separately shown). Alternatively, the device 200 may, in whole or in part, be implemented in programmable logic, e.g., as a field-programmable gate array (FPGA). Device 200 may be implemented, in whole or in part, as a so-called application-specific integrated circuit (ASIC), i.e., an integrated circuit (IC) customized for its particular use. For example, the circuits may be implemented in CMOS, e.g., using a hardware description language such as Verilog, VHDL, etc.

[0247]

The processor circuit may be implemented in a distributed fashion, e.g., as multiple sub-processor circuits. The storage may be an electronic memory, magnetic memory etc. Part of the storage may be non-volatile, and parts may be volatile. Part of the storage may be read-only.

[0248]

FIG. 4 schematically shows an example of an embodiment of a calculating method 400.

[0249]

The method comprises a storing stage 410 in which integers are stored in multi-layer RNS format. For example, the integers may be obtained from a calculating application in which integers are manipulated, e.g., an RSA encryption or signature application, etc. The numbers may also be converted from other formats, e.g., from a radix format into RNS format.

[0250]

The method further comprises a computing stage 420 in which the product of a first integer and a second integer is computed. The computing stage comprises at least a lower multiplication part and an upper multiplication part, e.g., as described above.

[0251]

Many different ways of executing the method are possible, as will be apparent to a person skilled in the art. For example, the order of the steps can be varied or some steps may be executed in parallel. Moreover, in between steps other method steps may be inserted. The inserted steps may represent refinements of the method such as described herein, or may be unrelated to the method.

[0252]

A method according to the invention may be executed using software, which comprises instructions for causing a processor system to perform method 400. Software may only include those steps taken by a particular sub-entity of the system. The software may be stored in a suitable storage medium, such as a hard disk, a floppy, a memory, an optical disc, etc. The software may be sent as a signal along a wire, or wireless, or using a data network, e.g., the Internet. The software may be made available for download and/or for remote usage on a server. A method according to the invention may be executed using a bitstream arranged to configure programmable logic, e.g., a field-programmable gate array (FPGA), to perform the method.

[0253]

It will be appreciated that the invention also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the invention into practice. The program may be in the form of source code, object code, a code intermediate source, and object code such as partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention. An embodiment relating to a computer program product comprises computer executable instructions corresponding to each of the processing steps of at least one of the methods set forth. These instructions may be subdivided into subroutines and/or be stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer executable instructions corresponding to each of the means of at least one of the systems and/or products set forth.

[0254]

FIG. 5a shows a computer readable medium 1000 having a writable part 1010 comprising a computer program 1020, the computer program 1020 comprising instructions for causing a processor system to perform a calculating method, according to an embodiment. The computer program 1020 may be embodied on the computer readable medium 1000 as physical marks or by means of magnetization of the computer readable medium 1000. However, any other suitable embodiment is conceivable as well. Furthermore, it will be appreciated that, although the computer readable medium 1000 is shown here as an optical disc, the computer readable medium 1000 may be any suitable computer readable medium, such as a hard disk, solid state memory, flash memory, etc., and may be non-recordable or recordable. The computer program 1020 comprises instructions for causing a processor system to perform said calculating method.

[0255]

FIG. 5b shows a schematic representation of a processor system 1140 according to an embodiment. The processor system comprises one or more integrated circuits 1110. The architecture of the one or more integrated circuits 1110 is schematically shown in FIG. 5b. Circuit 1110 comprises a processing unit 1120, e.g., a CPU, for running computer program components to execute a method according to an embodiment and/or implement its modules or units. Circuit 1110 comprises a memory 1122 for storing programming code, data, etc. Part of memory 1122 may be read-only. Circuit 1110 may comprise a communication element 1126, e.g., an antenna, connectors or both, and the like. Circuit 1110 may comprise a dedicated integrated circuit 1124 for performing part or all of the processing defined in the method. Processor 1120, memory 1122, dedicated IC 1124 and communication element 1126 may be connected to each other via an interconnect 1130, say a bus. The processor system 1140 may be arranged for contact and/or contact-less communication, using an antenna and/or connectors, respectively.

[0256]

For example, in an embodiment, the calculating device may comprise a processor circuit and a memory circuit, the processor being arranged to execute software stored in the memory circuit. For example, the processor circuit may be an Intel Core i7 processor, ARM Cortex-R8, etc. The memory circuit may be a ROM circuit, or a non-volatile memory, e.g., a flash memory. The memory circuit may be a volatile memory, e.g., an SRAM memory. In the latter case, the calculating device may comprise a non-volatile software interface, e.g., a hard drive, a network interface, etc., arranged for providing the software.

[0257]

The following clauses are not the claims, but are contemplated and nonlimiting. The Applicant hereby gives notice that new claims may be formulated to such clauses and/or combinations of such clauses and/or features taken from the description or claims, during prosecution of the present application or of any further application derived therefrom.

[0258]

Clause 1. An electronic calculating device (100; 200) arranged to calculate the product of integers, the device comprising

[0259]

a storage (110) configured to store integers (210, 220) in a multi-layer residue number system (RNS) representation, the multi-layer RNS representation having at least an upper layer RNS and a lower layer RNS, the upper layer RNS being a residue number system for a sequence of multiple upper moduli (Mi), the lower layer RNS being a residue number system for a sequence of multiple lower moduli (mi), an integer (x) being represented in the storage by a sequence of multiple upper residues (xi=|x|Mi; 211, 221) modulo the sequence of upper moduli (Mi), upper residues (xj; 210.2, 220.2) for at least one particular upper modulus (Mj) being further-represented in the storage by a sequence of multiple lower residues (|xj|mi; 212, 222) of the upper residue (xj) modulo the sequence of lower moduli (mi), wherein at least one of the multiple lower moduli (mi) does not divide a modulus of the multiple upper moduli (Mj),

[0260]

a processor circuit (120) configured to compute the product of a first integer (x; 210) and a second integer (y; 220), the first and second integer being stored in the storage according to the multi-layer RNS representation, the processor being configured with at least a lower multiplication routine (131) and an upper multiplication routine (132),

[0261]

the lower multiplication routine computing the product of two further-represented upper residues (xj, yj) corresponding to the same upper modulus (Mj) modulo said upper modulus (Mj),

[0262]

the upper multiplication routine computing the product of the first and second integer by component-wise multiplication of upper residues of the first integer (xi) and corresponding upper residues of the second integer (yi) modulo the corresponding modulus (Mi), wherein the upper multiplication routine calls upon the lower multiplication routine to multiply the upper residues that are further-represented.

[0263]

Clause 2. An electronic calculating method (400) for calculating the product of integers, the method comprising

[0264]

storing (410) integers (210, 220) in a multi-layer residue number system (RNS) representation, the multi-layer RNS representation having at least an upper layer RNS and a lower layer RNS, the upper layer RNS being a residue number system for a sequence of multiple upper moduli (Mi), the lower layer RNS being a residue number system for a sequence of multiple lower moduli (mi), an integer (x) being represented in the storage by a sequence of multiple upper residues (xi=|x|Mi; 211, 221) modulo the sequence of upper moduli (Mi), upper residues (xj; 210.2, 220.2) for at least one particular upper modulus (Mj) being further-represented in the storage by a sequence of multiple lower residues (|xj|mi; 212, 222) of the upper residue (xj) modulo the sequence of lower moduli (mi), wherein at least one of the multiple lower moduli (mi) does not divide a modulus of the multiple upper moduli (Mj),

[0265]

computing (420) the product of a first integer (x; 210) and a second integer (y; 220), the first and second integer being stored in the storage according to the multi-layer RNS representation, the computing comprising at least a lower multiplication part (424) and an upper multiplication part (422),

[0266]

the lower multiplication part computing (424) the product of two further-represented upper residues (xj, yj) corresponding to the same upper modulus (Mj) modulo said upper modulus (Mj),

[0267]

the upper multiplication part computing (422) the product of the first and second integer by component-wise multiplication of upper residues of the first integer (xi) and corresponding upper residues of the second integer (yi) modulo the corresponding modulus (Mi), wherein the upper multiplication routine calls upon the lower multiplication routine to multiply the upper residues that are further-represented.

[0268]

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments.

[0269]

In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

[0270]

In the claims references in parentheses refer to reference signs in drawings of exemplifying embodiments or to formulas of embodiments, thus increasing the intelligibility of the claim. These references shall not be construed as limiting the claim.

LIST OF REFERENCE NUMERALS

[0000]

  • 100 an electronic calculating device
  • 110 a storage
  • 120 a processor circuit
  • 130 a storage
  • 131 a lower multiplication routine
  • 132 an upper multiplication routine
  • 150 a larger calculating device
  • 200 an electronic calculating device
  • 210, 220 an integer
  • 210.1-210.3 an upper residue
  • 210.2.1-210.2.3 a lower residue
  • 220.1-220.3 an upper residue
  • 220.2.1-220.2.3 a lower residue
  • 211, 221 a sequence of multiple upper residues
  • 212, 222 a sequence of multiple lower residues
  • 230 a storage
  • 242 a lower multiplication routine
  • 244 an upper multiplication routine
  • 245 a table storage
  • 310 an integer
  • 310.1-310.3 a first layer residue
  • 310.2.1-310.2.3 a second layer residue
  • 310.2.2.1 a third layer residue
  • 311 a sequence of multiple first layer residues
  • 312 a sequence of multiple second layer residues
  • 313 a sequence of multiple third layer residues



An electronic calculating device (100; 200) arranged to calculate the product of integers, the device comprising a storage (110) configured to store integers (210, 220) in a multi-layer residue number system (RNS) representation, the multi-layer RNS representation having at least an upper layer RNS and a lower layer RNS, the upper layer RNS being a residue number system for a sequence of multiple upper moduli (Mi), the lower layer RNS being a residue number system for a sequence of multiple lower moduli (mi), an integer (x) being represented in the storage by a sequence of multiple upper residues (xi=(x)Mi; 211, 221) modulo the sequence of upper moduli (Mi), upper residues (xj; 210.2, 220.2) for at least one particular upper modulus (Mj) being further-represented in the storage by a sequence of multiple lower residues ((xj)mi; 212, 222) of the upper residue (xj) modulo the sequence of lower moduli (mi), wherein at least one of the multiple lower moduli (mi) does not divide a modulus of the multiple upper moduli (Mj).
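The multi-layer representation described above can be pictured with a short sketch (Python; the moduli are hypothetical values chosen only for illustration, and any pairwise-coprime choices satisfying the stated divisibility condition would serve; the sketch is not part of the claims):

```python
UPPER = [143, 145, 149]   # upper moduli M_i, pairwise coprime (hypothetical)
LOWER = [7, 11, 13]       # lower moduli m_i; 7*11*13 = 1001 > 149, so any
                          # upper residue fits in the lower-layer range.
                          # Note that 7 does not divide the particular
                          # upper modulus 145, as the claim requires.

def to_upper_rns(x):
    # Represent x by its residues modulo the sequence of upper moduli.
    return [x % M for M in UPPER]

def to_lower_rns(xj):
    # Further-represent an upper residue by its lower-layer residues.
    return [xj % m for m in LOWER]

# Example: represent x = 12345; further-represent the residue for M_j = 145.
upper = to_upper_rns(12345)        # [47, 20, 127]
lower = to_lower_rns(upper[1])     # [6, 9, 7]
```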



1. An electronic calculating device arranged to calculate the product of integers, the device comprising

a storage configured to store integers in a multi-layer residue number system (RNS) representation, the multi-layer RNS representation having at least an upper layer RNS and a lower layer RNS, the upper layer RNS being a residue number system for a sequence of multiple upper moduli (Mi), the lower layer RNS being a residue number system for a sequence of multiple lower moduli (mi), an integer (x) being represented in the storage by a sequence of multiple upper residues (xi=(x)Mi) modulo the sequence of upper moduli (Mi), upper residues (xj) for at least one particular upper modulus (Mj) being further-represented in the storage by a sequence of multiple lower residues ((xj)mi) of the upper residue (xj) modulo the sequence of lower moduli (mi), wherein at least one of the multiple lower moduli (mi) does not divide a modulus of the multiple upper moduli (Mj),

a processor circuit configured to compute the product of a first integer (x) and a second integer (y), the first and second integer being stored in the storage according to the multi-layer RNS representation, the processor being configured with at least a lower multiplication routine and an upper multiplication routine,

the lower multiplication routine computing the product of two further-represented upper residues (xj, yj) corresponding to the same upper modulus (Mj) modulo said upper modulus (Mj),

the upper multiplication routine computing the product of the first and second integer by component-wise multiplication of upper residues of the first integer (xi) and corresponding upper residues of the second integer (yi) modulo the corresponding modulus (Mi), wherein the upper multiplication routine calls upon the lower multiplication routine to multiply the upper residues that are further-represented, wherein the upper multiplication routine is configured to receive upper residues (xi, yi) that are smaller than a predefined expansion factor times the corresponding modulus (xi,yi<φMi) and is configured to produce upper residues (zi) of the product of the received upper residues (z) that are smaller than the predefined expansion factor times the corresponding modulus (zi<φMi).
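The interplay of the two routines of claim 1 can be sketched as follows (Python; moduli are hypothetical, and the lower routine reconstructs the residue via the Chinese Remainder Theorem purely for clarity, whereas a table-driven white-box implementation would reduce on the lower residues without reconstruction; not part of the claims):

```python
from math import prod

UPPER = [143, 145, 149]   # hypothetical upper moduli M_i
LOWER = [7, 11, 13]       # hypothetical lower moduli m_i
J = 1                     # index of the further-represented modulus M_J

def crt(residues, mods):
    # Reconstruct the unique integer in [0, prod(mods)) from its residues.
    M = prod(mods)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, mods)) % M

def lower_mul(a_low, b_low):
    # Lower multiplication routine: product of two further-represented
    # upper residues modulo M_J, returned again as lower residues.
    z = (crt(a_low, LOWER) * crt(b_low, LOWER)) % UPPER[J]
    return [z % m for m in LOWER]

def upper_mul(x_up, y_up):
    # Upper multiplication routine: component-wise multiplication modulo
    # each M_i, delegating to lower_mul for the further-represented slot.
    return [lower_mul(x, y) if i == J else (x * y) % UPPER[i]
            for i, (x, y) in enumerate(zip(x_up, y_up))]

# 12 * 34 = 408; the residue for M_J = 145 is carried as lower residues.
z = upper_mul([12, [5, 1, 12], 12], [34, [6, 1, 8], 34])
```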

2. A calculating device as in claim 1, wherein the upper multiplication routine is further configured to compute the product of the first (x) and second integer (y) modulo a further modulus (N).

3. A calculating device as in claim 1, wherein the expansion factor is 2 or more.

4. A calculating device as in claim 1, wherein the lower multiplication routine is configured to compute the arithmetical product (h) of the two further-represented upper residues modulo an upper modulus (Mj) by component-wise multiplication of lower residues of the first upper residue and corresponding lower residues of the second upper residue followed by a modular reduction modulo the corresponding modulus (Mj).

5. A calculating device as in claim 4, wherein the modular reduction comprises computing the rounded-down division ⌊h/Mj⌋ of the arithmetical product (h) by the corresponding modulus (Mj).
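The reduction of claims 4 and 5 amounts to the following (Python sketch; in an actual RNS implementation the rounded-down division would itself be computed on residues, e.g. via base extension, rather than on a full-width integer; not part of the claims):

```python
def reduce_mod(h, Mj):
    # Modular reduction via the rounded-down division of claim 5:
    # z = h - floor(h / Mj) * Mj, which equals h mod Mj.
    q = h // Mj            # the rounded-down division, floor(h / Mj)
    return h - q * Mj

# Example: h = 118 * 34, a product of two residues modulo Mj = 145.
z = reduce_mod(118 * 34, 145)
```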

6. A calculating device as in claim 1, comprising a table storage wherein the lower multiplication routine comprises looking-up the product of lower residues in a modular multiplication result look-up table stored in the table storage, and wherein the look-up tables for the lower moduli are at least as large as the largest lower modulus.

7. A calculating device as in claim 1, wherein a further-represented upper residue (X) is represented in Montgomery representation (x), the Montgomery representation (x) being said upper residue (X) multiplied with a predefined Montgomery constant (m) modulo the corresponding modulus (Mj; x=mX mod Mj), the lower multiplication routine being configured to receive the two further-represented upper residues in Montgomery representation as two sequences of lower residues, and being configured to produce the product in Montgomery representation.

8. A calculating device as in claim 7, wherein the lower multiplication routine is configured to compute an integer u satisfying h+uMj=zm, for some z, wherein h=xy, and to compute z=(h+uMj)/m.
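A minimal sketch of the computation in claim 8 (Python; m denotes the Montgomery constant, assumed coprime to Mj, and the concrete values are hypothetical; not part of the claims):

```python
def montgomery_reduce(h, Mj, m):
    # Find u with h + u*Mj = z*m for some z, i.e. u = -h * Mj^(-1) mod m,
    # then recover z = (h + u*Mj) / m; the division is exact by construction.
    u = (-h * pow(Mj, -1, m)) % m
    assert (h + u * Mj) % m == 0       # division is exact
    return (h + u * Mj) // m           # z ≡ h * m^(-1) (mod Mj)

# Example: Mj = 145, Montgomery constant m = 1001 = 7*11*13.
# Montgomery forms of 20 and 30: (20*1001) % 145 = 10, (30*1001) % 145 = 15.
z = montgomery_reduce(10 * 15, 145, 1001)   # Montgomery form of 20*30 mod Mj
```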

9. A calculating device as in claim 8, wherein the lower layer RNS is an extended residue number system wherein the sequence of multiple lower moduli (m1, . . . , mK) is the base sequence, and the extended RNS has an extension sequence of further multiple lower moduli (mK+1, . . . , mL), the Montgomery constant (m) being the product of the base sequence of multiple lower moduli, wherein computing z=(h+uMj)/m is done for the extension sequence, followed by base extension to the base sequence.

10. A calculating device as in claim 9, wherein first the residues for z=(h+uMj)/m are computed with respect to the further multiple of lower moduli (mK+1, . . . , mL), and subsequently the residues for z with respect to the base sequence of lower moduli (m1, . . . , mK) are computed by base extension.

11. A calculating device as in claim 1, wherein the lower multiplication routine is configured to compute a modular sum-of-products (z=Σi=0K xici mod Mj) modulo an upper modulus (Mj) by first computing the sum of products (h=Σi=0K xidi, with di=mci) by component-wise multiplication and addition of lower residues representing the upper residues (xi) and (di), followed by a final modular reduction modulo the corresponding modulus (Mj).
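The point of claim 11, that the accumulation needs no intermediate reduction modulo Mj, can be sketched as follows (Python; the sum must stay within the lower-layer dynamic range, and the single final reduction is shown via CRT reconstruction for clarity; moduli are hypothetical and the sketch is not part of the claims):

```python
from math import prod

LOWER = [7, 11, 13]        # hypothetical lower moduli; range 7*11*13 = 1001

def mac_lower(acc, x_low, c_low):
    # One multiply-accumulate step, performed component-wise on lower
    # residues; no reduction modulo the upper modulus Mj happens here.
    return [(a + x * c) % m for a, x, c, m in zip(acc, x_low, c_low, LOWER)]

def sum_of_products_mod(xs, cs, Mj):
    # h = sum of x_i * c_i, accumulated entirely in the lower layer,
    # followed by a single final reduction modulo Mj (via CRT
    # reconstruction here; valid while h stays below the range 1001).
    acc = [0] * len(LOWER)
    for x, c in zip(xs, cs):
        acc = mac_lower(acc, [x % m for m in LOWER], [c % m for m in LOWER])
    M = prod(LOWER)
    h = sum(r * (M // m) * pow(M // m, -1, m)
            for r, m in zip(acc, LOWER)) % M
    return h % Mj

z = sum_of_products_mod([3, 4], [5, 6], 29)   # 3*5 + 4*6 = 39, then mod 29
```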

12. A calculating device as in claim 1, wherein the sequence of upper moduli comprises a redundant modulus for base-extension, the redundant modulus being the product of one or more lower moduli of the sequence of multiple lower moduli.

13. A calculating device as in claim 1, wherein a sequence of constants Hs is defined for the moduli Ms, at least for the upper layer, so that a residue xs is represented as a pseudo-residue ys such that xs=Hsys mod Ms, wherein at least one Hs differs from m−1 mod Ms.

14. An electronic calculating method for calculating the product of integers, the method comprising

storing integers in a multi-layer residue number system (RNS) representation, the multi-layer RNS representation having at least an upper layer RNS and a lower layer RNS, the upper layer RNS being a residue number system for a sequence of multiple upper moduli (Mi), the lower layer RNS being a residue number system for a sequence of multiple lower moduli (mi), an integer (x) being represented in the storage by a sequence of multiple upper residues (xi=(x)Mi) modulo the sequence of upper moduli (Mi), upper residues (xj) for at least one particular upper modulus (Mj) being further-represented in the storage by a sequence of multiple lower residues ((xj)mi) of the upper residue (xj) modulo the sequence of lower moduli (mi), wherein at least one of the multiple lower moduli (mi) does not divide a modulus of the multiple upper moduli (Mj),

computing the product of a first integer (x) and a second integer (y), the first and second integer being stored in the storage according to the multi-layer RNS representation, the computing comprising at least a lower multiplication part and an upper multiplication part,

the lower multiplication part computing the product of two further-represented upper residues (xj, yj) corresponding to the same upper modulus (Mj) modulo said upper modulus (Mj),

the upper multiplication part computing the product of the first and second integer by component-wise multiplication of upper residues of the first integer (xi) and corresponding upper residues of the second integer (yi) modulo the corresponding modulus (Mi), wherein the upper multiplication part calls upon the lower multiplication part to multiply the upper residues that are further-represented, wherein the upper multiplication part is configured to receive upper residues (xi, yi) that are smaller than a predefined expansion factor times the corresponding modulus (xi,yi<φMi) and is configured to produce upper residues (zi) of the product of the received upper residues (z) that are smaller than the predefined expansion factor times the corresponding modulus (zi<φMi).

15. A computer readable medium comprising transitory or non-transitory data representing instructions to cause a processor system to perform the method according to claim 14.