Processing Of Finite Automata Based On Memory Hierarchy
The Open Systems Interconnection (OSI) Reference Model defines seven network protocol layers (L1-L7) used to communicate over a transmission medium. The upper layers (L4-L7) represent end-to-end communications and the lower layers (L1-L3) represent local communications. Networking application aware systems need to process, filter and switch a range of L3 to L7 network protocol layers, for example, L7 network protocol layers such as, HyperText Transfer Protocol (HTTP) and Simple Mail Transfer Protocol (SMTP), and L4 network protocol layers such as Transmission Control Protocol (TCP). In addition to processing the network protocol layers, the networking application aware systems need to simultaneously secure these protocols with access and content based security through L4-L7 network protocol layers including Firewall, Virtual Private Network (VPN), Secure Sockets Layer (SSL), Intrusion Detection System (IDS), Internet Protocol Security (IPSec), Anti-Virus (AV) and Anti-Spam functionality at “wire-speed” (i.e., a rate of data transfer over a physical medium of the network over which data is transmitted and received). Network processors are available for high-throughput L2 and L3 network protocol processing, that is, performing packet processing to forward packets at wire-speed. Typically, a general purpose processor is used to process L4-L7 network protocols that require more intelligent processing. Although a general purpose processor may perform such compute intensive tasks, it may not provide sufficient performance to process the data so that the data may be forwarded at wire-speed. An Intrusion Detection System (IDS) application may inspect content of individual packets flowing through a network, and may identify suspicious patterns that may indicate an attempt to break into or compromise a system. One example of a suspicious pattern may be a particular text string in a packet followed 100 characters later by another particular text string. Such content aware networking may require inspection of the contents of packets at wire speed. The content may be analyzed to determine whether there has been a security breach or an intrusion. A large number of patterns and rules in the form of regular expressions (also referred to herein as regular expression patterns) may be applied to ensure that all security breaches or intrusions are detected. A regular expression is a compact method for describing a pattern in a string of characters. The simplest pattern matched by a regular expression is a single character or string of characters, for example, /c/ or /cat/. The regular expression may also include operators and meta-characters that have a special meaning. Through the use of meta-characters, the regular expression may be used for more complicated searches such as, “abc.*xyz.” That is, find the string “abc” followed by the string “xyz,” with an unlimited number of characters in-between “abc” and “xyz.” Another example is the regular expression “abc..abc.*xyz;” that is, find the string “abc,” followed two characters later by the string “abc,” and an unlimited number of characters later by the string “xyz.” Content searching is typically performed using a search method such as, Deterministic Finite Automata (DFA) or Non-Deterministic Finite Automata (NFA) to process the regular expression. Embodiments of the present invention provide a method, apparatus, computer program product, and corresponding system for compilation and run time processing of finite automata.
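By way of illustration, the following minimal C sketch shows the semantics of a pattern such as “abc.*xyz” using the standard POSIX <regex.h> interface; it illustrates regular expression matching in general and is not a depiction of the hardware engines described later in this document.

```c
#include <regex.h>
#include <stdio.h>

int main(void) {
    /* Compile "abc.*xyz": the string "abc", then any number of
       characters (including none), then the string "xyz".
       REG_EXTENDED selects POSIX extended regex syntax. */
    regex_t re;
    if (regcomp(&re, "abc.*xyz", REG_EXTENDED) != 0)
        return 1;

    const char *payloads[] = { "abcxyz", "abc-filler-xyz", "abxyz" };
    for (int i = 0; i < 3; i++) {
        int rc = regexec(&re, payloads[i], 0, NULL, 0);
        printf("%-16s -> %s\n", payloads[i], rc == 0 ? "match" : "no match");
    }
    regfree(&re);
    return 0;
}
```

The first two payloads match (zero and several characters between “abc” and “xyz”, respectively); the third does not, as “abc” never appears.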
According to another embodiment, in at least one processor operatively coupled to a plurality of memories mapped to hierarchical levels in a memory hierarchy in a security appliance operatively coupled to a network, a method may include walking nodes of a respective set of nodes of a given per-pattern non-deterministic finite automaton (NFA) of at least one per-pattern NFA. The given per-pattern NFA may be generated for a respective regular expression pattern. The method may include walking nodes of the respective set of nodes of the given per-pattern NFA with segments of a payload of an input stream to match the respective regular expression pattern in the input stream. The respective set of nodes may be stored amongst one or more memories of the plurality of memories based on a node distribution. The node distribution may be determined as a function of hierarchical levels mapped to the plurality of memories and per-pattern NFA storage allocation settings configured for the hierarchical levels. The respective set of nodes of the given per-pattern NFA may be statically stored amongst the one or more memories of the plurality of memories based on the node distribution. Walking nodes of the respective set of nodes with segments of the payload of the input stream may include walking from a given node to a next node of the respective set of nodes based on (i) a positive match of a given segment of the payload at the given node and (ii) a next node address associated with the given node. The next node address may be configured to identify the next node and a given memory of the plurality of memories in which the next node is stored. Each per-pattern NFA storage allocation setting, of the per-pattern NFA storage allocation settings, may be configured for a respective hierarchical level, of the hierarchical levels, to denote a target number of unique nodes of each at least one per-pattern NFA, to distribute for storing in a given memory, of the plurality of memories, that is mapped to the respective hierarchical level. The unique nodes of the respective set of nodes may be arranged in a consecutive manner within the given per-pattern NFA. The respective regular expression pattern may be a given pattern in a set of regular expression patterns. Each per-pattern NFA storage allocation setting may be configured for the respective hierarchical level in a manner enabling the given memory to provide a sufficient storage capacity for storing the target number of unique nodes denoted from each of the at least one per-pattern NFA in an event a per-pattern NFA is generated for each regular expression pattern in the set of regular expression patterns. The target number of unique nodes may be denoted via an absolute value and may be a common value for each respective set of nodes of each of the at least one per-pattern NFA, enabling each respective set of nodes to have a same value for the target number of unique nodes for storing in the given memory that is mapped to the respective hierarchical level. The target number of unique nodes may be denoted via a percentage value for applying to a respective total number of nodes of each respective set of nodes of each at least one per-pattern NFA, enabling each respective set of nodes to have a separate value for the target number of unique nodes for storing in the given memory that is mapped to the respective hierarchical level. 
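The next node address described above carries two pieces of information: which node to walk next and which memory of the plurality of memories holds it. A minimal C sketch of one possible encoding follows; the 2-bit/30-bit split and the helper names are assumptions for illustration, not a layout taken from the specification.

```c
#include <stdint.h>

/* Hypothetical encoding of a next node address: the upper bits select
   the memory (hierarchical level) in which the next node is stored and
   the lower bits index the node within that memory. */
typedef uint32_t node_addr_t;

#define MEM_ID_SHIFT  30u
#define NODE_IDX_MASK ((1u << MEM_ID_SHIFT) - 1u)

static inline unsigned mem_id(node_addr_t a)   { return a >> MEM_ID_SHIFT; }
static inline unsigned node_idx(node_addr_t a) { return a & NODE_IDX_MASK; }

static inline node_addr_t make_node_addr(unsigned mem, unsigned idx) {
    return ((node_addr_t)mem << MEM_ID_SHIFT) | (idx & NODE_IDX_MASK);
}
```

With such an encoding, walking from a given node to a next node resolves both the next node and the memory to read in a single address lookup.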
Each memory of the plurality of memories may be mapped to a respective hierarchical level, of the hierarchical levels, consecutively, in descending order, based on a performance ranking of the plurality of memories, and a highest performance ranked memory of the plurality of memories is mapped to a highest ranked hierarchical level of the hierarchical levels. The per-pattern NFA storage allocation settings may include a first per-pattern NFA storage allocation setting and a second per-pattern NFA storage allocation setting. The hierarchical levels may include a highest ranked hierarchical level and a next highest ranked hierarchical level. The first per-pattern NFA storage allocation setting may be configured for the highest ranked hierarchical level and the second per-pattern NFA storage allocation setting may be configured for the next highest ranked hierarchical level. The node distribution may be determined based on a first distribution, of the nodes of the respective set of nodes, for storing in a first memory of the plurality of memories, the first memory mapped to a highest ranked hierarchical level of the hierarchical levels, and at least one second distribution, of the nodes of the respective set of nodes, based on at least one undistributed node remaining in the respective set of nodes after a previous distribution. Each at least one second distribution may be for storing in a given memory of the plurality of memories, the given memory mapped to a given hierarchical level of the hierarchical levels that is consecutively lower, per distribution of the nodes of the respective set of nodes, than the highest ranked hierarchical level. A given distribution of the at least one second distribution may include all undistributed nodes remaining in the respective set of nodes if the given hierarchical level is a lowest ranked hierarchical level of the hierarchical levels. A number of nodes in the first distribution may be maximized and the number maximized may be limited by a respective per-pattern NFA storage allocation setting, of the per-pattern NFA storage allocation settings, configured for the highest ranked hierarchical level. A number of nodes in each at least one second distribution may be maximized and the number maximized may be limited, per distribution, by a respective per-pattern NFA storage allocation setting, of the per-pattern NFA storage allocation settings, configured for the given hierarchical level. The node distribution may be determined by the at least one processor during a compilation stage of the given per-pattern NFA. Walking may include progressing the walk based on individual nodes of the respective set of nodes matching segments of the payload. The at least one processor may be configured to determine the per-pattern NFA storage allocation settings during a compilation stage of the at least one per-pattern NFA based on a total number of regular expression patterns in a set of regular expression patterns and a desired performance metric associated with the walk. The method may include walking nodes of a unified deterministic finite automaton (DFA), stored in a given memory of the plurality of memories and generated based on at least one subpattern selected from each pattern in a set of regular expression patterns, with segments from the input stream.
Walking the respective set of nodes of the given per-pattern NFA may be based on a partial match of the respective regular expression pattern in the input stream determined via the DFA node walking. The plurality of memories may include a first memory, a second memory, and a third memory, and the first and second memories may be co-located on a chip with the at least one processor and the third memory may be an external memory located off the chip and mapped to a lowest ranked hierarchical level of the hierarchical levels. The method may include configuring a node cache in the security appliance to store at least a threshold number of nodes of the at least one finite automaton and operatively coupling the at least one processor to the node cache. The method may include caching one or more nodes, of the respective set of nodes, based on a cache miss of a given node of the one or more nodes read from a given memory of the plurality of memories and a respective hierarchical node transaction size associated with a respective hierarchical level of the hierarchical levels that is mapped to the given memory. The hierarchical node transaction size associated with the respective hierarchical level may denote a maximum number of nodes to fetch from the given memory mapped to the respective hierarchical level for a read access of the given memory. A highest ranked hierarchical level of the hierarchical levels may be associated with a smallest hierarchical node transaction size of hierarchical node transaction sizes associated with the hierarchical levels. The respective hierarchical node transaction size may enable the at least one processor to cache the threshold number of nodes from the given memory if the respective hierarchical level is a lowest ranked hierarchical level of the hierarchical levels. Caching the one or more nodes may include evicting the threshold number of nodes cached in the node cache if the respective hierarchical level is a lowest ranked hierarchical level of the hierarchical levels. Caching the one or more nodes may include employing a least recently used (LRU) or round-robin replacement policy to evict one or more cached nodes from the node cache, if the respective hierarchical level is higher than a lowest ranked hierarchical level of the hierarchical levels. A number of the one or more cached nodes evicted may be determined based on the hierarchical level. Another example embodiment disclosed herein includes an apparatus corresponding to operations consistent with the method embodiments disclosed herein. Further, yet another example embodiment may include a non-transitory computer-readable medium having stored thereon a sequence of instructions which, when loaded and executed by a processor, causes the processor to perform methods disclosed herein. The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
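As an illustration of the node-cache behavior described above, the following C sketch fetches one hierarchical node transaction size's worth of nodes on a miss and evicts via LRU. It is a simplification: the specification varies the replacement policy by hierarchical level (evicting the threshold number of nodes for the lowest ranked level, LRU or round-robin otherwise), and all sizes below are assumed values.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative node cache for a three-level hierarchy, where level 0 is
   the highest ranked (fastest) memory. txn_size[] is the hierarchical
   node transaction size: how many consecutive nodes one read of that
   level's memory returns. */
#define LEVELS      3
#define CACHE_SLOTS 64

static const unsigned txn_size[LEVELS] = { 1, 4, 16 }; /* smallest at level 0 */

struct cache_slot {
    uint32_t node_addr;  /* node held in this slot */
    uint64_t last_used;  /* LRU timestamp          */
    bool     valid;
};

static struct cache_slot cache[CACHE_SLOTS];
static uint64_t tick;

/* On a miss at 'level', fetch txn_size[level] consecutive nodes starting
   at 'addr', evicting least recently used slots to make room. */
void cache_fill_on_miss(unsigned level, uint32_t addr) {
    for (unsigned i = 0; i < txn_size[level]; i++) {
        unsigned victim = 0;
        for (unsigned s = 1; s < CACHE_SLOTS; s++) {
            if (!cache[victim].valid)
                break;                      /* empty slot: use it */
            if (!cache[s].valid || cache[s].last_used < cache[victim].last_used)
                victim = s;
        }
        cache[victim] = (struct cache_slot){
            .node_addr = addr + i, .last_used = ++tick, .valid = true };
    }
}
```

Note how the smallest transaction size at the highest ranked level limits how much of the cache a single fast-memory miss can displace.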
Before describing example embodiments of the present invention in detail, an example security application in which the embodiments may be implemented and typical processing using deterministic finite automata (DFA) and non-deterministic finite automata (NFA) are described immediately below to help the reader understand the inventive features disclosed herein. The network services processor 100 may be configured to process Open System Interconnection (OSI) network L2-L7 layer protocols encapsulated in received packets. As is well-known to those skilled in the art, the OSI reference model defines seven network protocol layers (L1-L7). The physical layer (L1) represents the actual interface, electrical and physical, that connects a device to a transmission medium. The data link layer (L2) performs data framing. The network layer (L3) formats the data into packets. The transport layer (L4) handles end to end transport. The session layer (L5) manages communications between devices, for example, whether communication is half-duplex or full-duplex. The presentation layer (L6) manages data formatting and presentation, for example, syntax, control codes, special graphics and character sets. The application layer (L7) permits communications between users, for example, file transfer and electronic mail. The network services processor 100 may schedule and queue work (e.g., packet processing operations) for upper level network protocols, for example, L4-L7, and enable processing of upper level network protocols in received packets to be performed to forward packets at wire-speed. By processing the protocols to forward the packets at wire-speed, the network services processor 100 does not slow down the network data transfer rate. The network services processor 100 may receive packets from the network interfaces 103. The network services processor 100 may deliver high application performance using a plurality of processors (i.e., cores). Each of the cores (not shown) may be dedicated to performing data plane operations, control plane operations, or a combination thereof. A data plane operation may include packet operations for forwarding packets. A control plane operation may include processing of portions of complex higher level protocols such as Internet Protocol Security (IPSec), Transmission Control Protocol (TCP), Secure Sockets Layer (SSL), or any other suitable higher level protocol. The data plane operation may include processing of other portions of these complex higher level protocols. The network services processor 100 may also include application specific co-processors that may offload the cores so that the network services processor 100 achieves high-throughput. For example, the network services processor 100 may include an acceleration unit 106 that may include a hyper non-deterministic automata (HNA) co-processor 108 for hardware acceleration of NFA processing and a hyper finite automata (HFA) co-processor 110 for hardware acceleration of DFA processing. The HNA 108 and HFA 110 co-processors may be configured to offload the network services processor 100 general purpose cores (not shown) from the heavy burden of performing compute and memory intensive pattern matching methods. The network services processor 100 may perform pattern searching, regular expression processing, content validation, transformation, and security to accelerate packet processing.
The regular expression processing and the pattern searching may be used to perform string matching for AV and IDS applications and other applications that may require string matching. A memory controller (not shown) in the network services processor 100 may control access to a memory 104 that is operatively coupled to the network services processor 100. The memory 104 may be internal (i.e., on-chip) or external (i.e., off-chip), or a combination thereof, and may be configured to store data packets received, such as packets 101. Typical content aware application processing may use either a DFA or an NFA to recognize patterns in content of received packets. DFA and NFA are both finite state machines, that is, models of computation each including a set of states, a start-state, an input alphabet (set of all possible symbols) and a transition function. Computation begins in the start-state and changes to new states dependent on the transition function. The pattern is commonly expressed using a regular expression that includes atomic elements, for example, normal text characters such as, A-Z and 0-9, and meta-characters, such as, *, ^ and |. The atomic elements of a regular expression are the symbols (single characters) to be matched. Atomic elements may be combined with meta-characters that allow concatenation, alternation (|), and Kleene-star (*). The meta-character for concatenation may be used to create multiple character matching patterns from a single character (or sub-strings) while the meta-character for alternation (|) may be used to create a regular expression that can match any of two or more sub-strings. The meta-character Kleene-star (*) allows a pattern to match any number of times, including no occurrences of the preceding character or string of characters. Combining different operators and single characters allows complex subpatterns of expressions to be constructed. For example, a subpattern such as (th(is|at)*) may match multiple character strings, such as: th, this, that, thisis, thisat, thatis, or thatat. Another example of a complex subpattern of an expression may be one that incorporates a character class construct [ . . . ] that allows listing of characters for which to search. For example, gr[ea]y looks for both grey and gray. Other complex subpattern examples are those that may use a dash to indicate a range of characters, for example, [A-Z], or a meta-character “.” that matches any one character. An element of the pattern may be an atomic element or a combination of one or more atomic elements in combination with one or more meta-characters. The input to the DFA or NFA state machine typically includes segments, such as a string of (8-bit) bytes, that is, the alphabet may be a single byte (one character or symbol), from an input stream (i.e., received packets). Each segment (e.g., byte) in the input stream may result in a transition from one state to another state. The states and the transition functions of the DFA or NFA state machine may be represented by a graph of nodes. Each node in the graph may represent a state and arcs (also referred to herein as transitions or transition arcs) in the graph may represent state transitions. A current state of the state machine may be represented by a node identifier that selects a particular node in the graph. Using DFA to process a regular expression and to find a pattern or patterns described by a regular expression in an input stream of characters may be characterized as having deterministic run time performance.
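A minimal C sketch of the "states as nodes, arcs as transitions" representation follows. The fixed 256-entry per-node transition table (one arc per input byte) is a common DFA layout assumed here for illustration; the specification does not mandate this form.

```c
#include <stdint.h>

/* One possible in-memory form for a DFA graph node: one next-node
   address per possible input byte, plus a flag for match states. */
struct dfa_node {
    uint32_t next[256];   /* next-node address for each input byte          */
    uint8_t  is_marked;   /* nonzero if reaching this node completes a match */
};

/* Walking one payload segment (byte) is then a single table lookup: */
static inline uint32_t dfa_step(const struct dfa_node *graph,
                                uint32_t cur, uint8_t byte) {
    return graph[cur].next[byte];
}
```

Because exactly one arc exists per (state, byte) pair, each input byte costs one lookup, which is the deterministic run time behavior described above.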
A next state of a DFA may be determined from an input character (or symbol), and a current state of the DFA, because there is only one state transition per DFA state. As such, run time performance of the DFA is said to be deterministic and the behavior can be completely predicted from the input. However, a tradeoff for determinism is a graph in which the number of nodes (or graph size) may grow exponentially with the size of a pattern. In contrast, the number of nodes (or graph size) of an NFA graph may be characterized as growing linearly with the size of the pattern. However, using NFA to process the regular expression, and to find a pattern or patterns described by the regular expression in the input stream of characters, may be characterized as having non-deterministic run time performance. For example, given an input character (or symbol) and a current state of the NFA, it is possible that there is more than one next state of the NFA to which to transition. As such, a next state of the NFA cannot be uniquely determined from the input and the current state of the NFA. Thus, run time performance of the NFA is said to be non-deterministic as the behavior cannot be completely predicted from the input. According to embodiments disclosed herein, content searching may be performed using DFA, NFA, or a combination thereof. According to one embodiment, a run time processor, co-processor, or a combination thereof, may be implemented in hardware and may be configured to implement a compiler and a walker. The compiler may compile a pattern or an input list of patterns (also known as signatures or rules) into the DFA, NFA, or combination thereof. The DFA and NFA may be binary data structures, such as DFA and NFA graphs and tables. The walker may perform run time processing, i.e. actions for identifying an existence of a pattern in an input stream, or matching the pattern to content in the input stream. Content may be a payload portion of an Internet Protocol (IP) datagram, or any other suitable payload in an input stream. Run time processing of DFA or NFA graphs may be referred to as walking the DFA or NFA graphs, with the payload, to determine a pattern match. A processor configured to generate DFA, NFA, or a combination thereof, may be referred to herein as a compiler. A processor configured to implement run time processing of a payload using the generated DFA, NFA, or combination thereof, may be referred to herein as a walker. According to embodiments disclosed herein, the network services processor 100 may be configured to implement a compiler and a walker in the security appliance 102. According to embodiments disclosed herein, the compiler 306 may generate the binary image 112 by processing a rule set 310 that may include a set of one or more regular expression patterns 304 and optional qualifiers 308. From the rule set 310, the compiler 306 may generate a unified DFA 312 using subpatterns selected from all of the one or more regular expression patterns and at least one NFA 314 for at least one pattern in the set of one or more regular expression patterns 304 for use by the walker 320 during run time processing, and metadata (not shown) including mapping information for transitioning the walker 320 between states (not shown) of the unified DFA 312 and states of the at least one NFA 314. 
The unified DFA 312 and the at least one NFA 314 may be represented data structure-wise as graphs, or in any other suitable form, and the mapping in the metadata may be represented data structure-wise as one or more tables, or in any other suitable form. According to embodiments disclosed herein, if a subpattern selected from a pattern is the pattern, no NFA is generated for the pattern. According to embodiments disclosed herein, each NFA that is generated may be for a particular pattern in the set, whereas a unified DFA may be generated based on all subpatterns from all patterns in the set. The walker 320 walks the unified DFA 312 and the at least one NFA 314 with a payload by transitioning states of the unified DFA 312 and the at least one NFA based on consuming (i.e., processing) segments, such as bytes from the payload in the received packets 101. The rule set 310 may include a set of one or more regular expression patterns 304 and may be in a form of a Perl Compatible Regular Expression (PCRE) or any other suitable form. PCRE has become a de facto standard for regular expression syntax in security and networking applications. As more applications requiring deep packet inspections have emerged or more threats have become prevalent in the Internet, corresponding signatures/patterns to identify virus/attacks or applications have also become more complex. For example, signature databases have evolved from having simple string patterns to regular expression (regex) patterns with wild card characters, ranges, character classes, and advanced PCRE signatures. According to embodiments disclosed herein, the compiler 306 may generate a unified DFA 312 using subpatterns 302 selected from all patterns in the set of one or more regular expression patterns 304. The compiler 306 may select subpatterns 302 from each pattern in the set of one or more regular expression patterns 304 based on at least one heuristic, as described further below. The compiler 306 may also generate at least one NFA 314 for at least one pattern 316 in the set. A portion (not shown) of the at least one pattern 316 used for generating the at least one NFA 314, and at least one walk direction for run time processing (i.e., walking) of the at least one NFA 314, may be determined based on whether a length of the subpattern selected 318 is fixed or variable and a location of the subpattern selected 318 within the at least one pattern 316. The compiler 306 may store the unified DFA 312 and the at least one NFA 314 in the at least one memory 104. The compiler may determine whether a length of the potential subpatterns selected is fixed or variable. For example, a subpattern such as “cdef” may be determined to have a fixed length of 4 as “cdef” is a string, whereas complex subpatterns including operators may be determined as having a variable length. For example, a complex subpattern such as “a.*cd[^\n]{0,10}.*y” may have “cd[^\n]{0,10}” as the subpattern selected, that may have a variable length of 2 to 12. According to embodiments disclosed herein, subpattern selection may be based on at least one heuristic. A subpattern is a set of one or more consecutive elements from a pattern, wherein each element from the pattern may be represented by a node in a DFA or NFA graph, for purposes of matching bytes or characters from the payload. An element, as described above, may be a single text character represented by a node or a character class represented by a node.
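The fixed-versus-variable length determination above can be illustrated with a small bounds computation. The (min, max) representation and helper in this C sketch are assumptions for illustration; applying it to “cd[^\n]{0,10}” reproduces the 2-to-12 range given in the text.

```c
/* Each pattern element carries (min,max) length bounds; concatenation
   sums them. The literal "cd" contributes {2,2} and the bounded
   repetition [^\n]{0,10} contributes {0,10}, so "cd[^\n]{0,10}"
   yields {2,12}: a variable length (min != max) of 2 to 12. */
struct len_bounds { unsigned min, max; };

static struct len_bounds concat(struct len_bounds a, struct len_bounds b) {
    return (struct len_bounds){ a.min + b.min, a.max + b.max };
}
/* concat((struct len_bounds){2,2}, (struct len_bounds){0,10}) == {2,12} */
```

A subpattern whose bounds collapse to min == max, such as “cdef” with {4,4}, is fixed length.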
The compiler 306 may determine which subpatterns in the pattern are better suited for NFA based on whether or not a subpattern is likely to cause excessive DFA graph explosion, as described above. As disclosed above, selecting a subpattern from each pattern in the set of one or more regular expressions 304 may be based on at least one heuristic. According to one embodiment, the at least one heuristic may include maximizing a number of unique subpatterns selected and length of each subpattern selected. For example, a pattern such as “ab.*cdef.*mn” may have multiple potential subpatterns, such as “ab.*,” “cdef,” and “.*mn”. The compiler may select “cdef” as the subpattern for the pattern because it is a largest subpattern in the pattern “ab.*cdef.*mn” that is unlikely to cause DFA graph explosion. However, the compiler may select an alternate subpattern for the pattern “ab.*cdef.*mn” if the subpattern “cdef” has already been selected for another pattern. Alternatively, the compiler may replace the subpattern “cdef” with another subpattern for the other pattern, enabling the subpattern “cdef” to be selected for the pattern “ab.*cdef.*mn.” As such, the compiler 306 may select subpatterns for the patterns 304 based on a context of possible subpatterns for each of the patterns 304, enabling maximization of the number of unique subpatterns selected and length of each subpattern selected. As such, the compiler 306 may generate a unified DFA 312 from the subpatterns selected 302 that minimizes a number of false positives (i.e., no match or partial match) in pattern matching of the at least one NFA 314 by increasing the probability of a pattern match in the at least one NFA 314. By maximizing subpattern length, false positives in NFA processing may be avoided. False positives in NFA processing may result in non-deterministic run time processing and, thus, may reduce run time performance. Further, by maximizing a number of unique subpatterns selected, the compiler 306 enables a 1:1 transition between the unified DFA and the at least one NFA 314 generated from a pattern in the set given a match of a subpattern (from the pattern) in the unified DFA. For example, if the subpattern selected was shared by multiple patterns, then a walker of the unified DFA would need to transition to multiple NFAs because each at least one NFA is a per-pattern NFA, and the subpattern match from the unified DFA signifies a partial match for each of the multiple patterns. As such, maximizing the number of unique subpatterns reduces a number of DFA:NFA 1:N transitions, reducing run time processing by the walker 320. To enable maximizing the number of unique subpatterns, the compiler 306 may compute a hash value 326 of the subpattern selected 318 and store the hash value computed 326 in association with an identifier (not shown) of a pattern 316 from which the subpattern 318 was selected. For example, the compiler 306 may, for each pattern in the set 304, compute a hash value of the subpattern selected. The hash values computed 324 may be stored in the at least one memory 104 as a table, or in any suitable manner. The hash method used may be any suitable hash method. The compiler may compare the hash value computed to a list of hash values of subpatterns selected for other patterns in the set, in order to determine whether or not the subpattern selected is unique.
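A C sketch of the uniqueness check follows. The FNV-1a hash and linear-scan table are illustrative choices only; the text requires merely "any suitable hash method" and a comparison against hashes of subpatterns selected for other patterns.

```c
#include <stdint.h>

/* FNV-1a: a simple, well-known string hash, used here only as a stand-in. */
static uint64_t fnv1a(const char *s) {
    uint64_t h = 1469598103934665603ull;
    while (*s) { h ^= (uint8_t)*s++; h *= 1099511628211ull; }
    return h;
}

/* One entry per subpattern already selected: its hash, stored in
   association with an identifier of the pattern it was selected from. */
struct sel { uint64_t hash; int pattern_id; };

/* Returns the pattern_id that already owns this subpattern, or -1 if
   the subpattern is unique so far. */
int find_duplicate(const struct sel *tab, int n, const char *subpattern) {
    uint64_t h = fnv1a(subpattern);
    for (int i = 0; i < n; i++)
        if (tab[i].hash == h) return tab[i].pattern_id;
    return -1;
}
```

A non-negative return identifies the other pattern in the set, which is exactly the input needed for the replacement decision described next.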
If the hash value computed is found in the list, the compiler may determine whether to replace (i) the subpattern selected with another subpattern from the pattern or (ii) the subpattern selected for another pattern in the set with an alternate subpattern selected from the other pattern in the set. The other pattern in the set may be identified based on an association with the hash value computed in the list. The determination for whether to replace (i) or (ii) may be based on comparing lengths of subpatterns being considered for the replacement in order to maximize lengths of the unique subpatterns being selected, as described above. Replacing a subpattern selected may include selecting a next longest subpattern identified for a given pattern, or a next highest prioritized subpattern. For example, potential subpatterns may be prioritized based on a likelihood of resulting in DFA explosion or a magnitude of the DFA explosion expected. According to embodiments disclosed herein, the at least one heuristic may include identifying subpatterns of each pattern and disregarding a given subpattern of the subpatterns identified of each pattern, if the given subpattern has a length less than a minimum threshold. For example, to reduce false positives in the at least one NFA, the compiler may disregard subpatterns with lengths less than the minimum threshold because such subpatterns may result in a higher probability of a false positive in the at least one NFA. The at least one heuristic may include accessing a knowledge base (not shown) of subpatterns associated with historical frequency of use indicators and disregarding a given subpattern of the subpatterns identified of each pattern, if a historical frequency of use indicator for the given subpattern in the knowledge base accessed is greater than or equal to a frequency use threshold. For example, application or protocol specific subpatterns may have a high frequency of use, such as, for HyperText Transfer Protocol (HTTP) payloads, “carriage return line feed”, or clear traffic such as multiple consecutive 0s from binary files, or any other frequently used subpattern. The at least one heuristic may include identifying subpatterns of each pattern and, for each pattern, maximizing a number of consecutive text characters in the subpattern selected by selecting a given subpattern of the subpatterns identified based on the given subpattern having a largest number of consecutive text characters of the subpatterns identified and based on the given subpattern being unique among all subpatterns selected for the set of one or more regular expressions. As disclosed above, maximizing length of the subpattern selected may enable a higher probability of a match in the at least one NFA. The at least one heuristic may include prioritizing given subpatterns of each pattern based on a subpattern type of each of the given subpatterns and lengths of the given subpatterns. The subpattern type may be text only, alternation, single character repetition, or multi-character repetition, and a priority order from highest to lowest for the subpattern type may be text only, alternation, single character repetition, and multi-character repetition. As such, subpatterns that are text strings having a length of at least a minimum length threshold may be prioritized higher than complex subpatterns of variable length. The compiler 306 may prioritize a longer length subpattern over another subpattern of lesser length.
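The priority order above can be expressed as a comparator; the enum values and function in this C sketch are illustrative, not from the specification.

```c
/* Priority order from the text: text only > alternation > single
   character repetition > multi-character repetition; ties broken by
   preferring the longer subpattern. */
enum sub_type { MULTI_CHAR_REP = 0, SINGLE_CHAR_REP = 1,
                ALTERNATION = 2, TEXT_ONLY = 3 };

struct subpattern { enum sub_type type; unsigned length; };

/* Returns negative if a should be preferred over b. */
int by_priority(const struct subpattern *a, const struct subpattern *b) {
    if (a->type != b->type)
        return (int)b->type - (int)a->type;  /* higher type ranks first */
    return (int)b->length - (int)a->length;  /* then longer ranks first */
}
```

Sorting each pattern's candidate subpatterns with such a comparator yields the "next highest prioritized subpattern" needed when a replacement is required.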
The compiler 306 may select a unique subpattern as the subpattern selected, based on the prioritizing. As described above, the unique subpattern selected may have a length of at least a minimum length threshold. The compiler 306 may select a non-unique subpattern as the subpattern selected, based on the prioritizing, if none of the given subpatterns are unique and have a length of at least the minimum length threshold. As such, the compiler 306 may select a subpattern from a pattern that is a duplicate of a subpattern selected from another pattern rather than select a subpattern having a length less than the minimum threshold. To facilitate finalizing of subpatterns, the compiler 306 may perform multiple passes over the patterns and sort possible subpatterns by length. As such, compiler subpattern selection for a given pattern in the set of one or more regular expressions 304 may be performed within a context of subpattern selection for other patterns in the set of one or more regular expressions 304. As described above, the qualifiers 322 may indicate that reporting of a start offset is desired. However, the start offset may not be easily discernible. For example, finding a start offset in a payload matching patterns such as “a.*b” or “a.*d” may be difficult given a payload such as “axycamb” because two matches may be present for “a.*b,” namely “axycamb” and “amb.” As such, offsets for both instances of “a” in the payload may need to be tracked as potential start offsets. According to embodiments disclosed herein, potential start offsets need not be tracked, as the start offset is not determined until a match of the entire pattern is determined to have been found in a payload. The match of the entire pattern may be found utilizing match results from the unified DFA, the at least one NFA, or a combination thereof. According to embodiments disclosed herein, if a payload in the received packets 101 includes content that matches a subpattern selected 318 from a pattern 316, the walker may transition to walk at least one NFA for the pattern 316. The walker 320 may report a match of the subpattern selected 318 and an offset that identifies a location in the received packets of the last character of the matching subpattern as an end offset for the subpattern in the payload. A subpattern match may be a partial match for the pattern if the subpattern is a subset of the pattern. As such, the walker 320 may continue the search for the remainder of the pattern in the payload by walking at least one NFA for the pattern, in order to determine a final match for the pattern. It should be understood that the pattern may traverse one or more payloads in the received packets 101. As disclosed above, the compiler 306 may generate the unified DFA 312 and the at least one NFA 314 to enable the walker 320 to search for matches of one or more regular expression patterns 304 in received packets 101. According to embodiments disclosed herein, the HNA 108 may be configured to read at least one instruction 453 from an instruction queue 454. The instruction queue 454 may be configured to store the at least one instruction 453 that may be sent by a host (not shown) to be processed by the HNA 108.
The at least one instruction 453 may include at least one job, such as S1 459. A given job of the at least one job may indicate a given NFA of the at least one NFA 314, at least one given node of the given NFA, at least one given offset in a given payload, as well as at least one walk direction, each at least one walk direction corresponding to one node of the at least one given node. Each at least one job may include results of processing by the HFA, enabling the HNA to advance a match in the given NFA for a given pattern of the at least one pattern 304 that corresponds to the given subpattern. As such, each job represents partial match results determined by the HFA co-processor 110 in order to advance the match of the given pattern by the HNA co-processor 108. The HNA 108 may process the at least one instruction 453 by reading at least one pointer (not shown), or other suitable instruction information, stored therein. The at least one pointer may include an input buffer pointer (not shown) to an input buffer 458. The at least one instruction 453 may also include a payload pointer (not shown) to a payload 462, a result buffer pointer (not shown) to a match result buffer 466, a save buffer pointer (not shown) to a save buffer 464, and a run stack pointer (not shown) to a run stack 460. The input buffer 458, run stack 460, and the save buffer 464 may be referred to herein as an input stack, run stack, and save stack, respectively, although the input buffer 458, run stack 460, and save buffer 464 may or may not exhibit the Last In First Out (LIFO) properties of a stack. The input buffer 458, run stack 460, and save buffer 464 may be located within a same or different physical buffer. If located within the same physical buffer, entries of the input stack 458, run stack 460, and save stack 464 may be differentiated based on a field setting of the entries, or differentiated in any other suitable manner. The input stack 458 and the run stack 460 may be located in the same physical buffer that may be on-chip and the save buffer 464 may be located in another physical buffer that may be off-chip. The at least one job, such as S1 459, may be stored as an entry of the input buffer 458. The HNA 108 may be configured to load (i.e., fetch or retrieve) at least one job from the input buffer 458, such as the job S1 459. The HNA 108 may load (i.e., fetch) the graph 457 from the graph memory 456 that may be included in the binary image 112. The HNA 108 may process the graph 457 using payload segments from the payload 462, pushing and popping entries to/from the run stack 460 to save and resume its place in the graph 457. For example, the HNA 108 may need to save its place in the graph if a walked node presents multiple options for a next node to walk. For example, the HNA 108 may walk a node that presents multiple processing path options, such as a fork represented in the graph. According to embodiments disclosed herein, nodes of a DFA or NFA may be associated with a node type. Nodes associated with a split type may present multiple processing path options. The split node type is further disclosed below. According to embodiments disclosed herein, the HNA 108 may be configured to select a given path, of the multiple processing paths, and push an entry to the run stack 460 that may enable the HNA 108 to return and proceed along the unselected path, of the multiple processing paths, based on determining a mismatch (i.e., negative) result at a walked node along the selected path.
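The pointers and job fields described above might be laid out as follows; all field names and widths in this C sketch are assumptions, since the specification identifies which pointers an instruction may carry but not a concrete encoding.

```c
#include <stdint.h>

/* Illustrative layout of an HNA instruction, following the pointers
   described in the text. */
struct hna_instruction {
    uint64_t input_buffer_ptr;   /* -> jobs (e.g., S1, S2, ...)            */
    uint64_t payload_ptr;        /* -> payload to walk                      */
    uint64_t result_buffer_ptr;  /* -> match result buffer                  */
    uint64_t save_buffer_ptr;    /* -> save stack for cross-payload state   */
    uint64_t run_stack_ptr;      /* -> run stack for unexplored context     */
};

/* Illustrative job entry: the HFA's partial match result, telling the
   HNA where to pick up in which per-pattern NFA. */
struct hna_job {
    uint32_t nfa_id;         /* which per-pattern NFA to walk               */
    uint32_t start_node;     /* node at which to begin/resume the walk      */
    uint32_t payload_offset; /* offset of the first segment to consume      */
    uint8_t  walk_direction; /* walk direction corresponding to the node    */
};
```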
As such, pushing the entry on the run stack 460 may save a place in the graph 457 that represents unexplored context. The unexplored context may indicate a given node of the graph 457 and a corresponding payload offset to enable the HNA 108 to return to the given node and walk the given node with the given segment of the payload 462, as the given segment may be located at the corresponding payload offset in the payload 462. As such, the run stack 460 may be used to enable the HNA 108 to remember and later walk an unexplored path of the graph 457. Pushing or storing an entry that indicates a given node and a corresponding offset in a given payload may be referred to herein as storing unexplored context, a thread, or an inactive thread. Popping, fetching, or loading an entry that indicates the given node and the corresponding offset in the given payload in order to walk the given node with a segment located at the corresponding offset in the given payload may be referred to herein as activating a thread. Discarding an entry that indicates the given node and the corresponding offset in the given payload may be referred to herein as flushing an entry or retiring a thread. The run stack 460 may enable the HNA 108 to save its place in the graph 457 in an event that an end of the payload 462 is reached while walking segments of the payload 462 with the graph 457. For example, the HNA 108 may determine that the payload or a portion of the payload 462 is partially matching a given pattern and that a current payload offset of the payload 462 is an end offset of the payload 462. As such, the HNA 108 may determine that only a partial match of the given pattern was found and that the entire payload 462 was processed. As such, the HNA 108 may save the run stack 460 content to the save buffer 464 to continue a walk with a next payload corresponding to a same flow as the payload 462 that was processed. The save buffer 464 may be configured to store at least one run stack entry of the run stack 460, mirroring a running state of the run stack 460 in an event the entire payload 462 is processed. Based on finding a final (i.e., entire or complete) match of the pattern, the HNA may pop and discard entries in the run stack 460 that are associated with the current job, for example, the job loaded from the input buffer, such as S1 459. The match results may include a node address associated with a node at which the final match of the pattern was determined. The node at which the final match of the pattern was determined may be referred to herein as a marked node. The node address, or other identifier of a final match location in the graph 457, identifier of the matching pattern, length of the matching pattern, or any other suitable match results or a combination thereof, may be included in the match results. Based on processing all of the run stack entries associated with the current job, the HNA 108 may load a next job from the run stack that has been previously loaded from the input buffer 458 (e.g., S2 459). Based on finding a mismatch of the payload 462 while walking the graph 457 with the payload 462, the HNA 108 may pop an entry from the run stack 460 that is associated with the current job (e.g., S1 459). In the example embodiment, the input stream may include a packet (not shown) with a payload 542. The regular expression pattern 502 is a pattern “h[^\n]*ab” that specifies the character “h” followed by an unlimited number of consecutive characters not matching a newline character (i.e., [^\n]*).
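A C sketch of the run stack mechanics follows: pushing saves the unselected branch of a split node as unexplored context, and popping activates the stored thread. Sizes and names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* An entry records a node and the payload offset at which to resume,
   so an unselected branch (unexplored context) can be walked later. */
struct run_entry {
    uint32_t node;    /* node to return to                     */
    uint32_t offset;  /* payload offset to resume consuming at */
};

#define RUN_STACK_MAX 1024
static struct run_entry run_stack[RUN_STACK_MAX];
static int run_top = 0;

/* Save the unselected path of a split node (store a thread). */
bool push_unexplored(uint32_t node, uint32_t offset) {
    if (run_top == RUN_STACK_MAX) return false;
    run_stack[run_top++] = (struct run_entry){ node, offset };
    return true;
}

/* Activate a stored thread, or report the stack empty. */
bool pop_thread(struct run_entry *out) {
    if (run_top == 0) return false;
    *out = run_stack[--run_top];
    return true;
}
```

On a mismatch, the walker pops and resumes from the stored node and offset; on a final match, remaining entries for the job may simply be popped and discarded (the thread is retired), and at end of payload the surviving entries would be copied to the save buffer.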
The unlimited number may be zero or more. The pattern 502 further includes the characters “a” and “b” consecutively following the unlimited number of characters not matching the newline character. In the example embodiment, the payload 542 includes segments 522. It should be understood that the regular expression pattern 502, NFA graph 504, payload 542, and segments 522 are for illustrative purposes. In the example embodiment, the NFA graph 504 is a per-pattern NFA graph configured to match the regular expression pattern 502 to the input stream. For example, the NFA graph 504 may be a graph including a plurality of nodes generated by the compiler 306, such as nodes N0 506, N1 508, N2 510, N3 512, N4 514, and N5 515. The node N0 506 may represent a starting node for the pattern 502, and the node N5 515 may represent a marked node for the pattern 502. The marked node N5 515 may be associated with an indicator (not shown) that reflects a final (i.e., entire or complete) match of the pattern 502 matched to the input stream. As such, the walker 320 may determine that the pattern 502 is matching in the input stream based on traversing the marked node N5 515 and detecting the indicator. The indicator may be a flag or field setting of metadata (not shown) associated with the marked node or any other suitable indicator. According to embodiments disclosed herein, the walker 320 may walk the segments 522 through the NFA graph 504. The nodes N0 506, N2 510, N3 512, and N4 514 may be configured to match a respective element to a given segment of the payload 542, whereas nodes N1 508 and N5 515 may be nodes of a node type indicating no matching functionality, and, thus, would not process a segment from the payload 542. In the example embodiment, node N1 508 is a split node presenting multiple transition path options to the walker 320. For example, walking the split node N1 508 presents epsilon paths 530. According to embodiments disclosed herein, the split node 508 may be associated with split node metadata (not shown) to present the multiple path options. For example, the split node metadata may indicate, either directly or indirectly, multiple next nodes, such as the nodes N2 510 and N3 512, in the example embodiment. If the multiple next nodes are indicated directly, the metadata may include absolute addresses or pointers to the next nodes N2 510 and N3 512. If the multiple next nodes are indicated indirectly, the metadata may include indices or offsets that may be used to resolve absolute addresses of the next nodes N2 510 and N3 512 or pointers to the next nodes N2 510 and N3 512. Alternatively, other suitable forms for directly or indirectly indicating next node addresses of the multiple next nodes may be used. An implicit understanding between the compiler 306 and the walker 320 may include configuring the walker 320 to select a given next node of multiple next nodes based on node metadata included in a particular entry location within the split node metadata. The compiler 306 may be configured to generate the split node metadata including an indication of the given next node at the designated entry location.
As such, the implicit understanding may determine that a given path, such as the upper epsilon path 530, is the path selected first by the walker 320. As shown in the table 538, the processing cycles 540 illustrate the walk of the NFA graph 504 with the payload 542. The walker 320 may determine that the match result 534 is a positive match result as the segment 522 matches the element of the walked node. As the split node N1 508 presents multiple transition path options, such as the epsilon paths 530, the walker 320 may select a given path of the multiple path options. Since the split node N1 508 presents multiple path options, the action 536 may include storing unexplored context, such as by storing an indirect or direct identifier of the node N3 512 and the current offset 520. Storing the unexplored context may enable the walker 320 to remember to return to the node N3 512 to walk the node N3 512 with the segment “1” at the offset 520. For example, based on reaching the marked node N5 515 that indicates the final (i.e., complete or entire) match for the pattern 502 in the input stream, the walker 320 may utilize the DUP indicator to determine whether to process the unexplored context by walking the node N3 512 with the segment “x” at the offset 520. Whether or not a stored thread is traversed may be determined by the compiler 306. For example, the compiler 306 may control whether or not the DUP indicator is set by configuring a setting in corresponding metadata for each node. Alternatively, the compiler 306 may configure a global setting included in global metadata associated with the finite automata, specifying that all stored threads are to be traversed, enabling all possible matches to be identified. In the example embodiment, the selection of the epsilon transition path 530 leaves the other epsilon transition path untraversed. Storing the untraversed transition path may include pushing an entry on a stack, such as the run stack 460. According to the example embodiment, the walker 320 may select the upper path (i.e., the epsilon transition path 530). The walker 320 may transition and walk the node N3 512 with the segment “x” located at the offset 520. Since all arcs transitioning from the split node 508 are epsilon transitions, the walker 320 may again select a path of the multiple path options and does not consume (i.e., process) a segment from the payload 542, as the current offset is not updated for the processing cycle 540. As such, for the processing cycle 540, no segment of the payload 542 is consumed. Embodiments disclosed herein may enable optimized match performance due to the combined DFA and NFA type processing disclosed above. For example, embodiments disclosed above may reduce a number of false positives in NFA processing as the NFA processing may be based on partial matches identified via the DFA processing. Further, because embodiments disclosed herein include per-rule (i.e., per-pattern) NFAs that may be identified by the DFA processing, embodiments disclosed herein further optimize match performance. As disclosed above, the DFA 312 is a unified DFA and each at least one NFA 314 is a per-pattern NFA. Walking payload through the unified DFA 312 by the HFA 110 may be considered a first parsing block that marks starting points of patterns (intermediate matches) and provides the starting point to the at least one NFA 314 that may continue the walk from the mark to determine a final match. For example, based on the partial match results determined by processing segments of payloads of an input stream through the unified DFA 312, the walker 320 may determine that a given number of rules (i.e., patterns) of the rule set 310 need to be processed further, and the HFA 110 may produce pattern match results that may be converted into the given number of NFA walks as each at least one NFA 314 is a per-pattern NFA.
The packets 101 may be pre-screened by the HFA 110, and the HNA 108 may enable a determination of whether partial matches 618 reported by the HFA 110 advance to final matches, for example, as disclosed above. In addition to such pre-screening of packets by the HFA 110 that may reduce a number of false positives for NFA processing, embodiments disclosed herein may further optimize match performance by distributing nodes of each per-pattern NFA to memories in a memory hierarchy based on node locality. Since each NFA may be a per-pattern NFA, embodiments disclosed herein may advantageously distribute nodes of each per-pattern NFA to memories in a hierarchy based on an understanding that the longer the rule (i.e., pattern) the less likely it is that nodes generated from portions at the end of the rule (i.e., pattern) are to be accessed (i.e., walked or traversed). By storing earlier nodes of each of the per-pattern NFAs in relatively faster (i.e., higher performance) memories, embodiments disclosed herein may further optimize match performance. It should be understood that because such node distribution may be based on a hierarchical level to memory mapping, nodes may be advantageously distributed based on the hierarchical levels mapped, enabling any suitable distribution that optimizes match performance to be utilized. As disclosed above, the at least one NFA 314, such as the per-pattern NFA 504, may have nodes distributed amongst memories of a memory hierarchy. For example, match performance of the walker 320 may be optimized based on storing consecutive nodes, such as the nodes N0 506, N1 508, N2 510, and N3 512 of the section 509 of the per-pattern NFA 504, in a higher performance memory. Embodiments disclosed herein may be based on an understanding that earlier nodes of a per-pattern NFA graph, such as the nodes N0 506, N1 508, N2 510 and N3 512 of the per-pattern NFA graph 504, may have a higher likelihood of being traversed than the nodes N4 514 and N5 515 because the nodes N4 514 and N5 515 are located towards the end of the rule (i.e., pattern) 502, and thus, require that more of the payload be matched in order to be walked (i.e., traversed). As such, earlier nodes of a per-pattern NFA, such as the NFA 504, or any other suitable per-pattern NFA graph, may be considered to be “high touch” nodes that may be accessed on a more frequent basis due to false positives than “low touch” nodes that are more likely only to be accessed in an event a complete match of the pattern occurs. According to embodiments disclosed herein, the compiler 306 may distribute nodes of each per-pattern NFA to memories in a hierarchy based on the understanding of which nodes in each per-pattern NFA are considered “high touch” nodes and which are considered to be “low touch” nodes. Such an understanding may be used to “pre-cache” (i.e., statically store) nodes of each per-pattern NFA by distributing the nodes to memories in a memory hierarchy enabling an improved match performance. For example, “high touch” nodes may be distributed to faster memories based on the understanding that the “high touch” nodes will be accessed (i.e., walked or traversed) more frequently due to their locality within the per-pattern NFA. In general, regular expression access patterns of a unified NFA, generated based on a set of regular expression patterns, may be random as such access patterns may be based on the particular payload. Thus, a history of regular expression access patterns cannot be used to predict further regular expression access patterns.
For example, caching a most recently traversed node of a unified NFA may provide no performance benefit to a walker because a next node accessed within the unified NFA may not be the cached node. According to embodiments disclosed herein, the unified DFA 312 may be statically stored in a given memory of the graph memories 456, whereas the at least one NFA 314 may have nodes distributed and statically stored across the graph memories 456, as the compiler 306 may target distributions of particular NFA nodes for storing in particular memories for optimizing walker match performance. According to embodiments disclosed herein, the graph memories 456 may be in a memory hierarchy 743 that may include a plurality of hierarchical levels 708. The compiler 306 may map the hierarchical levels 708 to the graph memories 456. The RAM memory may be mapped to the highest ranked hierarchical level 708. As disclosed above, locality of nodes of a per-pattern NFA may be taken advantage of by the smart compiler 306 by storing NFA nodes generated from earlier portions of a given pattern in faster memories. Further, since the probability of a match of the given pattern is already higher since a partial match of the given pattern was determined by the DFA processing of the HFA 110, such embodiments combine to optimize match performance. For example, as disclosed above, DFA processing may be used to reduce a number of false positives found by NFA processing. Since each NFA may be a per-pattern NFA, nodes of each per-pattern NFA may be advantageously distributed across a plurality of memories based on a mapping of the plurality of memories to hierarchical levels of the memory hierarchy 743. For example, smaller NFAs generated from relatively shorter length patterns may have all nodes distributed to a first level and stored in a first memory that is mapped to the first level, whereas larger NFAs generated from relatively longer patterns may have a first portion of nodes distributed to the first level and remaining portions distributed amongst remaining levels. The first level may be a highest ranked level that is mapped to a highest performance memory. As such, earlier nodes of the per-pattern NFAs may be stored in the highest performance memory. Since earlier nodes may have a higher likelihood of being traversed due to a false positive, embodiments disclosed herein may enable a majority of false positives to be handled via accesses to memories mapped to higher levels in the memory hierarchy 743. According to embodiments disclosed herein, match performance may be optimized by limiting a number of accesses to the memory 756 that is mapped to a lowest ranked hierarchical level. According to embodiments disclosed herein, per-pattern NFA storage allocation settings 710 may be configured for the hierarchical levels 708 to denote, for each hierarchical level, a target number of unique nodes of each per-pattern NFA to distribute for storing in the memory mapped to that level. It should be understood that the hierarchical level to memory mapping may be inherently understood by the compiler and, as such, may obviate the specific hierarchical levels 708. The highest ranked memory 756 may be a highest performance memory of the graph memories 456. A respective hierarchical node transaction size 723 may be associated with each of the hierarchical levels 708. A first portion of nodes 804 of a given per-pattern NFA may be distributed for storing in the highest ranked memory 756. The compiler 306 may distribute nodes of each per-pattern NFA as part of generating each per-pattern NFA. As disclosed above, a transition in the NFA from a first node to a second node may be specified via first node metadata that identifies the second node via a next node address.
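Putting the level mapping, allocation settings 710, and transaction sizes 723 together, a three-level configuration might look like the following C sketch, consistent with the constraints stated earlier (the highest ranked level has the smallest hierarchical node transaction size, and the lowest ranked level is effectively unbounded). The concrete numbers are assumptions.

```c
#include <limits.h>

/* alloc_nodes is the per-pattern NFA storage allocation setting (target
   unique nodes from EACH per-pattern NFA at that level); UINT_MAX stands
   in for the effectively unbounded lowest ranked level (e.g., external
   system memory). txn_nodes is the hierarchical node transaction size. */
struct level_cfg {
    unsigned alloc_nodes;
    unsigned txn_nodes;
};

static const struct level_cfg hierarchy[3] = {
    { 4,        1  },  /* level 0: highest ranked, fastest, on-chip  */
    { 16,       4  },  /* level 1: next highest ranked, on-chip      */
    { UINT_MAX, 16 },  /* level 2: lowest ranked, external memory    */
};
```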
The compiler 306 may distribute nodes of each per-pattern NFA as part of generating each per-pattern NFA. As disclosed above, a transition in the NFA from a first node to a second node may be specified via metadata of the first node that identifies the second node via a next node address. According to embodiments disclosed herein, the next node address may be configured by the compiler 306 to include a portion that indicates a given memory of the plurality of memories to which the second node has been distributed for storing. Nodes of a given per-pattern NFA may be distributed in a consecutive manner. The consecutive manner may include distributing nodes, from a plurality of nodes of a given per-pattern NFA of the at least one per-pattern NFA, that represent a given number of consecutive elements of a given regular expression pattern for which the given per-pattern NFA was generated. Further, according to embodiments disclosed herein, each at least one second distribution includes at least one next node identified via a next node address included in metadata associated with at least one previous node that was distributed in an immediately preceding distribution. The method may begin (1102) and set a given hierarchical level to a highest ranked hierarchical level in a memory hierarchy (1104). The method may set a given per-pattern NFA to a first per-pattern NFA of at least one NFA generated from a set of one or more regular expression patterns (1106). The method may check for a number of undistributed nodes of the given per-pattern NFA (1108). If the number of undistributed nodes of the given per-pattern NFA is null, the method may check if the given per-pattern NFA is a last NFA generated from the set of one or more regular expression patterns (1116). If the given per-pattern NFA is the last per-pattern NFA generated, the method may check if the given hierarchical level is a lowest ranked hierarchical level (1120), and if the given hierarchical level is the lowest ranked hierarchical level, the method thereafter ends (1126) in the example embodiment. However, if the check for whether the given hierarchical level is the lowest ranked hierarchical level (1120) is no, the method may set the given hierarchical level to a next consecutively lower hierarchical level (1124), again set the given per-pattern NFA to the first per-pattern NFA of the at least one NFA generated from the set of one or more regular expression patterns (1106), and proceed to check for a number of undistributed nodes of the given per-pattern NFA (1108). If the number of undistributed nodes of the given per-pattern NFA is null, the method may proceed as disclosed above. If the check for the number of undistributed nodes of the given per-pattern NFA (1108) is non-zero, the method may check if the given hierarchical level is the lowest ranked hierarchical level (1110). If yes, the method may distribute the number of undistributed nodes to a given memory that is mapped to the given hierarchical level (1114) and check if the given per-pattern NFA is the last NFA generated from the set of one or more regular expression patterns (1116). If yes, the method may proceed as disclosed above. If no, the method may set the given per-pattern NFA to the next per-pattern NFA generated (1118) and iterate to check again for the number of undistributed nodes of the given per-pattern NFA (1108), which was updated to the next per-pattern NFA generated. If the check for whether the given hierarchical level is the lowest ranked hierarchical level (1110) is no, the method may check if the number of undistributed nodes of the given per-pattern NFA exceeds a number of nodes denoted by a per-pattern NFA storage allocation setting configured for the given hierarchical level (1112).
If yes, the method may distribute the number of nodes denoted by the per-pattern NFA storage allocation setting configured for the given hierarchical level for storing in the given memory that is mapped to the given hierarchical level (1122) and check whether the given per-pattern NFA is the last NFA generated from the set of one or more regular expression patterns (1116). If yes, the method may proceed as disclosed above. If the check for whether the given per-pattern NFA is the last per-pattern NFA generated (1116) is no, the method may set the given per-pattern NFA to the next per-pattern NFA generated (1118) and iterate to check again for the number of undistributed nodes of the given per-pattern NFA (1108), which was updated to the next per-pattern NFA generated. If, however, the check for whether the number of undistributed nodes of the given per-pattern NFA exceeds a number of nodes denoted by a per-pattern NFA storage allocation setting configured for the given hierarchical level (1112) is no, the method may distribute the number of undistributed nodes to the given memory that is mapped to the given hierarchical level (1114) and proceed as disclosed above. According to embodiments disclosed herein, the per-pattern NFA storage allocation settings may denote a target number of unique nodes via an absolute value. The absolute value may be a common value for each respective set of nodes, enabling each respective set of nodes to have a same value for the target number of unique nodes for storing in the given memory that is mapped to the respective hierarchical level. Alternatively, the target number of unique nodes may be denoted via a percentage value for applying to a respective total number of nodes of each respective set of nodes, enabling each respective set of nodes to have a separate value for the target number of unique nodes for storing in the given memory that is mapped to the respective hierarchical level. For example, if a percentage value such as 25% were configured for the per-pattern NFA storage allocation setting 1010 of a given hierarchical level, then 25% of the respective total number of nodes of each per-pattern NFA would be targeted for storing in the given memory that is mapped to the given hierarchical level. The per-pattern NFA storage allocation settings may include a first per-pattern NFA storage allocation setting and a second per-pattern NFA storage allocation setting. The hierarchical levels may include a highest ranked hierarchical level and a next highest ranked hierarchical level. The first per-pattern NFA storage allocation setting may be configured for the highest ranked hierarchical level. The second per-pattern NFA storage allocation setting may be configured for the next highest ranked hierarchical level. The first per-pattern NFA storage allocation setting may be less than the second per-pattern NFA storage allocation setting. For example, a number of nodes from each per-pattern NFA that are denoted for distribution to a highest performance memory may be less than a number of nodes denoted for a lowest performance memory, such as a system memory, that may have an infinite number denoted. Embodiments disclosed herein may maximize a number of nodes in a given distribution, and the number maximized may be limited by a respective per-pattern NFA storage allocation setting, of the per-pattern NFA storage allocation settings, configured for a given hierarchical level. For example, a number of nodes denoted by a per-pattern NFA storage allocation setting may be ten. As such, each per-pattern NFA that includes ten or more undistributed nodes would have ten nodes distributed, whereas each per-pattern NFA that includes fewer than ten undistributed nodes would have its respective number of undistributed nodes distributed.
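The distribution just described may be summarized by the following hedged C sketch. The level count, allocation settings, per-pattern NFA sizes, and the distribute() helper are hypothetical; the loop structure mirrors the checks referenced above (1104-1126), under the assumption that the lowest ranked hierarchical level accepts all remaining nodes.

```c
/* Sketch of the per-level node distribution loop (1102-1126).
 * distribute() is a hypothetical stand-in for storing a run of
 * consecutive nodes into the memory mapped to a level. */
#include <stdio.h>
#include <limits.h>

#define NUM_LEVELS 3
#define NUM_NFAS   3

/* per-pattern NFA storage allocation setting per hierarchical level;
 * the lowest ranked level takes every node that remains (1110/1114). */
static const int alloc[NUM_LEVELS] = { 8, 64, INT_MAX };

static void distribute(int nfa, int level, int count)
{
    printf("NFA %d: %d node(s) -> memory of level %d\n", nfa, count, level);
}

int main(void)
{
    int undistributed[NUM_NFAS] = { 4, 20, 100 };  /* example NFA sizes */

    /* Outer loop: 1104/1124 walk levels from highest to lowest rank. */
    for (int level = 0; level < NUM_LEVELS; level++) {
        /* Inner loop: 1106/1118 walk every per-pattern NFA. */
        for (int nfa = 0; nfa < NUM_NFAS; nfa++) {
            int n = undistributed[nfa];
            if (n == 0)                  /* 1108: nothing left         */
                continue;
            if (level == NUM_LEVELS - 1 || n <= alloc[level]) {
                distribute(nfa, level, n);        /* 1110/1112 -> 1114 */
                undistributed[nfa] = 0;
            } else {
                distribute(nfa, level, alloc[level]);  /* 1112 -> 1122 */
                undistributed[nfa] -= alloc[level];
            }
        }
    }
    return 0;                            /* 1126: all levels processed */
}
```

Running the sketch with the example sizes places the 4-node NFA entirely at level 0, splits the 20-node NFA 8/12 across levels 0 and 1, and splits the 100-node NFA 8/64/28 across the three levels.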
As disclosed above, a walker, such as the walker 320, may walk nodes of a per-pattern NFA with segments of a payload of an input stream. As such, the walker 320 may be configured to walk nodes of the respective set of nodes of a per-pattern NFA 314 that may be distributed and stored amongst one or more memories of the plurality of memories 756. The walker 320 may be configured to walk from a given node to a next node of the respective set of nodes based on (i) a positive match of a given segment of the payload at the given node and (ii) a next node address associated with the given node. The next node address may be configured to identify the next node and a given memory of the plurality of memories, such as the plurality of memories 756, in which the next node is stored. For example, the metadata associated with the node N2 510 may include a next node address that is an address of the node N4 514, or a pointer or index or any other suitable identifier that identifies the next node N4 514 to traverse based on the positive match at the node N2 510. The metadata associated with the node N2 510 may further identify a given memory of the plurality of memories in which the next node N4 514 is stored. The given memory may be identified in any suitable manner, such as by configuration of particular bits stored in conjunction with and as part of the next node address (not shown) of the next node N4 514. As such, the walker 320 may be configured to fetch the next node N4 514 from the given memory identified via the next node address associated with the given node N2 510 in order to walk the next node N4 514 with a next segment at a next offset, such as the next segment 522.
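As one hedged illustration of identifying the storing memory via bits of the next node address, consider the following C sketch. The 2-bit memory-select field, the 30-bit node index, and the helper name make_next_addr() are assumptions; as noted above, the disclosure leaves the exact encoding open to any suitable manner.

```c
/* Sketch of a next node address carrying both a node index and bits
 * selecting the memory that stores the node.  The field widths
 * (2 memory-select bits, 30 index bits) are assumptions. */
#include <stdio.h>
#include <stdint.h>

#define MEM_SHIFT 30
#define MEM_MASK  0x3u            /* 2 bits select one of 4 memories */
#define IDX_MASK  ((1u << MEM_SHIFT) - 1)

static uint32_t make_next_addr(uint32_t mem, uint32_t idx)
{
    return (mem << MEM_SHIFT) | (idx & IDX_MASK);
}

int main(void)
{
    /* e.g., metadata of node N2 identifying next node N4 in memory 1 */
    uint32_t next = make_next_addr(1, 4);
    printf("memory=%u node=N%u\n",
           (unsigned)((next >> MEM_SHIFT) & MEM_MASK),
           (unsigned)(next & IDX_MASK));
    return 0;
}
```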
According to embodiments disclosed herein, the next node N4 514 may be cached in a node cache, such as the node cache 451. If a fetch of the node N4 514 results in a cache miss, the HNA 108 may fetch the node N4 514 from the given memory that has the node N4 514 statically stored and also cache the node N4 514 in the node cache 451. Based on a hierarchical node transaction size associated with a hierarchical level of the given memory, the HNA 108 may cache additional nodes from the given memory. The node N4 514 and any additional nodes cached may be arranged in a consecutive manner in a respective per-pattern NFA. For example, based on the hierarchical node transaction size associated with the hierarchical level of the given memory, the HNA 108 may cache the node N5 515, which is arranged in a consecutive manner with the node N4 514 in the per-pattern NFA 504. According to embodiments disclosed herein, a respective hierarchical node transaction size (not shown) may be associated with each of the hierarchical levels 708. The hierarchical node transaction size may be denoted in any suitable manner, such as by specifying a maximum number of nodes directly, or by specifying a number of bits that may be a multiple of the size of the maximum number of nodes denoted. According to embodiments disclosed herein, the node cache 451 may be organized as multiple lines. Each line may be sized based on a node bit size and may include additional bits for use by the HNA 108. Each line may be a minimum quantum (i.e., granularity) of a transaction from each of the plurality of memories. According to embodiments disclosed herein, a highest ranked memory may be a memory that is co-located on a chip with the HNA 108. The highest ranked memory may be a highest performance memory relative to other memories of the plurality of memories. The highest performance memory may have the fastest read and write access times. A transaction size, for example, a size of the quantum of data read from the highest performance memory, may be one or two lines, and the one or two lines may include one or two nodes, respectively. In contrast, a lowest ranked hierarchical level may be mapped to a lowest performance memory of the plurality of memories. The lowest performance memory may have the slowest read and write access times, that is, relatively longer read and write access times in comparison with the other memories of the plurality of memories. For example, the slowest performance memory may be a largest memory, such as an external memory that is not located on a chip with the HNA 108. As such, a number of read accesses to such a memory may be advantageously reduced by having a larger transaction size, such as four lines, per read access. According to embodiments disclosed herein, the hierarchical node transaction size associated with the lowest ranked hierarchical level may be configured such that one or more lines of the node cache 451 are evicted and replaced by one or more lines fetched from the respective memory that is mapped to the lowest ranked hierarchical level. The one or more lines may be determined based on the one or more lines storing the threshold number of nodes. As such, the respective hierarchical node transaction size may enable the HNA 108 to cache the threshold number of nodes from the given memory if the respective hierarchical level is a lowest ranked hierarchical level of the hierarchical levels. Likewise, the HNA 108 may be configured to evict the threshold number of nodes cached in the node cache 451 if the respective hierarchical level is a lowest ranked hierarchical level of the hierarchical levels. According to embodiments disclosed herein, the node cache 451 may be configured to cache a threshold number of nodes. The threshold number of nodes may be a largest number of nodes that may be read based on a largest transaction size over all transaction sizes associated with the plurality of memories. For example, the largest transaction size over all transaction sizes of the plurality of memories may be a given transaction size that is associated with a lowest ranked hierarchical level that may be mapped, for example, to an external memory that is not co-located on a chip with the HNA 108. Caching the one or more nodes in the node cache 451 may be based on (i) a cache miss of a given node of the one or more nodes read from a given memory of the plurality of memories and (ii) a respective hierarchical node transaction size associated with a respective hierarchical level of the hierarchical levels that is mapped to the given memory. The hierarchical node transaction size associated with the respective hierarchical level may denote a maximum number of nodes to fetch from the given memory mapped to the respective hierarchical level for a read access of the given memory. As disclosed above, the HNA 108 may be configured to employ an LRU or round-robin replacement policy to evict one or more cached nodes from the node cache 451. According to embodiments disclosed herein, if the respective hierarchical level mapped to the given memory is higher than a lowest ranked hierarchical level of the hierarchical levels, a total number of the one or more cached nodes evicted may be determined based on the hierarchical level. For example, if the hierarchical level is associated with a hierarchical node transaction size of one, the total number of cached nodes evicted by the node cache may be one, and the entry evicted may be determined based on the LRU or round-robin replacement policy. The total number of one is for illustrative purposes, and it should be understood that any suitable hierarchical node transaction sizes may be used.
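Purely as a sketch of the cache-miss handling described above, the following C program models a small node cache in which a miss fetches a run of consecutive nodes whose length equals the hierarchical node transaction size of the level owning the missed node, with least-recently-used lines evicted to make room. The eight-line cache, the line layout, and the helper names are assumptions; as noted above, a round-robin policy could equally be used.

```c
/* Sketch of node-cache miss handling: the hierarchical node
 * transaction size of the missed node's level decides how many
 * consecutive nodes are read; LRU lines are evicted to make room. */
#include <stdio.h>

#define CACHE_LINES 8

struct line { int node; int last_used; int valid; };
static struct line cache[CACHE_LINES];
static int clock_ticks;

static int lookup(int node)
{
    for (int i = 0; i < CACHE_LINES; i++)
        if (cache[i].valid && cache[i].node == node) {
            cache[i].last_used = ++clock_ticks;
            return 1;                      /* cache hit  */
        }
    return 0;                              /* cache miss */
}

static void insert(int node)
{
    int victim = 0;
    for (int i = 1; i < CACHE_LINES; i++)  /* LRU victim selection */
        if (!cache[i].valid ||
            (cache[victim].valid &&
             cache[i].last_used < cache[victim].last_used))
            victim = i;
    cache[victim] = (struct line){ node, ++clock_ticks, 1 };
}

/* On a miss, fetch txn_nodes consecutive nodes, as denoted by the
 * hierarchical node transaction size of the node's memory level. */
static void fetch(int node, int txn_nodes)
{
    if (lookup(node))
        return;
    for (int i = 0; i < txn_nodes; i++)
        insert(node + i);
}

int main(void)
{
    fetch(4, 1);   /* node N4 from on-chip memory: one-line read       */
    fetch(40, 4);  /* node from external memory: four-line transaction */
    printf("miss on N4 cached 1 node; miss on N40 cached 4 nodes\n");
    return 0;
}
```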
A plurality of nodes of the per-pattern NFA 504 may be stored in a plurality of memories, such as the memories 756. In the example embodiment, the NFA processing by the HNA 108 results in a determination by the walker 320 that the match result 1334 is a positive match result, as the segment 1322 matches at the node walked. As the split node N1 508 presents multiple transition path options, such as the epsilon paths 530, the action 1336 may include storing unexplored context, such as by storing an indirect or direct identifier of the node N3 512 and the current offset 1320. In the example embodiment, the walker 320 may select the epsilon transition path 530 and, upon a subsequent mismatch, pop an entry from the run stack 460. The entry popped may be a most recently pushed entry, such as a stored entry pushed in an earlier processing cycle 1340. The walker 320 may transition and walk the node N3 512 with the segment “y” located at the offset 1320. Since all arcs transitioning from the split node 508 are epsilon transitions, the walker 320 may again select a path of the multiple path options and does not consume (i.e., process) a segment from the payload 1342, as the current offset is not updated for the processing cycle 1340. Since “x” does not match at the node N2 510, the walker 320 may again pop an entry from the run stack 460; the entry popped may be a most recently pushed entry. The walker 320 may continue to walk segments of the payload 1342 through the per-pattern NFA 504, as indicated by the subsequent processing cycles 1340. In the example embodiment, walking segments of the payload 1342 through the per-pattern NFA graph 504 may include identifying a mismatch at the node N3 512 and selecting the lazy path at the split node N1 508 by selecting the upper epsilon path 530. According to embodiments disclosed herein, employing a node cache, such as the node cache 451, may further optimize such a walk. As disclosed above, earlier nodes, such as the nodes N0 506, N1 508, N2 510, and N3 512 included in the section 509 of the per-pattern NFA 504, may be stored in a highest performance memory. In the example embodiment, a hierarchical node transaction size associated with the highest ranked hierarchical level 708 may enable multiple consecutive nodes to be cached per fetch. In the example embodiment, traversing the node N0 506 for a given processing cycle 1340 may result in the nodes N0 506, N1 508, N2 510, and N3 512 being cached in the node cache 451. As a result, the walker 320 may access the nodes N1 508, N2 510, and N3 512 from the node cache 451 for subsequent processing cycles 1340.
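Returning to the unexplored-context handling at the split node N1 508 described above, the following minimal C sketch models pushing and popping run stack entries such as those of the run stack 460. The stack depth, the struct layout, and the example node and offset values are illustrative assumptions only.

```c
/* Sketch of unexplored-context handling at a split node: one epsilon
 * path is walked, the other is pushed with the current offset and
 * popped on a later mismatch.  The stack layout is an assumption. */
#include <stdio.h>

struct ctx { int node; int offset; };   /* unexplored path + offset */

static struct ctx run_stack[32];
static int top;

static void push_ctx(int node, int offset)
{
    run_stack[top++] = (struct ctx){ node, offset };
}

static int pop_ctx(struct ctx *out)
{
    if (top == 0)
        return 0;                        /* stack empty: walk ends    */
    *out = run_stack[--top];             /* most recently pushed entry */
    return 1;
}

int main(void)
{
    struct ctx resume;

    /* At split node N1 508: select the upper epsilon path toward
     * N2 510 and save the alternative (N3 512) with current offset. */
    push_ctx(3, 7);

    /* Later, a segment mismatch at N2 510: resume the saved path
     * without consuming a payload segment. */
    if (pop_ctx(&resume))
        printf("resume at N%d, offset %d\n", resume.node, resume.offset);
    return 0;
}
```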
As such, in addition to the pre-screening of packets by the HFA 110 that may reduce a number of false positives for NFA processing by the HNA 108, embodiments disclosed herein may further optimize match performance by caching nodes during a walk of nodes of per-pattern NFAs that have nodes distributed to memories in a memory hierarchy based on node locality within a respective per-pattern NFA. As disclosed above, embodiments disclosed herein may advantageously distribute nodes of each per-pattern NFA to memories in a memory hierarchy based on an understanding that the longer the rule (i.e., pattern), the less likely it is that nodes generated from portions at the end of the rule are to be accessed (i.e., walked or traversed). Further, according to embodiments disclosed herein, a node cache may be advantageously sized based on a maximum transaction size granularity of a plurality of memories to further optimize match performance by reducing a number of accesses to slower performing memories. In addition, embodiments disclosed herein with regard to a hierarchical node transaction size further optimize match performance by enabling efficient use of a limited number of entries in a node cache, by enabling a total number of cached node entries to be determined based on a given transaction (i.e., read access) size associated with a hierarchical level. Further example embodiments disclosed herein may be configured using a computer program product; for example, controls may be programmed in software for implementing example embodiments disclosed herein. Further example embodiments disclosed herein may include a non-transitory computer-readable medium containing instructions that may be executed by a processor, and, when executed, cause the processor to complete methods described herein. It should be understood that elements of the block and flow diagrams described herein may be implemented in software, hardware, firmware, or other similar implementation determined in the future. In addition, the elements of the block and flow diagrams described herein may be combined or divided in any manner in software, hardware, or firmware. It should be understood that the term “herein” is transferable to an application or patent incorporating the teachings presented herein such that the subject matter, definitions, or data carries forward into the application or patent making the incorporation. If implemented in software, the software may be written in any language that can support the example embodiments disclosed herein. The software may be stored in any form of computer readable medium, such as random access memory (RAM), read only memory (ROM), compact disk read-only memory (CD-ROM), and so forth. In operation, a general purpose or application-specific processor loads and executes software in a manner well understood in the art. It should be understood further that the block and flow diagrams may include more or fewer elements, be arranged or oriented differently, or be represented differently. It should be understood that implementation may dictate the block, flow, and/or network diagrams and the number of block and flow diagrams illustrating the execution of embodiments of the invention. While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims. At least one processor may be operatively coupled to a plurality of memories and a node cache and configured to walk nodes of a per-pattern non-deterministic finite automaton (NFA). Nodes of the per-pattern NFA may be stored amongst one or more of the plurality of memories based on a node distribution determined as a function of hierarchical levels mapped to the plurality of memories and per-pattern NFA storage allocation settings configured for the hierarchical levels, optimizing run time performance of the walk. 1. A security appliance operatively coupled to a network, the security appliance comprising:
a plurality of memories configured to store nodes of at least one finite automaton, the at least one finite automaton including a given per-pattern non-deterministic finite automaton (NFA) of at least one per-pattern NFA, the given per-pattern NFA generated for a respective regular expression pattern and including a respective set of nodes; and
at least one processor operatively coupled to the plurality of memories and configured to walk nodes of the respective set of nodes with segments of a payload of an input stream to match the respective regular expression pattern in the input stream, the respective set of nodes stored amongst one or more memories of the plurality of memories based on a node distribution determined as a function of hierarchical levels mapped to the plurality of memories and per-pattern NFA storage allocation settings configured for the hierarchical levels.
2. The security appliance of
3. The security appliance of
4. The security appliance of
5. The security appliance of
6. The security appliance of
7. The security appliance of
8. The security appliance of
9. The security appliance of
10. The security appliance of
11. The security appliance of
a first distribution, of the nodes of the respective set of nodes, for storing in a first memory of the plurality of memories, the first memory mapped to a highest ranked hierarchical level of the hierarchical levels; and
at least one second distribution, of the nodes of the respective set of nodes, based on at least one undistributed node remaining in the respective set of nodes after a previous distribution, each at least one second distribution for storing in a given memory of the plurality of memories, the given memory mapped to a given hierarchical level of the hierarchical levels, consecutively lower, per distribution of the nodes of the respective set of nodes, than the highest ranked hierarchical level.
12. The security appliance of
13. The security appliance of
14. The security appliance of
15. The security appliance of
16. The security appliance of
17. The security appliance of
18. The security appliance of
19. The security appliance of
20. The security appliance of
21. The security appliance of
22. The security appliance of
23. The security appliance of
24. The security appliance of
25. The security appliance of
26. A method comprising:
in at least one processor operatively coupled to a plurality of memories mapped to hierarchical levels in a memory hierarchy in a security appliance operatively coupled to a network:
walking nodes of a respective set of nodes of a given per-pattern non-deterministic finite automaton (NFA) of at least one per-pattern NFA generated for a respective regular expression pattern with segments of a payload of an input stream to match the respective regular expression pattern in the input stream, the respective set of nodes stored amongst one or more memories of the plurality of memories based on a node distribution determined as a function of hierarchical levels mapped to the plurality of memories and per-pattern NFA storage allocation settings configured for the hierarchical levels.
27. The method of
28. The method of
29. The method of
30. The method of
31. The method of
32. The method of
33. The method of
34. The method of
35. The method of
36. The method of
a first distribution, of the nodes of the respective set of nodes, for storing in a first memory of the plurality of memories, the first memory mapped to a highest ranked hierarchical level of the hierarchical levels; and
at least one second distribution, of the nodes of the respective set of nodes, based on at least one undistributed node remaining in the respective set of nodes after a previous distribution, each at least one second distribution for storing in a given memory of the plurality of memories, the given memory mapped to a given hierarchical level of the hierarchical levels, consecutively lower, per distribution of the nodes of the respective set of nodes, than the highest ranked hierarchical level.
37. The method of
38. The method of
39. The method of
40. The method of
41. The method of
42. The method of
43. The method of
44. The method of
45. The method of
46. The method of
47. The method of
48. The method of
49. The method of
50. The method of
51. A non-transitory computer-readable medium having stored thereon a sequence of instructions which, when loaded and executed by a processor operatively coupled to a plurality of memories, causes the processor to:
walk nodes of a respective set of nodes of at least one per-pattern non-deterministic finite automaton (NFA) generated for a single regular expression pattern with segments of a payload of an input stream to match the respective regular expression pattern in the input stream, the respective set of nodes stored amongst one or more memories of the plurality of memories based on a node distribution determined as a function of hierarchical levels mapped to the plurality of memories and per-pattern NFA storage allocation settings configured for the hierarchical levels.