US20070220607A1 - Determining whether to quarantine a message - Google Patents

Info

Publication number
US20070220607A1
Authority
US
United States
Prior art keywords
message
virus
quarantine
messages
exit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/635,921
Inventor
Craig Sprosts
Scot Kennedy
Daniel Quinlan
Larry Rosenstein
Charles Slater
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US11/635,921 priority Critical patent/US20070220607A1/en
Publication of US20070220607A1 publication Critical patent/US20070220607A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/107Computer-aided management of electronic mailing [e-mailing]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21Monitoring or handling of messages
    • H04L51/212Monitoring or handling of messages using filtering or selective blocking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21Monitoring or handling of messages
    • H04L51/234Monitoring or handling of messages for tracking messages
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/12Applying verification of the received information
    • H04L63/123Applying verification of the received information received data contents, e.g. message integrity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/12Applying verification of the received information
    • H04L63/126Applying verification of the received information the source of the received data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441Countermeasures against malicious traffic
    • H04L63/145Countermeasures against malicious traffic the attack involving the propagation of malware through the network, e.g. viruses, trojans or worms
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00Network arrangements, protocols or services for addressing or naming
    • H04L61/45Network directories; Name-to-address mapping
    • H04L61/4505Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols
    • H04L61/4511Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols using domain name system [DNS]

Definitions

  • the present invention generally relates to detecting threats in electronic messages such as computer viruses, spam, and phishing attacks.
  • the invention relates more specifically to techniques for responding to new occurrences of threats in electronic messages, managing a quarantine queue of threat-bearing messages, and scanning messages for threats.
  • executable attachments now serve as a carrier of virus code. For example, of 17 leading virus outbreaks in the last three years, 13 viruses were sent through email attachments. Twelve of the 13 viruses sent through email attachments were sent through dangerous attachment types. Thus, some enterprise network mail gateways now block all types of executable file attachments.
  • virus writers are now hiding executables.
  • virus writers are hiding known dangerous file types in files that appear to be innocent.
  • a virus writer may embed executables within .zip files of the type generated by WinZIP and other archive utilities.
  • Such .zip files are very commonly used by enterprises to compress and share larger files, so most enterprises are unwilling or unable to block .zip files. It is also possible to embed executables in Microsoft Word and some versions of Adobe Acrobat.
  • FIG. 1 is a block diagram of a system for managing computer virus outbreaks, according to an embodiment.
  • FIG. 2 is a flow diagram of a process of generating a count of suspicious messages, as performed by a virus information source, according to an embodiment.
  • FIG. 3 is a data flow diagram illustrating processing of messages based on virus outbreak information, according to an embodiment.
  • FIG. 4 is a flow diagram of a method of determining a virus score value, according to an embodiment.
  • FIG. 5 is a flow diagram illustrating application of a set of rules for managing virus outbreaks according to an embodiment.
  • FIG. 6 is a block diagram that illustrates a computer system upon which an embodiment may be implemented.
  • FIG. 7 is a block diagram of a system that may be used in approaches for blocking “spam” messages, and for other kinds of email scanning processes.
  • FIG. 8 is a graph of time versus the number of machines infected in a hypothetical example virus outbreak.
  • FIG. 9 is a flow diagram of an approach for rescanning messages that may contain viruses.
  • FIG. 10 is a block diagram of message flow model in a messaging gateway that implements the logic described above.
  • FIG. 11 is a flow diagram of a process of performing message threat scanning with an early exit approach.
  • a method comprising receiving an electronic mail message having a destination address for a recipient account; determining a virus score value for the message based upon one or more rules that specify attributes of messages that are known to contain computer viruses, wherein the attributes comprise a type of file attachment to the message, a size of the file attachment, and one or more heuristics based on the message sender, subject or body and other than file attachment signatures; when the virus score value is greater than or equal to a specified threshold, storing the message in a quarantine queue without immediately delivering the message to the recipient account.
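The threshold test in this claim can be sketched as follows. This is an illustrative model only, not the patented implementation; the class name, threshold value, and example messages are invented for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class QuarantineGateway:
    threshold: float = 3.0                      # hypothetical threshold value
    quarantine_queue: list = field(default_factory=list)
    delivered: list = field(default_factory=list)

    def handle(self, message: str, virus_score: float) -> None:
        # Quarantine when the score meets or exceeds the threshold;
        # otherwise deliver to the recipient account as usual.
        if virus_score >= self.threshold:
            self.quarantine_queue.append(message)
        else:
            self.delivered.append(message)

gw = QuarantineGateway()
gw.handle("invoice.exe attached", 4.2)   # suspicious: held in quarantine
gw.handle("meeting notes", 0.5)          # benign: delivered normally
```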
  • the invention provides a method comprising receiving an electronic mail message having a destination address for a recipient account; determining a threat score value for the message; when the threat score value is greater than or equal to a specified threat threshold, storing the message in a quarantine queue without immediately delivering the message to the recipient account; releasing the message from the quarantine queue in other than first-in-first-out order upon any of a plurality of quarantine exit criteria, wherein each quarantine exit criterion is associated with one or more exit actions; and upon a particular exit criterion, selecting and performing the associated one or more exit actions.
  • the invention provides a method comprising receiving and storing a plurality of rules specifying characteristics of electronic messages that indicate threats associated with the messages, wherein each rule has a priority value, wherein each rule is associated with a message element type; receiving an electronic mail message having a destination address for a recipient account, wherein the message comprises a plurality of message elements; extracting a first message element; determining a threat score value for the message by matching only the first message element to only selected rules having a message element type corresponding to the first message element, and according to an order of the priorities of the selected rules; when the threat score value is greater than a specified threshold, outputting the threat score value.
  • a messaging gateway can suspend delivery of messages early in a virus outbreak, providing sufficient time for updating an anti-virus checker that can strip virus code from the messages.
  • a dynamic and flexible threat quarantine queue is provided with a variety of exit criteria and exit actions that permits early release of messages in other than first in, first-out order.
  • a message scanning method is described in which early exit from parsing and scanning can occur by matching threat rules only to selected message elements and stopping rule matching as soon as a match on one message element exceeds a threat threshold.
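The early-exit idea above can be sketched in simplified form: rules are grouped by message element type and applied in priority order, and scanning stops as soon as the accumulated score crosses the threat threshold. The rule set, element names, and threshold below are invented for illustration:

```python
def early_exit_score(elements, rules, threshold):
    """elements: {element_type: text}; rules: (priority, element_type,
    predicate, score) tuples. Returns (score, exited_early)."""
    total = 0.0
    for priority, etype, predicate, score in sorted(rules, key=lambda r: r[0]):
        text = elements.get(etype)
        if text is not None and predicate(text):
            total += score
            if total >= threshold:
                return total, True   # early exit: remaining rules are skipped
    return total, False

rules = [
    (1, "subject", lambda s: "urgent" in s.lower(), 2.0),
    (2, "attachment_name", lambda s: s.lower().endswith(".exe"), 3.0),
    (3, "body", lambda s: "click here" in s.lower(), 1.0),  # never reached here
]
score, exited = early_exit_score(
    {"subject": "URGENT update", "attachment_name": "patch.exe", "body": "hi"},
    rules, threshold=4.0)
```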
  • the invention encompasses a computer apparatus and a machine-readable medium configured to carry out the foregoing steps.
  • FIG. 1 is a block diagram of a system for managing computer virus outbreaks, according to an embodiment.
  • a virus sender 100 whose identity and location are typically unknown, sends a message infected with a virus, typically in an electronic message, or email, with a virus-bearing executable file attachment, to public network 102 , such as the Internet.
  • the message is either addressed to, or propagates by action of the virus to, a plurality of destinations such as virus information source 104 and spamtrap 106 .
  • a spamtrap is an email address or an email mailbox used to collect information about unsolicited email messages.
  • the operation and implementation of virus information source 104 and spamtrap 106 are discussed in further detail below.
  • FIG. 1 shows only two destinations in the form of virus information source 104 and spamtrap 106 , but in a practical embodiment there may be any number of such sources of virus information.
  • the virus sender 100 may obtain network addresses of virus information source 104 and spamtrap 106 from public sources, or by sending the virus to a small number of known addresses and letting the virus propagate.
  • a virus information processor 108 is communicatively coupled to public network 102 and can receive information from the virus information source 104 and spamtrap 106 .
  • Virus information processor 108 implements certain functions described further herein including collecting virus information from virus information source 104 and spamtrap 106 , generating virus outbreak information, and storing the virus outbreak information in a database 112 .
  • a messaging gateway 107 is coupled, directly or indirectly through a firewall 111 or other network elements, from public network 102 to a private network 110 that includes a plurality of end stations 120 A, 120 B, 120 C.
  • Messaging gateway 107 may be integrated with a mail transfer agent 109 that processes email for private network 110 , or the mail transfer agent may be deployed separately.
  • an IronPort Messaging Gateway Appliance such as model C60, C30, or C10, commercially available from IronPort Systems, Inc., San Bruno, Calif., may implement mail transfer agent 109 , firewall 111 , and the functions described herein for messaging gateway 107 .
  • messaging gateway 107 includes virus information logic 114 for obtaining virus outbreak information from virus information processor 108 and processing messages destined for end stations 120 A, 120 B, 120 C according to policies that are set at the messaging gateway.
  • the virus outbreak information can include any of a number of types of information, including but not limited to, a virus score value and one or more rules that associate virus score values with message characteristics that are associated with viruses.
  • virus information logic may be integrated with a content filter function of messaging gateway 107 .
  • virus information logic 114 is implemented as an independent logical module in messaging gateway 107 .
  • Messaging gateway 107 invokes virus information logic 114 with message data and receives a verdict in response.
  • the verdict may be based on message heuristics.
  • Message heuristics score messages and determine the likelihood that a message is a virus.
  • Virus information logic 114 detects viruses based in part on parameters of messages.
  • virus detection is performed based upon any one or more of: heuristics of mail containing executable code; heuristics of mismatched message headers; heuristics of mail from known Open Relays; heuristics of mail having mismatched content types and extensions; heuristics of mail from dynamic user lists, blacklisted hosts, or senders known to have poor reputations; and sender authenticity test results.
  • Sender authenticity tests results may be generated by logic that receives sender ID values from public networks.
  • Messaging gateway 107 may also include an anti-virus checker 116 , a content filter 118 , and anti-spam logic 119 .
  • the anti-virus checker 116 may comprise, for example, Sophos anti-virus software.
  • the content filter 118 provides logic for restricting delivery or acceptance of messages that contain content in a message subject or message body that is unacceptable according to a policy associated with private network 110 .
  • the anti-spam logic 119 scans inbound messages to determine if they are unwanted according to a mail acceptance policy, such as whether the inbound messages are unsolicited commercial email, and the anti-spam logic 119 applies policies to restrict delivery, redirect, or refuse acceptance of any unwanted messages.
  • anti-spam logic 119 scans messages and returns a score between 0 and 100 for each message, indicating the probability that the message is spam or another type of unwanted email. Score ranges are associated with administrator-definable thresholds for possible spam and likely spam, against which users can apply a specified set of actions described further below.
  • messages scoring 90 or above are spam and messages scoring 75-89 are suspected spam.
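The score-to-verdict mapping from this example can be sketched as follows; the function name is invented, and the thresholds would be administrator-definable as described above:

```python
def classify_spam_score(score, spam_at=90, suspect_at=75):
    # Example thresholds from the text: >= 90 is spam, 75-89 is suspected spam.
    if score >= spam_at:
        return "spam"
    if score >= suspect_at:
        return "suspected spam"
    return "not spam"
```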
  • anti-spam logic 119 determines a spam score based at least in part upon reputation information, obtained from database 112 or an external reputation service such as SenderBase from IronPort Systems, Inc., that indicates whether a sender of the message is associated with spam, viruses, or other threats.
  • Scanning may comprise recording an X-header in the scanned message that verifies that the message was successfully scanned, and includes an obfuscated string that identifies rules that matched for the message.
  • Obfuscation may comprise creating a hash of rule identifiers based on a private key and a one-way hash algorithm. Obfuscation ensures that only a specified party, such as service provider 700 of FIG. 7 , can decode the rules that matched, improving security of the system.
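One way to realize the described obfuscation, sketched here with Python's standard `hmac` module rather than the actual scheme, is to hash each matched rule identifier with a private key, so that only the key holder can later correlate digests with rule identifiers:

```python
import hmac
import hashlib

def obfuscate_rule_ids(rule_ids, private_key):
    # Keyed one-way hash: without the private key, the digests cannot be
    # mapped back to the rule identifiers that matched.
    return [hmac.new(private_key, rid.encode(), hashlib.sha256).hexdigest()
            for rid in rule_ids]

digests = obfuscate_rule_ids(["rule-17", "rule-42"], b"service-provider-key")
```

The rule identifiers and key shown are hypothetical; the point is that the digest is reproducible for the key holder but opaque to everyone else.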
  • the private network 110 may be an enterprise network associated with a business enterprise or any other form of network for which enhanced security or protection is desired.
  • Public network 102 and private network 110 may use open standard protocols such as TCP/IP for communication.
  • Virus information source 104 may comprise another instance of a messaging gateway 107 that is interposed between public network 102 and another private network (not shown for clarity) for purposes of protecting that other private network.
  • virus information source 104 is an IronPort MGA.
  • Spamtrap 106 is associated with one or more email addresses or email mailboxes associated with one or more domains. Spamtrap 106 is established for the purpose of receiving unsolicited email messages, or “spam,” for analysis or reporting, and is not typically used for conventional email communication.
  • a spamtrap can be an email address such as “dummyaccountforspam@mycompany.com,” or the spamtrap can be a collection of email addresses that are grouped into a mail exchange (MX) domain name system (DNS) record for which received email information is provided.
  • Mail transfer agent 109 or the mail transfer agent of another IronPort MGA, may host spamtrap 106 .
  • virus information source 104 generates and provides information to virus information processor 108 for use in managing computer virus outbreaks, and the virus information processor 108 can obtain information from spamtrap 106 for the same purpose.
  • virus information source 104 generates counts of received messages that have suspicious attachments, and provides the counts to virus information processor 108 , or allows an external process to retrieve the counts and store them in a specialized database.
  • Messaging gateway 107 also may serve as a virus information source by detecting messages that have indications that are associated with viruses or that are otherwise suspicious, creating a count of suspicious messages received in a particular time period, and periodically providing the count to virus information processor 108 .
  • virus information processor 108 can retrieve or receive information from virus information source 104 and spamtrap 106 , generate counts of messages that have suspicious attachments or other virus indicators, and update database 112 with the counts and generate virus outbreak information for later retrieval and use by virus information logic 114 of messaging gateway 107 .
  • Methods and apparatus relating to the SenderBase service are described in co-pending application Ser. No.
  • virus information source 104 may comprise the SpamCop information service that is accessible at domain “spamcop.net” on the World Wide Web, or users of the SpamCop service.
  • Virus information source 104 may comprise one or more Internet service providers or other high-volume mail receivers.
  • the SenderBase and SpamCop services provide a powerful data source for detecting viruses.
  • the services track information about millions of messages per day through spamtrap addresses, end-user complaint reporters, DNS logs, and third-party data sources. This data can be used to detect viruses in a rapid manner using the approaches herein.
  • the number of messages with specific attachment types, relative to normal levels, sent to legitimate or spamtrap addresses, and not identified as viruses by anti-virus scanners provides an early warning indicator that a virus outbreak has occurred based on a new virus that is not yet known and detectable by the anti-virus scanners.
  • virus information source 104 may comprise the manual review of data that is obtained by information services consultants or analysts, or external sources.
  • a human administrator monitoring alerts from anti-virus vendors, third-party vendors, security mailing lists, spamtrap data and other sources can detect viruses well in advance of when virus definitions are published in most cases.
  • a network element such as messaging gateway 107 can provide various options for handling a message based on the probability that it is a virus.
  • the gateway can act on this data immediately.
  • the mail transfer agent 109 can delay message delivery into private network 110 until a virus update is received from an anti-virus vendor and installed on messaging gateway 107 so that the delayed messages can be scanned by anti-virus checker 116 after the virus update is received.
  • Delayed messages may be stored in a quarantine queue 316 .
  • Messages in quarantine queue 316 may be released and delivered according to various policies as further described, deleted, or modified prior to delivery.
  • a plurality of quarantines 316 are established in messaging gateway 107 , and one quarantine is associated with each recipient account for a computer 120 A, 120 B, etc., in the managed private network 110 .
  • virus information processor 108 can include or be communicatively coupled to a virus outbreak operation center (VOOC), a receiving virus score (RVS) processor, or both.
  • the VOOC and RVS processor can be separate from virus information processor 108 but communicatively coupled to database 112 and public network 102 .
  • the VOOC can be implemented as a staffed center with personnel available 24 hours a day, 7 days a week to monitor the information collected by virus information processor 108 and stored in database 112 .
  • the personnel staffing the VOOC can take manual actions, such as issuing virus outbreak alerts, updating the information stored in database 112 , publishing virus outbreak information so that messaging gateways 107 can access the virus outbreak information, and manually initiating the sending of virus outbreak information to messaging gateway 107 and other messaging gateways 107 .
  • the personnel staffing the VOOC may configure the mail transfer agent 109 to perform certain actions, such as delivering a “soft bounce.”
  • a soft bounce is performed when the mail transfer agent 109 returns a received message based on a set of rules accessible to the mail transfer agent 109 . More specifically, when the mail transfer agent 109 completes an SMTP transaction by accepting an email message from a sender, the mail transfer agent 109 determines, based on a set of stored software rules accessible to the mail transfer agent 109 , that the received message is unwanted or undeliverable. In response to the determination that the received message is unwanted or undeliverable, the mail transfer agent 109 returns the message to the bounce email address specified by the sender. When the mail transfer agent 109 returns the message to the sender, the mail transfer agent 109 may strip the message of any attachments.
  • virus outbreak information is made available, or published, in response to a manual action taken by personnel, such as those staffing the VOOC.
  • virus outbreak information is automatically made available according to the configuration of the virus information processor, VOOC, or RVS, and then the virus outbreak information and the automated actions taken are subsequently reviewed by personnel at the VOOC who can make modifications, if deemed necessary or desirable.
  • the staffing personnel at a VOOC or components of a system may determine whether a message contains a virus based on a variety of factors, such as (a) patterns in receiving messages with attachments, (b) risky characteristics of attachments to received messages, (c) published vendor virus alerts, (d) increased mailing list activity, (e) risky source-based characteristics of messages, (f) the percentage of dynamic network addresses associated with sources of received messages, (g) the percentage of computerized hosts associated with sources of received messages, and (h) the percentage of suspicious volume patterns.
  • the risky characteristics of attachments to received messages may be based on a consideration of how suspicious the filename of the attachment is, whether the file is associated with multiple file extensions, the amount of similar file sizes attached to received messages, the amount of similar file names attached to received messages, and the names of attachments of known viruses.
  • the patterns in receiving messages with attachments may be based on a consideration of the current rate of the number of messages containing attachments, the trend in the number of messages received with risky attachments, and the number of customer data sources, virus information source 104 , and spamtraps 106 that are reporting increases in messages with attachments.
  • the determination of whether a message contains a virus may be based on information sent from a client; for example, a user may report information to the system in an email message that is received in a safe environment, such that the system's message receptor is configured, to the extent possible, to prevent the spread of a computer virus to other parts of the system if the receptor becomes infected.
  • the RVS processor can be implemented as an automated system that generates the virus outbreak information, such as in the form of virus score values for various attachment types or in the form of a set of rules that associate virus score values with message characteristics, to be made available to messaging gateway 107 and other messaging gateways 107 .
  • messaging gateway 107 comprises a verdict cache 115 that provides local storage of verdict values from anti-virus checker 116 and/or anti-spam logic 119 for re-use when duplicate messages are received.
  • the structure and function of verdict cache 115 is described further below.
  • messaging gateway 107 comprises a log file 113 that can store statistical information or status messages relating to functions of the messaging gateway. Examples of information that can be logged include message verdicts and actions taken as a result of verdicts; rules that matched on messages, in obfuscated format; an indication that scanning engine updates occurred; an indication that rule updates occurred; scanning engine version numbers, etc.
  • FIG. 2 is a flow diagram of a process of generating a count of suspicious messages, according to an embodiment.
  • the steps of FIG. 2 may be performed by a virus information source, such as virus information source 104 in FIG. 1 .
  • a message is received.
  • virus information source 104 or messaging gateway 107 receives the message sent by virus sender 100 .
  • a message is determined to be risky if a virus checker at the virus information source 104 or messaging gateway 107 scans the message without identifying a virus, but the message also includes a file attachment having a file type or extension that is known to be risky.
  • MS Windows (XP Pro) file types or extensions of COM, EXE, SCR, BAT, PIF, or ZIP may be considered risky since virus writers commonly use such files for malicious executable code.
  • the foregoing are merely examples of file types or extensions that can be considered risky; more than 50 different file types are known to present such risks.
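The risky-attachment test described above can be sketched as a simple extension check; the extension set is the example list given in the text and is not exhaustive:

```python
RISKY_EXTENSIONS = {".com", ".exe", ".scr", ".bat", ".pif", ".zip"}

def is_risky_attachment(filename: str) -> bool:
    name = filename.lower()
    # endswith also catches double extensions such as "photo.jpg.exe"
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)
```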
  • the determination that a message is suspicious also may be made by extracting a source network address from the message, such as a source IP value, and issuing a query to the SenderBase service to determine whether the source is known to be associated with spam or viruses. For example, a reputation score value provided by the SenderBase service may be taken into account in determining whether a message is suspicious.
  • a message may also be determined to be suspicious if it was sent from an IP address associated with a host known to be compromised, that has a history of sending viruses, or has only recently started sending email to the Internet.
  • the determination also may be based upon one or more of the following factors: (a) the type or extension of a file attachment that is directly attached to the message, (b) the type or extension of a file that is contained within a compressed file, an archive, a .zip file, or another file that is directly attached to the message, and (c) a data fingerprint obtained from an attachment.
  • the determination of suspicious messages can be based on the size of an attachment for a suspicious message, the contents of the subject of the suspicious message, the contents of the body of the suspicious message, or any other characteristic of the suspicious message.
  • Some file types can be embedded with other file types. For example, ".doc" and ".pdf" files may have other file types embedded within them, such as ".gif" or ".bmp" image files. Any embedded file types within a host file type may be considered when determining whether a message is suspicious.
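Factor (b) above, checking file types contained within a .zip attachment, can be sketched with Python's standard `zipfile` module. The archives here are built in memory for demonstration, and the inner-extension set is illustrative:

```python
import io
import zipfile

RISKY_INNER = {".exe", ".scr", ".pif"}

def zip_contains_risky(zip_bytes: bytes) -> bool:
    # Only contained file names are inspected; a fuller check might also
    # take a data fingerprint of each contained file, per factor (c).
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return any(name.lower().endswith(ext)
                   for name in zf.namelist()
                   for ext in RISKY_INNER)

# Build two small archives in memory for demonstration.
risky_buf, clean_buf = io.BytesIO(), io.BytesIO()
with zipfile.ZipFile(risky_buf, "w") as zf:
    zf.writestr("notes.txt", "hello")
    zf.writestr("setup.exe", b"MZ")
with zipfile.ZipFile(clean_buf, "w") as zf:
    zf.writestr("notes.txt", "hello")
```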
  • the characteristics of the suspicious messages can be used in formulating the rules that are provided or made available to the messaging gateways 107 and that include the virus score value that is associated with one or more such characteristics.
  • in step 206, if the message is suspicious, then a count of suspicious messages for the current time period is incremented. For example, if the message has an EXE attachment, a count of messages with EXE attachments is incremented by one.
  • in step 208, the count of suspicious messages is reported.
  • step 208 may involve sending a report message to the virus information processor 108 .
  • virus information processor 108 receives numerous reports such as the report of step 208 , continuously in real time. As reports are received, virus information processor 108 updates database 112 with report data, and determines and stores virus outbreak information. In one embodiment, the virus outbreak information includes a virus score value that is determined according to a sub-process that is described further with reference to FIG. 4 below.
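The per-period counting and reporting of steps 206 and 208 can be sketched as follows; the class name and the reset-on-report behavior are assumptions made for the illustration:

```python
from collections import Counter

class SuspiciousCounter:
    def __init__(self):
        self.counts = Counter()

    def record(self, attachment_ext: str) -> None:
        # One count per suspicious attachment type, case-insensitive.
        self.counts[attachment_ext.lower()] += 1

    def report(self) -> dict:
        # Snapshot the current period's counts, then reset for the next one.
        snapshot = dict(self.counts)
        self.counts.clear()
        return snapshot

counter = SuspiciousCounter()
for ext in (".exe", ".EXE", ".zip"):
    counter.record(ext)
period_report = counter.report()   # what would be sent to the processor
```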
  • FIG. 3 is a data flow diagram illustrating processing of messages based on virus outbreak information, according to an embodiment.
  • the steps of FIG. 3 may be performed by an MGA, such as messaging gateway 107 in FIG. 1 .
  • a message may be acted upon before it is positively determined to contain a virus.
  • a content filter is applied to the message. Applying a content filter involves, in one embodiment, examining the message subject, other message header values, and the message body, determining whether one or more rules for content filtering are satisfied by the content values, and taking one or more actions when the rules are satisfied, such as may be specified in a content policy.
  • the performance of block 302 is optional. Thus, some embodiments may perform block 302 , while other embodiments may not perform block 302 .
  • virus outbreak information is retrieved for use in subsequent processing steps.
  • a messaging gateway 107 that implements FIG. 3 can periodically request the then-current virus outbreak information from virus information processor 108 .
  • messaging gateway 107 retrieves the virus outbreak information from the virus information processor 108 approximately every five (5) minutes, using a secure communication protocol that prevents unauthorized parties from accessing the virus outbreak information. If the messaging gateway 107 is unable to retrieve the virus outbreak information, the gateway can use the last available virus outbreak information stored in the gateway.
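The periodic retrieval with fallback described above might be sketched as follows; the class name, the injected fetch function, and the clock parameter are assumptions for illustration, and the secure communication protocol itself is not shown.

```python
import time

REFRESH_INTERVAL = 5 * 60  # seconds between retrieval attempts (five minutes)

class OutbreakInfoCache:
    """Hypothetical gateway-side cache for virus outbreak information."""

    def __init__(self, fetch_fn, clock=time.monotonic):
        self._fetch = fetch_fn   # retrieves info from the virus information processor
        self._clock = clock
        self._cached = None
        self._last_fetch = None

    def get(self):
        now = self._clock()
        stale = self._last_fetch is None or now - self._last_fetch >= REFRESH_INTERVAL
        if stale:
            try:
                self._cached = self._fetch()
                self._last_fetch = now
            except OSError:
                # Retrieval failed: keep using the last available copy.
                pass
        return self._cached
```

A failed fetch leaves the previously retrieved outbreak information in place, matching the fallback behavior described above.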
  • an anti-spam process is applied to the message and messages that appear to be unsolicited messages are marked or processed according to a spam policy. For example, spam messages may be silently dropped, moved to a specified mailbox or folder, or the subject of the message may be modified to include a designation such as “possible spam.”
  • the performance of block 304 is optional. Thus, some embodiments may perform block 304 , while other embodiments may not perform block 304 .
  • an anti-virus process is applied to the message and messages that appear to contain viruses, in the message or in a file attachment, are marked.
  • anti-virus software from Sophos implements block 306 . If a message is determined to be positive for a virus, then in block 308 , the message is deleted, quarantined in quarantine queue 316 , or otherwise processed according to an appropriate virus processing policy.
  • block 306 determines that the message is not virus positive, then in block 310 , a test is performed to determine whether the message has been scanned for viruses before. As explained further herein, block 306 can be reached again from later blocks after the message has been previously scanned for viruses.
  • a test is performed to determine whether the virus outbreak information obtained at block 302 satisfies a specified threshold. For example, if the virus outbreak information includes a virus score value (VSV), the virus score value is checked to see if the virus score value is equal to or greater than a threshold virus score value.
  • the threshold is specified by an administrator command, in a configuration file, or is received from another machine, process or source in a separate process.
  • the threshold corresponds to the probability that a message contains a virus or is associated with a new virus outbreak.
  • a message that receives a score above the threshold is subject to the actions specified by an operator, such as performing a quarantine of the message in quarantine queue 316 .
  • a single specified threshold is used for all messages, whereas in other implementations, multiple thresholds are used based on different characteristics, so that the administrator can treat some messages more cautiously than others based on the type of messages that the messaging gateway receives and what is considered to be normal or less risky for the associated message recipients.
  • a default threshold value of 3 is used, based on a virus score scale of 0 to 5, where 5 is the highest risk (threat) level.
  • the virus outbreak information can include a virus score value, and a network administrator can determine an allowed threshold virus score value and broadcast the threshold virus score value to all message transfer agents or other processors that are performing the process of FIG. 3 .
  • the virus outbreak information can include a set of rules that associate virus score values with one or more message characteristics that are indicative of viruses, and based on the approach described herein with respect to FIG. 5 , a virus score value can be determined based on the matching rules for the message.
  • the value of the threshold virus score value set by the administrator indicates when to initiate delayed delivery of messages. For example, if the threshold virus score value is 1, then a messaging gateway implementing FIG. 3 will delay delivery of messages when the virus score value determined by the virus information processor 108 is low. If the threshold virus score value is 4, then a messaging gateway implementing FIG. 3 will delay delivery of messages when the virus score value determined by the virus information processor 108 is high.
  • the message is placed in an outbreak quarantine queue 316 .
  • Each message is tagged with a specified holding time value, or expiration date-time value, representing a period of time during which the message is held in the outbreak quarantine queue 316 .
  • the purpose of the outbreak quarantine queue 316 is to delay delivery of messages for an amount of time that is sufficient to enable updating of anti-virus process 306 to account for a new virus that is associated with the detected virus outbreak.
  • the holding time may have any desired duration.
  • Example holding time values could be between one (1) hour and twenty four (24) hours.
  • a default holding time value of twelve (12) hours is provided.
  • An administrator may change the holding time at any time, for any preferred holding time value, by issuing a command to a messaging gateway that implements the processes herein.
  • the holding time value is user-configurable.
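A minimal sketch of tagging a quarantined message with an expiration time derived from the configurable holding time; the class and field names are hypothetical.

```python
import time

DEFAULT_HOLD_SECONDS = 12 * 60 * 60   # default holding time of twelve hours

class QuarantinedMessage:
    """One entry in the outbreak quarantine queue (illustrative)."""

    def __init__(self, message, hold_seconds=DEFAULT_HOLD_SECONDS, now=None):
        self.message = message
        enqueued = time.time() if now is None else now
        # Tag the message with an expiration date-time derived from the
        # holding time in effect when the message was quarantined.
        self.expires_at = enqueued + hold_seconds

    def expired(self, now):
        return now >= self.expires_at
```

When `expired()` becomes true, the message would leave the queue normally (path 316A) and be sent back to the anti-virus process for re-scanning.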
  • One or more tools, features, or user interfaces may be provided to allow an operator to monitor the status of the outbreak quarantine queue and the quarantined messages. For example, the operator can obtain a list of messages currently quarantined, and the list can identify the reason why each message in the queue was quarantined, such as the applicable virus score value for the message that satisfied the specified threshold or the rule, or rules, in a set of rules that matched for the message. Summary information can be provided by message characteristic, such as the types of file attachments, or by the applicable rule if a set of rules are being used.
  • a tool can be provided to allow the operator to review each individual message in the queue. Another feature can be provided to allow the operator to search for quarantined messages that satisfy one or more criteria. Yet another tool can be provided to simulate a message being processed, which can be referred to as “tracing” a message, to make sure that the configuration of the messaging gateway has been correctly performed and that the inbound messages are being properly processed according to the virus outbreak filter.
  • a tool can be provided showing general alert information from virus information processor, a VOOC, or an RVS concerning special or significant virus risks or threats that have been identified.
  • tools can be included in the MGA to contact one or more personnel associated with the MGA when alerts are issued. For example, an automated telephone or paging system can contact specified individuals when messages are being quarantined, when a certain number of messages have been quarantined, or when the capacity of the quarantine queue has been filled or has reached a specified level.
  • a message may exit the outbreak quarantine queue 316 in three ways indicated by paths designated 316 A, 316 B, 316 C in FIG. 3 .
  • a message may expire normally when the specified holding time expires for that message.
  • the outbreak quarantine queue 316 operates as a FIFO (first in, first out) queue.
  • the message is then transferred back to anti-virus process 306 for re-scanning, on the assumption that after expiration of the holding time, the anti-virus process has been updated with any pattern files or other information necessary to detect viruses that may be in the message.
  • a message may be manually released from outbreak quarantine queue 316 .
  • one or more messages can be released from outbreak quarantine queue 316 .
  • an operator may decide to re-scan or delete a message, such as when the operator has received off-line information indicating that a particular kind of message is definitely virus-infected; in that case, the operator could elect to delete the message at block 320 .
  • the operator may have received, before expiration of the holding time value, off-line information indicating that anti-virus process 306 has just been updated with new patterns or other information in response to a virus outbreak. In that case the operator may elect to re-scan the message by sending it back to the anti-virus process 306 for scanning, without waiting for the holding time to expire, as shown by path 319 .
  • the operator can perform a search of the messages currently held in outbreak quarantine queue 316 to identify one or more messages.
  • a message thus identified can be selected by the operator for scanning by anti-virus process 306 , such as to test whether anti-virus process 306 has been updated with information sufficient to detect the virus that is involved in the virus outbreak. If the rescan of the selected message is successful at identifying the virus, the operator can manually release some or all of the messages in the outbreak quarantine queue so that the released messages can be rescanned by anti-virus process 306 .
  • the operator can wait until a later time and retest a test message or another message to determine if anti-virus process 306 has been updated to be able to detect the virus, or the operator can wait and let the messages be released when the messages' expiration times expire.
  • a message also may expire early, for example, because the outbreak quarantine queue 316 is full.
  • An overflow policy 322 is applied to messages that expire early.
  • the overflow policy 322 may require that the message be deleted, as indicated in block 320 .
  • the overflow policy 322 may require that the subject of the message be appended with a suitable warning of the risk that the message is likely to contain a virus, as indicated by block 324 .
  • a message such as “MAY BE INFECTED” or “SUSPECTED VIRUS” can be appended to the subject, such as at the end or beginning of the message's subject line.
  • the message with the appended subject is delivered via anti-virus process 306 , and because the message has been scanned before, the process continues from anti-virus process 306 through block 310 , and the message is then delivered as indicated by block 314 .
  • the overflow policy 322 may require removal of file attachments to the message followed by delivery of the message with the file attachments stripped.
  • the overflow policy 322 may require stripping only those file attachments that exceed a particular size.
  • the overflow policy 322 may require that when the outbreak quarantine queue 316 is full, the MTA accepts the connection for a new message, but before the message is accepted during the SMTP transaction, the message is rejected with a 4xx temporary error.
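The overflow policy options above can be sketched as a simple dispatch; the policy names, the message structure, and the returned 4xx placeholder string are illustrative assumptions.

```python
def apply_overflow_policy(policy, message):
    """Apply one of the overflow policy options to an early-expiring message."""
    if policy == "delete":
        return None                            # drop the message (block 320)
    if policy == "tag_subject":
        # Append a warning to the subject before delivery (block 324).
        message["subject"] = "[SUSPECTED VIRUS] " + message["subject"]
        return message
    if policy == "strip_attachments":
        message["attachments"] = []            # deliver with attachments removed
        return message
    if policy == "temp_fail":
        return "4xx temporary error"           # reject during the SMTP transaction
    raise ValueError("unknown overflow policy: %s" % policy)
```

A refinement mentioned above would strip only attachments exceeding a particular size rather than all of them.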
  • treatment of a message according to path 316 A, 316 B, 316 C is user configurable for the entire contents of the quarantine queue.
  • a policy is user configurable for each message.
  • block 312 also may involve generating and sending an alert message to one or more administrators when the virus outbreak information obtained from virus information processor 108 satisfies a specified threshold, such as when a virus score value meets or exceeds a specified threshold virus score value.
  • an alert message sent at block 312 may comprise an email that specifies the attachment types for which the virus score has changed, current virus score, prior virus score, current threshold virus score, and when the last update of the virus score for that type of attachment was received from the virus information processor 108 .
  • the process of FIG. 3 may involve generating and sending an alert message to one or more administrators whenever the overall number of messages in the quarantine queue exceeds a threshold set by the administrator, or when a specific amount or percentage of quarantine queue storage capacity has been exceeded.
  • an alert message may specify the quarantine queue size, percentage of capacity utilized, etc.
  • the outbreak quarantine queue 316 may have any desired size.
  • the quarantine queue can store approximately 3 GB of messages.
  • virus outbreak information is generated that indicates the likelihood of a virus outbreak based on one or more message characteristics.
  • the virus outbreak information includes a numerical value, such as a virus score value.
  • the virus outbreak information can be associated with one or more characteristics of a message, such as the type of attachment with a message, the size of the attachment, the contents of the message (e.g., the content of the subject line of the message or the body of the message), the sender of the message, the IP address or domain of the sender of the message, the recipient of the message, the SenderBase reputation score for the sender of the message, or any other suitable message characteristic.
  • the virus outbreak information includes one or more rules that each associates the likelihood of a virus outbreak with one or more message characteristics.
  • a rule of the form “if EXE and size < 50 k, then 4” indicates that for messages with attachments of type EXE and size less than 50 k, the virus score value is “4.”
  • a set of rules can be provided to the messaging gateway to be applied to determine if an inbound message matches the message characteristics of a rule, thereby indicating that the rule is applicable to the inbound message and therefore should be handled based on the associated virus score value. The use of a set of rules is described further with respect to FIG. 5 below.
  • FIG. 4 is a flow diagram of a method of determining a virus score value, according to an embodiment.
  • the steps of FIG. 4 may be performed by virus information processor 108 based on information in database 112 received from virus information source 104 and spamtrap 106 .
  • Step 401 of FIG. 4 indicates that certain computational steps 402 , 404 are performed for each different source of virus information that is accessible to virus information processor 108 , such as virus information source 104 or spamtrap 106 .
  • Step 402 involves generating a weighted current average virus score value, for a particular email file attachment type, by combining one or more prior virus score values for prior time periods, using a weighting approach that accords greater weight for more recent prior virus score values.
  • a virus score value for a particular time period refers to a score value based on the number of messages received at a particular source that have suspicious file attachments.
  • a message is considered to have a suspicious attachment if the attachment satisfies one or more metrics, such as a particular file size, file type, etc., or if the network address of the sender is known to be associated with prior virus outbreaks. The determination may be based on attachment file size or file type or extension.
  • the determination of the virus score value also may be made by extracting a source network address from the message, such as a source IP address value, and issuing a query to the SenderBase service to determine whether the source is known to be associated with spam or viruses.
  • the determination also may be based upon (a) the type or extension of a file attachment that is directly attached to the message, (b) the type or extension of a file that is contained within a compressed file, an archive, a .zip file, or another file that is directly attached to the message, and (c) a data fingerprint obtained from an attachment.
  • a separate virus score value may be generated and stored for each attachment type found in any of the foregoing. Further, the virus score value may be generated and stored based upon the most risky attachment type found in a message.
  • step 402 involves computing a combination of virus score values for the last three 15-minute periods, for a given file attachment type.
  • a weighting value is applied to the three values for the 15-minute periods, with the most recent 15-minute time period being weighted more heavily than earlier 15-minute time periods. For example, in one weighting approach, a multiplier of 0.10 is applied to the virus score value for the oldest 15-minute period (30-45 minutes ago), a multiplier of 0.25 is applied to the second-oldest value (15-30 minutes ago), and a multiplier of 0.65 is applied to the most recent virus score value for the period 0-15 minutes ago.
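The weighting approach above can be expressed directly; the function name is an assumption, but the multipliers are the ones given in the example.

```python
# Weights for the three most recent 15-minute periods, most recent first:
# 0.65 for 0-15 minutes ago, 0.25 for 15-30, 0.10 for 30-45.
WEIGHTS = (0.65, 0.25, 0.10)

def weighted_current_average(recent_scores):
    """Combine virus score values for the last three 15-minute periods,
    given most recent first, weighting recent periods more heavily."""
    return sum(w * s for w, s in zip(WEIGHTS, recent_scores))
```

For example, period scores of 4, 2, and 1 (most recent first) combine to 0.65*4 + 0.25*2 + 0.10*1 = 3.2.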
  • a percent-of-normal virus score value is generated for a particular file attachment type, by comparing the current average virus score value determined at step 402 to a long-term average virus score value.
  • the current percent of normal level may be computed with reference to a 30-day average value for that file attachment type over all 15-minute time periods within the 30-day period.
  • step 405 all of the percent-of-normal virus score values for all sources, such as virus information source 104 and spamtrap 106 , are averaged to create an overall percent-of-normal value for a particular file attachment type.
  • the overall percent-of-normal value is mapped to a virus score value for a particular file attachment type.
  • the virus score value is an integer between 0 and 5, and the overall percent-of-normal value is mapped to a virus score value.
  • Table 1 presents an example of a virus score scale.

    TABLE 1 - Example Virus Score Scale

    Percent of normal    Score    Level of Threat
    0-150                0        No known threat/very low threat
    150-300              1        Possible threat
    300-900              2        Small threat
    900-1500             3        Moderate threat
    >1500                4        High threat/extremely risky
  • mappings to score values of 0 to 100, 0 to 10, 1 to 5, or any other desired range of values may be used.
  • non-integer values can be used.
  • a probability value can be determined, such as a probability in the range of 0% to 100% in which the higher probabilities indicate a stronger likelihood of a virus outbreak, or such as a probability in the range of 0 to 1 in which the probability is expressed as a fraction or decimal, such as 0.543.
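The Table 1 mapping from an overall percent-of-normal value to an integer virus score might be sketched as follows; treating each boundary value as belonging to the lower band is an assumption, since the example scale overlaps at the band edges.

```python
def percent_of_normal_to_score(percent):
    """Map an overall percent-of-normal value to a 0-5 virus score
    following the example scale in Table 1."""
    # (upper bound of band, score); boundary values fall in the lower band.
    bands = [(150, 0), (300, 1), (900, 2), (1500, 3)]
    for upper, score in bands:
        if percent <= upper:
            return score
    return 4  # >1500 percent of normal: high threat / extremely risky
```

For instance, a percent-of-normal value of 1000 falls in the 900-1500 band and maps to a moderate-threat score of 3.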
  • the process of FIG. 4 can add one to the baseline averages computed in step 402 .
  • adding one raises the noise level of the values slightly in a beneficial way, by dampening some of the data.
  • Table 2 presents example data for the EXE file type in a hypothetical embodiment.
  • the processes of FIG. 2 , FIG. 3 , FIG. 4 also may include logic to recognize trends in the reported data and identify anomalies in virus score computations.
  • a virus score value could be developed by considering other message data and metadata, such as Universal Resource Locators (URLs) in a message, the name of a file attachment, source network address, etc. Further, in an alternative embodiment, a virus score value may be assigned to individual messages rather than to file attachment types.
  • Other factors may also be considered to determine the virus score value. For example, if a large number of messages are suddenly received from new hosts that have never sent messages to virus information processor 108 or its information sources before, a virus may be indicated. Thus, the fact that a particular message was first seen only recently, combined with a spike in message volume detected by virus information processor 108 , may provide an early indication of a virus outbreak.
  • virus outbreak information can simply associate a virus score value with a message characteristic, such as an attachment type, or virus outbreak information can include a set of rules that each associates a virus score value with one or more characteristics of messages that are indicative of viruses.
  • An MGA can apply the set of rules to incoming messages to determine which rules match a message. Based on the rules that match an incoming message, the MGA can determine the likelihood that the message includes a virus, such as by determining a virus score value based on one or more of the virus score values from the matching rules.
  • a rule can be “if ‘exe’, then 4” to denote a virus score of 4 for messages with EXE attachments.
  • a rule can be “if ‘exe’ and size < 50 k, then 3” to denote a virus score of 3 for messages with EXE attachments with a size of less than 50 k.
  • a rule can be “if SBRS < −5, then 4” to denote a virus score of 4 if the SenderBase Reputation Score (SBRS) is less than −5.
  • a rule can be “if ‘PIF’ and subject contains FOOL, then 5” to denote a virus score of 5 if the message has a PIF type of attachment and the subject of the message includes the string “FOOL.”
  • a rule can associate any number of message characteristics or other data that can be used to determine a virus outbreak with an indicator of the likelihood that a message matching the message characteristics or other data includes a virus.
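The example rules above might be encoded as predicate/score pairs; the message dictionary keys and the helper name are hypothetical, chosen only to mirror the rule texts.

```python
# Each rule pairs a predicate over message characteristics with the
# virus score value it denotes (mirroring the example rules above).
RULES = [
    (lambda m: m.get("attachment_type") == "EXE", 4),
    (lambda m: m.get("attachment_type") == "EXE" and m.get("size", 0) < 50_000, 3),
    (lambda m: m.get("sbrs", 0) < -5, 4),
    (lambda m: m.get("attachment_type") == "PIF" and "FOOL" in m.get("subject", ""), 5),
]

def matching_scores(message):
    """Return the virus score values of all rules the message matches."""
    return [score for predicate, score in RULES if predicate(message)]
```

A message with a 35 k EXE attachment and an SBRS of −2 matches the first two rules, yielding candidate scores 4 and 3.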
  • a messaging gateway can apply exceptions, such as one or more quarantine policies, to determine whether a message that otherwise satisfies the specified threshold (based on the virus score value determined from the matching rules, such as in block 312 of FIG. 3 ) is to be placed into the outbreak quarantine queue or is to be processed without being quarantined.
  • the MGA can be configured to apply one or more policies for applying the rules, such as a policy to always allow messages to be delivered to an email address or group of email addresses regardless of the virus scores, or to always deliver messages with a specified type of attachment, such as ZIP files containing PDF files.
  • each MGA can apply some or all of the rules in a manner determined by the administrator of the MGA, thereby providing additional flexibility to meet the needs of the particular MGA.
  • the ability to configure the application of the rules by the administrator of each MGA means that each MGA can process the same message and obtain a different result in terms of the determined likelihood that a virus attack is occurring, and each MGA can process the same message and take different actions, depending on the configuration established by the administrator for the MGA.
  • FIG. 5 is a flow diagram illustrating application of a set of rules for managing virus outbreaks, according to an embodiment.
  • the functions illustrated in FIG. 5 can be performed by the messaging gateway as part of block 312 or at any other suitable position during the processing of the incoming message.
  • the messaging gateway identifies the message characteristics of an incoming message. For example, messaging gateway 107 can determine whether the message has an attachment, and if so, the type of attachment, the size of the attachment, and the name of the attachment. As another example, messaging gateway 107 can query the SenderBase service based on the sending IP address to obtain a SenderBase reputation score. For the purposes of describing FIG. 5 , assume that the message has an EXE type of attachment with a size of 35 k and that the sending host for the message has a SenderBase reputation score of −2.
  • the messaging gateway determines which rules of the rule set are matched based on the message characteristics for the message. For example, assume that for the purposes of describing FIG. 5 , the rule set consists of the following five rules that associate the example characteristics with the provided hypothetical virus score values:
  • The example rules indicate that ZIP attachments are more likely to include a virus than EXE attachments because the virus score is 4 in Rule 2 but only 3 in Rule 1. Furthermore, the example rules above indicate that EXE attachments with a size of greater than 50 k are the most likely to have a virus, but EXE attachments with a size of less than 50 k but greater than 20 k are a little less likely to include a virus, perhaps because most of the suspicious messages with EXE attachments are greater than 50 k in size.
  • the messaging gateway determines a virus score value to be used for the message based on the virus score values from the matching rules.
  • the determination of the virus score value to be used for the message can be performed based on any of a number of approaches. The particular approach used can be specified by the administrator of the messaging gateway and modified as desired.
  • the rule that is matched first when applying the list of rules in the order listed can be used, and any other matching rules are ignored.
  • the first rule to match is Rule 1, and therefore the virus score value for the message is 3.
  • the matching rule with the highest virus score value is used.
  • Rule 3 has the highest virus score value among the matching rules, and therefore, the virus score value for the message is 5.
  • the matching rule with the most specific set of message characteristics is used.
  • Rule 4 is the most specific matching rule because Rule 4 includes three different criteria, and therefore the virus score value for the message is 4.
  • virus score values from the matching rules can be combined to determine the virus score value to apply to the message.
  • a weighted average of the virus score values of the matching rules can be used, so as to give more weight to the more specific rules.
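The combination approaches described above (first matching rule, highest score, weighted average) can be sketched as small functions over the matching rules' scores; the weights standing in for rule specificity in the weighted average are an assumption.

```python
def first_match(scores):
    """Use the score of the rule that matched first; ignore the rest."""
    return scores[0]

def highest_score(scores):
    """Use the highest virus score value among the matching rules."""
    return max(scores)

def weighted_average(scores, weights):
    """Weighted average of matching rules' scores; weights reflect how
    specific each rule is (hypothetical values chosen by the gateway)."""
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total
```

For matching scores [3, 5, 4] in rule order, first-match yields 3 and highest-score yields 5, illustrating why the administrator's choice of approach changes whether the threshold is satisfied.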
  • the messaging gateway uses the virus score value determined in block 506 to determine whether the specified threshold virus score value is satisfied. For example, assume that in this example the threshold is a virus score value of 4. As a result, the virus score value determined in block 506 by all the example approaches would satisfy the threshold value, except for the first example that uses the first rule to match and for which block 506 determines the virus score value to be 3.
  • one or more quarantine policies are applied to determine whether to add the message to the outbreak quarantine queue.
  • the administrator of the messaging gateway may determine that one or more users or one or more groups of users should never have their messages quarantined even if a virus outbreak has been detected.
  • the administrator can establish a policy that messages with certain characteristics (e.g., messages with XLS attachments with a size of at least 75 k) are to always be delivered instead of being quarantined when the virus outbreak information indicates a virus attack based on the specified threshold.
  • the members of the organization's legal department may frequently receive ZIP files containing important legal documents that should not be delayed by being placed in the outbreak quarantine, even if the messaging gateway determines that a virus outbreak is occurring.
  • the mail administrator for the messaging gateway can establish a policy to always deliver messages with ZIP attachments to the legal department, even if the virus score value for ZIP attachments meets or exceeds the specified threshold.
  • the mail administrator may wish to always have messages delivered that are addressed to the email address for the mail administrator, since such messages could provide information for dealing with the virus outbreak.
  • because the mail administrator is a sophisticated user, the risk in delivering a virus-infected message is low, since the mail administrator will likely be able to identify and deal with an infected message before the virus can act.
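A minimal sketch of applying such quarantine-policy exceptions before quarantining a message that satisfied the threshold; the addresses, policy tables, and message keys are hypothetical.

```python
# Exception tables configured by the administrator (illustrative values).
ALWAYS_DELIVER_RECIPIENTS = {"postmaster@example.com"}   # e.g. the mail administrator
ALWAYS_DELIVER_ATTACHMENTS = {("ZIP", "legal@example.com")}  # ZIPs to legal dept.

def should_quarantine(message):
    """Decide whether a threshold-satisfying message still goes to the
    outbreak quarantine queue, after applying the policy exceptions."""
    if message["recipient"] in ALWAYS_DELIVER_RECIPIENTS:
        return False
    if (message.get("attachment_type"), message["recipient"]) in ALWAYS_DELIVER_ATTACHMENTS:
        return False
    return True  # no exception applies: quarantine the message
```

The same virus score can thus lead to quarantine for one recipient and delivery for another, per the configured policies.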
  • the messaging gateway can be configured to be in one of two states: “calm” and “nervous.”
  • the calm state applies if no messages are being quarantined. However, when virus outbreak information is updated and indicates that a specified threshold is exceeded, the state changes from calm to nervous, regardless of whether any messages being received by the messaging gateway are being quarantined. The nervous state persists until the virus outbreak information is updated and indicates that the specified threshold is no longer exceeded.
  • an alert message is sent to an operator or administrator whenever a change in the system state occurs (e.g., calm to nervous or nervous to calm).
  • alerts can be issued when a previously low virus score value that did not satisfy the threshold now does meet or exceed the threshold, even if the overall state of the system does not change (e.g., the system previously changed from calm to nervous, and while in the nervous state, another virus score was received from the virus information processor that also meets or exceeds the threshold).
  • an alert can be issued when a previously high virus score that did satisfy the threshold has dropped and now is less than the specified threshold.
  • Alert messages can include one or more types of information, including but not limited to, the following: the attachment type for which the virus outbreak information changed, the current virus score, the prior virus score, the current threshold, and when the last update for the virus outbreak information occurred.
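The calm/nervous state transitions and change-of-state alerts described above might be sketched as follows; recording alerts in a list stands in for actually sending them, and the class name is an assumption.

```python
class OutbreakState:
    """Track the gateway's calm/nervous state and alert on transitions."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.state = "calm"
        self.alerts = []   # stands in for alert messages sent to an operator

    def update(self, virus_score):
        # Nervous whenever the updated virus score meets or exceeds the
        # threshold, regardless of whether messages are being quarantined.
        new_state = "nervous" if virus_score >= self.threshold else "calm"
        if new_state != self.state:
            self.alerts.append("%s -> %s" % (self.state, new_state))
            self.state = new_state
```

An extension noted above would also alert when an individual score newly crosses the threshold even while the overall state stays nervous.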
  • SenderBase can provide virus threat data that is specific for the connecting IP address.
  • the virus threat data is based on data collected by SenderBase for the IP address and reflects the history of the IP address in terms of how often viruses are detected in messages originating from the IP address or the company associated with the IP address. This can allow the MGA to obtain a virus score from SenderBase based solely on the sender of the message without any information or knowledge about the content of a particular message from the sending IP address.
  • the data on the virus threat for the sender can be used in place of, or in addition to, a virus score as determined above, or the data on the virus threat for the sender can be factored into the calculation of the virus score.
  • the MGA could increase or decrease a particular virus score value based on the virus threat data for the sender.
  • Another feature is to use a dynamic or dial-up blacklist to identify messages that are likely infected with a virus when a dynamic or dial-up host connects directly to an external SMTP server.
  • dynamic and dial-up hosts that connect to the Internet are expected to send outgoing messages through the hosts' local SMTP server.
  • the virus can cause the host to connect directly to an external SMTP server, such as an MGA.
  • the likelihood that the host is infected with a virus that is causing the host to establish the direct connection to the external SMTP server is high. Examples include Spam and Open Relay Blocking System (SORBS) dynamic hosts and Not Just Another Bogus List (NJABL) dynamic hosts.
  • the direct connection is not virus initiated, such as when a novice user is making the direct connection or when the connection is from a broadband host that is not dynamic, such as DSL or cable modems. Nevertheless, such direct connections from a dial-up or dynamic host to an external SMTP server can result in determining a high virus score or increasing an already determined virus score to reflect the increased likelihood that the direct connection is due to a virus.
  • Another feature is to use as a virus information source an exploited host blacklist that tracks hosts that have been exploited by viruses in the past.
  • a host can be exploited when it is an open relay or an open proxy, or when it has another vulnerability that allows anybody to deliver email anywhere.
  • Exploited host blacklists track exploited hosts using one of two techniques: examining the content that infected hosts are sending, and locating infected hosts via connect-time scanning. Examples include the Exploits Block List (XBL), which uses data from the Composite Blocking List (CBL) and the Open Proxy Monitor (OPM), and the Distributed Server Boycott List (DSBL).
  • Another feature is to use the virus information processor to develop a blacklist of senders and networks that have a past history of sending viruses. For example, the highest virus score can be assigned to individual IP addresses that are known to send only viruses. Moderate virus scores can be associated with individual IP addresses that are known to send both viruses and legitimate messages that are not virus infected. Moderate to low virus scores can be assigned to networks that contain one or more individual infected hosts.
  • a generic body test can be used to test the message body by searching for a fixed string or a regular expression, such as in the following examples: body HEY_PAL /hey pal
  • a function test can be used to craft custom tests to test very specific aspects of a message, such as in the following examples:

    eval EXTENSION_EXE message_attachment_ext(“.exe”)
    eval MIME_BOUND_FOO mime_boundary(“--/d/d/d/d[a-f]”)
    eval XBL_IP connecting_ip(exploited host)
  • a meta test can be used to build on multiple features, such as those above, to create a meta rule of rules, such as in the following examples: meta VIRUS_FOO ((SUBJECT_FOO1
  • Another feature extends the virus score determination approach above with one or more machine learning techniques, so that not all rules need to be run, and so that classification is accurate, minimizing false positives and false negatives.
  • one or more of the following methods can be employed: a decision tree, to provide discrete answers; a perceptron, to provide additive scores; and Bayes-like analysis, to map probabilities to scores.
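  • As a rough illustration of the perceptron-style (additive) variant, rule hits can be combined as a weighted sum; the rule names reuse examples from above, but the weights and threshold are invented:

```python
# Illustrative perceptron-style scorer: each matched rule contributes its
# weight to an additive score, and the message is flagged when the sum
# crosses a threshold. Weights and threshold are hypothetical.
RULE_WEIGHTS = {
    "EXTENSION_EXE": 2.5,   # .exe attachment
    "MIME_BOUND_FOO": 1.0,  # suspicious MIME boundary
    "XBL_IP": 3.0,          # sender on an exploited host blacklist
}

def additive_score(matched_rules):
    return sum(RULE_WEIGHTS.get(rule, 0.0) for rule in matched_rules)

def is_suspicious(matched_rules, threshold=3.5):
    return additive_score(matched_rules) >= threshold
```

In a trained perceptron the weights would be learned from the corpus rather than hand-assigned as here.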
  • Another feature is to factor into the virus score determination the severity of the threat from a virus outbreak based on the consequences of the virus. For example, if the virus results in the infected computer's hard drive having all its contents deleted, the virus score can be increased, whereas a virus that merely displays a message can have the virus score left unchanged or even reduced.
  • a suspicious message can be tagged to indicate that the message is suspicious, such as by adding to the message (e.g., in the subject or body) the virus score so that the user can be alerted to the level of virus risk determined for the message.
  • a new message can be generated to either alert the recipient of the attempt to send to them a virus infected message or to create a new and uninfected message that includes the non-virus infected portions of the message.
  • the administrator accesses the IronPort C60 and manually flushes the queue, sending all messages with Excel files attached through Sophos anti-virus checking.
  • the administrator finds that 249 of these messages were virus positive; 1 was not flagged by Sophos because it was not infected.
  • the messages are delivered with a total delay of 1.5 hours.
  • FIG. 7 is a block diagram of a system that may be used in approaches for blocking “spam” messages, and for other kinds of email scanning processes.
  • spam refers to any unsolicited email
  • ham refers to legitimate bulk email.
  • TI refers to threat identification, that is, determining that virus outbreaks or spam communications are occurring.
  • one or more TI development computers 702 are coupled to a corpus server cluster 706 , which hosts a corpus or master repository for threat identification rules, and which applies threat identification rules to messages on an evaluation basis to result in generating score values.
  • a mail server 704 of the service provider 700 contributes ham email to the corpus server cluster 706 .
  • One or more spamtraps 716 contribute spam email to the corpus. Spamtraps 716 are email addresses that are established and seeded to spammers so that the addresses receive only spam email. Messages received at spamtraps 716 may be transformed into message signatures or checksums that are stored in corpus server cluster 706 .
  • One or more avatars 714 contribute unclassified email to the corpus for evaluation.
  • Scores created by the corpus server cluster 706 are coupled to a rules/URLs server 707 , which publishes the rules and URLs associated with viruses, spam, and other email threats to one or more messaging gateways 107 located at customers of the service provider 700 . Messaging gateways 107 periodically retrieve new rules through HTTPS transfers.
  • a threat operations center (TOC) 708 may generate and send the corpus server cluster 706 tentative rules for testing purposes. Threat operations center 708 refers to staff, tools, data and facilities involved in detecting and responding to virus threats.
  • the TOC 708 also publishes rules that are approved for production use to the rules/URLs server 707 , and sends the rules-URLs server whitelisted URLs that are known as not associated with spam, viruses or other threats.
  • a TI team 710 may manually create other rules and provide them to the rules/URLs server.
  • FIG. 7 shows one messaging gateway 107 .
  • service provider 700 is coupled to a large number of field-deployed messaging gateways 107 at various customers or customer sites.
  • Messaging gateways 107 , avatar 714 , and spamtrap 716 connect to service provider 700 through a public network such as the Internet.
  • each of the customer messaging gateways 107 maintains a local DNS URL blacklist module 718 comprising executable logic and a DNS blacklist.
  • the structure of the DNS blacklist may comprise a plurality of DNS type A records that map network addresses, such as IP addresses, to reputation score values associated with the IP addresses.
  • the IP addresses may represent IP addresses of senders of spam messages, or server addresses associated with a root domain of a URL that has been found in spam messages or that is known to be associated with threats such as phishing attacks or viruses.
  • each messaging gateway 107 maintains its own DNS blacklist of IP addresses.
  • in conventional approaches, DNS information is maintained in a global location that must receive all queries through network communications.
  • the present approach improves performance, because DNS queries generated by an MGA need not traverse a network to reach a centralized DNS server. This approach also is easier to update; a central server can send incremental updates to the messaging gateways 107 periodically.
  • other logic in the messaging gateway 107 can extract one or more URLs from a message under test, provide input to the blacklist module 718 as a list of (URL, bitmask) pairs and receive output as a list of blacklist IP address hits. If hits are indicated, then the messaging gateway 107 can block delivery of the email, quarantine the email, or apply other policy, such as stripping the URLs from the message prior to delivery.
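  • A minimal sketch of such a local lookup follows; the in-memory tables are illustrative stand-ins for the DNS type A records, and the bitmask semantics are assumptions:

```python
# Hypothetical sketch of the local blacklist lookup. Each URL is reduced to
# a host, the host is mapped to an IP by a local table standing in for the
# DNS type A records, and a hit is reported when the stored reputation
# record intersects the query bitmask. Addresses and bit meanings invented.
from urllib.parse import urlparse

LOCAL_BLACKLIST = {        # IP -> reputation record (bit flags)
    "203.0.113.9": 0b0101, # e.g., spam-source and phishing bits set
}
HOST_TO_IP = {"bad.example.com": "203.0.113.9"}

def blacklist_hits(pairs):
    """pairs: list of (URL, bitmask); returns the blacklisted IPs hit."""
    hits = []
    for url, bitmask in pairs:
        ip = HOST_TO_IP.get(urlparse(url).hostname)
        record = LOCAL_BLACKLIST.get(ip)
        if record is not None and record & bitmask:
            hits.append(ip)
    return hits
```

A non-empty result would trigger the policy actions described above (block, quarantine, or strip the URLs).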
  • the blacklist module 718 also tests for URL poisoning in an email.
  • URL poisoning refers to a technique used by spammers of placing malicious or disruptive URLs within an unsolicited email message that also contains non-malicious URLs, so that an unsuspecting user who clicks on the URLs may unwittingly trigger malicious local action, displays of advertisements, etc.
  • the presence of the “good” URLs is intended to prevent spam detection software from marking the message as spam.
  • the blacklist module 718 can determine when a particular combination of malicious and good URLs provided as input represents a spam message.
  • An embodiment provides a system for moving DNS data into a hash-type local database that can accept multiple database queries and return DNS-style responses.
  • SpamAssassin consists of a set of Perl modules that can be used with a core program that provides a network protocol for performing message checks, such as “spamd,” which is shipped with SpamAssassin.
  • SpamAssassin's plug-in architecture is extensible through application programming interfaces; a programmer can add new checking heuristics and other functions without changing the core code.
  • the plug-ins are identified in a configuration file, and are loaded at runtime and become a functional part of SpamAssassin.
  • the APIs define the format of heuristics (rules to detect words or phrases that are commonly used in spam) and message checking rules.
  • the heuristics are based on dictionaries of words
  • messaging gateway 107 supports a user interface that enables an administrator to edit the contents of the dictionaries to add or remove objectionable words or known good words.
  • an administrator can configure anti-spam logic 119 to scan a message against enterprise-specific content dictionaries before performing other anti-spam scanning. This approach enables messages to first receive a low score if they contain enterprise-specific terms or industry-standard terms, without undergoing other computationally expensive spam scanning.
  • the foregoing approaches enable a spam checking engine to receive and use information that has formed a basis for reputation determinations, but has not found direct use in spam checking.
  • the information can be used to modify weight values and other heuristics of a spam checker. Therefore, a spam checker can determine with greater precision whether a newly received message is spam. Further, the spam checker becomes informed by a large volume of information in the corpus, also improving accuracy.
  • Anti-spam logic 119 normally operates on each message in a complete fashion, meaning that every element of each message is completely parsed, and then every registered test is performed. This gives a very accurate total assessment of whether a piece of mail is ham or spam. However, once a message is “spammy” enough, it can be flagged and treated as spam. There is no additional information necessary to contribute to the binary disposition of the mail. When an embodiment implements thresholds of spam and ham, then performance of anti-spam logic 119 increases by exiting from a message scan function once the logic determines that a message is “spammy” enough to be sure it is spam. In this description, such an approach is termed Early Exit from anti-spam parsing or scanning.
  • Rule Ordering and Execution is a mechanism that uses indicators to allow certainty to be reached quickly. Rules are ordered and placed into test groups. After each group is executed, the current score is checked, and a decision is made whether the message is “spammy” enough. If so, then logic 119 discontinues rule processing and announces the verdict that the message is spam.
  • Parse on Demand performs message parsing as part of anti-spam logic 119 only when required. For example, if parsing only message headers results in a determination that a message is spam, then no other parsing operations are performed. In particular, rules applicable to message headers can be very good indicators of spam; if anti-spam logic 119 determines that a message is spam based on header rules, then the body is not parsed. As a result, performance of anti-spam logic 119 increases, because parsing headers is less computationally expensive than parsing the message body.
  • the message body is parsed but HTML elements are excluded if rules applied to non-HTML body elements result in a verdict of spam. Parsing the HTML or testing for URI blacklisting (as described further below) is performed only when required.
  • FIG. 11 is a flow diagram of a process of performing message threat scanning with an early exit approach.
  • a plurality of rules is received.
  • the rules specify characteristics of electronic messages that indicate threats associated with the messages. Thus, when a rule matches a message element, the message probably has a threat or is spam.
  • Each rule has a priority value, and each rule is associated with a message element type.
  • an electronic mail message is received, having a destination address for a recipient account.
  • the message comprises a plurality of message elements.
  • the elements typically include headers, a raw body, and HTML body elements.
  • In step 1106, a next message element is extracted.
  • step 1106 can involve extracting the headers, raw body, or HTML body elements.
  • Extracting typically involves making a transient copy into a data structure.
  • In step 1108, a next rule is selected among a set of rules for the same element type, based on the order of the priorities of the rules.
  • step 1108 reflects that for the current message element extracted at step 1106 , only rules for that element type are considered, and the rules are matched according to the order of their priorities. For example, if the message headers were extracted at step 1106 , then only header rules are matched. Unlike past approaches, the entire message is not considered at the same time and all the rules are not considered at the same time.
  • a threat score value for the message is determined by matching only the current message element to only the current rule.
  • steps 1108 and 1109 can involve selecting all rules that correspond to the current message element type and matching all such rules to the current message element.
  • FIG. 11 encompasses performing an early exit by testing after each rule, or matching all rules for a particular message element type and then determining if early exit is possible.
  • If the threat score value is greater than a specified threshold, as tested in step 1110, then an exit from scanning, parsing, and matching is performed at step 1112, and the threat score value is output at step 1114.
  • early exit from the scanning process is accomplished and the threat score value may be output far more rapidly when the threshold is exceeded early in the scanning, extracting and rule matching process.
  • the computationally costly process of rendering HTML message elements and matching rules to them can be skipped if header rules result in a threat score value that exceeds the threshold.
  • step 1111 determines if all rules for the current message element have been matched. In the alternative noted above in which all rules for a message element are matched before the test of step 1110 , step 1111 is not necessary. If other rules exist for the same message element type, then control returns to step 1108 to match those rules. If all rules for the same message element type have been matched, then control returns to step 1106 to consider the next message element.
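  • The overall FIG. 11 flow can be sketched as follows; the element ordering matches the description, while the example rules, scores, and threshold are invented:

```python
# Sketch of the FIG. 11 early-exit flow: rules are grouped by message
# element type and ordered by priority; scanning stops as soon as the
# accumulated threat score exceeds the threshold. Rules, scores, and the
# threshold are invented for illustration.
ELEMENT_ORDER = ["headers", "raw_body", "html_body"]  # cheapest first

RULES = [  # (element_type, priority, score, predicate)
    ("headers", -4, 60, lambda text: "FREE MONEY" in text),
    ("headers", 0, 30, lambda text: "urgent" in text.lower()),
    ("raw_body", 0, 50, lambda text: "click here" in text.lower()),
]

def scan(message, threshold=80):
    """message maps element type -> text; returns (score, exited_early)."""
    score = 0
    for element in ELEMENT_ORDER:
        text = message.get(element, "")
        # Only rules for the current element type, in priority order.
        for _, _, points, predicate in sorted(
                (r for r in RULES if r[0] == element), key=lambda r: r[1]):
            if predicate(text):
                score += points
            if score > threshold:
                return score, True  # early exit: skip remaining rules/elements
    return score, False
```

Because header rules run first, a message caught by them never pays the cost of raw-body or HTML processing, which is the performance benefit the text describes.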
  • the process of FIG. 11 may be implemented in an anti-spam scanning engine, an anti-virus scanner, or a generic threat-scanning engine that can identify multiple different kinds of threats.
  • the threats can comprise any one of a virus, spam, or a phishing attack.
  • a logical engine that performs anti-spam, anti-virus, or other message scanning operations does not perform tests or operations on a message once certainty about the message disposition has been reached.
  • the engine groups rules into priority sets, so that the most effective and least costly tests are performed early.
  • the engine is logically ordered to avoid parsing until a specific rule or group of rules requires parsing.
  • rule priority values are assigned to rules and allow rules to be ordered in execution. For example, a rule with a priority of −4 runs before a rule with priority 0, and a rule with priority 0 runs before a rule with priority 1000.
  • rule priority values are assigned by an administrator when rule sets are created.
  • Example rule priorities include −4, −3, −2, −1, BOTH, VOF and are assigned based on the efficacy of the rule, the rule type, and the profiled overhead of the rule. For example, a header rule that is very effective and is a simple regular expression comparison may be a −4 (run first) priority.
  • BOTH indicates that a rule is effective for detecting both spam and viruses.
  • VOF indicates a rule that is performed to detect a virus outbreak.
  • threat identification team 710 determines rule grouping and ordering and assigns priorities.
  • TI team 710 also can continuously evaluate the statistical effectiveness of the rules to determine how to order them for execution, including assigning different priorities.
  • first the message headers are parsed and header rules run.
  • message body decoding is performed and raw body rules are run.
  • HTML elements are rendered, and body rules and URI rules are run.
  • a test is performed to determine if the current spam score is greater than a spam positive threshold. If so, then the parser exits and subsequent steps are not performed. Additionally or alternatively, the test is performed after each rule is run.
  • Table 3 is a matrix stating an example operational order of events within anti-spam logic 119 in an implementation of Early Exit.
  • the HEAD row indicates that the message HEAD is parsed, header tests are run, such tests support early exit, and the tests are allowed to have the full priority range (−4 … VOF).
  • Certain spam messages may cause anti-spam logic 119 to require an extensive amount of time to determine a verdict about whether the message is spam.
  • spam senders may use “poison message” attacks that repeatedly send such a difficult message in an attempt to force the system administrator to disable anti-spam logic 119 .
  • message anti-spam verdicts that anti-spam logic 119 generates are stored in a verdict cache 115 in messaging gateway 107 , and anti-spam logic 119 reuses cached verdicts for processing messages that have identical bodies.
  • if the verdict retrieved from the cache is the same as the verdict that would be returned by an actual scan, the verdict is termed a “true verdict”.
  • a verdict from the cache that does not match the verdict from a scan is referred to as a “false verdict”.
  • some performance gains are traded off to assure reliability.
  • the digest of the message “Subject” line is included as part of the key to the cache, which reduces the cache hit rate, but also reduces the chance of a false verdict.
  • a spam sender may attempt to defeat the use of a verdict cache by including a non-printing, invalid URL tag that varies in form in the body of successive messages that are otherwise identical in content.
  • the use of such tags within the message body will cause a message digest of the body to be different among such successive messages.
  • a fuzzy digest generating algorithm can be used in which HTML elements are parsed and non-displayed bytes are eliminated from the input to the digest algorithm.
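  • A simplified sketch of such a digest is shown below; stripping tags with a regular expression and hashing with MD5 are assumptions for illustration, not the actual fuz2 algorithm:

```python
# Simplified fuzzy digest: remove HTML tags (bytes the reader never sees)
# and normalize whitespace before hashing, so two messages that differ only
# in invisible markup produce the same digest. An illustration only, not
# the real fuz2 algorithm.
import hashlib
import re

def fuzzy_digest(body):
    visible = re.sub(r"<[^>]*>", "", body)           # drop HTML tags
    visible = re.sub(r"\s+", " ", visible).strip()   # normalize whitespace
    return hashlib.md5(visible.encode("utf-8")).hexdigest()
```

Two copies of a spam message that differ only in a varying invalid tag then hash to the same cache key, defeating the evasion described above.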
  • verdict cache 115 is implemented as a Python dictionary of verdicts from anti-spam logic 119 .
  • the key to the cache is a message digest.
  • anti-spam logic 119 comprises Brightmail software and the cache key comprises a DCC “fuz2” message digest.
  • Fuz2 is an MD5 hash or digest of those portions of a message body that are meaningfully unique. Fuz2 parses HTML and skips over bytes in the message that do not affect what the user sees when viewing the message. Fuz2 also attempts to skip portions of the message that are frequently changed by spam senders. For example, a Subject line that begins with “Dear” is excluded from the input to the digest.
  • when anti-spam logic 119 begins processing a message that is eligible for spam or virus scanning, a message digest is created and stored. If creating a message digest fails or if use of verdict cache 115 is disabled, the digest is set to “None.” The digest is used as a key to perform a lookup in verdict cache 115, to determine whether a previously computed verdict has been stored for a message with an identical message body. The term “identical” means identical in the parts of the message that the reader sees as meaningful in deciding whether or not the message is spam. If a hit occurs in the cache, then the cached verdict is retrieved and further message scanning is not performed. If no digest is present in the cache, then the message is scanned using anti-spam logic 119.
  • verdict cache 115 has a size limit. If the size limit is reached, the least recently used entry is deleted from the cache. In an embodiment, each cache entry expires at the end of a configurable entry lifetime. The default value for the lifetime is 600 seconds. The size limit is set to 100 times the entry lifetime. Therefore, the cache requires a relatively small amount of memory of about 6 MB. In an embodiment, each value in the cache is a tuple comprising the time entered, a verdict, and the time that anti-spam logic 119 took to complete the original scan.
  • the time entered of the value is compared to current time. If the entry is still current, then the value of the item in the cache is returned as the verdict. If the entry has expired, it is deleted from the cache.
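  • The cache behavior described above can be sketched as follows; the eviction and expiry logic follows the text, while the class name and default parameters are illustrative:

```python
# Sketch of a verdict cache keyed by message digest, with a configurable
# entry lifetime and least-recently-used eviction when the size limit is
# reached. Class name and defaults are illustrative.
import time
from collections import OrderedDict

class VerdictCache:
    def __init__(self, lifetime=600, max_entries=1000):
        self.lifetime = lifetime
        self.max_entries = max_entries
        self._entries = OrderedDict()  # digest -> (time_entered, verdict)

    def put(self, digest, verdict):
        if len(self._entries) >= self.max_entries:
            self._entries.popitem(last=False)  # evict least recently used
        self._entries[digest] = (time.time(), verdict)

    def get(self, digest):
        entry = self._entries.get(digest)
        if entry is None:
            return None  # cache miss
        entered, verdict = entry
        if time.time() - entered > self.lifetime:
            del self._entries[digest]  # expired entry
            return None
        self._entries.move_to_end(digest)  # mark as recently used
        return verdict
```

The embodiment additionally stores the original scan duration in each tuple; that field is omitted here for brevity.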
  • fuz2 is used if available, otherwise fuz1 is used if available, and otherwise “all mime parts” is used as a digest if available, otherwise no cache entry is created.
  • An “all mime part” digest comprises, in one embodiment, a concatenation of digests of the message's MIME parts. If there are no MIME parts, a digest of the entire message body is used. In an embodiment, the “all mime parts” digest is computed only if anti-spam logic 119 performs a message body scan for some other reason. Body scanning extracts the MIME parts, and the marginal cost of computing the digest is negligible; therefore, the operations can be combined efficiently.
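  • The fallback for computing the “all mime parts” digest can be sketched as follows; the MD5 choice and the concatenation format are assumptions:

```python
# Sketch of the "all mime parts" digest: concatenate the digests of each
# MIME part, falling back to a digest of the entire body when there are no
# parts. The hash choice and concatenation format are illustrative.
import hashlib

def md5_hex(data):
    return hashlib.md5(data).hexdigest()

def all_mime_parts_digest(body, mime_parts):
    if not mime_parts:
        return md5_hex(body)  # no MIME parts: digest the whole body
    return "".join(md5_hex(part) for part in mime_parts)
```

As the text notes, the parts are already extracted during body scanning, so computing this digest adds negligible marginal cost.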
  • the verdict cache is flushed whenever messaging gateway 107 receives a rule update from rules-URLs server 707 ( FIG. 7 ). In an embodiment, the verdict cache is flushed whenever a change in the configuration of anti-spam logic 119 occurs, for example, by administrative action or by loading a new configuration file.
  • anti-spam logic 119 can scan multiple messages in parallel. Therefore, two or more identical messages could be scanned at the same time, causing a cache miss because the verdict cache is not yet updated based on one of the messages.
  • the verdict is cached only after one copy of the message is fully scanned. Other copies of the same message that are currently being scanned are cache misses.
  • anti-spam logic 119 periodically scans the entire verdict cache and deletes expired verdict cache entries. In that event, anti-spam logic 119 writes a log entry in log file 113 that reports counts of cache hits, misses, expires and adds. Anti-spam logic 119 or verdict cache 115 may maintain counter variables for the purpose of performing logging or performance reporting.
  • cached digests may be used for message filters or anti-virus verdicts.
  • multiple checksums are used to create a richer key that provides both a higher hit rate and a lower rate of false verdicts.
  • other information may be stored in the verdict cache such as the amount of time required to scan a long message for spam.
  • Brightmail creates a tracker string and returns the tracker string with a message verdict; the tracker string can be added to the message as an X-Brightmail-Tracker header.
  • the tracker string can be used by Brightmail's plug-in to Microsoft Outlook to implement language identification.
  • the tracker string is also sent back to Brightmail when the plug-in reports a false positive.
  • Both the verdict and the tracker string can be different for messages that have identical bodies.
  • for example, the body may be non-spam while spam is encoded in the subject.
  • the message Subject line is included with the message body as input to the message digest algorithm.
  • the Subject line can be different when the body of the message is clearly spam, clearly a virus, or both.
  • two messages can contain the same virus and be considered spam by Brightmail, but the Subject header may be different.
  • Each message may have a brief text attachment that differs from the other message's, and the names of the attached files may differ. However, when both messages are scanned, the same verdict will result.
  • cache hit rate is improved using a virus-positive rule. If the digest of an attachment matches a previous virus-positive and spam-positive verdict, then the previous spam verdict is reused, even if the Subject and prologue are different.
  • if the From header and the Message-ID header are deleted from the second message and the message is re-scanned, the tracker header becomes the same as for the first message.
  • detecting viruses using heuristic approaches is provided.
  • Basic approaches for detecting virus outbreaks are described in copending application Ser. No. 11/006,209, filed Dec. 6, 2004, “Method and apparatus for managing computer virus outbreaks,” of Michael Olivier et al.
  • message heuristics refers to a set of factors that are used to determine the likelihood that a message is a virus, when no signature information about the message is available. Heuristics may comprise rules to detect words or phrases that are commonly used in spam. Heuristics may vary according to a language used in the message text. In an embodiment, administrative users can select which language heuristics to use in anti-spam scanning. Message heuristics may be used to determine a VSV value. Heuristics of a message may be determined by a scanning engine that performs basic anti-spam scanning and anti-virus scanning.
  • a message can be placed in quarantine storage, because it may contain a virus, based on the results of heuristic operations rather than or in addition to definitions of virus outbreaks. Such definitions are described in the application of Olivier et al. referenced above.
  • the corpus server cluster 706 contains a past history of viruses, and if a message matches a pattern in that past history as a result of the heuristics, then the message may be quarantined regardless of whether it matches the definitions of a virus outbreak.
  • Such early quarantining provides a beneficial delay in message processing while the TOC prepares a definition of a virus outbreak.
  • FIG. 8 is a graph of time versus the number of machines infected in a hypothetical example virus outbreak.
  • the horizontal axis 814 represents time and vertical axis 812 represents a number of infected machines.
  • Point 806 represents a time at which an anti-virus software vendor, such as Sophos, publishes an updated virus definition that will detect a virus-laden message and prevent further infection on machines in networks protected by messaging gateways 107 that are using that anti-virus software.
  • Point 808 represents a time when the TOC 708 publishes a rule identifying a virus outbreak for the same virus.
  • Curve 804 varies as indicated in FIG. 8 .
  • Variable quarantine time is used in one embodiment.
  • the quarantine time may be increased when the heuristics indicate a higher likelihood that a message contains a virus. This provides maximum time for a TOC or anti-virus vendor to prepare rules or definitions, while applying minimum quarantine delay to messages that are less likely to contain a virus.
  • the quarantine time is coupled to the probability that a message contains a virus, resulting in optimum use of quarantine buffer space, as well as minimizing the time of quarantining a message that is not viral.
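  • One hypothetical mapping from virus score to quarantine time, illustrating the coupling described above (the score bands and durations are invented):

```python
# Illustrative mapping from virus score to quarantine time: more suspicious
# messages are held longer, giving the TOC or anti-virus vendor more time
# to publish rules, while less suspicious messages see minimal delay.
# Score bands and durations are invented.
def quarantine_hours(virus_score):
    if virus_score >= 0.9:
        return 24   # very likely viral: hold a full day
    if virus_score >= 0.5:
        return 12
    if virus_score >= 0.25:
        return 4
    return 0        # unlikely viral: deliver without delay
```

Tying hold time to score in this way also conserves quarantine buffer space, as the text observes.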
  • a virus score is determined and stored in a database in association with an IP address value of a sender of the message.
  • the score thus indicates the likelihood that a message originating from the associated address will contain a virus.
  • the premise is that machines that send one virus are likely to become infected with another virus or to become re-infected with the same virus or an updated virus, because those machines are not well protected. Further, if a machine is sending spam then it is more likely to be sending a virus.
  • the IP address may specify a remote machine, or may specify a machine that is within a corporate network that a messaging gateway 107 is protecting.
  • the IP address may specify a machine within the corporate network that inadvertently became infected with a virus. Such an infected machine is likely to send other messages that contain the virus.
  • a virus outbreak detection check can be performed at the same time in overall message processing as a spam check within the messaging gateway 107 .
  • virus outbreak detection can be performed at the same time that a message is parsed and subjected to spam detection.
  • one thread performs the foregoing operations in an ordered serial manner. Further, the results of certain heuristic operations can be used to inform both an anti-spam detection operation and an anti-virus detection operation.
  • the VSV value is determined based upon any one or more of: filename extension; spikes in message volume on a local basis or a global basis, identified per sender and per content; attachment content, such as Microsoft executables; and sender-based threat identification information.
  • sender-based threat identification information is used. Examples include dynamic or dial-up host blacklists, exploited host blacklists, and virus hot zones.
  • Dynamic and dial-up hosts connecting to the Internet generally send outgoing mail through a local SMTP server.
  • if such a host connects directly to an external SMTP server, such as messaging gateway 107 , the host probably has been compromised and is sending either spam messages or an email virus.
  • messaging gateway 107 comprises logic that maintains a blacklist of dynamic hosts that have operated in the preceding manner in the past, or connects to a dynamic host blacklist obtained from an external source, such as the NJABL dynamic hosts list or the SORBS dynamic hosts list.
  • identifying message characteristics of an incoming message at step 502 of FIG. 5 further comprises determining if a sender of the message is in the dynamic host blacklist. If so, then a higher VSV value is determined or assigned.
  • Step 502 also may comprise connecting to or managing an exploited host blacklist and determining if the sender of the message is on the exploited host blacklist.
  • An exploited host blacklist tracks hosts that are known to be infected by viruses or to send spam, based on the content that infected hosts are sending or by locating hosts that have been infected via connect-time scanning. Examples include XBL (CBL and OPM) and DSBL.
  • service provider 700 creates and stores an internal blacklist of senders and networks that have a past history of sending viruses, based on sender information received from customer messaging gateways 107 .
  • customer messaging gateways 107 periodically initiate network communications to corpus server cluster 706 and report the network addresses (e.g., IP addresses) of senders of messages that internal logic of the messaging gateways 107 determined to be spam or associated with viruses or other threats.
  • Logic at service provider 700 can periodically scan the internal blacklist and determine if any network addresses are known to send only viruses or spam. If so, the logic can store high threat level values or VSVs in association with those addresses. Moderate threat level values can be stored in association with network addresses that are known to send both viruses and legitimate email. Moderate or low threat level values can be associated with networks that contain one or more individual infected hosts.
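  • The tiered assignment can be sketched as follows; the level names and the sender-history counts used as input are illustrative assumptions:

```python
# Sketch of tiered threat levels derived from a sender's history: addresses
# known to send only viruses/spam get the highest level, mixed senders a
# moderate level, and other senders a low level. Names and inputs invented.
def sender_threat_level(virus_count, legit_count):
    if virus_count > 0 and legit_count == 0:
        return "high"       # known to send only viruses or spam
    if virus_count > 0:
        return "moderate"   # sends both viruses and legitimate mail
    return "low"
```

The per-network case in the text (a network containing some infected hosts) would map to the moderate-to-low end of the same scale.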
  • Testing against the blacklists can be initiated using rules of the type described above. For example, the following rules can initiate blacklist testing:

    eval DYNAMIC_IP connecting_ip(dynamic)
    eval HOTZONE_NETWORK connecting_ip(hotzone)
    eval XBL_IP connecting_ip(exploited host)
  • messages are released from quarantine in first-in-first-out order.
  • in another embodiment, a first-to-exit algorithm may be used.
  • an ordering mechanism determines which messages should be released first.
  • messages that are deemed least dangerous are released first. For example, messages that have been quarantined as a result of heuristics are released first, and messages that have been quarantined as a result of matching virus outbreak tests are released second.
  • each quarantined message is stored in the quarantine of a messaging gateway 107 in association with information indicating a reason for the quarantine. Thereafter, a process in the messaging gateway 107 can release messages based on the reasons.
  • the ordering may be configured in a data-driven fashion by specifying the order in a configuration file that is processed by the messaging gateway 107 .
  • publishing a new configuration file containing the ordering from the service provider to customer messaging gateways 107 automatically causes those messaging gateways 107 to adopt the new ordering.
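The data-driven, least-dangerous-first release ordering described above can be sketched as follows. The reason labels and the two-entry ordering are assumptions based on the heuristics-before-outbreak-rules example; a real configuration file could list any number of reasons:

```python
# Sketch of reason-based quarantine release ordering driven by a
# configuration list; reason names are illustrative assumptions.

RELEASE_ORDER = ["heuristic", "virus_outbreak_rule"]  # e.g., from a config file

def release_sequence(quarantine):
    """Order (message, quarantine_reason) pairs least-dangerous first."""
    rank = {reason: i for i, reason in enumerate(RELEASE_ORDER)}
    # Unknown reasons sort last, i.e., they are treated as most dangerous.
    return sorted(quarantine, key=lambda item: rank.get(item[1], len(RELEASE_ORDER)))

msgs = [("msg1", "virus_outbreak_rule"), ("msg2", "heuristic")]
# heuristic-quarantined messages are released before outbreak-rule matches
```

Publishing a new configuration file simply replaces `RELEASE_ORDER`, so gateways adopt the new ordering without code changes.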
  • different actions can be taken on quarantined messages when those messages leave the quarantine based on the threat level associated with the messages when they leave the quarantine. For example, messages that appear extremely threatening but may leave the quarantine as a result of overflow can be subjected to a strip-and-deliver operation in which attachments are stripped and the message is delivered to the recipient without the attachments. Alternatively, a message with a lower threat level is delivered as normal.
  • an X-header could be added to lower threat level messages.
  • a client email program (e.g., Eudora, Microsoft Outlook) can be configured with a rule to recognize the X-header and place messages bearing the X-header in a special folder (e.g., “Potentially Dangerous Messages”).
  • a file attachment of a message with a particular threat level is renamed (the message is “de-fanged”), requiring the receiving user to affirmatively rename the file attachment again to make it usable with an application. This approach is intended to cause the user to examine the file carefully before renaming and opening it. The message could be forwarded to an administrator for evaluation. Any of these alternatives can be combined in an embodiment.
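The "de-fanging" rename described above can be sketched minimally. The particular suffix used here is an assumption; the patent only requires that the user must affirmatively rename the attachment before it is usable:

```python
# Sketch of de-fanging an attachment by renaming it; the ".defanged"
# suffix is an illustrative assumption.

SUFFIX = ".defanged"

def defang(filename):
    """Rename the attachment so no application will open it directly."""
    return filename + SUFFIX

def refang(filename):
    """The recipient must consciously restore the name to use the file."""
    if filename.endswith(SUFFIX):
        return filename[: -len(SUFFIX)]
    return filename
```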
  • FIG. 9 is a flow diagram of an approach for rescanning messages that may contain viruses.
  • each messaging gateway rescans messages in its quarantine against the new rules.
  • This approach offers the advantage that messages may be released from the quarantine earlier, because in later-stage processing the messages will be detected, using the new rules, as containing viruses.
  • release refers to removing a message from quarantine and sending it to an anti-virus scanning process.
  • rescanning might reduce or increase the quarantine time of a message. Overall, this approach minimizes the number of messages in the quarantine and reduces the likelihood of releasing infected messages.
  • inadvertent release could occur, for example, if the quarantine had a fixed release time, and the fixed release timer expired before an anti-virus vendor or other source released a virus definition that would trap the released message. In that scenario, a malicious message would be automatically released and downstream processing would not trap it.
  • any of several events may trigger rescanning messages in a message quarantine.
  • the approach of FIG. 9 applies to processing messages that are in a quarantine as a result of viruses, spam, or other threats or undesired characteristics of messages.
  • a re-scanning timer is started and runs until expiration, and upon expiration re-scanning all messages in the quarantine queue is triggered at step 906 .
  • the messaging gateway 107 receives one or more new virus threat rules, anti-spam rules, URLs, scores, or other message classification information from Rules-URLs server 707 . Receiving such information also can trigger re-scanning at step 906 .
  • the new rules, scores and other information are used in the re-scanning step to generate a new VSV for each message in the quarantine.
  • the TOC server 708 may publish, through rules-URL server 707 , a set of rules for a virus outbreak that are initially broad, and later narrow the scope of the rules as more information about the outbreak becomes known. As a result, messages that matched the earlier rule set may not match the revised rules, and become known false positives.
  • the approach herein attempts to release known false positives automatically in response to a rule update, without intervention by an administrator of messaging gateway 107 .
  • each message in the quarantine queue 316 has a stored time value indicating when the message entered the quarantine, and re-scanning at step 906 is performed in order of quarantine entry time, oldest message first.
  • at step 908, a test is performed to determine if the new VSV for a message is greater than or equal to a particular threshold value, as in step 312 of FIG. 3.
  • the VSV threshold value is set by an administrator of a messaging gateway 107 to determine tolerance for quarantining messages. If the VSV is below the threshold, then the message probably can be released from the quarantine. Therefore control passes to step 910 at which a normal quarantine exit delivery policy is applied.
  • a messaging gateway 107 may implement a separate reporting threshold. When a message has a VSV that exceeds the reporting threshold, as tested at step 907 , the messaging gateway 107 notifies the service provider 700 at step 909 and continues processing the message. Such notifications may provide important input to determining the occurrence of new virus outbreaks.
  • reporting is an aspect of “SenderBase Network Participation” (SBNP) and can be selectively enabled by an administrator using a configuration setting.
  • Applying a delivery policy at step 910 may comprise immediately queuing the message for delivery to a recipient in unmodified form, or stripping attachments, or performing content filtering, or performing other checks on the message.
  • Applying a delivery policy may comprise adding an X-header to the message indicating a virus scan result. All applicable X-headers may be added to the message in the order in which actions occurred.
  • Applying a delivery policy may comprise modifying a Subject line of the message to indicate the possible presence of a virus, spam or other threat.
  • Applying a delivery policy may comprise redirecting the message to an alternate recipient, and storing an archived copy of the message for subsequent analysis by other logic, systems or persons.
  • applying a delivery policy at step 910 comprises stripping all attachments from the message before delivering it when the message is in any of several quarantines and one quarantine determines that stripping attachments is the correct action.
  • a messaging gateway 107 may support a virus outbreak quarantine queue 316 and a separate quarantine queue that holds messages that appear to violate a policy of the gateway, such as the presence of disallowed words. Assume that the virus outbreak quarantine queue 316 is configured to strip attachments upon overflow before delivery. Assume the message is in both the virus outbreak quarantine queue 316 and the separate policy quarantine queue, and happens to overflow the virus outbreak quarantine queue 316 . If an administrator then manually releases the same message from the policy quarantine queue, then the attachments are stripped again before delivery.
  • the message is delivered.
  • If the test of step 908 is true, then the message is problematic and probably needs to be retained in the quarantine.
  • each message may be assigned an expiration time value, and the expiration time value is stored in a database of messaging gateway 107 in association with quarantine queue 316 .
  • the expiration time value is equal to the sum of the time at which the message entered the quarantine queue 316 and a specified retention time.
  • the expiration time value may vary based upon message contents or heuristics of a message.
  • at step 914, a test is performed to determine if the message expiration time has been reached. If so, then the message is removed from the quarantine, but the removal of a message at that point is deemed an abnormal or early exit, and therefore an abnormal exit delivery policy is applied at step 918. Thereafter the message can be delivered in step 912 subject to the delivery policy of step 918.
  • the delivery policy that is applied at step 918 may be different than the policy that is applied at step 910 .
  • the policy of step 910 could provide for unrestricted delivery, whereas at step 918 (for delivery of messages that are suspect, but have been in the quarantine for longer than the expiration time) removing attachments could be required.
  • if the message time has not expired at step 914, then the message is retained in the quarantine as shown at step 916. If the rule that causes the VSV to exceed the threshold changes, then the rule name and description are updated in the message database.
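The per-message rescan decision flow of FIG. 9 (steps 907 through 918) can be sketched end to end. The threshold values, the notification callback, and the policy names returned here are assumptions layered on the steps described above:

```python
# Sketch of the FIG. 9 rescan decision for one quarantined message;
# threshold constants and return labels are illustrative assumptions.

VSV_THRESHOLD = 3        # step 908: quarantine threshold
REPORT_THRESHOLD = 4     # step 907: separate reporting threshold

def rescan_message(vsv, entered_at, retention, now, notify=lambda: None):
    """Return the disposition of one message after rescanning with new rules."""
    if vsv >= REPORT_THRESHOLD:           # step 907 -> step 909
        notify()                          # report to the service provider, continue
    if vsv < VSV_THRESHOLD:               # step 908 false
        return "normal_exit"              # step 910: normal exit delivery policy
    if now >= entered_at + retention:     # step 914: expiration reached
        return "abnormal_exit"            # step 918: abnormal exit delivery policy
    return "retain"                       # step 916
```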
  • alerts can be generated at steps 904 , 912 or 916 .
  • Example alert events include reaching specified quarantine fill levels or space limits; quarantine overflow; receiving a new outbreak rule, e.g. a rule that if matched sets a VSV higher than the quarantine threshold value that is configured in the messaging gateway; receiving information removing an outbreak rule; and a failure in an attempt to update new rules in the messaging gateway.
  • Information removing an outbreak rule may comprise receiving a new rule that reduces a threat level of a particular type of message below the quarantine threshold value that is configured in the messaging gateway.
  • log file entries can be written when messages are released abnormally or in an early exit.
  • Alerts or log entries can be sent or written as the quarantine fills at specified levels.
  • alerts or log entries are sent or written when the quarantine reaches 5% full, 50% full, 75% full, etc.
  • Log entries may include quarantine receipt time, quarantine exit time, quarantine exit criteria, quarantine exit actions, number of messages in quarantine, etc.
  • alert messages can indicate scanning engine update failures; rule update failures; failure to receive a rule update in a specified time period; rejection of a specified percentage of messages; rejection of a specified number of messages; etc.
  • FIG. 10 is a block diagram of a message flow model in a messaging gateway that implements the logic described above.
  • Message heuristics 1002 and virus outbreak rules 1004 are provided to a scanning engine, such as anti-virus checker 116 , which generates a VSV value or virus threat level (VTL) value 1005 . If the VSV value exceeds a specified threshold, messages enter quarantine 316 .
  • a plurality of exit criteria 1006 can enable a message to leave the quarantine 316 .
  • Example exit criteria 1006 include expiration of a time limit 1008 , overflow 1010 , manual release 1012 , or a rule update 1014 .
  • Example exit actions 1018 then occur.
  • Example exit actions 1018 include strip and deliver 1020 , delete 1022 , normal delivery 1024 , tagging the message subject with keywords (e.g., [SPAM]) 1026 , and adding an X-header 1028 .
  • exit actions can include altering the specified recipient of the message.
  • messaging gateway 107 maintains a data structure that defines, for each sending host associated with a message, policies for acting on messages received from that host.
  • a Host Access Table comprises a Boolean attribute value indicating whether to perform virus outbreak scanning for that host, as described herein for FIG. 3 and FIG. 9.
  • each message processed in messaging gateway 107 may be stored in a data structure that carries metadata indicating what message processing to perform within the messaging gateway.
  • metadata include: the VSV value of the message; the name of the rule that resulted in the VSV value and the corresponding rule description; the message quarantine time and overflow priority; flags to specify whether to perform anti-spam and anti-virus scanning and virus outbreak scanning; and a flag to enable content filters to be bypassed.
  • a set of configuration information stored in messaging gateway 107 specifies additional program behavior for virus outbreak scanning for each potential recipient of a message from the gateway. Since messaging gateway 107 typically controls message traffic to a finite set of users, e.g., employees, contractors or other users in an enterprise private network, such configuration information may be managed for all potential recipients. For example, a per-recipient configuration value may specify a list of message attachment file extension types (“.doc”, “.ppt”, etc.) that are excluded from consideration by the scanning described herein, and a value indicating that a message should not be quarantined.
  • the configuration information can include a particular threshold value for each recipient. Thus, the tests of step 312 and step 908 may have a different outcome for different recipients depending upon the associated threshold values.
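The per-recipient configuration described above can be sketched as follows. The field names, defaults, and the sample addresses are assumptions; the patent specifies only excluded attachment extension types and per-recipient threshold values:

```python
# Sketch of per-recipient virus outbreak scanning configuration;
# keys, defaults, and addresses are illustrative assumptions.

RECIPIENT_CONFIG = {
    "alice@example.com": {"excluded_exts": {".doc", ".ppt"}, "threshold": 4},
}
DEFAULT_THRESHOLD = 3

def should_quarantine(recipient, vsv, attachment_ext):
    """Steps 312/908 can give different outcomes for different recipients."""
    cfg = RECIPIENT_CONFIG.get(recipient, {})
    if attachment_ext in cfg.get("excluded_exts", set()):
        return False  # this extension is excluded from consideration
    return vsv >= cfg.get("threshold", DEFAULT_THRESHOLD)
```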
  • Messaging gateway 107 may also manage a database table that counts messages that have been filtered using the techniques of FIG. 3 , FIG. 9 , the VSV of such messages, and a count of messages that were sent to the message quarantine 316 .
  • each message quarantine 316 has a plurality of associated programmatic actions that control how messages exit the quarantine.
  • exit actions may include manual release of a message from the message quarantine 316 based on operator decision 318 .
  • Exit actions may include automatic release of a message from the message quarantine 316 when an expiration timer expires, as in FIG. 9 .
  • Exit actions may include an early exit from the message quarantine 316 when the quarantine is full, as an implementation of overflow policy 322 . “Early exit” refers to prematurely releasing a message before the end of an expiration time value associated with the message based on a resource limitation such as queue overflow.
  • Normal message exit actions and early exit actions may be organized as a primary action and a secondary action of the type described above for delivery policy step 910 .
  • Primary actions may include Bounce, Delete, Strip Attachments and Deliver, and Deliver.
  • Secondary actions may include Subject tag, X-header, Redirect, or Archive. The secondary actions are not associated with a primary action of Delete.
  • the secondary action of Redirect enables sending messages to a secondary “off box” quarantine queue that is hosted at corpus server cluster 706 or another element within service provider 700 rather than on the messaging gateway 107 . This approach enables TI team 710 to examine quarantined messages.
  • early exit actions from the quarantine resulting from quarantine queue overflow may include any of the primary actions, including Strip Attachments and Deliver. Any of the secondary actions may be used for such early exit.
  • An administrator of the messaging gateway 107 may select the primary action and the secondary action for use upon early exit by issuing a configuration command to the messaging gateway using a command interface or GUI. Additionally or alternatively, message heuristics determined as a result of performing anti-virus scanning or other message scanning may cause different early exit actions to be performed in response.
  • a local database in messaging gateway 107 stores names of file attachments of received messages that are in the message quarantine 316 , and the size of the file attachment.
  • Re-scanning at step 906 may occur for a particular message in response to other actions of the messaging gateway 107 .
  • messaging gateway 107 implements a content filter that can change the content of a received message according to one or more rules. If a content filter changes the content of a received message that was previously scanned for viruses, then the VSV value of that message could change upon re-scanning. For example, if the content filter strips attachments from the message, and a virus was in an attachment, the stripped message may no longer have a virus threat. Therefore, in an embodiment, when a content filter changes the content of a received message, re-scanning at step 906 is performed.
  • an administrator of messaging gateway 107 can search the contents of quarantine 316 using console commands or other user interface commands.
  • searches can be performed based on attachment names, attachment types, attachment size, and other message attributes.
  • searching by file type can be performed only on messages that are in quarantine 316 and not in a policy quarantine or other quarantine, because such searching requires a scan of the message body that may negatively impact performance.
  • the administrator can display the contents of the virus outbreak quarantine 316 in a sorted order according to any of the foregoing attributes.
  • the messaging gateway 107 automatically displays a view of the virus outbreak quarantine.
  • the view includes for each message in the quarantine the following attribute values: outbreak identifier or rule name; sender name; sender domain; recipient name; recipient domain; subject name; attachment name; attachment type; attachment size; VSV; quarantine entry time; quarantine remaining time.
  • messaging gateway 107 stores a reinsertion key, comprising an optional unique text string that can be associated with messages that have been manually released from the quarantine 316 .
  • Message rules are abstract statements which, if matched against a message by the anti-spam logic 119, result in a higher spam score.
  • Rules may have rule types.
  • Example rule types include compromised host, suspected spam source, header characteristics, body characteristics, URI, and learning.
  • specific outbreak rules can be applied. For example, a virus outbreak detection mechanism might determine that a certain type of message with a ZIP file attachment of 20 kb in size represents a virus. The mechanism can create a rule under which customer messaging gateways 107 will quarantine messages with 20 kb ZIP attachments, but not messages with 1 MB ZIP attachments. As a result, fewer false quarantine operations occur.
  • function tests can test specific aspects of a message. Each function executes custom code to examine messages, information already captured about messages, etc. The tests cannot be formed using simple logical combinations of generic header or body tests. For example, an effective test for matching viruses without examining file content is comparing the extension of the “filename” or “name” MIME field to the claimed MIME Content-Type. If the extension is “doc” and the Content-Type is neither application/octet-stream nor application/.*word, then the content is suspicious. Similar comparisons can be performed for PowerPoint, Excel, image files, text files, and executables.
  • tests include: testing whether the first line of base64-type content matches the regular expression /^TV[nopqr]/, indicating a Microsoft executable; testing whether email priority is set to High, but there is no X-Mailer or User-Agent header; testing whether the message is multipart/alternative, but the alternative parts are very different in content; testing whether the message is multipart, but contains only HTML text; looking for specific MIME boundary formats for new outbreaks.
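Two of the function tests above can be sketched concretely: the extension-versus-Content-Type mismatch check, and the base64 first-line test (the "MZ" header of a Microsoft executable encodes to base64 beginning with "TV"). The function names and the exact suspicious verdicts are illustrative assumptions:

```python
import re

# Sketches of two function tests; names and thresholds are assumptions.

def doc_type_mismatch(filename, content_type):
    """Suspicious if a .doc attachment does not claim a Word-like MIME type."""
    if not filename.lower().endswith(".doc"):
        return False
    return not (content_type == "application/octet-stream"
                or re.match(r"application/.*word", content_type))

def looks_like_ms_executable(first_base64_line):
    """Base64 of an MZ executable header begins with 'TV' then one of n-r."""
    return re.match(r"^TV[nopqr]", first_base64_line) is not None
```

Similar mismatch comparisons can be written for PowerPoint, Excel, image, and text attachment types.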
  • virus information logic 114 comprises logic that supports establishing meta-rules that comprise a plurality of linked rules. Examples include: meta VIRUS_FOO ((SUBJECT_FOO1
  • virus information logic 114 comprises logic that supports establishing and testing messages against rules that are based upon file attachment size, file name keywords, encrypted files, message URLs, and anti-virus logic version values.
  • rules relating to file attachment size are established based on discrete values rather than every possible size value; for example, rules can specify file size in 1K increments for files from 0-5K; in 5K increments for files that are sized from 5K to 1 MB; and in 1 MB increments for larger files.
  • File name keyword rules match on a message when a file attachment to the message has a name that includes one or more keywords in the rules.
  • Encrypted file rules test whether or not a file attachment is encrypted. Such rules may be useful to quarantine messages that have encrypted containers, such as encrypted ZIP files, as attachments to messages.
  • Message URL rules match on a message when the message body contains one or more URLs specified in the rules. In an embodiment, a message is not scanned to identify URLs unless at least one message URL is installed in the system.
  • Rules based on anti-virus logic version values match a message when the messaging gateway 107 is running anti-virus logic having a matching version.
  • a rule may specify an AV signature version of “7.3.1” and would match on messages if a messaging gateway is running AV software with a signature file having that version number.
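The discrete file-size rules described above amount to bucketing attachment sizes before matching. A sketch follows; rounding down to the bucket boundary is an assumption, since the patent specifies only the increment widths:

```python
# Sketch of discretizing attachment sizes for size-based rules:
# 1K steps below 5K, 5K steps up to 1 MB, 1 MB steps above that.
# Rounding direction (down) is an illustrative assumption.

K, MB = 1024, 1024 * 1024

def size_bucket(size_bytes):
    """Round a file size down to the bucket boundary used by size rules."""
    if size_bytes < 5 * K:
        step = K
    elif size_bytes < MB:
        step = 5 * K
    else:
        step = MB
    return (size_bytes // step) * step
```

A rule for "20 kb ZIP attachments" would then match any attachment whose bucketed size equals 20K, rather than requiring an exact byte count.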
  • a messaging gateway 107 automatically reduces a stored VSV for a message upon receiving a new rule that is more specific for a set of messages than a previously received rule. For example, assume that the TOC 708 initially distributes a rule that any message with a .ZIP file attachment is assigned VSV “3”. The TOC 708 then distributes a rule that .ZIP file attachments between 30 KB and 35 KB have VSV “3”. In response, messaging gateway 107 reduces the VSVs of all messages with ZIP attachments of different file sizes to a default VSV, e.g., “1”.
  • anti-spam logic 119 can learn to identify legitimate email specific to an organization based on outbound message characteristics such as recipient addresses, recipient domains and frequently used words or phrases.
  • an outbound message is a message composed by a user account associated with computers 120 A, 120 B, 120 C on private network 110 and directed through messaging gateway 107 to a recipient account that is logically outside the messaging gateway.
  • Such a recipient account typically is on a computer that is connected to public network 102 . Since all outbound messages pass through messaging gateway 107 before delivery into network 102 , and such outbound messages are nearly never spam, the messaging gateway can scan such messages and automatically generate heuristics or rules that are associated with non-spam messages.
  • learning is accomplished by training a Bayesian filter in anti-spam logic 119 on the text of outbound messages, and then using the Bayesian filter to test inbound messages. If the trained Bayesian filter returns a high probability for an inbound message, then that message probably is not spam, because it resembles outbound messages that are presumed not to be spam.
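The direction of training above can be illustrated with a deliberately simplified sketch. A real implementation would use a proper Bayesian filter; here an inbound message is scored merely by the fraction of its tokens seen in outbound (presumed-legitimate) mail, which is an assumption standing in for the full filter:

```python
from collections import Counter

# Toy stand-in for a Bayesian filter trained on outbound mail;
# the token-fraction score is an illustrative simplification.

class OutboundModel:
    def __init__(self):
        self.tokens = Counter()

    def train_outbound(self, text):
        """Learn vocabulary from outbound (assumed non-spam) messages."""
        self.tokens.update(text.lower().split())

    def legitimacy_score(self, text):
        """Fraction of inbound tokens previously seen in outbound traffic."""
        words = text.lower().split()
        if not words:
            return 0.0
        known = sum(1 for w in words if w in self.tokens)
        return known / len(words)
```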
  • messaging gateway 107 periodically polls the rules-URLs server 707 to request any available rule updates.
  • HTTPS may be used to deliver rule updates.
  • an administrator of messaging gateway 107 can access and examine rule updates by entering URLs of the rule updates and connecting to rules-URLs server 707 using a browser and a proxy server or fixed address. An administrator can then deliver the updates to selected messaging gateways 107 within a managed network.
  • Receiving a rule update may comprise displaying a user notification in an interface of messaging gateway 107 , or writing an entry in log file 113 stating that a rule update was received or that the messaging gateway successfully connected to the rules-URLs server 707 .
  • Customer messaging gateways 107 in FIG. 1 may implement a “phone home” or “SenderBase Network Participation” service in which the messaging gateways 107 can open connections to the service provider 700 and provide information about the messages that the messaging gateways 107 have processed, so that such information from the field can be added to the corpus and otherwise used at the service provider to improve scoring, outbreak detection, and heuristics.
  • a tree data structure and processing algorithm are used to provide efficient data communication from messaging gateways 107 to the service provider.
  • Data from service provider generated as part of anti-spam and anti-virus checks is sent to messaging gateways 107 in the field.
  • the service provider creates metadata describing what data the service provider wants the messaging gateways 107 to return to the service provider.
  • the messaging gateways 107 collate data matching the metadata for a period of time, e.g., 5 minutes.
  • the messaging gateways 107 then connect back to the service provider and provide field data according to the specifications of the metadata.
  • a tree is implemented as a hash of hashes.
  • a standard mapping of nested hashes (or dictionaries in Python) to trees exists.
  • Certain nodes are named so that the data returned from the MGA indicates which values are which.
  • the MGA merely needs to locate the correct data by name, and send a copy of the data back to the service provider.
  • the only thing the MGA needs to know is the type of the data, that is, whether the data is a numeric value or string. The MGA does not need to perform computations or transformations of the data to suit the service provider.
  • Constraints are placed on the structure of the data. Rules are that endpoints of the tree are always one of two things. If the target data is a number, then the leaf node is a counter. When the MGA sees the next message that comes through, it increments or decrements the counter for that node. If the target data is a string, then the leaf node is overwritten with that string value.
  • any form of data can be communicated. For example, if the MGA needs to communicate an average score value back to the service provider, rather than having the service provider inform the MGA that the service provider wants the MGA to return a particular value as an average score, two counters are used, one for the top value and one for the bottom value. The MGA need not know which is which. It simply counts the prescribed values and returns them. Logic at the service provider knows that the values received from the MGA are counters and need to be averaged and stored.
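The constraints above (leaves are either counters or strings) and the two-counter averaging technique can be sketched as nested-dictionary logic. The metadata keys and the one-level tree are assumptions kept flat for brevity:

```python
# Sketch of the metadata-driven collation tree: leaves are either
# counters (numbers) or strings, per the constraints described above.
# Metadata keys are illustrative assumptions.

def make_tree(metadata):
    """MGA side: build a collation tree from service-provider metadata."""
    return {k: (0 if t == "counter" else "") for k, t in metadata.items()}

def record(tree, leaf, value):
    """Increment a counter leaf, or overwrite a string leaf."""
    if isinstance(tree[leaf], int):
        tree[leaf] += value
    else:
        tree[leaf] = value

meta = {"score_top": "counter", "score_bottom": "counter", "engine": "string"}
tree = make_tree(meta)
record(tree, "score_top", 9)       # running sum of scores seen
record(tree, "score_bottom", 3)    # count of messages scored
record(tree, "engine", "av-7.3.1")

# Service-provider side: it alone knows the two counters form an average.
average = tree["score_top"] / tree["score_bottom"]
```

The MGA only increments counters or overwrites strings by name; the averaging semantics live entirely at the service provider, so new values can be requested without updating MGA software.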
  • this approach provides a method for transparent collation and transfer of data in which the device transferring the data does not know the specific use of the data, but can collate and provide the data. Further, the service provider can update its software to request additional values from messaging gateways 107 , but no update to the MGA software is required. This enables the service provider to collect data without having to change hundreds or thousands of messaging gateways 107 in the field.
  • Example data that can be communicated from a messaging gateway 107 to service provider 700 includes X-header values containing obfuscated rules that matched on a particular message and resulted in a spam verdict.
  • customer messaging gateways 107 can be deployed in a customer network so that they receive and process both inbound and outbound message traffic. Therefore, a messaging gateway 107 can be configured with an outbound message whitelist.
  • the destination network addresses of designated messages leaving the messaging gateway 107 are placed in an outbound message whitelist with a weight value.
  • the outbound message whitelist is consulted when an inbound message is received, and inbound messages having source network addresses in the outbound whitelist are delivered if the weight value is appropriate. That is, the weight value is considered in determining if the message should be delivered; the presence of an address in the outbound whitelist does not necessarily mandate delivery.
  • the rationale is that a message received from an entity in the outbound whitelist should not be spam or threatening, because sending a message to that entity implicitly indicates trust.
  • the outbound whitelist may be maintained at the service provider for distribution to other customer messaging gateways 107 .
  • Determining weight values may be performed with several approaches. For example, a destination address can be processed using a reputation scoring system, and a weight value can be selected based on the resulting reputation score. Message identifiers can be tracked and compared to determine if an inbound message is actually replying to a prior message that was sent. A cache of message identifiers may be used. Thus, if the Reply-To header contains a message identifier of a message previously sent by the same messaging gateway 107 , then it is likely that the reply is not spam or a threat.
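The outbound whitelist with weight values and the Reply-To message-identifier cache described above can be sketched together. The weight scale, the delivery cutoff, and the data structures are assumptions; the patent says only that the weight is "considered" in the delivery decision:

```python
# Sketch of an outbound message whitelist with weights plus a cache of
# sent message identifiers; cutoff and weight scale are assumptions.

outbound_whitelist = {}   # destination address -> weight value
sent_message_ids = set()  # identifiers of messages previously sent

def record_outbound(dest_addr, weight, message_id):
    """Track a designated outbound message's destination and identifier."""
    outbound_whitelist[dest_addr] = weight
    sent_message_ids.add(message_id)

def inbound_looks_trusted(src_addr, in_reply_to=None, min_weight=0.5):
    """Trust replies to our own mail, or senders we wrote to with high weight."""
    if in_reply_to is not None and in_reply_to in sent_message_ids:
        return True   # replying to a message we sent; likely not spam
    return outbound_whitelist.get(src_addr, 0.0) >= min_weight
```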
  • the approach for managing computer virus outbreaks described herein may be implemented in a variety of ways and the invention is not limited to any particular implementation.
  • the approach may be integrated into an electronic mail system or a mail gateway appliance or other suitable device, or may be implemented as a stand-alone mechanism.
  • the approach may be implemented in computer software, hardware, or a combination thereof.
  • FIG. 6 is a block diagram that illustrates a computer system 600 upon which an embodiment of the invention may be implemented.
  • Computer system 600 includes a bus 602 or other communication mechanism for communicating information, and a processor 604 coupled with bus 602 for processing information.
  • Computer system 600 also includes a main memory 606 , such as a random access memory (“RAM”) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604 .
  • Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604 .
  • Computer system 600 further includes a read only memory (“ROM”) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604 .
  • a storage device 610 such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions.
  • Computer system 600 may be coupled via bus 602 to a display 612 , such as a cathode ray tube (“CRT”), for displaying information to a computer user.
  • An input device 614 is coupled to bus 602 for communicating information and command selections to processor 604 .
  • another type of user input device is cursor control 616, such as a mouse, trackball, stylus, or cursor direction keys, for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612.
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • The invention is related to the use of computer system 600 for applying heuristic tests to message content, managing a dynamic threat quarantine queue, and message scanning with early exit from parsing and scanning.
  • According to one embodiment, applying heuristic tests to message content, managing a dynamic threat quarantine queue, and message scanning with early exit from parsing and scanning are provided by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606.
  • Such instructions may be read into main memory 606 from another machine-readable medium, such as storage device 610 .
  • Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein.
  • In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention.
  • Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
  • The term "machine-readable medium" as used herein refers to any medium that participates in providing instructions to processor 604 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
  • Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610 .
  • Volatile media includes dynamic memory, such as main memory 606 .
  • Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602 . Transmission media can also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
  • Machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
  • Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution.
  • For example, the instructions may initially be carried on a magnetic disk of a remote computer.
  • The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
  • a modem local to computer system 600 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal.
  • An infrared detector can receive the data carried in the infrared signal and appropriate circuitry can place the data on bus 602 .
  • Bus 602 carries the data to main memory 606 , from which processor 604 retrieves and executes the instructions.
  • the instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604 .
  • Computer system 600 also includes a communication interface 618 coupled to bus 602 .
  • Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622 .
  • For example, communication interface 618 may be an integrated services digital network ("ISDN") card or a modem to provide a data communication connection to a corresponding type of telephone line.
  • As another example, communication interface 618 may be a local area network ("LAN") card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • In any such implementation, communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • Network link 620 typically provides data communication through one or more networks to other data devices.
  • For example, network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider ("ISP") 626.
  • ISP 626 in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet” 628 .
  • Internet 628 uses electrical, electromagnetic or optical signals that carry digital data streams.
  • The signals through the various networks and the signals on network link 620 and through communication interface 618, which carry the digital data to and from computer system 600, are exemplary forms of carrier waves transporting the information.
  • Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618 .
  • For example, a server 630 might transmit a requested code for an application program through Internet 628, ISP 626, local network 622 and communication interface 618.
  • In one embodiment, one such downloaded application provides for applying heuristic tests to message content, managing a dynamic threat quarantine queue, and message scanning with early exit from parsing and scanning as described herein.
  • Processor 604 may execute the received code as it is received, and/or stored in storage device 610 , or other non-volatile storage for later execution. In this manner, computer system 600 may obtain application code in the form of a carrier wave.

Abstract

Determining whether to quarantine a message is disclosed. A dynamic and flexible threat quarantine queue is provided with a variety of exit criteria and exit actions that permits early release of messages in other than first-in, first-out order.

Description

    PRIORITY CLAIM AND RELATED APPLICATION
  • This application claims benefit under 35 U.S.C. §120 as a Continuation of prior application Ser. No. 11/418,812, filed May 5, 2006, which claims benefit of Provisional Appln. 60/678,391, filed May 5, 2005, the entire contents of which are hereby incorporated by reference as if fully set forth herein.
  • FIELD OF THE INVENTION
  • The present invention generally relates to detecting threats in electronic messages such as computer viruses, spam, and phishing attacks. The invention relates more specifically to techniques for responding to new occurrences of threats in electronic messages, managing a quarantine queue of threat-bearing messages, and scanning messages for threats.
  • BACKGROUND
  • The approaches described in this section could be pursued, but are not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
  • The recurring outbreak of message-borne viruses in computers linked to public networks has become a serious problem, especially for business enterprises with large private networks. Direct and indirect costs of thousands of dollars may arise from wasted employee productivity, capital investment to buy additional hardware and software, lost information because many viruses destroy files on shared directories, and violation of privacy and confidentiality because many viruses attach and send random files from a user's computer.
  • Further, damage from viruses occurs over a very short time period. A very high percentage of machines in an enterprise network can be infected between the time that the virus breaks out and the time virus definitions are published and deployed at an enterprise mail gateway that can detect and stop virus-infected messages. The window of time between “outbreak” and “rule deployment” is often five (5) hours or more. Reducing reaction time would be enormously valuable.
  • In most virus outbreaks, executable attachments now serve as a carrier of virus code. For example, of 17 leading virus outbreaks in the last three years, 13 viruses were sent through email attachments. Twelve of the 13 viruses sent through email attachments were sent through dangerous attachment types. Thus, some enterprise network mail gateways now block all types of executable file attachments.
  • Apparently in response, virus writers are now hiding executables. Increasingly, virus writers are hiding known dangerous file types in files that appear to be innocent. For example, a virus writer may embed executables within .zip files of the type generated by WinZIP and other archive utilities. Such .zip files are very commonly used by enterprises to compress and share larger files, so most enterprises are unwilling or unable to block .zip files. It is also possible to embed executables in Microsoft Word and some versions of Adobe Acrobat.
  • Based on the foregoing, there is a clear need for an improved approach for managing virus outbreaks. Present techniques for preventing delivery of mass unsolicited commercial email (“spam”) and messages that contain other forms of threats, such as phishing attacks, are also considered inadequate. Present techniques for scanning messages for threats are also considered inefficient and in need of improvement.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
  • FIG. 1 is a block diagram of a system for managing computer virus outbreaks, according to an embodiment.
  • FIG. 2 is a flow diagram of a process of generating a count of suspicious messages, as performed by a virus information source, according to an embodiment.
  • FIG. 3 is a data flow diagram illustrating processing of messages based on virus outbreak information, according to an embodiment.
  • FIG. 4 is a flow diagram of a method of determining a virus score value, according to an embodiment.
  • FIG. 5 is a flow diagram illustrating application of a set of rules for managing virus outbreaks according to an embodiment.
  • FIG. 6 is a block diagram that illustrates a computer system upon which an embodiment may be implemented.
  • FIG. 7 is a block diagram of a system that may be used in approaches for blocking “spam” messages, and for other kinds of email scanning processes.
  • FIG. 8 is a graph of time versus the number of machines infected in a hypothetical example virus outbreak.
  • FIG. 9 is a flow diagram of an approach for rescanning messages that may contain viruses.
  • FIG. 10 is a block diagram of a message flow model in a messaging gateway that implements the logic described above.
  • FIG. 11 is a flow diagram of a process of performing message threat scanning with an early exit approach.
  • DETAILED DESCRIPTION
  • A method and apparatus for managing computer virus outbreaks is described. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
  • Embodiments are described herein according to the following outline:
      • 1.0 General Overview
      • 2.0 Virus Outbreak Control Approaches—First Embodiment—Structural and Functional Overview
        • 2.1 Network System and Virus Information Sources
        • 2.2 Counting Suspicious Messages
        • 2.3 Processing Messages Based on Virus Outbreak Information
        • 2.4 Generating Virus Outbreak Information
        • 2.5 Using Virus Outbreak Information
        • 2.6 Additional Features
        • 2.7 Example Use Cases
      • 3.0 Approaches for Blocking Spam Messages
        • 3.1 Early Exit from Spam Scanning
        • 3.2 Spam Scan Verdict Caching
      • 4.0 Methods of Detection of Viruses Based on Message Heuristics, Sender Information, Dynamic Quarantine Operation, and Fine-Grained Rules
        • 4.1 Detecting Using Message Heuristics
        • 4.2 Sender-Based Detection of Viruses
        • 4.3 Dynamic Quarantine Operations Including Rescanning
        • 4.4 Fine-Grained Rules
        • 4.5 Communication of Messaging Gateways with Service Provider
        • 4.6 Outbound Whitelist Module
      • 5.0 Implementation Mechanisms—Hardware Overview
      • 6.0 Extensions and Alternatives
  • 1.0 General Overview
  • The needs identified in the foregoing Background, and other needs and objects that will become apparent from the following description, are achieved in the present invention, which comprises, in one aspect, a method comprising receiving an electronic mail message having a destination address for a recipient account; determining a virus score value for the message based upon one or more rules that specify attributes of messages that are known to contain computer viruses, wherein the attributes comprise a type of file attachment to the message, a size of the file attachment, and one or more heuristics based on the message sender, subject or body and other than file attachment signatures; and when the virus score value is greater than or equal to a specified threshold, storing the message in a quarantine queue without immediately delivering the message to the recipient account.
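The first aspect above can be sketched in code. This is an illustrative Python sketch only, not the patented implementation; the rule predicates, score weights, message field names, and threshold value are all assumptions introduced for illustration.

```python
QUARANTINE_THRESHOLD = 3  # illustrative threshold, not taken from the application

RULES = [
    # (predicate over a message dict, score contribution) -- all illustrative
    (lambda m: m.get("attachment_ext") in {".exe", ".scr", ".pif"}, 2),
    (lambda m: 40_000 <= m.get("attachment_size", 0) <= 60_000, 1),
    (lambda m: "urgent" in m.get("subject", "").lower(), 1),
]

def virus_score(message):
    """Sum the contributions of every matching rule."""
    return sum(points for matches, points in RULES if matches(message))

def handle(message, quarantine):
    """Quarantine rather than deliver when the score meets the threshold."""
    if virus_score(message) >= QUARANTINE_THRESHOLD:
        quarantine.append(message)
        return "quarantined"
    return "delivered"
```

The key property is that no virus signature is consulted: the score comes only from attachment type, attachment size, and content heuristics, so a new, unsigned virus can still be held back.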
  • In another aspect, the invention provides a method comprising receiving an electronic mail message having a destination address for a recipient account; determining a threat score value for the message; when the threat score value is greater than or equal to a specified threat threshold, storing the message in a quarantine queue without immediately delivering the message to the recipient account; releasing the message from the quarantine queue in other than first-in-first-out order upon any of a plurality of quarantine exit criteria, wherein each quarantine exit criterion is associated with one or more exit actions; and upon a particular exit criterion, selecting and performing the associated one or more exit actions.
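A minimal sketch of the non-FIFO quarantine in the second aspect: every entry is examined on each release pass, so a message can leave as soon as any exit criterion it satisfies fires, regardless of arrival order. The two criteria shown (a clean rescan verdict; expiry of a maximum quarantine age) and their paired exit actions are illustrative assumptions.

```python
class Quarantine:
    """Threat quarantine whose entries can exit in other than FIFO order."""

    def __init__(self, max_age_seconds):
        self.max_age = max_age_seconds
        self.entries = []  # list of (enqueue_time, message)

    def add(self, message, now):
        self.entries.append((now, message))

    def release(self, now, rescan=None):
        """Apply each exit criterion to every entry, not just the head."""
        released, kept = [], []
        for ts, msg in self.entries:
            if rescan is not None and rescan(msg) == "clean":
                released.append(("deliver", msg))            # exit action for a clean rescan
            elif now - ts > self.max_age:
                released.append(("strip_and_deliver", msg))  # exit action on expiry
            else:
                kept.append((ts, msg))
        self.entries = kept
        return released
```

Because `release` walks the whole list, a message quarantined second can be delivered first if a later rescan clears it, which is the "other than first-in-first-out" behavior the claim describes.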
  • In another aspect, the invention provides a method comprising receiving and storing a plurality of rules specifying characteristics of electronic messages that indicate threats associated with the messages, wherein each rule has a priority value, wherein each rule is associated with a message element type; receiving an electronic mail message having a destination address for a recipient account, wherein the message comprises a plurality of message elements; extracting a first message element; determining a threat score value for the message by matching only the first message element to only selected rules having a message element type corresponding to the first message element, and according to an order of the priorities of the selected rules; when the threat score value is greater than a specified threshold, outputting the threat score value.
  • In these approaches, early detection of computer viruses and other message-borne threats is provided by applying heuristic tests to message content and examining sender reputation information when no virus signature information is available. As a result, a messaging gateway can suspend delivery of messages early in a virus outbreak, providing sufficient time for updating an anti-virus checker that can strip virus code from the messages. A dynamic and flexible threat quarantine queue is provided with a variety of exit criteria and exit actions that permits early release of messages in other than first-in, first-out order. A message scanning method is described in which early exit from parsing and scanning can occur by matching threat rules only to selected message elements and stopping rule matching as soon as a match on one message element exceeds a threat threshold.
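The early-exit scan summarized above can be sketched as follows: rules carry a message element type and a priority, each extracted element is matched only against rules of its own type in priority order, and matching stops as soon as the accumulated score exceeds the threshold. The element names, priorities, and weights are illustrative assumptions.

```python
THRESHOLD = 5  # illustrative threat threshold

RULES = [
    # (element_type, priority, predicate, score) -- all illustrative
    ("subject", 1, lambda s: "invoice" in s.lower(), 3),
    ("subject", 2, lambda s: "!" in s, 3),
    ("body",    1, lambda b: "click here" in b.lower(), 4),
]

def scan(message):
    """Return (score, elements_parsed); later elements may never be parsed."""
    score, parsed = 0, 0
    for element in ("subject", "body", "attachment"):
        if element not in message:
            continue
        parsed += 1
        # Only rules whose type matches this element, in priority order.
        for etype, _prio, pred, points in sorted(RULES, key=lambda r: r[1]):
            if etype == element and pred(message[element]):
                score += points
                if score > THRESHOLD:
                    return score, parsed  # early exit: skip remaining rules and elements
    return score, parsed
```

The saving is that an obviously bad subject line means the body and attachments are never parsed at all, which is the efficiency gain the early-exit claim targets.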
  • In other aspects, the invention encompasses a computer apparatus and a machine-readable medium configured to carry out the foregoing steps.
  • 2.0 Virus Outbreak Control System—First Embodiment—Structural and Functional Overview
  • 2.1 Network System and Virus Information Sources
  • FIG. 1 is a block diagram of a system for managing computer virus outbreaks, according to an embodiment. A virus sender 100, whose identity and location are typically unknown, sends a message infected with a virus, typically in an electronic message, or email, with a virus-bearing executable file attachment, to public network 102, such as the Internet. The message is either addressed to, or propagates by action of the virus to, a plurality of destinations such as virus information source 104 and spamtrap 106. A spamtrap is an email address or an email mailbox used to collect information about unsolicited email messages. The operation and implementation of virus information source 104 and spamtrap 106 is discussed in further detail below. For purposes of illustrating a simple example, FIG. 1 shows only two destinations in the form of virus information source 104 and spamtrap 106, but in a practical embodiment there may be any number of such sources of virus information.
  • The virus sender 100 may obtain network addresses of virus information source 104 and spamtrap 106 from public sources, or by sending the virus to a small number of known addresses and letting the virus propagate.
  • A virus information processor 108 is communicatively coupled to public network 102 and can receive information from the virus information source 104 and spamtrap 106. Virus information processor 108 implements certain functions described further herein including collecting virus information from virus information source 104 and spamtrap 106, generating virus outbreak information, and storing the virus outbreak information in a database 112.
  • A messaging gateway 107 is coupled, directly or indirectly through a firewall 111 or other network elements, from public network 102 to a private network 110 that includes a plurality of end stations 120A, 120B, 120C. Messaging gateway 107 may be integrated with a mail transfer agent 109 that processes email for private network 110, or the mail transfer agent may be deployed separately. For example, an IronPort Messaging Gateway Appliance (MGA), such as model C60, C30, or C10, commercially available from IronPort Systems, Inc., San Bruno, Calif., may implement mail transfer agent 109, firewall 111, and the functions described herein for messaging gateway 107.
  • In an embodiment, messaging gateway 107 includes virus information logic 114 for obtaining virus outbreak information from virus information processor 108 and processing messages destined for end stations 120A, 120B, 120C according to policies that are set at the messaging gateway. As further described herein, the virus outbreak information can include any of a number of types of information, including but not limited to, a virus score value and one or more rules that associate virus score values with message characteristics that are associated with viruses. As further described herein with respect to FIG. 3, such virus information logic may be integrated with a content filter function of messaging gateway 107.
  • In an embodiment, virus information logic 114 is implemented as an independent logical module in messaging gateway 107. Messaging gateway 107 invokes virus information logic 114 with message data and receives a verdict in response. The verdict may be based on message heuristics, which score messages to determine the likelihood that a message contains a virus.
  • Virus information logic 114 detects viruses based in part on parameters of messages. In an embodiment, virus detection is performed based upon any one or more of: heuristics of mail containing executable code; heuristics of mismatched message headers; heuristics of mail from known Open Relays; heuristics of mail having mismatched content types and extensions; heuristics of mail from dynamic user lists, blacklisted hosts, or senders known to have poor reputations; and sender authenticity test results. Sender authenticity test results may be generated by logic that receives sender ID values from public networks.
  • Messaging gateway 107 may also include an anti-virus checker 116, a content filter 118, and anti-spam logic 119. The anti-virus checker 116 may comprise, for example, Sophos anti-virus software. The content filter 118 provides logic for restricting delivery or acceptance of messages that contain content in a message subject or message body that is unacceptable according to a policy associated with private network 110.
  • The anti-spam logic 119 scans inbound messages to determine if they are unwanted according to a mail acceptance policy, such as whether the inbound messages are unsolicited commercial email, and the anti-spam logic 119 applies policies to restrict delivery, redirect, or refuse acceptance of any unwanted messages. In an embodiment, anti-spam logic 119 scans messages and returns a score of between 0 and 100 for each message indicating a probability that the message is spam or another type of unwanted email. Score ranges are associated with administrator-definable thresholds for possible spam and likely spam, against which users can apply a specified set of actions described further below. In an embodiment, messages scoring 90 or above are spam and messages scoring 75-89 are suspected spam.
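The example score bands above (90 and above is spam; 75-89 is suspected spam) map to verdicts as in this small sketch. Because the thresholds are administrator-definable in the described system, they appear as parameters rather than constants.

```python
def spam_verdict(score, spam_at=90, suspect_at=75):
    """Map a 0-100 spam score to a verdict using configurable band edges."""
    if score >= spam_at:
        return "spam"
    if score >= suspect_at:
        return "suspected-spam"
    return "not-spam"
```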
  • In an embodiment, anti-spam logic 119 determines a spam score based at least in part upon reputation information, obtained from database 112 or an external reputation service such as SenderBase from IronPort Systems, Inc., that indicates whether a sender of the message is associated with spam, viruses, or other threats. Scanning may comprise recording an X-header in the scanned message that verifies that the message was successfully scanned, and includes an obfuscated string that identifies rules that matched for the message. Obfuscation may comprise creating a hash of rule identifiers based on a private key and a one-way hash algorithm. Obfuscation ensures that only a specified party, such as service provider 700 of FIG. 7, can decode the rules that matched, improving security of the system.
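The obfuscated rule string described above can be sketched with a keyed one-way hash such as an HMAC over the matched rule identifiers; the application specifies only "a private key and a one-way hash algorithm," so the SHA-256 choice, the key value, and the X-header name here are assumptions.

```python
import hashlib
import hmac

# Assumption: the key is shared out of band with the party allowed to decode matches.
PRIVATE_KEY = b"example-shared-secret"

def obfuscate_rules(rule_ids):
    """Produce an X-header value that reveals nothing without the private key."""
    payload = ",".join(sorted(rule_ids)).encode()  # sort so order of matches is irrelevant
    digest = hmac.new(PRIVATE_KEY, payload, hashlib.sha256).hexdigest()
    return f"X-Scan-Rules: {digest}"
```

Only a party holding the key can recompute digests for candidate rule sets and so learn which rules matched, which is the security property the paragraph describes.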
  • The private network 110 may be an enterprise network associated with a business enterprise or any other form of network for which enhanced security or protection is desired. Public network 102 and private network 110 may use open standard protocols such as TCP/IP for communication.
  • Virus information source 104 may comprise another instance of a messaging gateway 107 that is interposed between public network 102 and another private network (not shown for clarity) for purposes of protecting that other private network. In one embodiment, virus information source 104 is an IronPort MGA. Spamtrap 106 is associated with one or more email addresses or email mailboxes associated with one or more domains. Spamtrap 106 is established for the purpose of receiving unsolicited email messages, or “spam,” for analysis or reporting, and is not typically used for conventional email communication. For example, a spamtrap can be an email address such as “dummyaccountforspam@mycompany.com,” or the spamtrap can be a collection of email addresses that are grouped into a mail exchange (MX) domain name system (DNS) record for which received email information is provided. Mail transfer agent 109, or the mail transfer agent of another IronPort MGA, may host spamtrap 106.
  • In an embodiment, virus information source 104 generates and provides information to virus information processor 108 for use in managing computer virus outbreaks, and the virus information processor 108 can obtain information from spamtrap 106 for the same purpose. For example, virus information source 104 generates counts of received messages that have suspicious attachments, and provides the counts to virus information processor 108, or allows an external process to retrieve the counts and store them in a specialized database. Messaging gateway 107 also may serve as a virus information source by detecting messages that have indications that are associated with viruses or that are otherwise suspicious, creating a count of suspicious messages received in a particular time period, and periodically providing the count to virus information processor 108.
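The counting role described above might look like the following sketch: tally messages bearing suspicious attachment types into fixed reporting intervals, then hand the per-interval counts to the virus information processor. The extension set and the interval length are illustrative assumptions.

```python
from collections import Counter

# Assumption: extensions treated as suspicious for counting purposes.
SUSPICIOUS_EXTS = {".exe", ".scr", ".zip", ".pif", ".bat", ".com"}

def count_suspicious(events, interval=3600):
    """events: iterable of (unix_timestamp, attachment_ext).
    Returns {interval_index: count} for suspicious attachments only."""
    counts = Counter()
    for ts, ext in events:
        if ext.lower() in SUSPICIOUS_EXTS:
            counts[ts // interval] += 1
    return dict(counts)
```

A spike in one interval relative to the normal baseline is the kind of early-warning signal the virus information processor aggregates across many reporting gateways.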
  • As a specific example, the functions described herein may be implemented as part of a comprehensive message data collection and reporting facility, such as the SenderBase service from IronPort Systems, Inc. In this embodiment, virus information processor 108 can retrieve or receive information from virus information source 104 and spamtrap 106, generate counts of messages that have suspicious attachments or other virus indicators, and update database 112 with the counts and generate virus outbreak information for later retrieval and use by virus information logic 114 of messaging gateway 107. Methods and apparatus relating to the SenderBase service are described in co-pending application Ser. No. 10/857,641, filed May 28, 2004, entitled TECHNIQUES FOR DETERMINING THE REPUTATION OF A MESSAGE SENDER, of Robert Brahms et al., the entire contents of which are hereby incorporated by reference as if fully set forth herein.
  • Additionally or alternatively, virus information source 104 may comprise the SpamCop information service that is accessible at domain “spamcop.net” on the World Wide Web, or users of the SpamCop service. Virus information source 104 may comprise one or more Internet service providers or other high-volume mail receivers.
  • The SenderBase and SpamCop services provide a powerful data source for detecting viruses. The services track information about millions of messages per day through spamtrap addresses, end-user complaint reporters, DNS logs, and third-party data sources. This data can be used to detect viruses in a rapid manner using the approaches herein. In particular, the number of messages with specific attachment types, relative to normal levels, sent to legitimate or spamtrap addresses, and not identified as viruses by anti-virus scanners, provides an early warning indicator that a virus outbreak has occurred based on a new virus that is not yet known and detectable by the anti-virus scanners.
  • In another alternative embodiment, as a supplement to the automatic approaches herein, virus information source 104 may comprise the manual review of data that is obtained by information services consultants or analysts, or external sources. For example, a human administrator monitoring alerts from anti-virus vendors, third-party vendors, security mailing lists, spamtrap data and other sources can detect viruses well in advance of when virus definitions are published in most cases.
  • Once a virus outbreak is identified based on the virus outbreak information, a network element such as messaging gateway 107 can provide various options for handling a message based on the probability that it is a virus. When the messaging gateway 107 is integrated with a mail transfer agent or mail gateway, the gateway can act on this data immediately. For example, the mail transfer agent 109 can delay message delivery into private network 110 until a virus update is received from an anti-virus vendor and installed on messaging gateway 107 so that the delayed messages can be scanned by anti-virus checker 116 after the virus update is received.
  • Delayed messages may be stored in a quarantine queue 316. Messages in quarantine queue 316 may be released and delivered according to various policies as further described, deleted, or modified prior to delivery. In an embodiment, a plurality of quarantines 316 are established in messaging gateway 107, and one quarantine is associated with each recipient account for a computer 120A, 120B, etc., in the managed private network 110.
  • Although not shown in FIG. 1, virus information processor 108 can include or be communicatively coupled to a virus outbreak operation center (VOOC), a receiving virus score (RVS) processor, or both. The VOOC and RVS processor can be separate from virus information processor 108 but communicatively coupled to database 112 and public network 102. The VOOC can be implemented as a staffed center with personnel available 24 hours a day, 7 days a week to monitor the information collected by virus information processor 108 and stored in database 112. The personnel staffing the VOOC can take manual actions, such as issuing virus outbreak alerts, updating the information stored in database 112, publishing virus outbreak information so that messaging gateways 107 can access the virus outbreak information, and manually initiating the sending of virus outbreak information to messaging gateway 107 and other messaging gateways 107.
  • Additionally, the personnel staffing the VOOC may configure the mail transfer agent 109 to perform certain actions, such as delivering a “soft bounce.” A soft bounce is performed when the mail transfer agent 109 returns a received message based on a set of rules accessible to the mail transfer agent 109. More specifically, when the mail transfer agent 109 completes a SMTP transaction by accepting an email message from a sender, the mail transfer agent 109 determines, based on a set of stored software rules accessible to the mail transfer agent 109, that the received message is unwanted or undeliverable. In response to the determination that the received message is unwanted or undeliverable, the mail transfer agent 109 returns the message to the bounce email address specified by the sender. When the mail transfer agent 109 returns the message to the sender, the mail transfer agent 109 may strip the message of any attachments.
  • In some implementations, virus outbreak information is made available, or published, in response to a manual action taken by personnel, such as those staffing the VOOC. In other implementations, virus outbreak information is automatically made available according to the configuration of the virus information processor, VOOC, or RVS, and then the virus outbreak information and the automated actions taken are subsequently reviewed by personnel at the VOOC who can make modifications, if deemed necessary or desirable.
  • In an embodiment, the staffing personnel at a VOOC or components of a system according to an embodiment may determine whether a message contains a virus based on a variety of factors, such as (a) patterns in receiving messages with attachments, (b) risky characteristics of attachments to received messages, (c) published vendor virus alerts, (d) increased mailing list activity, (e) risky source-based characteristics of messages, (f) the percentage of dynamic network addresses associated with sources of received messages, (g) the percentage of computerized hosts associated with sources of received messages, and (h) the percentage of suspicious volume patterns.
  • Each of the above factors may include a variety of criteria. For example, the risky characteristics of attachments to received messages may be based on a consideration of how suspicious the filename of the attachment is, whether the file is associated with multiple file extensions, the amount of similar file sizes attached to received messages, the amount of similar file names attached to received messages, and the names of attachments of known viruses. The patterns in receiving messages with attachments may be based on a consideration of the current rate of the number of messages containing attachments, the trend in the number of messages received with risky attachments, and the number of customer data sources, virus information source 104, and spamtraps 106 that are reporting increases in messages with attachments.
  • In addition, the determination of whether a message contains a virus may be based on information sent from a client, e.g., information may be reported from a user to a system using an email message that is received at the system in a safe environment, such that the message receptor of the system is configured, as best possible, to prevent the spread of a computer virus to other parts of the system if the message receptor is infected with a virus.
  • The RVS processor can be implemented as an automated system that generates the virus outbreak information, such as in the form of virus score values for various attachment types or in the form of a set of rules that associate virus score values with message characteristics, to be made available to messaging gateway 107 and other messaging gateways 107.
  • In an embodiment, messaging gateway 107 comprises a verdict cache 115 that provides local storage of verdict values from anti-virus checker 116 and/or anti-spam logic 119 for re-use when duplicate messages are received. The structure and function of verdict cache 115 is described further below. In an embodiment, messaging gateway 107 comprises a log file 113 that can store statistical information or status messages relating to functions of the messaging gateway. Examples of information that can be logged include message verdicts and actions taken as a result of verdicts; rules that matched on messages, in obfuscated format; an indication that scanning engine updates occurred; an indication that rule updates occurred; scanning engine version numbers, etc.
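  • As an illustrative sketch only (the class and method names are hypothetical, not part of the described embodiments), a verdict cache of this kind can be keyed on a fingerprint of the message body so that a duplicate message re-uses the verdict from an earlier scan:

```python
# Minimal sketch of a verdict cache: verdicts from the anti-virus or anti-spam
# engines are stored under a fingerprint of the message body so that duplicate
# messages are not re-scanned. Names are hypothetical.
import hashlib

class VerdictCache:
    def __init__(self):
        self._cache = {}

    @staticmethod
    def fingerprint(message_body: bytes) -> str:
        # A cryptographic hash of the body serves as a duplicate detector.
        return hashlib.sha256(message_body).hexdigest()

    def get(self, message_body: bytes):
        # Returns the cached verdict, or None if this body has not been scanned.
        return self._cache.get(self.fingerprint(message_body))

    def put(self, message_body: bytes, verdict: str) -> None:
        self._cache[self.fingerprint(message_body)] = verdict

cache = VerdictCache()
cache.put(b"same spam body", "VIRUS_POSITIVE")
```

A second copy of the same body then hits the cache and skips the scanning engines entirely.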
  • 2.2 Counting Suspicious Messages
  • FIG. 2 is a flow diagram of a process of generating a count of suspicious messages, according to an embodiment. In one implementation, the steps of FIG. 2 may be performed by a virus information source, such as virus information source 104 in FIG. 1.
  • In step 202, a message is received. For example, virus information source 104 or messaging gateway 107 receives the message sent by virus sender 100.
  • In step 204, a determination is made about whether the message is risky. In one embodiment, a message is determined to be risky if a virus checker at the virus information source 104 or messaging gateway 107 scans the message without identifying a virus, but the message also includes a file attachment having a file type or extension that is known to be risky. For example, MS Windows (XP Pro) file types or extensions of COM, EXE, SCR, BAT, PIF, or ZIP may be considered risky since virus writers commonly use such files for malicious executable code. The foregoing are merely examples; more than 50 different file types or extensions are known that can be considered risky.
  • The determination that a message is suspicious also may be made by extracting a source network address from the message, such as a source IP value, and issuing a query to the SenderBase service to determine whether the source is known to be associated with spam or viruses. For example, a reputation score value provided by the SenderBase service may be taken into account in determining whether a message is suspicious. A message may also be determined to be suspicious if it was sent from an IP address associated with a host known to be compromised, that has a history of sending viruses, or has only recently started sending email to the Internet. The determination also may be based upon one or more of the following factors: (a) the type or extension of a file attachment that is directly attached to the message, (b) the type or extension of a file that is contained within a compressed file, an archive, a .zip file, or another file that is directly attached to the message, and (c) a data fingerprint obtained from an attachment.
  • In addition, the determination of suspicious messages can be based on the size of an attachment for a suspicious message, the contents of the subject of the suspicious message, the contents of the body of the suspicious message, or any other characteristic of the suspicious message. Some file types can be embedded with other file types. For example, “.doc” files and “.pdf” files may be embedded with other file types, such as the “.gif” or “.bmp” image file types. Any embedded file types within a host file type may be considered when determining whether a message is suspicious. The characteristics of the suspicious messages can be used in formulating the rules that are provided or made available to the messaging gateways 107 and that include the virus score value that is associated with one or more such characteristics.
  • In step 206, if the message is suspicious, then a count of suspicious messages for the current time period is incremented. For example, if the message has an EXE attachment, a count of messages with EXE attachments is incremented by one.
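  • The counting of steps 202-206 can be sketched as follows; the helper name and the dictionary representation of a message are hypothetical:

```python
# Sketch of steps 202-206: increment a per-extension counter for the current
# time period when a scanned-but-clean message carries a risky attachment type.
from collections import Counter

RISKY_EXTENSIONS = {"com", "exe", "scr", "bat", "pif", "zip"}

def count_suspicious(messages, counts=None):
    # Each message is modeled as {"virus_found": bool, "attachment": "name.ext"}.
    counts = Counter() if counts is None else counts
    for msg in messages:
        name = msg["attachment"] or ""
        ext = name.rsplit(".", 1)[-1].lower() if "." in name else ""
        # Step 204: risky = virus checker found nothing, but the extension is risky.
        if not msg["virus_found"] and ext in RISKY_EXTENSIONS:
            counts[ext] += 1          # step 206: increment count for this period
    return counts

counts = count_suspicious([
    {"virus_found": False, "attachment": "invoice.exe"},
    {"virus_found": False, "attachment": "photo.jpg"},   # not a risky type
    {"virus_found": True,  "attachment": "worm.scr"},    # caught by the checker
])
```

The resulting counter is what step 208 would report to the virus information processor 108.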
  • In step 208, the count of suspicious messages is reported. For example, step 208 may involve sending a report message to the virus information processor 108.
  • In an embodiment, virus information processor 108 receives numerous reports such as the report of step 208, continuously in real time. As reports are received, virus information processor 108 updates database 112 with report data, and determines and stores virus outbreak information. In one embodiment, the virus outbreak information includes a virus score value that is determined according to a sub-process that is described further with reference to FIG. 4 below.
  • 2.3 Processing Messages Based on Virus Outbreak Information
  • FIG. 3 is a data flow diagram illustrating processing of messages based on virus outbreak information, according to an embodiment. In one implementation, the steps of FIG. 3 may be performed by an MGA, such as messaging gateway 107 in FIG. 1. Advantageously, by performing the steps illustrated in FIG. 3, a message may be acted upon before it is positively determined to contain a virus.
  • At block 302, a content filter is applied to the message. Applying a content filter involves, in one embodiment, examining the message subject, other message header values, and the message body, determining whether one or more rules for content filtering are satisfied by the content values, and taking one or more actions when the rules are satisfied, such as may be specified in a content policy. The performance of block 302 is optional. Thus, some embodiments may perform block 302, while other embodiments may not perform block 302.
  • Further, at block 302 virus outbreak information is retrieved for use in subsequent processing steps. In one embodiment, at block 302 a messaging gateway 107 that implements FIG. 3 can periodically request the then-current virus outbreak information from virus information processor 108. In an embodiment, messaging gateway 107 retrieves the virus outbreak information from the virus information processor 108 approximately every five (5) minutes, using a secure communication protocol that prevents unauthorized parties from accessing the virus outbreak information. If the messaging gateway 107 is unable to retrieve the virus outbreak information, the gateway can use the last available virus outbreak information stored in the gateway.
  • In block 304, an anti-spam process is applied to the message and messages that appear to be unsolicited messages are marked or processed according to a spam policy. For example, spam messages may be silently dropped, moved to a specified mailbox or folder, or the subject of the message may be modified to include a designation such as “possible spam.” The performance of block 304 is optional. Thus, some embodiments may perform block 304, while other embodiments may not perform block 304.
  • In block 306, an anti-virus process is applied to the message and messages that appear to contain viruses, in the message or in a file attachment, are marked. In one embodiment, anti-virus software from Sophos implements block 306. If a message is determined as positive for a virus, then in block 308, the message is deleted, quarantined in quarantine queue 316, or otherwise processed according to an appropriate virus processing policy.
  • Alternatively, if block 306 determines that the message is not virus positive, then in block 310, a test is performed to determine whether the message has been scanned for viruses before. As explained further herein, block 306 can be reached again from later blocks after the message has been previously scanned for viruses.
  • If in block 310 the message is determined to have been scanned for viruses before, then the process of FIG. 3 assumes that anti-virus process 306 has been updated with all patterns, rules, or other information necessary to successfully identify viruses once a virus outbreak has been identified. Therefore, control passes to block 314, in which the previously scanned message is delivered. If the message is determined in block 310 not to have been scanned before, the process continues to block 312.
  • In block 312, a test is performed to determine whether the virus outbreak information obtained at block 302 satisfies a specified threshold. For example, if the virus outbreak information includes a virus score value (VSV), the virus score value is checked to see if the virus score value is equal to or greater than a threshold virus score value.
  • The threshold is specified by an administrator command, in a configuration file, or is received from another machine, process or source in a separate process. In one implementation, the threshold corresponds to the probability that a message contains a virus or is associated with a new virus outbreak. A message that receives a score above the threshold is subject to the actions specified by an operator, such as performing a quarantine of the message in quarantine queue 316. In some implementations, a single specified threshold is used for all messages, whereas in other implementations, multiple thresholds are used based on different characteristics, so that the administrator can treat some messages more cautiously than others based on the type of messages that the messaging gateway receives and what is considered to be normal or less risky for the associated message recipients. In one embodiment, a default threshold value of 3 is used, based on a virus score scale of 0 to 5, where 5 is the highest risk (threat) level.
  • For example, the virus outbreak information can include a virus score value, and a network administrator can determine an allowed threshold virus score value and broadcast the threshold virus score value to all message transfer agents or other processors that are performing the process of FIG. 3. As another example, the virus outbreak information can include a set of rules that associate virus score values with one or more message characteristics that are indicative of viruses, and based on the approach described herein with respect to FIG. 5, a virus score value can be determined based on the matching rules for the message.
  • The value of the threshold virus score value set by the administrator indicates when to initiate delayed delivery of messages. For example, if the threshold virus score value is 1, then a messaging gateway implementing FIG. 3 will delay delivery of messages when the virus score value determined by the virus information processor 108 is low. If the threshold virus score value is 4, then a messaging gateway implementing FIG. 3 will delay delivery of messages when the virus score value determined by the virus information processor 108 is high.
  • If the specified threshold score value is not exceeded, then in block 314, the message is delivered.
  • If the threshold virus score value is determined to be exceeded in block 312 and the message has not yet been scanned before as determined in block 310, then the message is placed in an outbreak quarantine queue 316. Each message is tagged with a specified holding time value, or expiration date-time value, representing a period of time during which the message is held in the outbreak quarantine queue 316. The purpose of the outbreak quarantine queue 316 is to delay delivery of messages for an amount of time that is sufficient to enable updating of anti-virus process 306 to account for a new virus that is associated with the detected virus outbreak.
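  • The threshold test of block 312 and the resulting disposition (delivery at block 314 or placement in outbreak quarantine queue 316) can be sketched as follows, assuming the default threshold of 3 on the 0 to 5 scale described above:

```python
# Sketch of block 312: compare the message's virus score value against the
# administrator-configured threshold to decide between delivery and the
# outbreak quarantine queue. Default threshold assumed to be 3 (0-5 scale).
DEFAULT_THRESHOLD = 3

def disposition(virus_score, threshold=DEFAULT_THRESHOLD):
    # Per the text, scores equal to or greater than the threshold satisfy
    # the test and are quarantined for delayed delivery.
    return "quarantine" if virus_score >= threshold else "deliver"
```

Lowering the threshold (e.g., to 1) makes the gateway delay delivery even for mildly suspicious messages; raising it (e.g., to 4) delays only the riskiest messages.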
  • The holding time may have any desired duration. Example holding time values could be between one (1) hour and twenty four (24) hours. In one embodiment, a default holding time value of twelve (12) hours is provided. An administrator may change the holding time at any time, for any preferred holding time value, by issuing a command to a messaging gateway that implements the processes herein. Thus, the holding time value is user-configurable.
  • One or more tools, features, or user interfaces may be provided to allow an operator to monitor the status of the outbreak quarantine queue and the quarantined messages. For example, the operator can obtain a list of messages currently quarantined, and the list can identify the reason why each message in the queue was quarantined, such as the applicable virus score value for the message that satisfied the specified threshold or the rule, or rules, in a set of rules that matched for the message. Summary information can be provided by message characteristic, such as the types of file attachments, or by the applicable rule if a set of rules is being used. A tool can be provided to allow the operator to review each individual message in the queue. Another feature can be provided to allow the operator to search for quarantined messages that satisfy one or more criteria. Yet another tool can be provided to simulate a message being processed, which can be referred to as “tracing” a message, to make sure that the configuration of the messaging gateway has been correctly performed and that the inbound messages are being properly processed according to the virus outbreak filter.
  • In addition, a tool can be provided showing general alert information from virus information processor, a VOOC, or an RVS concerning special or significant virus risks or threats that have been identified. Also, tools can be included in the MGA to contact one or more personnel associated with the MGA when alerts are issued. For example, an automated telephone or paging system can contact specified individuals when messages are being quarantined, when a certain number of messages have been quarantined, or when the capacity of the quarantine queue has been filled or has reached a specified level.
  • A message may exit the outbreak quarantine queue 316 in three ways indicated by paths designated 316A, 316B, 316C in FIG. 3. As shown by path 316A, a message may expire normally when the specified holding time expires for that message. As a result, with normal expiration, in one implementation, the outbreak quarantine queue 316 operates as a FIFO (first in, first out) queue. The message is then transferred back to anti-virus process 306 for re-scanning, on the assumption that after expiration of the holding time, the anti-virus process has been updated with any pattern files or other information necessary to detect viruses that may be in the message.
  • As indicated by path 316B, a message may be manually released from outbreak quarantine queue 316. For example, in response to a command issued by an administrator, operator, or other machine or process, one or more messages can be released from outbreak quarantine queue 316. Upon a manual release, in block 318 an operator decision to re-scan or delete the message is performed, such as when the operator may have received off-line information indicating that a particular kind of message is definitely virus-infected; in that case, the operator could elect to delete the message at block 320. Alternatively, the operator may have received, before expiration of the holding time value, off-line information indicating that anti-virus process 306 has just been updated with new patterns or other information in response to a virus outbreak. In that case the operator may elect to re-scan the message by sending it back to the anti-virus process 306 for scanning, without waiting for the holding time to expire, as shown by path 319.
  • As yet another example, the operator can perform a search of the messages currently held in outbreak quarantine queue 316 to identify one or more messages. A message thus identified can be selected by the operator for scanning by anti-virus process 306, such as to test whether anti-virus process 306 has been updated with information sufficient to detect the virus that is involved in the virus outbreak. If the rescan of the selected message is successful at identifying the virus, the operator can manually release some or all of the messages in outbreak quarantine queue 316 so that the released messages can be rescanned by anti-virus process 306. However, if the virus is not detected by anti-virus process 306 in the selected test message, then the operator can wait until a later time and retest a test message or another message to determine if anti-virus process 306 has been updated to be able to detect the virus, or the operator can wait and let the messages be released when the messages' expiration times expire.
  • As shown by path 316C, a message also may expire early, for example, because the outbreak quarantine queue 316 is full. An overflow policy 322 is applied to messages that expire early. For example, the overflow policy 322 may require that the message be deleted, as indicated in block 320. As another example, the overflow policy 322 may require that the subject of the message be appended with a suitable warning of the risk that the message is likely to contain a virus, as indicated by block 324. For example, a message such as “MAY BE INFECTED” or “SUSPECTED VIRUS” can be appended to the subject, such as at the end or beginning of the message's subject line. The message with the appended subject is delivered via anti-virus process 306, and because the message has been scanned before, the process continues from anti-virus process 306 through block 310, and the message is then delivered as indicated by block 314.
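  • The overflow handling of path 316C can be sketched as follows; the policy names are hypothetical, and only the deletion and subject-tagging policies of blocks 320 and 324 are shown:

```python
# Sketch of overflow policy 322 for messages that expire early (path 316C):
# either delete the message (block 320) or deliver it with a warning added
# to the subject line (block 324). Policy names are hypothetical.
def apply_overflow_policy(subject, policy):
    if policy == "delete":
        return None                            # block 320: message is dropped
    if policy == "tag_subject":
        return "[SUSPECTED VIRUS] " + subject  # block 324: warn the recipient
    return subject                             # unchanged if no policy applies

tagged = apply_overflow_policy("Quarterly report", "tag_subject")
```

The tagged message would then be delivered via anti-virus process 306 and, having been scanned before, would pass through block 310 to delivery at block 314.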
  • Additional overflow policies can be applied, although not illustrated in FIG. 3 for clarity. For example, the overflow policy 322 may require removal of file attachments to the message followed by delivery of the message with the file attachments stripped. Optionally, the overflow policy 322 may require stripping only those file attachments that exceed a particular size. As another example, the overflow policy 322 may require that, when the outbreak quarantine queue 316 is full, the MTA begins the SMTP transaction for a new message but rejects the message with a 4xx temporary error before the message is accepted.
  • In one embodiment, treatment of a message according to path 316A, 316B, 316C is user configurable for the entire contents of the quarantine queue. Alternatively, such a policy is user configurable for each message.
  • In an embodiment, block 312 also may involve generating and sending an alert message to one or more administrators when the virus outbreak information obtained from virus information processor 108 satisfies a specified threshold, such as when a virus score value meets or exceeds a specified threshold virus score value. For example, an alert message sent at block 312 may comprise an email that specifies the attachment types for which the virus score has changed, current virus score, prior virus score, current threshold virus score, and when the last update of the virus score for that type of attachment was received from the virus information processor 108.
  • In yet another embodiment, the process of FIG. 3 may involve generating and sending an alert message to one or more administrators whenever the overall number of messages in the quarantine queue exceeds a threshold set by the administrator, or when a specific amount or percentage of quarantine queue storage capacity has been exceeded. Such an alert message may specify the quarantine queue size, percentage of capacity utilized, etc.
  • The outbreak quarantine queue 316 may have any desired size. In one embodiment, the quarantine queue can store approximately 3 GB of messages.
  • 2.4 Generating Virus Outbreak Information
  • In one embodiment, virus outbreak information is generated that indicates the likelihood of a virus outbreak based on one or more message characteristics. In one embodiment, the virus outbreak information includes a numerical value, such as a virus score value. The virus outbreak information can be associated with one or more characteristics of a message, such as the type of attachment with a message, the size of the attachment, the contents of the message (e.g., the content of the subject line of the message or the body of the message), the sender of the message, the IP address or domain of the sender of the message, the recipient of the message, the SenderBase reputation score for the sender of the message, or any other suitable message characteristic. As a specific example, the virus outbreak information can associate one message characteristic with a virus score value, such as “EXE=4” to indicate a virus score value of “4” for messages with EXE type attachments.
  • In another embodiment, the virus outbreak information includes one or more rules that each associates the likelihood of a virus outbreak with one or more message characteristics. As a specific example, a rule of the form “if EXE and size <50 k, then 4” indicates that for messages with attachments of type EXE and size less than 50 k, the virus score value is “4.” A set of rules can be provided to the messaging gateway to be applied to determine if an inbound message matches the message characteristics of a rule, thereby indicating that the rule is applicable to the inbound message and therefore should be handled based on the associated virus score value. The use of a set of rules is described further with respect to FIG. 5 below.
  • FIG. 4 is a flow diagram of a method of determining a virus score value, according to an embodiment. In one implementation, the steps of FIG. 4 may be performed by virus information processor 108 based on information in database 112 received from virus information source 104 and spamtrap 106.
  • Step 401 of FIG. 4 indicates that certain computational steps 402, 404 are performed for each different source of virus information that is accessible to virus information processor 108, such as virus information source 104 or spamtrap 106.
  • Step 402 involves generating a weighted current average virus score value, for a particular email file attachment type, by combining one or more prior virus score values for prior time periods, using a weighting approach that accords greater weight for more recent prior virus score values. A virus score value for a particular time period refers to a score value based on the number of messages received at a particular source that have suspicious file attachments. A message is considered to have a suspicious attachment if the attachment satisfies one or more metrics, such as a particular file size, file type, etc., or if the network address of the sender is known to be associated with prior virus outbreaks. The determination may be based on attachment file size or file type or extension.
  • The determination of the virus score value also may be made by extracting a source network address from the message, such as a source IP address value, and issuing a query to the SenderBase service to determine whether the source is known to be associated with spam or viruses. The determination also may be based upon (a) the type or extension of a file attachment that is directly attached to the message, (b) the type or extension of a file that is contained within a compressed file, an archive, a .zip file, or another file that is directly attached to the message, and (c) a data fingerprint obtained from an attachment. A separate virus score value may be generated and stored for each attachment type found in any of the foregoing. Further, the virus score value may be generated and stored based upon the most risky attachment type found in a message.
  • In one embodiment, step 402 involves computing a combination of virus score values for the last three 15-minute periods, for a given file attachment type. Further, in one embodiment, a weighting value is applied to the three values for the 15-minute periods, with the most recent 15-minute time period being weighted more heavily than earlier 15-minute time periods. For example, in one weighting approach, a multiplier of 0.10 is applied to the virus score value for the oldest 15-minute period (30-45 minutes ago), a multiplier of 0.25 is applied to the second-oldest value (15-30 minutes ago), and a multiplier of 0.65 is applied to the most recent virus score value for the period 0-15 minutes ago.
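  • The weighting of step 402 can be sketched with the example multipliers just described:

```python
# Sketch of step 402: combine the counts for the last three 15-minute periods
# using the example weights, with the most recent period weighted most heavily.
WEIGHTS = (0.10, 0.25, 0.65)   # oldest (30-45 min ago) ... most recent (0-15 min ago)

def weighted_current_average(oldest, middle, newest):
    return WEIGHTS[0] * oldest + WEIGHTS[1] * middle + WEIGHTS[2] * newest

# Example: counts of 21, 40, and 3 for the three periods, oldest first.
avg = weighted_current_average(21, 40, 3)
```

With these inputs the weighted current average is 0.10·21 + 0.25·40 + 0.65·3 = 14.05, which corresponds to the “Source 1” row of Table 2 below (shown there rounded to 14).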
  • In step 404, a percent-of-normal virus score value is generated for a particular file attachment type, by comparing the current average virus score value determined at step 402 to a long-term average virus score value. The current percent of normal level may be computed with reference to a 30-day average value for that file attachment type over all 15-minute time periods within the 30-day period.
  • In step 405, all of the percent-of-normal virus score values for all sources, such as virus information source 104 and spamtrap 106, are averaged to result in creating an overall percent-of-normal value for a particular file attachment type.
  • In step 406, the overall percent-of-normal value is mapped to a virus score value for a particular file attachment type. In one embodiment, the virus score value is an integer between 0-5, and the overall percent-of-normal value is mapped to a virus score value. Table 1 presents an example of a virus score scale.
    TABLE 1
    Example Virus Score Scale

    Percent of normal    Score    Level of Threat
        0-150              0      No known threat/very low threat
      150-300              1      Possible threat
      300-900              2      Small threat
     900-1500              3      Moderate threat
        >1500              4      High threat/extremely risky
  • In other embodiments, mappings to score values of 0 to 100, 0 to 10, 1 to 5, or any other desired range of values may be used. In addition to integer score values, non-integer values can be used. Instead of using a defined range of values, a probability value can be determined, such as a probability in the range of 0% to 100% in which the higher probabilities indicate a stronger likelihood of a virus outbreak, or such as a probability in the range of 0 to 1 in which the probability is expressed as a fraction or decimal, such as 0.543.
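  • The mapping of step 406, using the Table 1 ranges, can be sketched as follows (boundary values are assigned to the lower score here; the boundary treatment is an assumption, since Table 1 lists, e.g., 150 in two rows):

```python
# Sketch of step 406: map an overall percent-of-normal value to an integer
# virus score value using the Table 1 ranges. Boundary values are assigned
# to the lower score level (an assumption; Table 1 overlaps at boundaries).
def map_to_virus_score(percent_of_normal):
    if percent_of_normal <= 150:
        return 0   # no known threat / very low threat
    if percent_of_normal <= 300:
        return 1   # possible threat
    if percent_of_normal <= 900:
        return 2   # small threat
    if percent_of_normal <= 1500:
        return 3   # moderate threat
    return 4       # high threat / extremely risky
```

For example, the overall percent-of-normal value of 581% computed in Table 2 below falls in the 300-900 range and therefore maps to a virus score of 2.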
  • As an optimization, and to avoid division by zero issues that may occur with very low 30-day counts, the process of FIG. 4 can add one to the baseline averages computed in step 402. In essence, adding one raises the noise level of the values slightly in a beneficial way, by dampening some of the data.
  • Table 2 presents example data for the EXE file type in a hypothetical embodiment:
    TABLE 2
    Example data for “.exe” file type

    Source      30-day     Current “.exe” counts,     Current     Current “.exe”
                average    45, 30, 15 min. ago        average     as % of normal
    Source 1      3.6      21, 40,  3                  14              382%
    Source 2     15.4      50, 48,  7                  21.6            140%
    Source 3      1.7       1,  1, 15                  10.1            600%
    Source 4      1.3      15, 15, 15                  15             1200%
    Average % of normal                                                581%
    Virus Score                                                          2
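  • The Table 2 computation can be reproduced as a sketch of steps 402-406; the exact percentages differ slightly from Table 2, which rounds the per-source values before averaging:

```python
# Reproduces the Table 2 computation: per-source weighted current average
# (step 402), percent of normal versus the 30-day baseline (step 404),
# averaged across sources (step 405), then mapped to a virus score (step 406).
WEIGHTS = (0.10, 0.25, 0.65)   # oldest (45 min. ago) to most recent (15 min. ago)
SOURCES = [                    # (30-day average, counts for 45, 30, 15 min. ago)
    (3.6,  (21, 40, 3)),
    (15.4, (50, 48, 7)),
    (1.7,  (1, 1, 15)),
    (1.3,  (15, 15, 15)),
]

def map_score(percent_of_normal):
    # Table 1 mapping: 0-150 -> 0, 150-300 -> 1, 300-900 -> 2, 900-1500 -> 3, else 4.
    for upper, score in ((150, 0), (300, 1), (900, 2), (1500, 3)):
        if percent_of_normal <= upper:
            return score
    return 4

percents = []
for baseline, counts in SOURCES:
    current = sum(w * c for w, c in zip(WEIGHTS, counts))   # step 402
    percents.append(100.0 * current / baseline)             # step 404
overall = sum(percents) / len(percents)                     # step 405: ~570%
score = map_score(overall)                                  # step 406: score 2
```

The unrounded overall value is roughly 570% rather than Table 2's 581%, but both fall in the 300-900 range and yield the same virus score of 2.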
  • In an alternative embodiment, the processes of FIG. 2, FIG. 3, FIG. 4 also may include logic to recognize trends in the reported data and identify anomalies in virus score computations.
  • Since the majority of executables are spread through one type of email attachment or another, the strategy of the approaches herein focuses on making policy decisions based on attachment type. In an alternative embodiment, a virus score value could be developed by considering other message data and metadata, such as Universal Resource Locators (URLs) in a message, the name of a file attachment, source network address, etc. Further, in an alternative embodiment, a virus score value may be assigned to individual messages rather than to file attachment types.
  • In yet another embodiment, other metrics may be considered to determine the virus score value. For example, if a large number of messages are suddenly received from new hosts that have never sent messages to virus information processor 108 or its information sources before, a virus may be indicated. Thus, the fact that a particular message has first been seen only recently, combined with a spike in message volume detected by virus information processor 108, may provide an early indication of a virus outbreak.
  • 2.5 Using Virus Outbreak Information
  • As described above, virus outbreak information can simply associate a virus score value with a message characteristic, such as an attachment type, or virus outbreak information can include a set of rules that each associates a virus score value with one or more characteristics of messages that are indicative of viruses. An MGA can apply the set of rules to incoming messages to determine which rules match a message. Based on the rules that match an incoming message, the MGA can determine the likelihood that the message includes a virus, such as by determining a virus score value based on one or more of the virus score values from the matching rules.
  • For example, a rule can be “if ‘exe’, then 4” to denote a virus score of 4 for messages with EXE attachments. As another example, a rule can be “if ‘exe’ and size <50 k, then 3” to denote a virus score of 3 for messages with EXE attachments with a size of less than 50 k. As yet another example, a rule can be “if SBRS<−5, then 4” to denote a virus score of 4 if the SenderBase Reputation Score (SBRS) is less than “−5”. As another example, a rule can be “if ‘PIF’ and subject contains FOOL, then 5” to denote a virus score of 5 if the message has a PIF type of attachment and the subject of the message includes the string “FOOL.” In general, a rule can associate any number of message characteristics or other data that can be used to determine a virus outbreak with an indicator of the likelihood that a message matching the message characteristics or other data includes a virus.
  • Furthermore, a messaging gateway can apply exceptions, such as in the form of one or more quarantine policies, to determine whether a message, which otherwise satisfies the specified threshold based on the virus score value determined based on the matching rules, such as is determined in block 312 of FIG. 3, is to be placed into the outbreak quarantine queue or whether the message is to be processed without being placed into the outbreak quarantine queue. The MGA can be configured to apply one or more policies for applying the rules, such as a policy to always allow messages to be delivered to an email address or group of email addresses regardless of the virus scores, or to always deliver messages with a specified type of attachment, such as ZIP files containing PDF files.
  • In general, by having the virus information processor supply rules instead of virus score values, each MGA can apply some or all of the rules in a manner determined by the administrator of the MGA, thereby providing additional flexibility to meet the needs of the particular MGA. As a result, even if two messaging gateways 107 use the same set of rules, the ability to configure the application of the rules by the administrator of each MGA means that each MGA can process the same message and obtain a different result in terms of the determined likelihood that a virus attack is occurring, and each MGA can process the same message and take different actions, depending on the configuration established by the administrator for the MGA.
  • FIG. 5 is a flow diagram illustrating application of a set of rules for managing virus outbreaks, according to an embodiment. The functions illustrated in FIG. 5 can be performed by the messaging gateway as part of block 312 or at any other suitable position during the processing of the incoming message.
  • In block 502, the messaging gateway identifies the message characteristics of an incoming message. For example, messaging gateway 107 can determine whether the message has an attachment, and if so, the type of attachment, the size of the attachment, and the name of the attachment. As another example, messaging gateway 107 can query the SenderBase service based on the sending IP address to obtain a SenderBase reputation score. For the purposes of describing FIG. 5, assume that the message has an EXE type of attachment with a size of 35 k and that the sending host for the message has a SenderBase reputation score of −2.
  • In block 504, the messaging gateway determines which rules of the rule set are matched based on the message characteristics for the message. For example, assume that for the purposes of describing FIG. 5, the rule set consists of the following five rules that associate the example characteristics with the provided hypothetical virus score values:
  • Rule 1: “if EXE, then 3”
  • Rule 2: “if ZIP, then 4”
  • Rule 3: “if EXE and size >50 k, then 5”
  • Rule 4: “if EXE and size <50 k and size >20 k, then 4”
  • Rule 5: “if SBRS<−5, then 4”
  • In these example rules, Rules 1 and 2 together indicate that ZIP attachments are more likely to include a virus than EXE attachments, because the virus score is 4 in Rule 2 but only 3 in Rule 1. Furthermore, the example rules above indicate that EXE attachments larger than 50 k are the most likely to have a virus, while EXE attachments between 20 k and 50 k are somewhat less likely to include a virus, perhaps because most of the suspicious messages with EXE attachments are larger than 50 k in size.
  • In the present example in which the message has an EXE type of attachment with a size of 35 k and the associated SenderBase reputation score is −2, Rules 1 and 4 match while Rules 2, 3, and 5 do not match.
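For illustration, the rule matching of block 504 can be sketched as follows. The encoding of rules as predicate functions paired with virus score values, and the message field names, are assumptions made for this sketch; the embodiment does not specify a concrete rule format.

```python
# Hypothetical sketch of rule matching (block 504). The rule encoding
# (predicate, virus score) and message fields are illustrative assumptions.

# Message characteristics identified in block 502.
message = {"attachment_type": "EXE", "attachment_size_kb": 35, "sbrs": -2.0}

# The five example rules, each as (name, predicate, virus_score).
rules = [
    ("Rule 1", lambda m: m["attachment_type"] == "EXE", 3),
    ("Rule 2", lambda m: m["attachment_type"] == "ZIP", 4),
    ("Rule 3", lambda m: m["attachment_type"] == "EXE"
                         and m["attachment_size_kb"] > 50, 5),
    ("Rule 4", lambda m: m["attachment_type"] == "EXE"
                         and 20 < m["attachment_size_kb"] < 50, 4),
    ("Rule 5", lambda m: m["sbrs"] < -5, 4),
]

matched = [(name, score) for name, pred, score in rules if pred(message)]
print(matched)  # a 35 k EXE with SBRS -2 matches Rules 1 and 4 only
```

As in the text, a 35 k EXE attachment with a reputation score of −2 matches Rules 1 and 4 but not Rules 2, 3, or 5.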
  • In block 506, the messaging gateway determines a virus score value to be used for the message based on the virus score values from the matching rules. The determination of the virus score value to be used for the message can be performed based on any of a number of approaches. The particular approach used can be specified by the administrator of the messaging gateway and modified as desired.
  • For example, the rule that is matched first when applying the list of rules in the order listed can be used, and any other matching rules are ignored. Thus, in this example, the first rule to match is Rule 1, and therefore the virus score value for the message is 3.
  • As another example, the matching rule with the highest virus score value is used. Thus, in this example, Rule 4 has the highest virus score value among the matching rules (Rules 1 and 4), and therefore the virus score value for the message is 4.
  • As yet another example, the matching rule with the most specific set of message characteristics is used. Thus, in this example, Rule 4 is the most specific matching rule because Rule 4 includes three different criteria, and therefore the virus score value for the message is 4.
  • As another example, virus score values from the matching rules can be combined to determine the virus score value to apply to the message. As a specific example, the virus score values from matching Rules 1 and 4 can be averaged to determine a virus score value of 3.5 (e.g., (3+4)÷2=3.5). As another example, a weighted average of the virus score values of the matching rules can be used, so as to give more weight to the more specific rules. As a specific example, the weight for each virus score value can be equal to the number of criteria in the rule (e.g., Rule 1 with one criterion has a weight of 1 while Rule 4 with three criteria has a weight of 3), and thus the weighted average of Rules 1 and 4 results in a virus score value of 3.75 (e.g., (1*3+3*4)÷(1+3)=3.75).
  • In block 508, the messaging gateway uses the virus score value determined in block 506 to determine whether the specified threshold virus score value is satisfied. For example, assume that in this example the threshold is a virus score value of 4. As a result, the virus score value determined in block 506 satisfies the threshold under the highest-value and most-specific approaches (each yielding 4), but not under the first-match approach (yielding 3) or under the simple and weighted averaging approaches (yielding 3.5 and 3.75).
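The score-combination approaches of block 506 and the threshold test of block 508 can be sketched as follows. The tuple encoding of matched rules is an assumption of this sketch; the matched rules (Rules 1 and 4) follow from the block 504 example above.

```python
# Illustrative sketch of the block 506 combination approaches and the
# block 508 threshold test. Matched rules are (name, score, criteria count).
matched = [("Rule 1", 3, 1), ("Rule 4", 4, 3)]

first_match = matched[0][1]                          # first rule in list order
highest = max(score for _, score, _ in matched)      # highest matching score
most_specific = max(matched, key=lambda r: r[2])[1]  # rule with most criteria
average = sum(score for _, score, _ in matched) / len(matched)
weighted = (sum(score * n for _, score, n in matched)
            / sum(n for _, _, n in matched))         # weight = criteria count

THRESHOLD = 4
for name, value in [("first-match", first_match), ("highest", highest),
                    ("most-specific", most_specific), ("average", average),
                    ("weighted", weighted)]:
    print(name, value, value >= THRESHOLD)
```

With these inputs the approaches yield 3, 4, 4, 3.5, and 3.75 respectively, so only the highest-value and most-specific approaches satisfy a threshold of 4.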
  • If the specified threshold is determined to be satisfied by the virus score value determined in block 508, then in block 510 one or more quarantine policies are applied to determine whether to add the message to the outbreak quarantine queue. For example, the administrator of the messaging gateway may determine that one or more users or one or more groups of users should never have their messages quarantined even if a virus outbreak has been detected. As another example, the administrator can establish a policy that messages with certain characteristics (e.g., messages with XLS attachments with a size of at least 75 k) are to always be delivered instead of being quarantined when the virus outbreak information indicates a virus attack based on the specified threshold.
  • As a specific example, the members of the organization's legal department may frequently receive ZIP files containing important legal documents that should not be delayed by being placed in the outbreak quarantine, even if the messaging gateway determines that a virus outbreak is occurring. Thus, the mail administrator for the messaging gateway can establish a policy to always deliver messages with ZIP attachments to the legal department, even if the virus score value for ZIP attachments meets or exceeds the specified threshold.
  • As another specific example, the mail administrator may wish to always have messages delivered that are addressed to the email address for the mail administrator, since such messages could provide information for dealing with the virus outbreak. Given that the mail administrator is a sophisticated user, the risk in delivering a virus infected message is low since the mail administrator will likely be able to identify and deal with an infected message before the virus can act.
  • For the example being used in describing FIG. 5, assume that the mail administrator has established a policy that EXE attachments addressed to the company's senior engineering managers are to always be delivered, even if the virus score value for such messages meets or exceeds a threshold virus score value. Thus, if the message is addressed to any of the senior engineering managers, the message is nevertheless delivered instead of being placed into the outbreak quarantine. However, messages addressed to others besides the senior engineering managers are quarantined (unless otherwise excluded by another applicable policy).
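The quarantine-policy exceptions of block 510 can be sketched as follows. The policy structure, recipient groups, and function names are hypothetical illustrations, not the embodiment's configuration format.

```python
# Hypothetical sketch of quarantine-policy exceptions (block 510).
EXEMPT_RECIPIENTS = {"postmaster@example.com"}   # e.g. the mail administrator
ALWAYS_DELIVER = [                               # (attachment type, recipient group)
    ("ZIP", "legal"),
    ("EXE", "senior-eng-managers"),
]

def should_quarantine(recipient, group, attachment_type, score, threshold=4):
    """Quarantine only if the threshold is met AND no exception applies."""
    if score < threshold:
        return False                             # below threshold: deliver normally
    if recipient in EXEMPT_RECIPIENTS:
        return False                             # per-address exception
    if (attachment_type, group) in ALWAYS_DELIVER:
        return False                             # per-group/attachment exception
    return True

print(should_quarantine("eve@example.com", "sales", "EXE", 4))               # True
print(should_quarantine("vp@example.com", "senior-eng-managers", "EXE", 5))  # False
```

The second call shows the FIG. 5 example: a message to a senior engineering manager is delivered even though its score exceeds the threshold.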
  • In one embodiment, the messaging gateway can be configured to be in one of two states: “calm” and “nervous.” The calm state applies if no messages are being quarantined. However, when virus outbreak information is updated and indicates that a specified threshold is exceeded, the state changes from calm to nervous, regardless of whether any messages being received by the messaging gateway are being quarantined. The nervous state persists until the virus outbreak information is updated and indicates that the specified threshold is no longer exceeded.
  • In some implementations, an alert message is sent to an operator or administrator whenever a change in the system state occurs (e.g., calm to nervous or nervous to calm). In addition, alerts can be issued when a previously low virus score value that did not satisfy the threshold now does meet or exceed the threshold, even if the overall state of the system does not change (e.g., the system previously changed from calm to nervous, and while in the nervous state, another virus score was received from the virus information processor that also meets or exceeds the threshold). Similarly, an alert can be issued when a previously high virus score that did satisfy the threshold has dropped and now is less than the specified threshold.
  • Alert messages can include one or more types of information, including but not limited to, the following: the attachment type for which the virus outbreak information changed, the current virus score, the prior virus score, the current threshold, and when the last update for the virus outbreak information occurred.
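The calm/nervous state transitions and the alerting behavior described above can be sketched as follows. The class, method names, and alert tuple layout are assumptions of this sketch; per-attachment-type state tracking is simplified to a single state.

```python
# Minimal sketch of calm/nervous state transitions with alerting.
class OutbreakState:
    def __init__(self, threshold=4):
        self.threshold = threshold
        self.state = "calm"
        self.alerts = []   # (attachment type, prior score, current score, state)

    def update(self, attachment_type, score, prior_score):
        new_state = "nervous" if score >= self.threshold else "calm"
        crossed_up = prior_score < self.threshold <= score     # newly meets threshold
        crossed_down = prior_score >= self.threshold > score   # dropped below threshold
        if new_state != self.state or crossed_up or crossed_down:
            self.alerts.append((attachment_type, prior_score, score, new_state))
        self.state = new_state

gw = OutbreakState()
gw.update("ZIP", 5, 2)  # calm -> nervous: alert issued
gw.update("EXE", 4, 1)  # state stays nervous, but EXE newly crossed: alert issued
print(gw.state, len(gw.alerts))
```

The second update illustrates the case described above: the overall state does not change, yet an alert is still issued because another score newly meets the threshold.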
  • 2.6 Additional Features
  • One or more of the following additional features can be used in a particular implementation, in addition to the features described above.
  • One additional feature is to obtain sender-based data that is specifically designed to aid in the identification of virus threats. For example, when an MGA queries a service such as SenderBase to obtain the SenderBase reputation score for the connecting IP address, SenderBase can provide virus threat data that is specific for the connecting IP address. The virus threat data is based on data collected by SenderBase for the IP address and reflects the history of the IP address in terms of how often viruses are detected in messages originating from the IP address or the company associated with the IP address. This can allow the MGA to obtain a virus score from SenderBase based solely on the sender of the message without any information or knowledge about the content of a particular message from the sending IP address. The data on the virus threat for the sender can be used in place of, or in addition to, a virus score as determined above, or the data on the virus threat for the sender can be factored into the calculation of the virus score. For example, the MGA could increase or decrease a particular virus score value based on the virus threat data for the sender.
  • Another feature is to use a dynamic or dial-up blacklist to identify messages that are likely infected with a virus when a dynamic or dial-up host connects directly to an external SMTP server. Normally, dynamic and dial-up hosts that connect to the Internet are expected to send outgoing messages through the hosts' local SMTP server. However, if the host is infected with a virus, the virus can cause the host to connect directly to an external SMTP server, such as an MGA. In such a situation, the likelihood is high that the host is infected with a virus that is causing the host to establish the direct connection to the external SMTP server. Examples of such blacklists include the Spam and Open Relay Blocking System (SORBS) dynamic hosts list and the Not Just Another Bogus List (NJABL) dynamic hosts list.
  • However, in some cases, the direct connection is not virus initiated, such as when a novice user is making the direct connection or when the connection is from a broadband host that is not dynamic, such as DSL or cable modems. Nevertheless, such direct connections from a dial-up or dynamic host to an external SMTP server can result in determining a high virus score or increasing an already determined virus score to reflect the increased likelihood that the direct connection is due to a virus.
  • Another feature is to use as a virus information source an exploited host blacklist that tracks hosts that have been exploited by viruses in the past. A host can be exploited when the server is an open relay or an open proxy, or has another vulnerability that allows anyone to deliver email anywhere. Exploited host blacklists track exploited hosts using one of two techniques: examining the content that infected hosts are sending, and locating hosts that have been infected via connect-time scanning. Examples include the Exploits Block List (XBL), which uses data from the Composite Blocking List (CBL) and the Open Proxy Monitor (OPM), and the Distributed Server Boycott List (DSBL).
  • Another feature is for the virus information processor to develop a blacklist of senders and networks that have a past history of sending viruses. For example, the highest virus score can be assigned to individual IP addresses that are known to send only viruses. Moderate virus scores can be associated with individual IP addresses that are known to send both viruses and legitimate messages that are not virus infected. Moderate to low virus scores can be assigned to networks that contain one or more individual infected hosts.
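The sender/network blacklist scoring described above can be sketched as follows. The addresses, scores, and function name are made up for illustration; only the tiered scoring idea (highest for known-bad hosts, moderate for mixed senders, moderate-to-low for infected networks) comes from the text.

```python
import ipaddress

# Hypothetical blacklist: individual IPs get host-specific virus scores;
# networks containing infected hosts get a lower network-wide score.
HOST_SCORES = {"192.0.2.7": 5,   # known to send only viruses
               "192.0.2.8": 3}   # sends both viruses and legitimate mail
NETWORK_SCORES = {ipaddress.ip_network("198.51.100.0/24"): 2}

def sender_virus_score(ip_str, default=0):
    if ip_str in HOST_SCORES:                 # most specific: exact host match
        return HOST_SCORES[ip_str]
    ip = ipaddress.ip_address(ip_str)
    for net, score in NETWORK_SCORES.items():  # fall back to network match
        if ip in net:
            return score
    return default

print(sender_virus_score("192.0.2.7"))      # known virus-only sender
print(sender_virus_score("198.51.100.42"))  # host inside an infected network
```

A real implementation would presumably use longest-prefix matching over many networks; a linear scan suffices for the sketch.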
  • Another feature is to incorporate a broader set of tests for identifying suspicious messages in addition to those discussed above, such as identifying attachment characteristics. For example, a generic header test can be used to test on any generic message header to look for either a fixed string or a regular expression, such as in the following examples:
    head X_MIME_FOO X-Mime=˜/foo/
    head SUBJECT_YOUR  Subject=˜/your document/
  • As another example, a generic body test can be used to test the message body by searching for a fixed string or a regular expression, such as in the following examples:
    body HEY_PAL /hey pal|long time, no see/
    body ZIP_PASSWORD /\.zip password is/i
  • As yet another example, a function test can be used to craft custom tests to test very specific aspects of a message, such as in the following examples:
    eval EXTENSION_EXE message_attachment_ext(“.exe”)
    eval MIME_BOUND_FOO mime_boundary(“--/d/d/d/d[a-f]”)
    eval XBL_IP connecting_ip(exploited host)
  • As another example, a meta test can be used to build on multiple features, such as those above, to create a meta rule of rules, such as in the following examples:
    meta VIRUS_FOO ((SUBJECT_FOO1 || SUBJECT_FOO2)
    && BODY_FOO)
    meta VIRUS_BAR (SIZE_BAR + SUBJECT_BAR +
    BODY_BAR >2)
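The header, body, and meta test styles above can be sketched in Python as follows. The rule names reuse the examples above, but the evaluation logic and regex encoding are assumptions of this sketch, not the actual rule engine.

```python
import re

# Illustrative evaluator for header tests, body tests, and a meta rule
# built on their results (hypothetical rule names and encoding).
header_rules = {"SUBJECT_YOUR": ("Subject", re.compile(r"your document"))}
body_rules = {"ZIP_PASSWORD": re.compile(r"\.zip password is", re.IGNORECASE)}

def run_tests(headers, body):
    hits = set()
    for name, (field, rx) in header_rules.items():   # generic header tests
        if rx.search(headers.get(field, "")):
            hits.add(name)
    for name, rx in body_rules.items():              # generic body tests
        if rx.search(body):
            hits.add(name)
    # A meta rule combining the individual test results:
    if "SUBJECT_YOUR" in hits and "ZIP_PASSWORD" in hits:
        hits.add("META_VIRUS_DOC")
    return hits

hits = run_tests({"Subject": "your document"}, "the .ZIP password is 123")
print(sorted(hits))
```

Function tests (e.g., attachment extension or connecting-IP checks) would plug into the same loop as additional predicates.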
  • Another feature that can be used is to extend the virus score determination approach above to one or more machine learning techniques, so that not all rules need to be run and to provide accurate classification by minimizing false positives and false negatives. For example, one or more of the following methods can be employed: a decision tree, to provide discrete answers; a perceptron, to provide additive scores; and Bayes-like analysis, to map probabilities to scores.
  • Another feature is to factor into the virus score determination the severity of the threat from a virus outbreak based on the consequences of the virus. For example, if the virus results in the infected computer's hard drive having all its contents deleted, the virus score can be increased, whereas a virus that merely displays a message can have the virus score left unchanged or even reduced.
  • Another additional feature is to expand the options for handling suspicious messages. For example, a suspicious message can be tagged to indicate that the message is suspicious, such as by adding to the message (e.g., in the subject or body) the virus score so that the user can be alerted to the level of virus risk determined for the message. As another example, a new message can be generated to either alert the recipient of the attempt to send to them a virus infected message or to create a new and uninfected message that includes the non-virus infected portions of the message.
  • 2.7 Example Use Cases
  • The following hypothetical descriptions provide examples of how the approaches described herein may be used to manage virus outbreaks.
  • As a first use case, assume that a new virus entitled “Sprosts.ky” is spread through a Visual Basic macro embedded in Microsoft Excel. Shortly after the virus hits, the virus score moves from 1 to 3 for .xls attachments, and a user of the approaches herein, Big Company, starts delaying the delivery of Excel files. The network administrator for Big Company receives an email stating that .xls files are now quarantined. Sophos then sends out an alert an hour later stating that a new update file is available to stop the virus. The network administrator then confirms that his IronPort C60 has the latest update file installed. Although the network administrator had set the delay period to 5 hours for the quarantine queue, Excel files are critical to the company, so the administrator cannot afford to wait another four hours. Therefore, the administrator accesses the IronPort C60 and manually flushes the queue, sending all messages with Excel files attached through Sophos anti-virus checking. The administrator finds that 249 of these messages were virus positive, and one was not caught by Sophos because it was not infected. The messages are delivered with a total delay of 1½ hours.
  • As a second use case, assume that a “Clegg.P” virus is spread through encrypted .zip files. The network administrator at Big Company receives an email alert that the virus score value has jumped, but the administrator ignores the alert, relying on automatic processing as provided herein. Six hours later, overnight, the administrator receives a second page alerting him that the quarantine queue has reached 75% of capacity. By the time the administrator arrives at work, Clegg.P has filled Big Company's quarantine queue. Fortunately, the network administrator had set policies on the IronPort C60 to deliver messages as normal when the quarantine queue overflowed, and Sophos had come out with a new update overnight, before the quarantine queue overflowed. Only two users were infected prior to the virus score value triggering the quarantine queue, so the administrator is faced only with an over-filled quarantine queue. The administrator flushes the messages from the queue, automatically deleting them to spare load on the IronPort C60, on the assumption that all the messages were viruses. As a preventive approach, the network admin starts blocking all encrypted .zip files for a specified future time period.
  • 3.0 Approaches For Blocking “Spam” Messages
  • FIG. 7 is a block diagram of a system that may be used in approaches for blocking “spam” messages, and for other kinds of email scanning processes. In this context, the term “spam” refers to any unsolicited email, and the term “ham” refers to legitimate bulk email. The term “TI” refers to threat identification, that is, determining that virus outbreaks or spam communications are occurring.
  • Within a service provider 700, one or more TI development computers 702 are coupled to a corpus server cluster 706, which hosts a corpus or master repository for threat identification rules, and which applies threat identification rules to messages on an evaluation basis to result in generating score values. A mail server 704 of the service provider 700 contributes ham email to the corpus server cluster 706. One or more spamtraps 716 contribute spam email to the corpus. Spamtraps 716 are email addresses that are established and seeded to spammers so that the addresses receive only spam email. Messages received at spamtraps 716 may be transformed into message signatures or checksums that are stored in corpus server cluster 706. One or more avatars 714 contribute unclassified email to the corpus for evaluation.
  • Scores created by the corpus server cluster 706 are coupled to a rules/URLs server 707, which publishes the rules and URLs associated with viruses, spam, and other email threats to one or more messaging gateways 107 located at customers of the service provider 700. Messaging gateways 107 periodically retrieve new rules through HTTPS transfers. A threat operations center (TOC) 708 may generate and send the corpus server cluster 706 tentative rules for testing purposes. Threat operations center 708 refers to staff, tools, data and facilities involved in detecting and responding to virus threats. The TOC 708 also publishes rules that are approved for production use to the rules/URLs server 707, and sends the rules/URLs server whitelisted URLs that are known not to be associated with spam, viruses, or other threats. A TI team 710 may manually create other rules and provide them to the rules/URLs server.
  • For purposes of illustrating a clear example, FIG. 7 shows one messaging gateway 107. However, in various embodiments and commercial implementations, service provider 700 is coupled to a large number of field-deployed messaging gateways 107 at various customers or customer sites. Messaging gateways 107, avatar 714, and spamtrap 716 connect to service provider 700 through a public network such as the Internet.
  • According to one embodiment, each of the customer messaging gateways 107 maintains a local DNS URL blacklist module 718 comprising executable logic and a DNS blacklist. The structure of the DNS blacklist may comprise a plurality of DNS type A records that map network addresses, such as IP addresses, to reputation score values associated with the IP addresses. The IP addresses may represent IP addresses of senders of spam messages, or server addresses associated with a root domain of a URL that has been found in spam messages or that is known to be associated with threats such as phishing attacks or viruses.
  • Thus, each messaging gateway 107 maintains its own DNS blacklist of IP addresses. In contrast, in prior approaches, DNS information is maintained in a global location that must receive all queries through network communications. The present approach improves performance, because DNS queries generated by an MGA need not traverse a network to reach a centralized DNS server. This approach also is easier to update; a central server can send incremental updates to the messaging gateways 107 periodically. To filter spam messages, other logic in the messaging gateway 107 can extract one or more URLs from a message under test, provide input to the blacklist module 718 as a list of (URL, bitmask) pairs and receive output as a list of blacklist IP address hits. If hits are indicated, then the messaging gateway 107 can block delivery of the email, quarantine the email, or apply other policy, such as stripping the URLs from the message prior to delivery.
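The local blacklist lookup described above can be sketched as follows. The data layout (a dictionary standing in for DNS type A records), the stand-in URL-to-IP resolution, and the function names are assumptions of this sketch; the bitmask handling of the actual module is omitted.

```python
from urllib.parse import urlparse

# Sketch of a local DNS-style URL blacklist lookup. Addresses and scores
# are hypothetical; a real module would store A-record mappings locally.
BLACKLIST = {"203.0.113.5": -8.0}                 # IP -> reputation score
URL_TO_IP = {"badsite.example": "203.0.113.5"}    # stand-in for DNS resolution

def blacklist_hits(urls):
    """Return the blacklisted IPs behind any of the given URLs."""
    hits = []
    for url in urls:
        host = urlparse(url).hostname or ""
        ip = URL_TO_IP.get(host)
        if ip is not None and BLACKLIST.get(ip, 0.0) < 0:
            hits.append(ip)
    return hits

hits = blacklist_hits(["http://badsite.example/buy", "http://ok.example/"])
print(hits)
```

On a hit, the gateway could then block, quarantine, or strip the offending URLs as described above.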
  • In one embodiment, the blacklist module 718 also tests for URL poisoning in an email. URL poisoning refers to a technique used by spammers of placing malicious or disruptive URLs within an unsolicited email message that also contains non-malicious URLs, so that an unsuspecting user who clicks on the URLs may unwittingly trigger malicious local action, displays of advertisements, etc. The presence of the “good” URLs is intended to prevent spam detection software from marking the message as spam. In an embodiment, the blacklist module 718 can determine when a particular combination of malicious and good URLs provided as input represents a spam message.
  • An embodiment provides a system for moving DNS data into a hash-type local database that can accept several database queries and return a DNS-style response.
  • The foregoing approaches may be implemented in computer programs that are structured as plug-ins to the SpamAssassin open source project. SpamAssassin consists of a set of Perl modules that can be used with a core program that provides a network protocol for performing message checks, such as “spamd,” which is shipped with SpamAssassin. SpamAssassin's plug-in architecture is extensible through application programming interfaces; a programmer can add new checking heuristics and other functions without changing the core code. The plug-ins are identified in a configuration file, and are loaded at runtime and become a functional part of SpamAssassin. The APIs define the format of heuristics (rules to detect words or phrases that are commonly used in spam) and message checking rules. In an embodiment, the heuristics are based on dictionaries of words, and messaging gateway 107 supports a user interface that enables an administrator to edit the contents of the dictionaries to add or remove objectionable words or known good words. In an embodiment, an administrator can configure anti-spam logic 119 to scan a message against enterprise-specific content dictionaries before performing other anti-spam scanning. This approach enables messages to first receive a low score if they contain enterprise-specific terms or industry-standard terms, without undergoing other computationally expensive spam scanning.
  • Further, in a broad sense, the foregoing approaches enable a spam checking engine to receive and use information that has formed a basis for reputation determinations, but has not found direct use in spam checking. The information can be used to modify weight values and other heuristics of a spam checker. Therefore, a spam checker can determine with greater precision whether a newly received message is spam. Further, the spam checker becomes informed by a large volume of information in the corpus, also improving accuracy.
  • 3.1 Early Exit from Spam Scanning
  • Anti-spam logic 119 normally operates on each message in a complete fashion, meaning that every element of each message is completely parsed, and then every registered test is performed. This gives a very accurate total assessment of whether a piece of mail is ham or spam. However, once a message is “spammy” enough, it can be flagged and treated as spam. There is no additional information necessary to contribute to the binary disposition of the mail. When an embodiment implements thresholds of spam and ham, then performance of anti-spam logic 119 increases by exiting from a message scan function once the logic determines that a message is “spammy” enough to be sure it is spam. In this description, such an approach is termed Early Exit from anti-spam parsing or scanning.
  • With Early Exit, significant time can be saved by not evaluating hundreds of rules that would merely further confirm that a message is spam. Since few negative scoring rules typically exist, once a certain threshold is hit, logic 119 can determine positively that a message is spam. Two further performance gains are also implemented using mechanisms termed Rule Ordering and Execution, and Parse on Demand.
  • Rule Ordering and Execution is a mechanism that uses indicators to allow certainty to be reached quickly. Rules are ordered and placed into test groups. After each group is executed, the current score is checked, and a decision is made whether the message is “spammy” enough. If so, then logic 119 discontinues rule processing and announces the verdict that the message is spam.
  • Parse on Demand performs message parsing as part of anti-spam logic 119 only when required. For example, if parsing only the message headers results in a determination that a message is spam, then no other parsing operations are performed. In particular, rules applicable to message headers can be very good indicators of spam; if anti-spam logic 119 determines that a message is spam based on header rules, then the body is not parsed. As a result, performance of anti-spam logic 119 increases, because parsing headers is much less computationally expensive than parsing the message body.
  • As another example, the message body is parsed but HTML elements are excluded if rules applied to non-HTML body elements result in a verdict of spam. Parsing the HTML or testing for URI blacklisting (as described further below) is performed only when required.
  • FIG. 11 is a flow diagram of a process of performing message threat scanning with an early exit approach. In step 1102, a plurality of rules is received. The rules specify characteristics of electronic messages that indicate threats associated with the messages. Thus, when a rule matches a message element, the message probably has a threat or is spam. Each rule has a priority value, and each rule is associated with a message element type.
  • In step 1104, an electronic mail message is received, having a destination address for a recipient account. The message comprises a plurality of message elements. The elements typically include headers, a raw body, and HTML body elements.
  • In step 1106, a next message element is extracted. As indicated in block 1106A, step 1106 can involve extracting the headers, raw body, or HTML body elements. As an example, assume that only the message headers are extracted at step 1106. Extracting typically involves making a transient copy into a data structure.
  • In step 1108, a next rule is selected among a set of rules for the same element type, based on the order of the priorities of the rules. Thus, step 1108 reflects that for the current message element extracted at step 1106, only rules for that element type are considered, and the rules are matched according to the order of their priorities. For example, if the message headers were extracted at step 1106, then only header rules are matched. Unlike past approaches, the entire message is not considered at the same time and all the rules are not considered at the same time.
  • In step 1109, a threat score value for the message is determined by matching only the current message element to only the current rule. Alternatively, steps 1108 and 1109 can involve selecting all rules that correspond to the current message element type and matching all such rules to the current message element. Thus, FIG. 11 encompasses performing an early exit by testing after each rule, or matching all rules for a particular message element type and then determining if early exit is possible.
  • When the threat score value is greater than a specified threshold, as tested in step 1110, an exit from scanning, parsing and matching is performed at step 1112, and the threat score value is output at step 1114. As a result, early exit from the scanning process is accomplished and the threat score value may be output far more rapidly when the threshold is exceeded early in the scanning, extracting and rule matching process. In particular, the computationally costly process of rendering HTML message elements and matching rules to them can be skipped if header rules result in a threat score value that exceeds the threshold.
  • However, if the threat score value is not greater than the threshold at step 1110, then a test is performed at step 1111 to determine if all rules for the current message element have been matched. In the alternative noted above in which all rules for a message element are matched before the test of step 1110, step 1111 is not necessary. If other rules exist for the same message element type, then control returns to step 1108 to match those rules. If all rules for the same message element type have been matched, then control returns to step 1106 to consider the next message element.
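The early-exit scan of FIG. 11 can be sketched as follows. Rule data, point values, and element names are hypothetical; the sketch groups rules by message element type, orders them by priority, and stops as soon as the accumulated score exceeds the threshold, so the costly HTML phase may never run.

```python
# Simplified sketch of the FIG. 11 early-exit scan.
RULES = {  # element type -> [(priority, points, predicate)]
    "headers": [(-4, 3.0, lambda h: "your document" in h.get("Subject", ""))],
    "raw_body": [(-3, 2.0, lambda b: ".zip password" in b)],
    "html": [(-2, 2.0, lambda b: "<a href" in b)],  # costly: rendered last
}

def scan(message, threshold=4.0):
    score = 0.0
    for element in ("headers", "raw_body", "html"):       # cheapest first
        part = message[element]                           # "parse on demand"
        for _prio, points, pred in sorted(RULES[element], key=lambda r: r[0]):
            if pred(part):
                score += points
            if score > threshold:
                return score, element                     # early exit (step 1112)
    return score, None                                    # full scan completed

msg = {"headers": {"Subject": "your document"},
       "raw_body": "the .zip password is 123",
       "html": "<a href='http://x'>"}
score, stopped_at = scan(msg, threshold=4.0)
print(score, stopped_at)  # exits during raw_body; HTML is never examined
```

As in the alternative noted above, the threshold test could instead be moved outside the inner loop so that all rules for an element run before the exit check.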
  • The process of FIG. 11 may be implemented in an anti-spam scanning engine, an anti-virus scanner, or a generic threat-scanning engine that can identify multiple different kinds of threats. The threats can comprise any one of a virus, spam, or a phishing attack.
  • Accordingly, in an embodiment, a logical engine that performs anti-spam, anti-virus, or other message scanning operations does not perform tests or operations on a message once certainty about the message disposition has been reached. The engine groups rules into priority sets, so that the most effective and least costly tests are performed early. The engine is logically ordered to avoid parsing until a specific rule or group of rules requires parsing.
  • In an embodiment, rule priority values are assigned to rules and allow rules to be ordered in execution. For example, a rule with a priority of −4 runs before a rule with priority 0, and a rule with priority 0 runs before a rule with priority 1000. In an embodiment, rule priority values are assigned by an administrator when rule sets are created. Example rule priorities include −4, −3, −2, −1, BOTH, VOF and are assigned based on the efficacy of the rule, the rule type, and the profiled overhead of the rule. For example, a header rule that is very effective and is a simple regular expression comparison may be a −4 (run first) priority. BOTH indicates that a rule is effective for detecting both spam and viruses. VOF indicates a rule that is performed to detect a virus outbreak.
  • In an embodiment, threat identification team 710 (FIG. 7) determines rule grouping and ordering and assigns priorities. TI team 710 also can continuously evaluate the statistical effectiveness of the rules to determine how to order them for execution, including assigning different priorities.
  • In an embodiment, first the message headers are parsed and header rules run. Next, message body decoding is performed and raw body rules are run. Last, HTML elements are rendered, and body rules and URI rules are run. After each parsing step, a test is performed to determine if the current spam score is greater than a spam positive threshold. If so, then the parser exits and subsequent steps are not performed. Additionally or alternatively, the test is performed after each rule is run.
  • Table 3 is a matrix stating an example operational order of events within anti-spam logic 119 in an implementation of Early Exit. The HEAD row indicates that the message HEAD is parsed and header tests are run; such tests support early exit and are allowed the full priority range (−4 . . . VOF).
    TABLE 3
    EXAMPLE OPERATIONAL ORDER FOR EARLY EXIT

    Parsing   Tests (in order)   EE           Priorities Allowed
    HEAD      header             early exit   −4, −3, −2, −1, BOTH
              header_eval        early exit
    Decode    rawbody            early exit   −3, −2, −1, BOTH
              rawbody_eval       early exit
    Render    body               early exit   −2, −1, BOTH
              body_uri           early exit
              body_eval          early exit
              meta               early exit   BOTH
    VOF       VOF                No           VOF (will run BOTH rules)
  • 3.2 Spam Scan Verdict Caching
  • Certain spam messages may cause anti-spam logic 119 to require an extensive amount of time to determine a verdict about whether the message is spam. Thus, spam senders may use “poison message” attacks that repeatedly send such a difficult message in an attempt to force the system administrator to disable anti-spam logic 119. To address this issue and improve performance, in an embodiment, message anti-spam verdicts that anti-spam logic 119 generates are stored in a verdict cache 115 in messaging gateway 107, and anti-spam logic 119 reuses cached verdicts for processing messages that have identical bodies.
  • In an implementation, when the verdict retrieved from the cache is the same as the verdict that an actual scan would return, the verdict is termed a “true verdict”. A cached verdict that does not match the verdict from a scan is referred to as a “false verdict”. In an effective implementation, some performance gains are traded off to assure reliability. For example, in an embodiment, the digest of the message “Subject” line is included as part of the key to the cache, which reduces the cache hit rate, but also reduces the chance of a false verdict.
  • A spam sender may attempt to defeat the use of a verdict cache by including a non-printing, invalid URL tag that varies in form in the body of successive messages that are otherwise identical in content. The use of such tags within the message body will cause a message digest of the body to be different among such successive messages. In an embodiment, a fuzzy digest generating algorithm can be used in which HTML elements are parsed and non-displayed bytes are eliminated from the input to the digest algorithm.
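The fuzzy-digest idea can be illustrated with a minimal sketch: parse the HTML, keep only the bytes the reader actually sees, normalize whitespace, and digest the result, so messages differing only in invisible markup hash to the same key. This is an illustration of the concept, not the DCC fuz2 algorithm itself; the class and function names are invented.

```python
# Hypothetical fuzzy digest: hash only the visible text of an HTML body,
# so non-printing tag variations do not change the cache key.
import hashlib
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collects only character data; tags, comments, and attributes are skipped."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def fuzzy_digest(html_body: str) -> str:
    extractor = VisibleTextExtractor()
    extractor.feed(html_body)
    # Normalize whitespace so layout-only changes do not alter the digest.
    visible = " ".join("".join(extractor.parts).split())
    return hashlib.md5(visible.encode("utf-8")).hexdigest()

# Two messages identical in visible content, with varying invisible tags:
a = fuzzy_digest("<p>Buy now!</p><x-junk123></x-junk123>")
b = fuzzy_digest("<p>Buy  now!</p><x-junk999></x-junk999>")
```

Here `a` and `b` are equal, defeating the varying-tag evasion for this simple case.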
  • In an embodiment, verdict cache 115 is implemented as a Python dictionary of verdicts from anti-spam logic 119. The key to the cache is a message digest. In an embodiment, anti-spam logic 119 comprises Brightmail software and the cache key comprises a DCC “fuz2” message digest. Fuz2 is an MD5 hash or digest of those portions of a message body that are meaningfully unique. Fuz2 parses HTML and skips over bytes in the message that do not affect what the user sees when viewing the message. Fuz2 also attempts to skip portions of the message that are frequently changed by spam senders. For example, a Subject line that begins with “Dear” is excluded from the input to the digest.
  • In an embodiment, when anti-spam logic 119 begins processing a message that is eligible for spam or virus scanning, a message digest is created and stored. If creating a message digest fails or if use of verdict cache 115 is disabled, the digest is set to “None.” The digest is used as a key to perform a lookup in verdict cache 115, to determine whether a previously computed verdict has been stored for a message with an identical message body. The term “identical” means identical in the parts of the message that the reader sees as meaningful in deciding whether or not the message is spam. If a hit occurs in the cache, then the cached verdict is retrieved and further message scanning is not performed. If the digest is not present in the cache, then the message is scanned using anti-spam logic 119.
  • In an embodiment, verdict cache 115 has a size limit. If the size limit is reached, the least recently used entry is deleted from the cache. In an embodiment, each cache entry expires at the end of a configurable entry lifetime. The default value for the lifetime is 600 seconds. The size limit is set to 100 times the entry lifetime. Therefore, the cache requires a relatively small amount of memory, about 6 MB. In an embodiment, each value in the cache is a tuple comprising the time entered, a verdict, and the time that anti-spam logic 119 took to complete the original scan.
  • In an embodiment, if the requested cache key is present in the cache, then the time entered of the value is compared to current time. If the entry is still current, then the value of the item in the cache is returned as the verdict. If the entry has expired, it is deleted from the cache.
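The cache behavior described above (digest key, tuple values, configurable lifetime, LRU eviction at a size limit, expiry on lookup, flush on rule or configuration change) can be sketched as follows. The class name, defaults, and method signatures are illustrative assumptions, not the product's actual code.

```python
# Sketch of a verdict cache keyed by message digest, holding
# (time_entered, verdict, scan_seconds) tuples, with entry-lifetime
# expiry and least-recently-used eviction at a size limit.
import time
from collections import OrderedDict

class VerdictCache:
    def __init__(self, lifetime=600, max_entries=60000):
        self.lifetime = lifetime
        self.max_entries = max_entries
        self._entries = OrderedDict()  # digest -> (entered, verdict, scan_secs)

    def put(self, digest, verdict, scan_secs, now=None):
        if digest is None:             # digest failed or caching disabled
            return
        now = time.time() if now is None else now
        self._entries[digest] = (now, verdict, scan_secs)
        self._entries.move_to_end(digest)
        if len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # evict least recently used

    def get(self, digest, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(digest)
        if entry is None:
            return None                        # cache miss
        entered, verdict, scan_secs = entry
        if now - entered > self.lifetime:
            del self._entries[digest]          # expired: delete and miss
            return None
        self._entries.move_to_end(digest)      # refresh LRU position
        return verdict

    def flush(self):
        """Called on rule updates or configuration changes."""
        self._entries.clear()

cache = VerdictCache(lifetime=600)
cache.put("abc123", "spam", 0.8, now=0)
hit = cache.get("abc123", now=100)    # current entry: returns "spam"
miss = cache.get("abc123", now=700)   # past the lifetime: deleted, None
```

Passing `now` explicitly keeps the sketch deterministic; a real deployment would rely on the wall clock.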
  • In an embodiment, several attempts may be made to compute a message digest before a verdict is cached. For example, fuz2 is used if available, otherwise fuz1 is used if available, and otherwise “all mime parts” is used as a digest if available, otherwise no cache entry is created. An “all mime part” digest comprises, in one embodiment, a concatenation of digests of the message's MIME parts. If there are no MIME parts, a digest of the entire message body is used. In an embodiment, the “all mime parts” digest is computed only if anti-spam logic 119 performs a message body scan for some other reason. Body scanning extracts the MIME parts, and the marginal cost of computing the digest is negligible; therefore, the operations can be combined efficiently.
  • In an embodiment, the verdict cache is flushed whenever messaging gateway 107 receives a rule update from rules-URLs server 707 (FIG. 7). In an embodiment, the verdict cache is flushed whenever a change in the configuration of anti-spam logic 119 occurs, for example, by administrative action or by loading a new configuration file.
  • In an embodiment, anti-spam logic 119 can scan multiple messages in parallel. Therefore, two or more identical messages could be scanned at the same time, causing a cache miss because the verdict cache is not yet updated based on one of the messages. In an embodiment, the verdict is cached only after one copy of the message is fully scanned. Other copies of the same message that are currently being scanned are cache misses.
  • In an embodiment, anti-spam logic 119 periodically scans the entire verdict cache and deletes expired verdict cache entries. In that event, anti-spam logic 119 writes a log entry in log file 113 that reports counts of cache hits, misses, expires and adds. Anti-spam logic 119 or verdict cache 115 may maintain counter variables for the purpose of performing logging or performance reporting.
  • In other embodiments, cached digests may be used for message filters or anti-virus verdicts. In an embodiment, multiple checksums are used to create a richer key that provides both a higher hit rate and a lower rate of false verdicts. Further, other information may be stored in the verdict cache such as the amount of time required to scan a long message for spam.
  • Optimizations can be introduced to address particular requirements of specific anti-spam software or logic. For example, Brightmail creates a tracker string and returns the tracker string with a message verdict; the tracker string can be added to the message as an X-Brightmail-Tracker header. The tracker string can be used by Brightmail's plug-in to Microsoft Outlook to implement language identification. The tracker string is also sent back to Brightmail when the plug-in reports a false positive.
  • Both the verdict and the tracker string can be different for messages that have identical bodies. In some cases the body is non-spam, but spam is encoded in the Subject. In one approach, the message Subject line is included with the message body as input to the message digest algorithm. However, the Subject line can be different when the body of the message is clearly spam, clearly a virus, or both. For example, two messages can contain the same virus and be considered spam by Brightmail, yet have different Subject headers. Each message may carry a brief text attachment that differs from the other message's attachment, and the attachment file names may differ. However, when both messages are scanned, the same verdict will result.
  • In an embodiment, cache hit rate is improved using a virus-positive rule. If the digest of an attachment matches a virus positive verdict and spam positive verdict, then the previous spam verdict is reused, even if the Subject and prologue are different.
  • In some similar messages, a different From value and a different Message-ID line result in generating different tracker strings. The spam verdict is the same, but an obviously false “From” value and an obviously false Message-ID will result in finding the verdict sooner and reporting other rules in the tracker string. In an embodiment, the From header and the Message-ID header are deleted from the second message and the message is re-scanned, and the tracker header becomes the same as for the first message.
  • 4.0 Methods of Detection of Viruses Based on Message Heuristics, Sender Information, Dynamic Quarantine Operation, and Fine-Grained Rules
  • 4.1 Detecting Using Message Heuristics
  • According to one approach, detecting viruses using heuristic approaches is provided. Basic approaches for detecting virus outbreaks are described in copending application Ser. No. 11/006,209, filed Dec. 6, 2004, “Method and apparatus for managing computer virus outbreaks,” of Michael Olivier et al.
  • In this context, message heuristics refers to a set of factors that are used to determine the likelihood that a message is a virus, when no signature information about the message is available. Heuristics may comprise rules to detect words or phrases that are commonly used in spam. Heuristics may vary according to a language used in the message text. In an embodiment, administrative users can select which language heuristics to use in anti-spam scanning. Message heuristics may be used to determine a VSV value. Heuristics of a message may be determined by a scanning engine that performs basic anti-spam scanning and anti-virus scanning.
  • A message can be placed in quarantine storage, because it may contain a virus, based on the results of heuristic operations rather than or in addition to definitions of virus outbreaks. Such definitions are described in the application of Olivier et al. referenced above. Thus, the corpus server cluster 706 contains a past history of viruses, and if a message matches a pattern in that past history as a result of the heuristics, then the message may be quarantined regardless of whether it matches the definitions of a virus outbreak. Such early quarantining provides a beneficial delay in message processing while the TOC prepares a definition of a virus outbreak.
  • FIG. 8 is a graph of time versus the number of machines infected in a hypothetical example virus outbreak. In FIG. 8, the horizontal axis 814 represents time and vertical axis 812 represents a number of infected machines. Point 806 represents a time at which an anti-virus software vendor, such as Sophos, publishes an updated virus definition that will detect a virus-laden message and prevent further infection on machines in networks protected by messaging gateways 107 that are using that anti-virus software. Point 808 represents a time when the TOC 708 publishes a rule identifying a virus outbreak for the same virus. Curve 804 varies as indicated in FIG. 8 such that the number of infected machines increases over time, but the rate of increase goes down after point 808, and then the total number of infected machines eventually declines significantly further after point 806. Early quarantine based on heuristics as described herein is applied at point 810 to help reduce the number of machines that are covered within the area 816 of curve 804.
  • Variable quarantine time is used in one embodiment. The quarantine time may be increased when the heuristics indicate a higher likelihood that a message contains a virus. This provides maximum time for a TOC or anti-virus vendor to prepare rules or definitions, while applying minimum quarantine delay to messages that are less likely to contain a virus. Thus, the quarantine time is coupled to the probability that a message contains a virus, resulting in optimum use of quarantine buffer space, as well as minimizing the time of quarantining a message that is not viral.
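The coupling of quarantine time to virus probability can be expressed as a simple function. The linear scaling and the bounds below are illustrative assumptions; the document does not specify the actual mapping.

```python
# Hypothetical variable quarantine time: the retention period grows with
# the heuristic likelihood that the message carries a virus, so probable
# threats are held longest while unlikely ones are delayed minimally.
def quarantine_seconds(virus_probability,
                       min_hold=30 * 60,        # 30 minutes (assumed floor)
                       max_hold=24 * 60 * 60):  # 24 hours (assumed ceiling)
    """Linearly couple hold time to the estimated virus probability."""
    p = max(0.0, min(1.0, virus_probability))   # clamp to [0, 1]
    return min_hold + p * (max_hold - min_hold)

low_risk  = quarantine_seconds(0.0)   # minimum delay: 1800 seconds
high_risk = quarantine_seconds(1.0)   # maximum hold: 86400 seconds
```

A real system might instead use a step function keyed to VSV thresholds; the point is only that hold time rises with threat likelihood.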
  • 4.2 Sender-Based Detection of Viruses
  • According to one approach, a virus score is determined and stored in a database in association with an IP address value of a sender of the message. The score thus indicates the likelihood that a message originating from the associated address will contain a virus. The premise is that machines that send one virus are likely to become infected with another virus or to become re-infected with the same virus or an updated virus, because those machines are not well protected. Further, if a machine is sending spam then it is more likely to be sending a virus.
  • The IP address may specify a remote machine, or may specify a machine that is within a corporate network that a messaging gateway 107 is protecting. For example, the IP address may specify a machine within the corporate network that inadvertently became infected with a virus. Such an infected machine is likely to send other messages that contain the virus.
  • In a related approach, a virus outbreak detection check can be performed at the same time in overall message processing as a spam check within the messaging gateway 107. Thus, virus outbreak detection can be performed at the same time that a message is parsed and subjected to spam detection. In one embodiment, one thread performs the foregoing operations in an ordered serial manner. Further, the results of certain heuristic operations can be used to inform both an anti-spam detection operation and an anti-virus detection operation.
  • In an embodiment, the VSV value is determined based upon any one or more of: filename extension; volume spikes in message volume on a local basis, on a global basis, identified per sender and per content; based on attachment content, such as Microsoft executables; and sender-based threat identification information. In various embodiments, a variety of sender-based threat identification information is used. Examples include dynamic or dial-up host blacklists, exploited host blacklists, and virus hot zones.
  • Dynamic and dial-up hosts connecting to the Internet generally send outgoing mail through a local SMTP server. When a host connects directly to an external SMTP server, such as messaging gateway 107, the host probably has been compromised and is sending either spam messages or an email virus. In an embodiment, messaging gateway 107 comprises logic that maintains a blacklist of dynamic hosts that have operated in the preceding manner in the past, or obtains a dynamic host blacklist from an external source such as the NJABL dynamic hosts list or the SORBS dynamic hosts list.
  • In this embodiment, identifying message characteristics of an incoming message at step 502 of FIG. 5 further comprises determining if a sender of the message is in the dynamic host blacklist. If so, then a higher VSV value is determined or assigned.
  • Step 502 also may comprise connecting to or managing an exploited host blacklist and determining if the sender of the message is on the exploited host blacklist. An exploited host blacklist tracks hosts that are known to be infected by viruses or known to send spam, based on the content that infected hosts are sending and on locating infected hosts through connect-time scanning. Examples include XBL (CBL and OPM) and DSBL.
  • In another embodiment, service provider 700 creates and stores an internal blacklist of senders and networks that have a past history of sending viruses, based on sender information received from customer messaging gateways 107. In an embodiment, customer messaging gateways 107 periodically initiate network communications to corpus server cluster 706 and report the network addresses (e.g., IP addresses) of senders of messages that internal logic of the messaging gateways 107 determined to be spam or associated with viruses or other threats. Logic at service provider 700 can periodically scan the internal blacklist and determine if any network addresses are known to send only viruses or spam. If so, the logic can store high threat level values or VSVs in association with those addresses. Moderate threat level values can be stored in association with network addresses that are known to send both viruses and legitimate email. Moderate or low threat level values can be associated with networks that contain one or more individual infected hosts.
  • Testing against the blacklists can be initiated using rules of the type described above. For example, the following rules can initiate blacklist testing:
    eval DYNAMIC_IP   connecting_ip(dynamic)
    eval HOTZONE_NETWORK   connecting_ip(hotzone)
    eval XBL_IP   connecting_ip(exploited host)
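The rules above can be pictured as blacklist lookups that contribute to the sender's threat score. This sketch is purely illustrative: the blacklist contents, the `connecting_ip` semantics, and the VSV weights are invented for the example and are not part of the patent.

```python
# Hypothetical mapping from connecting_ip(...) rules to blacklist lookups.
# A sender appearing on a blacklist raises the computed VSV by that
# category's (assumed) weight.
BLACKLISTS = {
    "dynamic":        {"203.0.113.7"},   # dynamic/dial-up hosts
    "hotzone":        {"198.51.100.9"},  # virus hot zone networks
    "exploited host": {"192.0.2.44"},    # known-compromised hosts
}

RULE_WEIGHTS = {"dynamic": 1, "hotzone": 2, "exploited host": 3}

def connecting_ip(sender_ip, category):
    """True if the sender address is on the named blacklist."""
    return sender_ip in BLACKLISTS.get(category, set())

def sender_vsv_boost(sender_ip):
    """Sum the weights of every blacklist the sender appears on."""
    return sum(weight for category, weight in RULE_WEIGHTS.items()
               if connecting_ip(sender_ip, category))

exploited = sender_vsv_boost("192.0.2.44")  # exploited-host weight only
clean     = sender_vsv_boost("10.0.0.1")    # on no blacklist
```

In production the lists would be fetched or synchronized from external sources such as NJABL, SORBS, XBL, or DSBL rather than hard-coded.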
  • 4.3 Dynamic Quarantine Operations Including Rescanning
  • In prior approaches, messages are released from quarantine in first-in-first-out order. In another embodiment, a first-to-exit algorithm may be used. In this approach, when the quarantine buffer is full, an ordering mechanism determines which messages should be released first. In one embodiment, messages that are deemed least dangerous are released first. For example, messages that have been quarantined as a result of heuristics are released first, and messages that have been quarantined as a result of matching virus outbreak tests are released second. To support this mechanism, each quarantined message is stored in the quarantine of a messaging gateway 107 in association with information indicating a reason for the quarantine. Thereafter, a process in the messaging gateway 107 can release messages based on the reasons.
  • The ordering may be configured in a data-driven fashion by specifying the order in a configuration file that is processed by the messaging gateway 107. Thus, publishing a new configuration file containing the ordering from the service provider to customer messaging gateways 107 automatically causes those messaging gateways 107 to adopt the new ordering.
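A first-to-exit selection of this kind can be sketched in a few lines. The reason names, the tuple layout, and the tie-breaking choice (oldest entry first within a reason) are assumptions for illustration; the `RELEASE_ORDER` list stands in for the ordering a configuration file would supply.

```python
# Hypothetical first-to-exit overflow policy: when the quarantine is full,
# the least dangerous message leaves first, ordered by the stored reason
# for quarantine, with ties broken by oldest entry time.
RELEASE_ORDER = ["heuristic", "virus-outbreak-rule"]  # least dangerous first

def next_to_exit(quarantine):
    """Pick the message to release on overflow.

    `quarantine` is a list of (message_id, reason, entry_time) tuples.
    """
    return min(quarantine,
               key=lambda m: (RELEASE_ORDER.index(m[1]), m[2]))

q = [("m1", "virus-outbreak-rule", 100),
     ("m2", "heuristic", 200),
     ("m3", "heuristic", 150)]
chosen = next_to_exit(q)   # heuristic-quarantined, oldest entry
```

Publishing a new `RELEASE_ORDER` in a configuration file would change release priorities without any code change, matching the data-driven behavior described above.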
  • Similarly, different actions can be taken on quarantined messages when those messages leave the quarantine based on the threat level associated with the messages when they leave the quarantine. For example, messages that appear extremely threatening but may leave the quarantine as a result of overflow can be subjected to a strip-and-deliver operation in which attachments are stripped and the message is delivered to the recipient without the attachments. Alternatively, a message with a lower threat level is delivered as normal.
  • In still another alternative, an X-header could be added to lower threat level messages. This alternative is appropriate when a client email program (e.g., Eudora, Microsoft Outlook) is configured with a rule to recognize the X-header and place messages with the X-header in a special folder (e.g., “Potentially Dangerous Messages”). In yet another alternative, a file attachment of a message with a particular threat level is renamed (the message is “de-fanged”), requiring the receiving user to affirmatively rename the file attachment again to make it usable with an application. This approach is intended to cause the user to examine the file carefully before renaming and opening it. The message could be forwarded to an administrator for evaluation. Any of these alternatives can be combined in an embodiment.
  • FIG. 9 is a flow diagram of an approach for rescanning messages that may contain viruses. According to an embodiment, when the TOC 708 releases new threat rules to messaging gateways 107, each messaging gateway rescans messages in its quarantine against the new rules. This approach offers the advantage that messages may be released from the quarantine earlier, because in later-stage processing the messages will be detected, using the new rules, as containing viruses. In this context, “release” refers to removing a message from quarantine and sending it to an anti-virus scanning process.
  • Alternatively, rescanning might reduce or increase the quarantine time of a message. This minimizes the number of messages in the quarantine and reduces the likelihood of releasing infected messages. Such inadvertent release could occur, for example, if the quarantine had a fixed release time, and the fixed release timer expired before an anti-virus vendor or other source released a virus definition that would trap the released message. In that scenario, a malicious message would be automatically released and downstream processing would not trap it.
  • In an embodiment, any of several events may trigger rescanning messages in a message quarantine. Further, the approach of FIG. 9 applies to processing messages that are in a quarantine as a result of viruses, spam, or other threats or undesired characteristics of messages. In step 902, a re-scanning timer is started and runs until expiration, and upon expiration re-scanning all messages in the quarantine queue is triggered at step 906.
  • Additionally or alternatively, in step 904, the messaging gateway 107 receives one or more new virus threat rules, anti-spam rules, URLs, scores, or other message classification information from Rules-URLs server 707. Receiving such information also can trigger re-scanning at step 906. The new rules, scores and other information are used in the re-scanning step to generate a new VSV for each message in the quarantine. For example, the TOC server 708 may publish, through rules-URL server 707, a set of rules for a virus outbreak that are initially broad, and later narrow the scope of the rules as more information about the outbreak becomes known. As a result, messages that matched the earlier rule set may not match the revised rules, and become known false positives. The approach herein attempts to release known false positives automatically in response to a rule update, without intervention by an administrator of messaging gateway 107.
  • In an embodiment, each message in the quarantine queue 316 has a stored time value indicating when the message entered the quarantine, and re-scanning at step 906 is performed in order of quarantine entry time, oldest message first.
  • In step 908, a test is performed to determine if the new VSV for a message is greater than or equal to a particular threshold value, as in step 312 of FIG. 3. The VSV threshold value is set by an administrator of a messaging gateway 107 to determine tolerance for quarantining messages. If the VSV is below the threshold, then the message probably can be released from the quarantine. Therefore control passes to step 910 at which a normal quarantine exit delivery policy is applied.
  • Optionally, in an embodiment, a messaging gateway 107 may implement a separate reporting threshold. When a message has a VSV that exceeds the reporting threshold, as tested at step 907, the messaging gateway 107 notifies the service provider 700 at step 909 and continues processing the message. Such notifications may provide important input to determining the occurrence of new virus outbreaks. In certain embodiments, such reporting is an aspect of “SenderBase Network Participation” (SBNP) and can be selectively enabled by an administrator using a configuration setting.
  • Applying a delivery policy at step 910 may comprise immediately queuing the message for delivery to a recipient in unmodified form, or stripping attachments, or performing content filtering, or performing other checks on the message. Applying a delivery policy may comprise adding an X-header to the message indicating a virus scan result. All applicable X-headers may be added to the message in the order in which actions occurred. Applying a delivery policy may comprise modifying a Subject line of the message to indicate the possible presence of a virus, spam or other threat. Applying a delivery policy may comprise redirecting the message to an alternate recipient, and storing an archived copy of the message for subsequent analysis by other logic, systems or persons.
  • In an embodiment, applying a delivery policy at step 910 comprises stripping all attachments from the message before delivering it when the message is in any of several quarantines and one quarantine determines that stripping attachments is the correct action. For example, a messaging gateway 107 may support a virus outbreak quarantine queue 316 and a separate quarantine queue that holds messages that appear to violate a policy of the gateway, such as the presence of disallowed words. Assume that the virus outbreak quarantine queue 316 is configured to strip attachments upon overflow before delivery. Assume the message is in both the virus outbreak quarantine queue 316 and the separate policy quarantine queue, and happens to overflow the virus outbreak quarantine queue 316. If an administrator then manually releases the same message from the policy quarantine queue, then the attachments are stripped again before delivery.
  • At step 912, the message is delivered.
  • If the test of step 908 is true, then the message is problematic and probably needs to be retained in the quarantine.
  • Optionally, each message may be assigned an expiration time value, and the expiration time value is stored in a database of messaging gateway 107 in association with quarantine queue 316. In an embodiment, the expiration time value is equal to the time at which the message entered the quarantine queue 316 plus a specified retention time. The expiration time value may vary based upon message contents or heuristics of a message.
  • In step 914 a test is performed to determine if the message expiration time has expired. If so, then the message is removed from the quarantine, but the removal of a message at that point is deemed an abnormal or early exit, and therefore an abnormal exit delivery policy is applied at step 918. Thereafter the message can be delivered in step 912 subject to the delivery policy of step 918. The delivery policy that is applied at step 918 may be different than the policy that is applied at step 910. For example, the policy of step 910 could provide for unrestricted delivery, whereas at step 918 (for delivery of messages that are suspect, but have been in the quarantine for longer than the expiration time) removing attachments could be required.
  • If the message time has not expired at step 914, then the message is retained in the quarantine as shown at step 916. If the rule that causes the VSV to exceed the threshold changes, then the rule name and description are updated in the message database.
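The per-message rescan decision of FIG. 9 can be condensed into one function. The function name, parameter names, and return labels are assumptions; the branch logic follows the steps described above: deliver under the normal policy when the new VSV falls below the threshold (step 910), deliver under the abnormal-exit policy when a still-suspect message has expired (step 918), and otherwise retain (step 916).

```python
# Sketch of the FIG. 9 rescan decision for one quarantined message,
# after a new VSV has been computed from updated rules.
def rescan_decision(new_vsv, vsv_threshold, entered_at, retention, now):
    if new_vsv < vsv_threshold:
        return "deliver-normal-policy"    # step 910: rules no longer match
    if now >= entered_at + retention:
        return "deliver-abnormal-policy"  # step 918: expired while suspect
    return "retain-in-quarantine"         # step 916

released = rescan_decision(2, 3, entered_at=0, retention=3600, now=100)
retained = rescan_decision(4, 3, entered_at=0, retention=3600, now=100)
expired  = rescan_decision(4, 3, entered_at=0, retention=3600, now=4000)
```

Running this over the quarantine in order of entry time, oldest first, reproduces the known-false-positive auto-release behavior: messages that matched an early, broad rule set but not the narrowed rules exit immediately on the next rule update.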
  • In various embodiments, different steps of FIG. 9 may cause the messaging gateway 107 to send one or more alert messages to an administrator or to specified user accounts or groups. For example, alerts can be generated at steps 904, 912 or 916. Example alert events include reaching specified quarantine fill levels or space limits; quarantine overflow; receiving a new outbreak rule, e.g. a rule that if matched sets a VSV higher than the quarantine threshold value that is configured in the messaging gateway; receiving information removing an outbreak rule; and a failure in an attempt to update new rules in the messaging gateway. Information removing an outbreak rule may comprise receiving a new rule that reduces a threat level of a particular type of message below the quarantine threshold value that is configured in the messaging gateway.
  • Further, different steps of FIG. 9 may cause the messaging gateway 107 to write one or more log entries in log file 113 describing actions that were performed. For example, log file entries can be written when messages are released abnormally or in an early exit. Alerts or log entries can be sent or written as the quarantine fills at specified levels. For example, alerts or log entries are sent or written when the quarantine reaches 5% full, 50% full, 75% full, etc. Log entries may include quarantine receipt time, quarantine exit time, quarantine exit criteria, quarantine exit actions, number of messages in quarantine, etc.
  • In other embodiments, alert messages can indicate scanning engine update failures; rule update failures; failure to receive a rule update in a specified time period; rejection of a specified percentage of messages; rejection of a specified number of messages; etc.
  • FIG. 10 is a block diagram of a message flow model in a messaging gateway that implements the logic described above. Message heuristics 1002 and virus outbreak rules 1004 are provided to a scanning engine, such as anti-virus checker 116, which generates a VSV value or virus threat level (VTL) value 1005. If the VSV value exceeds a specified threshold, messages enter quarantine 316.
  • A plurality of exit criteria 1006 can enable a message to leave the quarantine 316. Example exit criteria 1006 include expiration of a time limit 1008, overflow 1010, manual release 1012, or a rule update 1014. When an exit criterion 1006 is satisfied, one or more exit actions 1018 then occur. Example exit actions 1018 include strip and deliver 1020, delete 1022, normal delivery 1024, tagging the message subject with keywords (e.g., [SPAM]) 1026, and adding an X-header 1028. In another embodiment, exit actions can include altering the specified recipient of the message.
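The criteria-to-actions model of FIG. 10 can be sketched as a lookup table pairing each exit criterion with a primary action and an optional secondary action. The particular pairings below are invented for illustration; in the described system an administrator would configure them.

```python
# Hypothetical mapping of quarantine exit criteria to (primary, secondary)
# exit actions, echoing the FIG. 10 model. Pairings are assumed, not
# prescribed by the source.
EXIT_ACTIONS = {
    "time-limit-expired": ("deliver", "add-x-header"),
    "overflow":           ("strip-and-deliver", "tag-subject"),
    "manual-release":     ("deliver", None),
    "rule-update":        ("deliver", "add-x-header"),
}

def apply_exit(message, criterion):
    """Mutate the message per the configured exit actions, then return it."""
    primary, secondary = EXIT_ACTIONS[criterion]
    if primary == "strip-and-deliver":
        message["attachments"] = []          # strip before delivery
    if secondary == "tag-subject":
        message["subject"] = "[SPAM] " + message["subject"]
    elif secondary == "add-x-header":
        message.setdefault("headers", {})["X-Quarantine-Exit"] = criterion
    return message

msg = apply_exit({"subject": "quarterly report",
                  "attachments": ["invoice.exe"]}, "overflow")
```

Because the table is plain data, publishing a new configuration could change the overflow behavior (say, from strip-and-deliver to delete) without touching the exit logic.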
  • In one embodiment, messaging gateway 107 maintains a data structure that defines, for each sending host associated with a message, policies for acting on messages received from that host. For example, a Host Access Table comprises a Boolean attribute value indicating whether to perform virus outbreak scanning for that host, as described herein for FIG. 3 and FIG. 9.
  • Further, each message processed in messaging gateway 107 may be stored in a data structure that carries metadata indicating what message processing to perform within the messaging gateway. Examples of metadata include: the VSV value of the message; the name of the rule that resulted in the VSV value and the corresponding rule description; the message quarantine time and overflow priority; flags to specify whether to perform anti-spam and anti-virus scanning and virus outbreak scanning; and a flag to enable content filters to be bypassed.
  • In an embodiment, a set of configuration information stored in messaging gateway 107 specifies additional program behavior for virus outbreak scanning for each potential recipient of a message from the gateway. Since messaging gateway 107 typically controls message traffic to a finite set of users, e.g., employees, contractors or other users in an enterprise private network, such configuration information may be managed for all potential recipients. For example, a per-recipient configuration value may specify a list of message attachment file extension types (“.doc”, “.ppt”, etc.) that are excluded from consideration by the scanning described herein, and a value indicating that a message should not be quarantined. In an embodiment, the configuration information can include a particular threshold value for each recipient. Thus, the tests of step 312 and step 908 may have a different outcome for different recipients depending upon the associated threshold values.
  • Messaging gateway 107 may also manage a database table that counts messages that have been filtered using the techniques of FIG. 3, FIG. 9, the VSV of such messages, and a count of messages that were sent to the message quarantine 316.
  • In one embodiment, each message quarantine 316 has a plurality of associated programmatic actions that control how messages exit the quarantine. Referring again to FIG. 3, exit actions may include manual release of a message from the message quarantine 316 based on operator decision 318. Exit actions may include automatic release of a message from the message quarantine 316 when an expiration timer expires, as in FIG. 9. Exit actions may include an early exit from the message quarantine 316 when the quarantine is full, as an implementation of overflow policy 322. “Early exit” refers to prematurely releasing a message before the end of an expiration time value associated with the message based on a resource limitation such as queue overflow.
  • Normal message exit actions and early exit actions may be organized as a primary action and a secondary action of the type described above for delivery policy step 910. Primary actions may include Bounce, Delete, "Strip Attachments and Deliver," and Deliver. Secondary actions may include Subject tag, X-header, Redirect, or Archive. The secondary actions are not associated with a primary action of Delete. In an embodiment, the secondary action of Redirect enables sending messages to a secondary "off box" quarantine queue that is hosted at corpus server cluster 706 or another element within service provider 700 rather than on the messaging gateway 107. This approach enables TI team 710 to examine quarantined messages.
  • In an embodiment, early exit actions from the quarantine resulting from quarantine queue overflow may include any of the primary actions, including Strip Attachments and Deliver. Any of the secondary actions may be used for such early exit. An administrator of the messaging gateway 107 may select the primary action and the secondary action for use upon early exit by issuing a configuration command to the messaging gateway using a command interface or GUI. Additionally or alternatively, message heuristics determined as a result of performing anti-virus scanning or other message scanning may cause different early exit actions to be performed in response.
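The combination of a primary and a secondary exit action described above can be sketched as follows. The action names, the message representation, and the subject-tag and X-header strings are illustrative assumptions; the one constraint taken from the text is that secondary actions do not combine with Delete.

```python
# Sketch of applying an administrator-selected primary + secondary exit
# action to a quarantined message. Names and header values are illustrative.
PRIMARY_ACTIONS = {"bounce", "delete", "strip_and_deliver", "deliver"}
SECONDARY_ACTIONS = {"subject_tag", "x_header", "redirect", "archive", None}

def apply_exit_policy(message: dict, primary: str, secondary=None) -> dict:
    if primary not in PRIMARY_ACTIONS or secondary not in SECONDARY_ACTIONS:
        raise ValueError("unknown action")
    if primary == "delete":
        if secondary is not None:
            raise ValueError("secondary actions do not combine with Delete")
        return {}  # message discarded
    if primary == "strip_and_deliver":
        message = dict(message, attachments=[])  # remove attachments
    if secondary == "subject_tag":
        message = dict(message, subject="[QUARANTINE RELEASED] " + message["subject"])
    if secondary == "x_header":
        headers = dict(message.get("headers", {}), **{"X-Quarantine": "early-exit"})
        message = dict(message, headers=headers)
    return message
```

An overflow-driven early exit might, for example, be configured as `("strip_and_deliver", "subject_tag")`, delivering the message without its attachment but with a visible warning.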
  • In an embodiment, a local database in messaging gateway 107 stores names of file attachments of received messages that are in the message quarantine 316, and the size of the file attachment.
  • Re-scanning at step 906 may occur for a particular message in response to other actions of the messaging gateway 107. In an embodiment, messaging gateway 107 implements a content filter that can change the content of a received message according to one or more rules. If a content filter changes the content of a received message that was previously scanned for viruses, then the VSV value of that message could change upon re-scanning. For example, if the content filter strips attachments from the message, and a virus was in an attachment, the stripped message may no longer have a virus threat. Therefore, in an embodiment, when a content filter changes the content of a received message, re-scanning at step 906 is performed.
  • In an embodiment, an administrator of messaging gateway 107 can search the contents of quarantine 316 using console commands or other user interface commands. In an embodiment, searches can be performed based on attachment names, attachment types, attachment size, and other message attributes. In an embodiment, searching by file type can be performed only on messages that are in quarantine 316 and not in a policy quarantine or other quarantine, because such searching requires a scan of the message body that may negatively impact performance. In an embodiment, the administrator can display the contents of the virus outbreak quarantine 316 in a sorted order according to any of the foregoing attributes.
  • In an embodiment, when messages are placed in quarantine 316 through the process of FIG. 3 or FIG. 9, the messaging gateway 107 automatically displays a view of the virus outbreak quarantine. In an embodiment, the view includes for each message in the quarantine the following attribute values: outbreak identifier or rule name; sender name; sender domain; recipient name; recipient domain; subject name; attachment name; attachment type; attachment size; VSV; quarantine entry time; quarantine remaining time.
  • In an embodiment, messaging gateway 107 stores a reinsertion key, comprising an optional unique text string that can be associated with messages that have been manually released from the quarantine 316. When a released message has a reinsertion key associated therewith, the released message cannot be quarantined again during subsequent processing in messaging gateway 107 prior to delivery.
  • 4.4 Fine-Grained Rules
  • Message rules are abstract statements which, if matched in comparison to a message in the anti-spam logic 119, result in a higher spam score. Rules may have rule types. Example rule types include compromised host, suspected spam source, header characteristics, body characteristics, URI, and learning. In an embodiment, specific outbreak rules can be applied. For example, a virus outbreak detection mechanism might determine that a certain type of message with a ZIP file attachment of 20 KB in size represents a virus. The mechanism can create a rule under which customer messaging gateways 107 will quarantine messages with 20 KB ZIP attachments, but not messages with 1 MB ZIP attachments. As a result, fewer false quarantine operations occur.
  • In an embodiment, virus information logic 114 comprises logic that supports establishing rules or tests on message headers and message bodies to identify fixed strings or regular expressions. For example, an embodiment permits defining the following rules:
    head X_MIME_FOO      X-Mime =~ /foo/
    head SUBJECT_YOUR    Subject =~ /your document/
    body HEY_PAL         /hey pal|long time, no see/
    body ZIP_PASSWORD    /\.zip password is/i
  • In an embodiment, function tests can test specific aspects of a message. Each function executes custom code to examine messages, information already captured about messages, etc. The tests cannot be formed using simple logical combinations of generic header or body tests. For example, an effective test for matching viruses without examining file content is comparing the extension of the “filename” or “name” MIME field to the claimed MIME Content-Type. If the extension is “doc” and the Content-Type is neither application/octet-stream nor application/.*word, then the content is suspicious. Similar comparisons can be performed for PowerPoint, Excel, image files, text files, and executables.
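The extension-versus-Content-Type function test described above can be sketched as follows; the function name and the exact regular expression are assumptions made for illustration.

```python
import re

# Sketch of the function test described above: flag a ".doc" attachment
# whose claimed MIME Content-Type is neither application/octet-stream
# nor an application/*word type. The regex is illustrative.
def suspicious_doc_attachment(filename: str, content_type: str) -> bool:
    if not filename.lower().endswith(".doc"):
        return False
    ct = content_type.lower()
    if ct == "application/octet-stream":
        return False
    if re.match(r"application/.*word", ct):
        return False
    return True  # ".doc" name with an inconsistent Content-Type is suspicious
```

Analogous checks could compare PowerPoint, Excel, image, text, and executable extensions against their expected Content-Type values, all without examining the file content itself.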
  • Other examples of tests include: testing whether the first line of base64-type content matches the regular expression /^TV[nopqr]/, indicating a Microsoft executable; testing whether email priority is set to High, but there is no X-Mailer or User-Agent header; testing whether the message is multipart/alternative, but the alternative parts are very different in content; testing whether the message is multipart, but contains only HTML text; and looking for specific MIME boundary formats for new outbreaks.
  • In an embodiment, virus information logic 114 comprises logic that supports establishing meta-rules that comprise a plurality of linked rules. Examples include:
    meta VIRUS_FOO   ((SUBJECT_FOO1 || SUBJECT_FOO2) && BODY_FOO)
    meta VIRUS_BAR   (SIZE_BAR + SUBJECT_BAR + BODY_BAR > 2)
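The two meta-rules above can be evaluated over the set of sub-rules that matched a message, roughly as follows; representing matched sub-rules as a set of rule-name strings is an assumption of this sketch.

```python
# Sketch of meta-rule evaluation: VIRUS_FOO is a Boolean combination of
# sub-rule hits; VIRUS_BAR fires when more than 2 sub-rules matched.
def virus_foo(hits: set) -> bool:
    return (("SUBJECT_FOO1" in hits) or ("SUBJECT_FOO2" in hits)) \
        and ("BODY_FOO" in hits)

def virus_bar(hits: set) -> bool:
    # each matched sub-rule contributes 1 to the sum
    score = sum(r in hits for r in ("SIZE_BAR", "SUBJECT_BAR", "BODY_BAR"))
    return score > 2
```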
  • In an embodiment, virus information logic 114 comprises logic that supports establishing and testing messages against rules that are based upon file attachment size, file name keywords, encrypted files, message URLs, and anti-virus logic version values. In an embodiment, rules relating to file attachment size are established based on discrete values rather than every possible size value; for example, rules can specify file size in 1K increments for files from 0-5K; in 5K increments for files that are sized from 5K to 1 MB; and in 1 MB increments.
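The discrete size increments described above can be sketched as a bucketing function: 1 KB steps up to 5 KB, 5 KB steps up to 1 MB, then 1 MB steps. Rounding a size up to its bucket boundary is an assumption of this sketch; the document specifies only the increment granularity.

```python
# Sketch of the discrete attachment-size buckets described above.
KB, MB = 1024, 1024 * 1024

def size_bucket(size: int) -> int:
    """Round an attachment size (bytes) up to its rule-bucket boundary."""
    if size <= 5 * KB:
        step = KB        # 1 KB increments for 0-5 KB
    elif size <= 1 * MB:
        step = 5 * KB    # 5 KB increments for 5 KB-1 MB
    else:
        step = MB        # 1 MB increments above 1 MB
    return -(-size // step) * step  # ceiling division
```

Rules then key on bucket values rather than exact sizes, keeping the rule space small while still distinguishing, e.g., a 20 KB ZIP from a 1 MB ZIP.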
  • File name keyword rules match on a message when a file attachment to the message has a name that includes one or more keywords in the rules. Encrypted file rules test whether or not a file attachment is encrypted. Such rules may be useful to quarantine messages that have encrypted containers, such as encrypted ZIP files, as attachments. Message URL rules match on a message when the message body contains one or more URLs specified in the rules. In an embodiment, a message is not scanned to identify URLs unless at least one message URL rule is installed in the system.
  • Rules based on anti-virus logic version values match a message when the messaging gateway 107 is running anti-virus logic having a matching version. For example, a rule may specify an AV signature version of “7.3.1” and would match on messages if a messaging gateway is running AV software with a signature file having that version number.
  • In an embodiment, a messaging gateway 107 automatically reduces a stored VSV for a message upon receiving a new rule that is more specific for a set of messages than a previously received rule. For example, assume that the TOC 708 initially distributes a rule that any message with a .ZIP file attachment is assigned VSV “3”. The TOC 708 then distributes a rule that .ZIP file attachments between 30 KB and 35 KB have VSV “3”. In response, messaging gateway 107 reduces the VSVs of all messages with ZIP attachments of different file sizes to a default VSV, e.g., “1”.
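The example above, where a narrower .ZIP size rule supersedes a broad .ZIP rule, can be sketched as follows. The message representation and the default VSV of 1 follow the example in the text; the function name is an assumption.

```python
# Sketch of reducing stored VSVs when a more specific rule arrives:
# messages matching the old broad rule (".zip", VSV 3) but falling
# outside the new narrow rule (30-35 KB) revert to the default VSV.
DEFAULT_VSV = 1

def apply_narrower_rule(quarantined, lo_kb=30, hi_kb=35, vsv=3):
    for msg in quarantined:
        if msg["ext"] == ".zip" and msg["vsv"] == vsv:
            if not (lo_kb * 1024 <= msg["size"] <= hi_kb * 1024):
                msg["vsv"] = DEFAULT_VSV
    return quarantined
```

A reduced VSV may then fall below the quarantine threshold, allowing early release of messages the narrower rule no longer implicates.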
  • In an embodiment, anti-spam logic 119 can learn to identify legitimate email specific to an organization based on outbound message characteristics such as recipient addresses, recipient domains and frequently used words or phrases. In this context, an outbound message is a message composed by a user account associated with computers 120A, 120B, 120C on private network 110 and directed through messaging gateway 107 to a recipient account that is logically outside the messaging gateway. Such a recipient account typically is on a computer that is connected to public network 102. Since all outbound messages pass through messaging gateway 107 before delivery into network 102, and such outbound messages are nearly never spam, the messaging gateway can scan such messages and automatically generate heuristics or rules that are associated with non-spam messages. In an embodiment, learning is accomplished by training a Bayesian filter in anti-spam logic 119 on the text of outbound messages, and then using the Bayesian filter to test inbound messages. If the trained Bayesian filter returns a high probability, then the inbound message probably is not spam according to the probability that the outbound messages are not spam.
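The outbound-learning idea above can be illustrated with a deliberately simplified sketch: tokens seen in outbound (presumed non-spam) mail raise the "legitimate" score of inbound mail that shares them. A production filter would be a full Bayesian classifier with per-token probabilities; this token-overlap score is only an approximation assumed for illustration.

```python
from collections import Counter

# Highly simplified sketch of learning from outbound mail. A real
# implementation would train a Bayesian filter; this version scores
# inbound text by its overlap with the outbound-token corpus.
class OutboundLearner:
    def __init__(self):
        self.ham_tokens = Counter()

    def train_outbound(self, text: str):
        """Record tokens from an outbound (non-spam) message."""
        self.ham_tokens.update(text.lower().split())

    def ham_score(self, text: str) -> float:
        """Fraction of inbound tokens also seen in outbound mail."""
        tokens = text.lower().split()
        if not tokens:
            return 0.0
        seen = sum(1 for t in tokens if self.ham_tokens[t] > 0)
        return seen / len(tokens)
```

A high score suggests the inbound message resembles the organization's own legitimate traffic and probably is not spam.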
  • In an embodiment, messaging gateway 107 periodically polls the rules-URLs server 707 to request any available rule updates. HTTPS may be used to deliver rule updates. In an embodiment, an administrator of messaging gateway 107 can access and examine rule updates by entering URLs of the rule updates and connecting to rules-URLs server 707 using a browser and a proxy server or fixed address. An administrator can then deliver the updates to selected messaging gateways 107 within a managed network. Receiving a rule update may comprise displaying a user notification in an interface of messaging gateway 107, or writing an entry in log file 113 stating that a rule update was received or that the messaging gateway successfully connected to the rules-URLs server 707.
  • 4.5 Communication with Service Provider
  • Customer messaging gateways 107 in FIG. 1 may implement a “phone home” or “SenderBase Network Participation” service in which the messaging gateways 107 can open connections to the service provider 700 and provide information about the messages that the messaging gateways 107 have processed, so that such information from the field can be added to the corpus and otherwise used at the service provider to improve scoring, outbreak detection, and heuristics.
  • In one embodiment, a tree data structure and processing algorithm are used to provide efficient data communication from messaging gateways 107 to the service provider.
  • Data generated at the service provider as part of anti-spam and anti-virus checks is sent to messaging gateways 107 in the field. To collect field data in return, the service provider creates metadata describing what data the service provider wants the messaging gateways 107 to return. The messaging gateways 107 collate data matching the metadata for a period of time, e.g., 5 minutes. The messaging gateways 107 then connect back to the service provider and provide field data according to the specifications of the metadata.
  • In this approach, defining and delivering different metadata to the messaging gateways 107 at different times enables the service provider to instruct the messaging gateways 107 in the field to deliver different data back to the service provider. Thus, the “phone home” service becomes extensible at the direction of the service provider. No update to software at the MGA is required.
  • In one implementation, the tree is implemented as a hash of hashes; nested hashes (or dictionaries, in Python) map to trees in the standard way. Certain nodes are named so that data returned from the MGA identifies which values are which. By naming nodes in the tree, rather than describing data solely by position, the MGA does not need to know what the service provider will do with the data. The MGA merely needs to locate the correct data by name and send a copy of the data back to the service provider. The only thing the MGA needs to know is the type of the data, that is, whether the data is a numeric value or a string. The MGA does not need to perform computations or transformations of the data to suit the service provider.
  • Constraints are placed on the structure of the data. The rule is that leaf nodes of the tree are always one of two things. If the target data is a number, then the leaf node is a counter: when the MGA sees the next message that comes through, it increments or decrements the counter for that node. If the target data is a string, then the leaf node is overwritten with that string value.
  • Using the counter approach, any form of data can be communicated. For example, if the MGA needs to communicate an average score value back to the service provider, rather than having the service provider inform the MGA that the service provider wants the MGA to return a particular value as an average score, two counters are used, one for the top value and one for the bottom value. The MGA need not know which is which. It simply counts the prescribed values and returns them. Logic at the service provider knows that the values received from the MGA are counters and need to be averaged and stored.
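The two-counter average described above can be sketched as follows; the tree layout and field names are illustrative. The point of the design is visible in the split: the MGA side only increments counters and overwrites strings, while only the provider side knows that the two counters form an average.

```python
# Sketch of the leaf-node constraint: numeric leaves are counters, string
# leaves are overwritten. An average travels as two counters (sum, count).
tree = {"scores": {"sum": 0.0, "count": 0}, "last_rule": ""}

def mga_record_score(t, score: float, rule_name: str):
    """MGA side: blind counting per the metadata; no interpretation."""
    t["scores"]["sum"] += score      # counter leaf: incremented
    t["scores"]["count"] += 1        # counter leaf: incremented
    t["last_rule"] = rule_name       # string leaf: overwritten

def provider_average(t) -> float:
    """Service-provider side: interprets the counters as an average."""
    return t["scores"]["sum"] / t["scores"]["count"]
```

Because interpretation lives entirely at the service provider, new values can be requested via new metadata without any MGA software update.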
  • Thus, this approach provides a method for transparent collation and transfer of data in which the device transferring the data does not know the specific use of the data, but can collate and provide the data. Further, the service provider can update its software to request additional values from messaging gateways 107, but no update to the MGA software is required. This enables the service provider to collect data without having to change hundreds or thousands of messaging gateways 107 in the field.
  • Example data that can be communicated from a messaging gateway 107 to service provider 700 includes X-header values containing obfuscated rules that matched on a particular message and resulted in a spam verdict.
  • 4.7 Outbound Whitelist Module
  • In the configuration of FIG. 3, customer messaging gateways 107 can be deployed in a customer network so that they receive and process both inbound and outbound message traffic. Therefore, a messaging gateway 107 can be configured with an outbound message whitelist. In this approach, the destination network addresses of designated messages leaving the messaging gateway 107 are placed in an outbound message whitelist with a weight value. The outbound message whitelist is consulted when an inbound message is received, and inbound messages having source network addresses in the outbound whitelist are delivered if the weight value is appropriate. That is, the weight value is considered in determining if the message should be delivered; the presence of an address in the outbound whitelist does not necessarily mandate delivery. The rationale is that a message received from an entity in the outbound whitelist should not be spam or threatening, because sending a message to that entity implicitly indicates trust. The outbound whitelist may be maintained at the service provider for distribution to other customer messaging gateways 107.
  • Determining weight values may be performed with several approaches. For example, a destination address can be processed using a reputation scoring system, and a weight value can be selected based on the resulting reputation score. Message identifiers can be tracked and compared to determine if an inbound message is actually replying to a prior message that was sent. A cache of message identifiers may be used. Thus, if the Reply-To header contains a message identifier of a message previously sent by the same messaging gateway 107, then it is likely that the reply is not spam or a threat.
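The outbound-whitelist check described above can be sketched as follows. The weight threshold and the storage layout are assumptions for illustration; the document states only that a weight value is recorded on outbound delivery and consulted, not blindly honored, on inbound receipt.

```python
# Sketch of the outbound whitelist: outbound destinations are recorded
# with a weight; an inbound source address is delivered only if its
# stored weight clears a (hypothetical) threshold.
OUTBOUND_WHITELIST = {}  # address -> weight

def record_outbound(dest_addr: str, weight: float):
    """Record a destination of an outbound message with its weight."""
    OUTBOUND_WHITELIST[dest_addr] = weight

def deliver_inbound(src_addr: str, threshold: float = 0.5) -> bool:
    """Presence alone does not mandate delivery; the weight must suffice."""
    weight = OUTBOUND_WHITELIST.get(src_addr)
    return weight is not None and weight >= threshold
```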
  • 5.0 Implementation Mechanisms—Hardware Overview
  • The approach for managing computer virus outbreaks described herein may be implemented in a variety of ways and the invention is not limited to any particular implementation. The approach may be integrated into an electronic mail system or a mail gateway appliance or other suitable device, or may be implemented as a stand-alone mechanism. Furthermore, the approach may be implemented in computer software, hardware, or a combination thereof.
  • FIG. 6 is a block diagram that illustrates a computer system 600 upon which an embodiment of the invention may be implemented. Computer system 600 includes a bus 602 or other communication mechanism for communicating information, and a processor 604 coupled with bus 602 for processing information. Computer system 600 also includes a main memory 606, such as a random access memory (“RAM”) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604. Computer system 600 further includes a read only memory (“ROM”) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604. A storage device 610, such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions.
  • Computer system 600 may be coupled via bus 602 to a display 612, such as a cathode ray tube (“CRT”), for displaying information to a computer user. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, trackball, stylus, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • The invention is related to the use of computer system 600 for applying heuristic tests to message content, managing a dynamic threat quarantine queue, and message scanning with early exit from parsing and scanning. According to one embodiment of the invention, applying heuristic tests to message content, managing a dynamic threat quarantine queue, and message scanning with early exit from parsing and scanning is provided by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another machine-readable medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
  • The term “machine-readable medium” as used herein refers to any medium that participates in providing instructions to processor 604 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610. Volatile media includes dynamic memory, such as main memory 606. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
  • Common forms of machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
  • Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 600 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector can receive the data carried in the infrared signal and appropriate circuitry can place the data on bus 602. Bus 602 carries the data to main memory 606, from which processor 604 retrieves and executes the instructions. The instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604.
  • Computer system 600 also includes a communication interface 618 coupled to bus 602. Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622. For example, communication interface 618 may be an integrated services digital network (“ISDN”) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 618 may be a local area network (“LAN”) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • Network link 620 typically provides data communication through one or more networks to other data devices. For example, network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (“ISP”) 626. ISP 626 in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet” 628. Local network 622 and Internet 628 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 620 and through communication interface 618, which carry the digital data to and from computer system 600, are exemplary forms of carrier waves transporting the information.
  • Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618. In the Internet example, a server 630 might transmit a requested code for an application program through Internet 628, ISP 626, local network 622 and communication interface 618. In accordance with the invention, one such downloaded application provides for applying heuristic tests to message content, managing a dynamic threat quarantine queue, and message scanning with early exit from parsing and scanning as described herein.
  • Processor 604 may execute the received code as it is received, and/or stored in storage device 610, or other non-volatile storage for later execution. In this manner, computer system 600 may obtain application code in the form of a carrier wave.
  • 6.0 Extensions and Alternatives
  • In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The invention includes other contexts and applications in which the mechanisms and processes described herein are available to other mechanisms, methods, programs, and processes.
  • In addition, in this description, certain process steps are set forth in a particular order, and alphabetic and alphanumeric labels are used to identify certain steps. Unless specifically stated in the disclosure, embodiments of the invention are not limited to any particular order of carrying out such steps. In particular, the labels are used merely for convenient identification of steps, and are not intended to imply, specify or require a particular order of carrying out such steps. Furthermore, other embodiments may use more or fewer steps than those discussed herein.

Claims (23)

1. An apparatus, comprising:
a network interface;
one or more processors coupled to the network interface;
logic coupled to the one or more processors which, when executed by the one or more processors, causes the one or more processors to perform:
receiving an electronic mail message having a destination address for a recipient account;
determining a threat score value for the message;
when the threat score value is greater than or equal to a specified threat threshold, storing the message in a quarantine queue without immediately delivering the message to the recipient account;
releasing the message from the quarantine queue in other than first-in-first-out order upon any of a plurality of quarantine exit criteria, wherein each quarantine exit criterion is associated with one or more exit actions; and
upon a particular exit criterion, selecting and performing the associated one or more exit actions.
2. The apparatus of claim 1 or 12, wherein the quarantine exit criteria comprise expiration of a message quarantine time limit, overflow of the quarantine queue, manual release from the quarantine queue, and receiving an update to one or more rules for determining the threat score value.
3. The apparatus of claim 1 or 12, wherein the exit actions comprise removing the file attachment from the message and delivering the message without the file attachment to the recipient account; deleting the message; modifying a subject line of the message; and adding an X-Header to the message.
4. The apparatus of claim 1 or 12, further comprising logic for performing different actions in response to (a) a user request to manually release the message from the message quarantine, (b) expiration of a timer associated with the message, and (c) the message quarantine becoming full.
5. The apparatus of claim 1 or 12, further comprising logic for (a) in response to a user request to manually release the message from the message quarantine, delivering the message to the recipient account without modification; and (b) in response to the message quarantine becoming full, removing the file attachment from the message and delivering the message without the file attachment to the recipient account.
6. The apparatus of claim 1 or 12, further comprising logic for assigning an expiration time value to the message, wherein the expiration time value differs based upon results of applying heuristic tests to the message content.
7. The apparatus of claim 1 or 12, wherein the quarantine exit criteria comprise receiving an update to one or more rules for determining the threat score value, and wherein the exit actions comprise again determining a threat score value for the message based upon the updated rules.
8. The apparatus of claim 1 or 12, wherein the exit actions comprise sending a copy of the message to a second quarantine queue on a host other than the apparatus.
9. The apparatus of claim 1 or 12, wherein one or more different sets of exit actions are associated with different quarantine exit criteria.
10. The apparatus of claim 1 or 12, further comprising logic for determining when the threat score value is greater than or equal to a specified reporting threshold, and in response thereto, creating and sending an alert message to another host.
11. The apparatus of claim 1 or 12, wherein the threat comprises any one of a virus, spam, or a phishing attack.
12. An apparatus, comprising:
means for receiving an electronic mail message having a destination address for a recipient account;
means for determining a threat score value for the message;
means for storing the message in a quarantine queue without immediately delivering the message to the recipient account when the threat score value is greater than or equal to a specified threat threshold;
means for releasing the message from the quarantine queue in other than first-in-first-out order upon any of a plurality of quarantine exit criteria, wherein each quarantine exit criterion is associated with one or more exit actions; and
means for selecting and performing the associated one or more exit actions upon a particular exit criterion.
13. A method, comprising:
receiving an electronic mail message having a destination address for a recipient account;
determining a threat score value for the message;
when the threat score value is greater than or equal to a specified threat threshold, storing the message in a quarantine queue without immediately delivering the message to the recipient account;
releasing the message from the quarantine queue in other than first-in-first-out order upon any of a plurality of quarantine exit criteria, wherein each quarantine exit criterion is associated with one or more exit actions; and
upon a particular exit criterion, selecting and performing the associated one or more exit actions.
14. The method of claim 13, wherein the quarantine exit criteria comprise expiration of a message quarantine time limit, overflow of the quarantine queue, manual release from the quarantine queue, and receiving an update to one or more rules for determining the threat score value.
15. The method of claim 13, wherein the exit actions comprise removing the file attachment from the message and delivering the message without the file attachment to the recipient account; deleting the message; modifying a subject line of the message; and adding an X-Header to the message.
16. The method of claim 13, further comprising logic for performing different actions in response to (a) a user request to manually release the message from the message quarantine, (b) expiration of a timer associated with the message, and (c) the message quarantine becoming full.
17. The method of claim 13, further comprising: (a) in response to a user request to manually release the message from the quarantine queue, delivering the message to the recipient account without modification; and (b) in response to the quarantine queue becoming full, removing a file attachment from the message and delivering the message without the file attachment to the recipient account.
18. The method of claim 13, further comprising assigning an expiration time value to the message, wherein the expiration time value differs based upon results of applying heuristic tests to the message content.
19. The method of claim 13, wherein the quarantine exit criteria comprise receiving an update to one or more rules for determining the threat score value, and wherein the exit actions comprise again determining a threat score value for the message based upon the updated rules.
20. The method of claim 13, wherein the exit actions comprise sending a copy of the message to a second quarantine queue on a host other than the host performing the method.
21. The method of claim 13, wherein one or more different sets of exit actions are associated with different quarantine exit criteria.
22. The method of claim 13, further comprising determining when the threat score value is greater than or equal to a specified reporting threshold, and in response thereto, creating and sending an alert message to another host.
23. The method of claim 13, wherein the threat comprises any one of a virus, spam, or a phishing attack.
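The quarantine flow recited in claims 13 through 18 can be sketched in code. The following is an illustrative reading of the claims, not the patent's implementation: the class and method names, the threshold value, the queue capacity, and the time-to-live values are all assumptions. A min-heap keyed on expiration time makes the release order depend on each message's assigned expiry rather than arrival order, which is one way to realize the claimed "other than first-in-first-out" release, and each exit criterion (overflow, timer expiration, manual release) dispatches to its own exit action:

```python
import heapq

THREAT_THRESHOLD = 75   # assumed value for the "specified threat threshold"
QUEUE_CAPACITY = 2      # small assumed capacity so overflow is easy to demonstrate

class Quarantine:
    def __init__(self):
        # Heap entries are (expiry, sequence, message); the earliest-expiring
        # message sits at the root, so exits are not first-in-first-out.
        self._heap = []
        self._seq = 0

    def receive(self, message, threat_score, now):
        """Return the disposition of the incoming message, plus any
        (message, action) pairs released by a queue overflow."""
        released = []
        if threat_score < THREAT_THRESHOLD:
            return "delivered", released        # below threshold: deliver at once
        # Claim 18: the expiration time value differs based on heuristic results.
        ttl = 3600 if message.get("suspicious") else 86400
        heapq.heappush(self._heap, (now + ttl, self._seq, message))
        self._seq += 1
        while len(self._heap) > QUEUE_CAPACITY:
            # Exit criterion "queue overflow" -> exit action per claim 17(b):
            # strip the attachment and deliver the remainder of the message.
            _, _, victim = heapq.heappop(self._heap)
            victim.pop("attachment", None)
            released.append((victim, "delivered_without_attachment"))
        return "quarantined", released

    def expire(self, now):
        """Exit criterion "timer expired" -> exit action: delete the message."""
        expired = []
        while self._heap and self._heap[0][0] <= now:
            _, _, msg = heapq.heappop(self._heap)
            expired.append((msg, "deleted"))
        return expired

    def manual_release(self):
        """Claim 17(a): manual release delivers the message unmodified."""
        if not self._heap:
            return None
        _, _, msg = heapq.heappop(self._heap)
        return msg, "delivered"
```

Under this sketch, a message flagged by a heuristic test gets a short time-to-live and therefore exits (and is sacrificed on overflow) before older but less suspicious messages, matching the claim-16/17 pattern of different actions for different exit criteria.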
US11/635,921 2005-05-05 2006-12-07 Determining whether to quarantine a message Abandoned US20070220607A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/635,921 US20070220607A1 (en) 2005-05-05 2006-12-07 Determining whether to quarantine a message

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US67839105P 2005-05-05 2005-05-05
US11/418,812 US7854007B2 (en) 2005-05-05 2006-05-05 Identifying threats in electronic messages
US11/635,921 US20070220607A1 (en) 2005-05-05 2006-12-07 Determining whether to quarantine a message

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/418,812 Continuation US7854007B2 (en) 2005-05-05 2006-05-05 Identifying threats in electronic messages

Publications (1)

Publication Number Publication Date
US20070220607A1 true US20070220607A1 (en) 2007-09-20

Family

ID=37308748

Family Applications (6)

Application Number Title Priority Date Filing Date
US11/418,823 Active 2026-11-02 US7836133B2 (en) 2005-05-05 2006-05-05 Detecting unwanted electronic mail messages based on probabilistic analysis of referenced resources
US11/429,393 Expired - Fee Related US7877493B2 (en) 2005-05-05 2006-05-05 Method of validating requests for sender reputation information
US11/418,812 Active 2029-07-02 US7854007B2 (en) 2005-05-05 2006-05-05 Identifying threats in electronic messages
US11/429,474 Active US7548544B2 (en) 2005-05-05 2006-05-05 Method of determining network addresses of senders of electronic mail messages
US11/635,921 Abandoned US20070220607A1 (en) 2005-05-05 2006-12-07 Determining whether to quarantine a message
US11/636,150 Expired - Fee Related US7712136B2 (en) 2005-05-05 2006-12-07 Controlling a message quarantine

Family Applications Before (4)

Application Number Title Priority Date Filing Date
US11/418,823 Active 2026-11-02 US7836133B2 (en) 2005-05-05 2006-05-05 Detecting unwanted electronic mail messages based on probabilistic analysis of referenced resources
US11/429,393 Expired - Fee Related US7877493B2 (en) 2005-05-05 2006-05-05 Method of validating requests for sender reputation information
US11/418,812 Active 2029-07-02 US7854007B2 (en) 2005-05-05 2006-05-05 Identifying threats in electronic messages
US11/429,474 Active US7548544B2 (en) 2005-05-05 2006-05-05 Method of determining network addresses of senders of electronic mail messages

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/636,150 Expired - Fee Related US7712136B2 (en) 2005-05-05 2006-12-07 Controlling a message quarantine

Country Status (6)

Country Link
US (6) US7836133B2 (en)
EP (2) EP1877904B1 (en)
JP (2) JP4880675B2 (en)
CN (2) CN101495969B (en)
CA (2) CA2606998C (en)
WO (4) WO2006122055A2 (en)

Cited By (239)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020116463A1 (en) * 2001-02-20 2002-08-22 Hart Matthew Thomas Unwanted e-mail filtering
US20040123242A1 (en) * 2002-12-11 2004-06-24 Mckibben Michael T. Context instantiated application protocol
US20060200528A1 (en) * 2005-01-25 2006-09-07 Krishna Pathiyal Method and system for processing data messages
US20060277259A1 (en) * 2005-06-07 2006-12-07 Microsoft Corporation Distributed sender reputations
US20070061402A1 (en) * 2005-09-15 2007-03-15 Microsoft Corporation Multipurpose internet mail extension (MIME) analysis
US20070233787A1 (en) * 2006-04-03 2007-10-04 Pagan William G Apparatus and method for filtering and selectively inspecting e-mail
US20070260649A1 (en) * 2006-05-02 2007-11-08 International Business Machines Corporation Determining whether predefined data controlled by a server is replicated to a client machine
US20080005315A1 (en) * 2006-06-29 2008-01-03 Po-Ching Lin Apparatus, system and method for stream-based data filtering
US20080320095A1 (en) * 2007-06-25 2008-12-25 Microsoft Corporation Determination Of Participation In A Malicious Software Campaign
US20080320088A1 (en) * 2007-06-19 2008-12-25 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Helping valuable message content pass apparent message filtering
US20090007102A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation Dynamically Computing Reputation Scores for Objects
US20090030993A1 (en) * 2007-07-26 2009-01-29 Mxtoolbox Simultaneous synchronous split-domain email routing with conflict resolution
US20090055818A1 (en) * 2007-08-22 2009-02-26 International Business Machines Corporation Method for supporting, software support agent and computer system
US20090063585A1 (en) * 2007-08-31 2009-03-05 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Using party classifiability to inform message versioning
US20090063632A1 (en) * 2007-08-31 2009-03-05 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Layering prospective activity information
US20090063631A1 (en) * 2007-08-31 2009-03-05 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Message-reply-dependent update decisions
US20090083758A1 (en) * 2007-09-20 2009-03-26 Research In Motion Limited System and method for delivering variable size messages based on spam probability
US20090150497A1 (en) * 2007-12-06 2009-06-11 Mcafee Randolph Preston Electronic mail message handling and presentation methods and systems
US20090204613A1 (en) * 2008-02-13 2009-08-13 Yasuyuki Muroi Pattern detection apparatus, pattern detection system, pattern detection program and pattern detection method
US20090241173A1 (en) * 2008-03-19 2009-09-24 Websense, Inc. Method and system for protection against information stealing software
US20090248814A1 (en) * 2008-04-01 2009-10-01 Mcafee, Inc. Increasing spam scanning accuracy by rescanning with updated detection rules
US20090257434A1 (en) * 2006-12-29 2009-10-15 Huawei Technologies Co., Ltd. Packet access control method, forwarding engine, and communication apparatus
US20090265317A1 (en) * 2008-04-21 2009-10-22 Microsoft Corporation Classifying search query traffic
US20090282119A1 (en) * 2007-01-18 2009-11-12 Roke Manor Research Limited Method of filtering sections of a data stream
US20090282075A1 (en) * 2008-05-06 2009-11-12 Dawson Christopher J System and method for identifying and blocking avatar-based unsolicited advertising in a virtual universe
US20090328221A1 (en) * 2008-06-30 2009-12-31 Microsoft Corporation Malware detention for suspected malware
US20100023871A1 (en) * 2008-07-25 2010-01-28 Zumobi, Inc. Methods and Systems Providing an Interactive Social Ticker
US20100049848A1 (en) * 2007-09-24 2010-02-25 Barracuda Networks, Inc Distributed frequency data collection via indicator embedded with dns request
EP2169897A1 (en) * 2008-09-25 2010-03-31 Avira GmbH Computer-based method for the prioritization of potential malware sample messages
US20100162403A1 (en) * 2008-12-23 2010-06-24 International Business Machines Corporation System and method in a virtual universe for identifying spam avatars based upon avatar multimedia characteristics
US20100251362A1 (en) * 2008-06-27 2010-09-30 Microsoft Corporation Dynamic spam view settings
US20100287182A1 (en) * 2009-05-08 2010-11-11 Raytheon Company Method and System for Adjudicating Text Against a Defined Policy
US20100306845A1 (en) * 2009-05-26 2010-12-02 Microsoft Corporation Managing potentially phishing messages in a non-web mail client context
US20100306853A1 (en) * 2009-05-28 2010-12-02 International Business Machines Corporation Providing notification of spam avatars
US7854006B1 (en) 2006-03-31 2010-12-14 Emc Corporation Differential virus scan
US20110030069A1 (en) * 2007-12-21 2011-02-03 General Instrument Corporation System and method for preventing unauthorised use of digital media
US20110047192A1 (en) * 2009-03-19 2011-02-24 Hitachi, Ltd. Data processing system, data processing method, and program
US20110067101A1 (en) * 2009-09-15 2011-03-17 Symantec Corporation Individualized Time-to-Live for Reputation Scores of Computer Files
US20110153035A1 (en) * 2009-12-22 2011-06-23 Caterpillar Inc. Sensor Failure Detection System And Method
US20110149736A1 (en) * 2005-04-27 2011-06-23 Extreme Networks, Inc. Integrated methods of performing network switch functions
US20110179487A1 (en) * 2010-01-20 2011-07-21 Martin Lee Method and system for using spam e-mail honeypots to identify potential malware containing e-mails
US20110219450A1 (en) * 2010-03-08 2011-09-08 Raytheon Company System And Method For Malware Detection
US20110247072A1 (en) * 2008-11-03 2011-10-06 Stuart Gresley Staniford Systems and Methods for Detecting Malicious PDF Network Content
US8087084B1 (en) 2006-06-28 2011-12-27 Emc Corporation Security for scanning objects
US8122507B1 (en) 2006-06-28 2012-02-21 Emc Corporation Efficient scanning of objects
US8205261B1 (en) 2006-03-31 2012-06-19 Emc Corporation Incremental virus scan
US8214438B2 (en) 2004-03-01 2012-07-03 Microsoft Corporation (More) advanced spam detection features
US20120324579A1 (en) * 2011-06-16 2012-12-20 Microsoft Corporation Cloud malware false positive recovery
US20130097666A1 (en) * 2010-07-13 2013-04-18 Huawei Technologies Co., Ltd. Proxy gateway anti-virus method, pre-classifier, and proxy gateway
US8443445B1 (en) * 2006-03-31 2013-05-14 Emc Corporation Risk-aware scanning of objects
US8447722B1 (en) 2009-03-25 2013-05-21 Mcafee, Inc. System and method for data mining and security policy management
US8463800B2 (en) 2005-10-19 2013-06-11 Mcafee, Inc. Attributes of captured objects in a capture system
US20130159985A1 (en) * 2011-12-18 2013-06-20 International Business Machines Corporation Determining optimal update frequency for software application updates
US8473442B1 (en) 2009-02-25 2013-06-25 Mcafee, Inc. System and method for intelligent state management
US8504537B2 (en) 2006-03-24 2013-08-06 Mcafee, Inc. Signature distribution in a document registration system
US20130246371A1 (en) * 2009-01-13 2013-09-19 Mcafee, Inc. System and Method for Concept Building
US8548170B2 (en) 2003-12-10 2013-10-01 Mcafee, Inc. Document de-registration
US8554907B1 (en) * 2011-02-15 2013-10-08 Trend Micro, Inc. Reputation prediction of IP addresses
US8554774B2 (en) 2005-08-31 2013-10-08 Mcafee, Inc. System and method for word indexing in a capture system and querying thereof
US8560534B2 (en) 2004-08-23 2013-10-15 Mcafee, Inc. Database for a capture system
US8595830B1 (en) 2010-07-27 2013-11-26 Symantec Corporation Method and system for detecting malware containing E-mails based on inconsistencies in public sector “From” addresses and a sending IP address
US8601537B2 (en) 2008-07-10 2013-12-03 Mcafee, Inc. System and method for data mining and security policy management
US8601160B1 (en) * 2006-02-09 2013-12-03 Mcafee, Inc. System, method and computer program product for gathering information relating to electronic content utilizing a DNS server
US8615785B2 (en) 2005-12-30 2013-12-24 Extreme Networks, Inc. Network threat detection and mitigation
US8627476B1 (en) * 2010-07-05 2014-01-07 Symantec Corporation Altering application behavior based on content provider reputation
US8656039B2 (en) 2003-12-10 2014-02-18 Mcafee, Inc. Rule parser
US8667121B2 (en) 2009-03-25 2014-03-04 Mcafee, Inc. System and method for managing data and policies
US8683035B2 (en) 2006-05-22 2014-03-25 Mcafee, Inc. Attributes of captured objects in a capture system
US8700561B2 (en) 2011-12-27 2014-04-15 Mcafee, Inc. System and method for providing data protection workflows in a network environment
US8707008B2 (en) 2004-08-24 2014-04-22 Mcafee, Inc. File system for a capture system
US8706709B2 (en) 2009-01-15 2014-04-22 Mcafee, Inc. System and method for intelligent term grouping
US8730955B2 (en) 2005-08-12 2014-05-20 Mcafee, Inc. High speed packet capture
US8762386B2 (en) 2003-12-10 2014-06-24 Mcafee, Inc. Method and apparatus for data capture and analysis system
US8806615B2 (en) 2010-11-04 2014-08-12 Mcafee, Inc. System and method for protecting specified data combinations
US8856165B1 (en) * 2010-03-26 2014-10-07 Google Inc. Ranking of users who report abuse
US20140324985A1 (en) * 2013-04-30 2014-10-30 Cloudmark, Inc. Apparatus and Method for Augmenting a Message to Facilitate Spam Identification
WO2014210289A1 (en) * 2013-06-28 2014-12-31 Symantec Corporation Techniques for detecting a security vulnerability
US8931043B2 (en) 2012-04-10 2015-01-06 Mcafee Inc. System and method for determining and using local reputations of users and hosts to protect information in a network environment
US8938773B2 (en) 2007-02-02 2015-01-20 Websense, Inc. System and method for adding context to prevent data leakage over a computer network
US8959634B2 (en) 2008-03-19 2015-02-17 Websense, Inc. Method and system for protection against information stealing software
US8984133B2 (en) 2007-06-19 2015-03-17 The Invention Science Fund I, Llc Providing treatment-indicative feedback dependent on putative content treatment
US8990939B2 (en) 2008-11-03 2015-03-24 Fireeye, Inc. Systems and methods for scheduling analysis of network content for malware
US8990944B1 (en) 2013-02-23 2015-03-24 Fireeye, Inc. Systems and methods for automatically detecting backdoors
US9009820B1 (en) 2010-03-08 2015-04-14 Raytheon Company System and method for malware detection using multiple techniques
US9009823B1 (en) 2013-02-23 2015-04-14 Fireeye, Inc. Framework for efficient security coverage of mobile software applications installed on mobile devices
US9015842B2 (en) 2008-03-19 2015-04-21 Websense, Inc. Method and system for protection against information stealing software
US9106680B2 (en) 2011-06-27 2015-08-11 Mcafee, Inc. System and method for protocol fingerprinting and reputation correlation
US9122877B2 (en) * 2011-03-21 2015-09-01 Mcafee, Inc. System and method for malware and network reputation correlation
US9130972B2 (en) 2009-05-26 2015-09-08 Websense, Inc. Systems and methods for efficient detection of fingerprinted data and information
US9176843B1 (en) 2013-02-23 2015-11-03 Fireeye, Inc. Framework for efficient security coverage of mobile software applications
US9208291B1 (en) * 2008-04-30 2015-12-08 Netapp, Inc. Integrating anti-virus in a clustered storage system
US9223972B1 (en) 2014-03-31 2015-12-29 Fireeye, Inc. Dynamically remote tuning of a malware content detection system
US9241259B2 (en) 2012-11-30 2016-01-19 Websense, Inc. Method and apparatus for managing the transfer of sensitive information to mobile devices
US9253154B2 (en) 2008-08-12 2016-02-02 Mcafee, Inc. Configuration management for a capture/registration system
US9262635B2 (en) 2014-02-05 2016-02-16 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US9282109B1 (en) 2004-04-01 2016-03-08 Fireeye, Inc. System and method for analyzing packets
US9294501B2 (en) 2013-09-30 2016-03-22 Fireeye, Inc. Fuzzy hash of behavioral results
US9300686B2 (en) 2013-06-28 2016-03-29 Fireeye, Inc. System and method for detecting malicious links in electronic messages
US9306974B1 (en) 2013-12-26 2016-04-05 Fireeye, Inc. System, apparatus and method for automatically verifying exploits within suspect objects and highlighting the display information associated with the verified exploits
US9306960B1 (en) 2004-04-01 2016-04-05 Fireeye, Inc. Systems and methods for unauthorized activity defense
US9311479B1 (en) 2013-03-14 2016-04-12 Fireeye, Inc. Correlation and consolidation of analytic data for holistic view of a malware attack
US9355247B1 (en) 2013-03-13 2016-05-31 Fireeye, Inc. File extraction from memory dump for malicious content analysis
US9361130B2 (en) 2010-05-03 2016-06-07 Apple Inc. Systems, methods, and computer program products providing an integrated user interface for reading content
US9363280B1 (en) 2014-08-22 2016-06-07 Fireeye, Inc. System and method of detecting delivery of malware using cross-customer data
US9367681B1 (en) 2013-02-23 2016-06-14 Fireeye, Inc. Framework for efficient security coverage of mobile software applications using symbolic execution to reach regions of interest within an application
US9374242B2 (en) 2007-11-08 2016-06-21 Invention Science Fund I, Llc Using evaluations of tentative message content
US9398028B1 (en) 2014-06-26 2016-07-19 Fireeye, Inc. System, device and method for detecting a malicious attack based on communications between remotely hosted virtual machines and malicious web servers
US20160241575A1 (en) * 2015-02-12 2016-08-18 Fujitsu Limited Information processing system and information processing method
US9432389B1 (en) 2014-03-31 2016-08-30 Fireeye, Inc. System, apparatus and method for detecting a malicious attack based on static analysis of a multi-flow object
US9430646B1 (en) 2013-03-14 2016-08-30 Fireeye, Inc. Distributed systems and methods for automatically detecting unknown bots and botnets
US9438623B1 (en) 2014-06-06 2016-09-06 Fireeye, Inc. Computer exploit detection using heap spray pattern matching
US9438613B1 (en) 2015-03-30 2016-09-06 Fireeye, Inc. Dynamic content activation for automated analysis of embedded objects
US9483644B1 (en) 2015-03-31 2016-11-01 Fireeye, Inc. Methods for detecting file altering malware in VM based analysis
US9495180B2 (en) 2013-05-10 2016-11-15 Fireeye, Inc. Optimized resource allocation for virtual machines within a malware content detection system
US9578052B2 (en) 2013-10-24 2017-02-21 Mcafee, Inc. Agent assisted malicious application blocking in a network environment
US9591015B1 (en) 2014-03-28 2017-03-07 Fireeye, Inc. System and method for offloading packet processing and static analysis operations
US9594904B1 (en) 2015-04-23 2017-03-14 Fireeye, Inc. Detecting malware based on reflection
US9594912B1 (en) 2014-06-06 2017-03-14 Fireeye, Inc. Return-oriented programming detection
US20170078234A1 (en) * 2015-09-16 2017-03-16 Litera Technologies, LLC. Systems and methods for detecting, reporting and cleaning metadata from inbound attachments
US9628507B2 (en) 2013-09-30 2017-04-18 Fireeye, Inc. Advanced persistent threat (APT) detection center
US9628498B1 (en) 2004-04-01 2017-04-18 Fireeye, Inc. System and method for bot detection
US9626509B1 (en) 2013-03-13 2017-04-18 Fireeye, Inc. Malicious content analysis with multi-version application support within single operating environment
US9661018B1 (en) 2004-04-01 2017-05-23 Fireeye, Inc. System and method for detecting anomalous behaviors using a virtual machine environment
US9690606B1 (en) 2015-03-25 2017-06-27 Fireeye, Inc. Selective system call monitoring
US9690933B1 (en) 2014-12-22 2017-06-27 Fireeye, Inc. Framework for classifying an object as malicious with machine learning for deploying updated predictive models
US9690936B1 (en) 2013-09-30 2017-06-27 Fireeye, Inc. Multistage system and method for analyzing obfuscated content for malware
US9704177B2 (en) 2008-12-23 2017-07-11 International Business Machines Corporation Identifying spam avatars in a virtual universe (VU) based upon turing tests
US9729565B2 (en) * 2014-09-17 2017-08-08 Cisco Technology, Inc. Provisional bot activity recognition
US9736179B2 (en) 2013-09-30 2017-08-15 Fireeye, Inc. System, apparatus and method for using malware analysis results to drive adaptive instrumentation of virtual machines to improve exploit detection
US9747446B1 (en) 2013-12-26 2017-08-29 Fireeye, Inc. System and method for run-time object classification
US9773112B1 (en) 2014-09-29 2017-09-26 Fireeye, Inc. Exploit detection of malware and malware families
US9824216B1 (en) 2015-12-31 2017-11-21 Fireeye, Inc. Susceptible environment detection system
US9825989B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Cyber attack early warning system
US9825976B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Detection and classification of exploit kits
US9838416B1 (en) 2004-06-14 2017-12-05 Fireeye, Inc. System and method of detecting malicious content
US9838417B1 (en) 2014-12-30 2017-12-05 Fireeye, Inc. Intelligent context aware user interaction for malware detection
US9910988B1 (en) 2013-09-30 2018-03-06 Fireeye, Inc. Malware analysis in accordance with an analysis plan
US9921978B1 (en) 2013-11-08 2018-03-20 Fireeye, Inc. System and method for enhanced security of storage devices
US9973531B1 (en) 2014-06-06 2018-05-15 Fireeye, Inc. Shellcode detection
US10027689B1 (en) 2014-09-29 2018-07-17 Fireeye, Inc. Interactive infection visualization for improved exploit detection and signature generation for malware and malware families
US10027690B2 (en) 2004-04-01 2018-07-17 Fireeye, Inc. Electronic message analysis for malware detection
US10033747B1 (en) 2015-09-29 2018-07-24 Fireeye, Inc. System and method for detecting interpreter-based exploit attacks
US10050998B1 (en) 2015-12-30 2018-08-14 Fireeye, Inc. Malicious message analysis system
US10068091B1 (en) 2004-04-01 2018-09-04 Fireeye, Inc. System and method for malware containment
US10075455B2 (en) 2014-12-26 2018-09-11 Fireeye, Inc. Zero-day rotating guest image profile
US10084813B2 (en) 2014-06-24 2018-09-25 Fireeye, Inc. Intrusion prevention and remedy system
US10133866B1 (en) 2015-12-30 2018-11-20 Fireeye, Inc. System and method for triggering analysis of an object for malware in response to modification of that object
US10133863B2 (en) 2013-06-24 2018-11-20 Fireeye, Inc. Zero-day discovery system
US10148693B2 (en) 2015-03-25 2018-12-04 Fireeye, Inc. Exploit detection system
US10165000B1 (en) 2004-04-01 2018-12-25 Fireeye, Inc. Systems and methods for malware attack prevention by intercepting flows of information
US10169585B1 (en) 2016-06-22 2019-01-01 Fireeye, Inc. System and methods for advanced malware detection through placement of transition events
US10176321B2 (en) 2015-09-22 2019-01-08 Fireeye, Inc. Leveraging behavior-based rules for malware family classification
US10210329B1 (en) 2015-09-30 2019-02-19 Fireeye, Inc. Method to detect application execution hijacking using memory protection
US10242185B1 (en) 2014-03-21 2019-03-26 Fireeye, Inc. Dynamic guest image creation and rollback
US10284575B2 (en) 2015-11-10 2019-05-07 Fireeye, Inc. Launcher for setting analysis environment variations for malware detection
US10284574B1 (en) 2004-04-01 2019-05-07 Fireeye, Inc. System and method for threat detection and identification
US10341365B1 (en) 2015-12-30 2019-07-02 Fireeye, Inc. Methods and system for hiding transition events for malware detection
US10417031B2 (en) 2015-03-31 2019-09-17 Fireeye, Inc. Selective virtualization for security threat detection
US10432649B1 (en) 2014-03-20 2019-10-01 Fireeye, Inc. System and method for classifying an object based on an aggregated behavior results
US10447728B1 (en) 2015-12-10 2019-10-15 Fireeye, Inc. Technique for protecting guest processes using a layered virtualization architecture
US10454950B1 (en) 2015-06-30 2019-10-22 Fireeye, Inc. Centralized aggregation technique for detecting lateral movement of stealthy cyber-attacks
US10462173B1 (en) 2016-06-30 2019-10-29 Fireeye, Inc. Malware detection verification and enhancement by coordinating endpoint and malware detection systems
US10476906B1 (en) 2016-03-25 2019-11-12 Fireeye, Inc. System and method for managing formation and modification of a cluster within a malware detection system
US10474813B1 (en) 2015-03-31 2019-11-12 Fireeye, Inc. Code injection technique for remediation at an endpoint of a network
US10474820B2 (en) 2014-06-17 2019-11-12 Hewlett Packard Enterprise Development Lp DNS based infection scores
US10491627B1 (en) 2016-09-29 2019-11-26 Fireeye, Inc. Advanced malware detection using similarity analysis
US10503904B1 (en) 2017-06-29 2019-12-10 Fireeye, Inc. Ransomware detection and mitigation
US10515214B1 (en) 2013-09-30 2019-12-24 Fireeye, Inc. System and method for classifying malware within content created during analysis of a specimen
US10523609B1 (en) 2016-12-27 2019-12-31 Fireeye, Inc. Multi-vector malware detection and analysis
US10528726B1 (en) 2014-12-29 2020-01-07 Fireeye, Inc. Microvisor-based malware detection appliance architecture
US10554507B1 (en) 2017-03-30 2020-02-04 Fireeye, Inc. Multi-level control for enhanced resource and object evaluation management of malware detection system
US10552610B1 (en) 2016-12-22 2020-02-04 Fireeye, Inc. Adaptive virtual machine snapshot update framework for malware behavioral analysis
US10565378B1 (en) 2015-12-30 2020-02-18 Fireeye, Inc. Exploit of privilege detection framework
US10572665B2 (en) 2012-12-28 2020-02-25 Fireeye, Inc. System and method to create a number of breakpoints in a virtual machine via virtual machine trapping events
US10581874B1 (en) 2015-12-31 2020-03-03 Fireeye, Inc. Malware detection system with contextual analysis
US10581879B1 (en) 2016-12-22 2020-03-03 Fireeye, Inc. Enhanced malware detection for generated objects
US10587647B1 (en) 2016-11-22 2020-03-10 Fireeye, Inc. Technique for malware detection capability comparison of network security devices
US10592678B1 (en) 2016-09-09 2020-03-17 Fireeye, Inc. Secure communications between peers using a verified virtual trusted platform module
US10601865B1 (en) 2015-09-30 2020-03-24 Fireeye, Inc. Detection of credential spearphishing attacks using email analysis
US10601863B1 (en) 2016-03-25 2020-03-24 Fireeye, Inc. System and method for managing sensor enrollment
US10601848B1 (en) 2017-06-29 2020-03-24 Fireeye, Inc. Cyber-security system and method for weak indicator detection and correlation to generate strong indicators
US10637880B1 (en) 2013-05-13 2020-04-28 Fireeye, Inc. Classifying sets of malicious indicators for detecting command and control communications associated with malware
US10642753B1 (en) 2015-06-30 2020-05-05 Fireeye, Inc. System and method for protecting a software component running in virtual machine using a virtualization layer
US10649970B1 (en) * 2013-03-14 2020-05-12 Invincea, Inc. Methods and apparatus for detection of functionality
US10671721B1 (en) 2016-03-25 2020-06-02 Fireeye, Inc. Timeout management services
US10671726B1 (en) 2014-09-22 2020-06-02 Fireeye Inc. System and method for malware analysis using thread-level event monitoring
US10701091B1 (en) 2013-03-15 2020-06-30 Fireeye, Inc. System and method for verifying a cyberthreat
US10706149B1 (en) 2015-09-30 2020-07-07 Fireeye, Inc. Detecting delayed activation malware using a primary controller and plural time controllers
US10713358B2 (en) 2013-03-15 2020-07-14 Fireeye, Inc. System and method to extract and utilize disassembly features to classify software intent
US10715542B1 (en) 2015-08-14 2020-07-14 Fireeye, Inc. Mobile application risk analysis
US10728263B1 (en) 2015-04-13 2020-07-28 Fireeye, Inc. Analytic-based security monitoring system and method
US10726127B1 (en) 2015-06-30 2020-07-28 Fireeye, Inc. System and method for protecting a software component running in a virtual machine through virtual interrupts by the virtualization layer
US10740456B1 (en) 2014-01-16 2020-08-11 Fireeye, Inc. Threat-aware architecture
US10747872B1 (en) 2017-09-27 2020-08-18 Fireeye, Inc. System and method for preventing malware evasion
US10785255B1 (en) 2016-03-25 2020-09-22 Fireeye, Inc. Cluster configuration within a scalable malware detection system
US10791138B1 (en) 2017-03-30 2020-09-29 Fireeye, Inc. Subscription-based malware detection
US10795991B1 (en) 2016-11-08 2020-10-06 Fireeye, Inc. Enterprise search
US10798112B2 (en) 2017-03-30 2020-10-06 Fireeye, Inc. Attribute-controlled malware detection
US10805346B2 (en) 2017-10-01 2020-10-13 Fireeye, Inc. Phishing attack detection
US10805340B1 (en) 2014-06-26 2020-10-13 Fireeye, Inc. Infection vector and malware tracking with an interactive user display
US10817606B1 (en) 2015-09-30 2020-10-27 Fireeye, Inc. Detecting delayed activation malware using a run-time monitoring agent and time-dilation logic
US10826931B1 (en) 2018-03-29 2020-11-03 Fireeye, Inc. System and method for predicting and mitigating cybersecurity system misconfigurations
US10846117B1 (en) 2015-12-10 2020-11-24 Fireeye, Inc. Technique for establishing secure communication between host and guest processes of a virtualization architecture
US10848521B1 (en) 2013-03-13 2020-11-24 Fireeye, Inc. Malicious content analysis using simulated user interaction without user involvement
US10855700B1 (en) 2017-06-29 2020-12-01 Fireeye, Inc. Post-intrusion detection of cyber-attacks during lateral movement within networks
US10893059B1 (en) 2016-03-31 2021-01-12 Fireeye, Inc. Verification and enhancement using detection systems located at the network periphery and endpoint devices
US10893068B1 (en) 2017-06-30 2021-01-12 Fireeye, Inc. Ransomware file modification prevention technique
US10904286B1 (en) 2017-03-24 2021-01-26 Fireeye, Inc. Detection of phishing attacks using similarity analysis
US10902119B1 (en) 2017-03-30 2021-01-26 Fireeye, Inc. Data extraction system for malware analysis
US10929266B1 (en) 2013-02-23 2021-02-23 Fireeye, Inc. Real-time visual playback with synchronous textual analysis log display and event/time indexing
US10956477B1 (en) 2018-03-30 2021-03-23 Fireeye, Inc. System and method for detecting malicious scripts through natural language processing modeling
US11005860B1 (en) 2017-12-28 2021-05-11 Fireeye, Inc. Method and system for efficient cybersecurity analysis of endpoint events
US11003773B1 (en) 2018-03-30 2021-05-11 Fireeye, Inc. System and method for automatically generating malware detection rule recommendations
US11075930B1 (en) 2018-06-27 2021-07-27 Fireeye, Inc. System and method for detecting repetitive cybersecurity attacks constituting an email campaign
US11108809B2 (en) 2017-10-27 2021-08-31 Fireeye, Inc. System and method for analyzing binary code for malware classification using artificial neural network techniques
US11113086B1 (en) 2015-06-30 2021-09-07 Fireeye, Inc. Virtual system and method for securing external network connectivity
US11153341B1 (en) 2004-04-01 2021-10-19 Fireeye, Inc. System and method for detecting malicious network content using virtual environment components
US11159464B2 (en) * 2019-08-02 2021-10-26 Dell Products L.P. System and method for detecting and removing electronic mail storms
US11182473B1 (en) 2018-09-13 2021-11-23 Fireeye Security Holdings Us Llc System and method for mitigating cyberattacks against processor operability by a guest process
US11200080B1 (en) 2015-12-11 2021-12-14 Fireeye Security Holdings Us Llc Late load technique for deploying a virtualization layer underneath a running operating system
US11228491B1 (en) 2018-06-28 2022-01-18 Fireeye Security Holdings Us Llc System and method for distributed cluster configuration monitoring and management
US11240275B1 (en) 2017-12-28 2022-02-01 Fireeye Security Holdings Us Llc Platform and method for performing cybersecurity analyses employing an intelligence hub with a modular architecture
US11244056B1 (en) 2014-07-01 2022-02-08 Fireeye Security Holdings Us Llc Verification of trusted threat-aware visualization layer
US11258806B1 (en) 2019-06-24 2022-02-22 Mandiant, Inc. System and method for automatically associating cybersecurity intelligence to cyberthreat actors
US11271955B2 (en) 2017-12-28 2022-03-08 Fireeye Security Holdings Us Llc Platform and method for retroactive reclassification employing a cybersecurity-based global data store
US11316900B1 (en) 2018-06-29 2022-04-26 FireEye Security Holdings Inc. System and method for automatically prioritizing rules for cyber-threat detection and mitigation
US11314859B1 (en) 2018-06-27 2022-04-26 FireEye Security Holdings, Inc. Cyber-security system and method for detecting escalation of privileges within an access token
US11368475B1 (en) 2018-12-21 2022-06-21 Fireeye Security Holdings Us Llc System and method for scanning remote services to locate stored objects with malware
US11381578B1 (en) 2009-09-30 2022-07-05 Fireeye Security Holdings Us Llc Network-based binary file extraction and analysis for malware detection
US11392700B1 (en) 2019-06-28 2022-07-19 Fireeye Security Holdings Us Llc System and method for supporting cross-platform data verification
US11552986B1 (en) 2015-12-31 2023-01-10 Fireeye Security Holdings Us Llc Cyber-security framework for application of virtual features
US11556640B1 (en) 2019-06-27 2023-01-17 Mandiant, Inc. Systems and methods for automated cybersecurity analysis of extracted binary string sets
US11558401B1 (en) 2018-03-30 2023-01-17 Fireeye Security Holdings Us Llc Multi-vector malware detection data sharing system for improved detection
US11637862B1 (en) 2019-09-30 2023-04-25 Mandiant, Inc. System and method for surfacing cyber-security threats with a self-learning recommendation engine
US20230199029A1 (en) * 2020-05-29 2023-06-22 Siemens Ltd., China Industrial Control System Security Analysis Method and Apparatus
US11763004B1 (en) 2018-09-27 2023-09-19 Fireeye Security Holdings Us Llc System and method for bootkit detection
US11841947B1 (en) 2015-08-05 2023-12-12 Invincea, Inc. Methods and apparatus for machine learning based malware detection
US11853427B2 (en) 2016-06-22 2023-12-26 Invincea, Inc. Methods and apparatus for detecting whether a string of characters represents malicious activity using machine learning
US11886585B1 (en) 2019-09-27 2024-01-30 Musarubra Us Llc System and method for identifying and mitigating cyberattacks through malicious position-independent code execution

Families Citing this family (421)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030097654A1 (en) * 1998-06-05 2003-05-22 Franken Kenneth A. System and method of geographic authorization for television and radio programming distributed by multiple delivery mechanisms
US8010981B2 (en) 2001-02-08 2011-08-30 Decisionmark Corp. Method and system for creating television programming guide
US7913287B1 (en) 2001-06-15 2011-03-22 Decisionmark Corp. System and method for delivering data over an HDTV digital television spectrum
US8359650B2 (en) * 2002-10-01 2013-01-22 Skybox Security Inc. System, method and computer readable medium for evaluating potential attacks of worms
US8407798B1 (en) 2002-10-01 2013-03-26 Skybox Security Inc. Method for simulation aided security event management
US7822620B2 (en) * 2005-05-03 2010-10-26 Mcafee, Inc. Determining website reputations using automatic testing
US7765481B2 (en) * 2005-05-03 2010-07-27 Mcafee, Inc. Indicating website reputations during an electronic commerce transaction
US9384345B2 (en) * 2005-05-03 2016-07-05 Mcafee, Inc. Providing alternative web content based on website reputation assessment
US20060253582A1 (en) * 2005-05-03 2006-11-09 Dixon Christopher J Indicating website reputations within search results
US7562304B2 (en) * 2005-05-03 2009-07-14 Mcafee, Inc. Indicating website reputations during website manipulation of user information
US8438499B2 (en) 2005-05-03 2013-05-07 Mcafee, Inc. Indicating website reputations during user interactions
US8566726B2 (en) 2005-05-03 2013-10-22 Mcafee, Inc. Indicating website reputations based on website handling of personal information
US20060253584A1 (en) * 2005-05-03 2006-11-09 Dixon Christopher J Reputation of an entity associated with a content item
WO2006135798A2 (en) * 2005-06-09 2006-12-21 Boma Systems, Incorporated Personal notification and broadcasting
GB2427048A (en) 2005-06-09 2006-12-13 Avecho Group Ltd Detection of unwanted code or data in electronic mail
US8522347B2 (en) * 2009-03-16 2013-08-27 Sonicwall, Inc. Real-time network updates for malicious content
US7636734B2 (en) * 2005-06-23 2009-12-22 Microsoft Corporation Method for probabilistic analysis of most frequently occurring electronic message addresses within personal store (.PST) files to determine owner with confidence factor based on relative weight and set of user-specified factors
US20090144826A2 (en) * 2005-06-30 2009-06-04 Webroot Software, Inc. Systems and Methods for Identifying Malware Distribution
US8560413B1 (en) * 2005-07-14 2013-10-15 John S. Quarterman Method and system for detecting distributed internet crime
US9282081B2 (en) 2005-07-28 2016-03-08 Vaporstream Incorporated Reduced traceability electronic message system and method
US7610345B2 (en) * 2005-07-28 2009-10-27 Vaporstream Incorporated Reduced traceability electronic message system and method
US7565358B2 (en) * 2005-08-08 2009-07-21 Google Inc. Agent rank
US8024799B2 (en) * 2005-08-19 2011-09-20 Cpacket Networks, Inc. Apparatus and method for facilitating network security with granular traffic modifications
US8346918B2 (en) * 2005-08-19 2013-01-01 Cpacket Networks, Inc. Apparatus and method for biased and weighted sampling of network traffic to facilitate network monitoring
US8665868B2 (en) * 2005-08-19 2014-03-04 Cpacket Networks, Inc. Apparatus and method for enhancing forwarding and classification of network traffic with prioritized matching and categorization
US8296846B2 (en) * 2005-08-19 2012-10-23 Cpacket Networks, Inc. Apparatus and method for associating categorization information with network traffic to facilitate application level processing
US8769663B2 (en) 2005-08-24 2014-07-01 Fortinet, Inc. Systems and methods for detecting undesirable network traffic content
US8204974B1 (en) * 2005-08-30 2012-06-19 Sprint Communications Company L.P. Identifying significant behaviors within network traffic
US8028337B1 (en) 2005-08-30 2011-09-27 Sprint Communications Company L.P. Profile-aware filtering of network traffic
US7925786B2 (en) * 2005-09-16 2011-04-12 Microsoft Corp. Hosting of network-based services
US20070129999A1 (en) * 2005-11-18 2007-06-07 Jie Zhou Fraud detection in web-based advertising
US8255480B2 (en) 2005-11-30 2012-08-28 At&T Intellectual Property I, L.P. Substitute uniform resource locator (URL) generation
US20070124500A1 (en) * 2005-11-30 2007-05-31 Bedingfield James C Sr Automatic substitute uniform resource locator (URL) generation
US8595325B2 (en) * 2005-11-30 2013-11-26 At&T Intellectual Property I, L.P. Substitute uniform resource locator (URL) form
US8185741B1 (en) * 2006-01-30 2012-05-22 Adobe Systems Incorporated Converting transport level transactional security into a persistent document signature
WO2007099507A2 (en) * 2006-03-02 2007-09-07 International Business Machines Corporation Operating a network monitoring entity
US8701196B2 (en) 2006-03-31 2014-04-15 Mcafee, Inc. System, method and computer program product for obtaining a reputation associated with a file
US7849502B1 (en) 2006-04-29 2010-12-07 Ironport Systems, Inc. Apparatus for monitoring network traffic
US8706470B2 (en) 2006-05-08 2014-04-22 David T. Lorenzen Methods of offering guidance on common language usage utilizing a hashing function consisting of a hash triplet
US7603350B1 (en) 2006-05-09 2009-10-13 Google Inc. Search result ranking based on trust
US20070282770A1 (en) * 2006-05-15 2007-12-06 Nortel Networks Limited System and methods for filtering electronic communications
US7921063B1 (en) 2006-05-17 2011-04-05 Daniel Quinlan Evaluating electronic mail messages based on probabilistic analysis
US20080082662A1 (en) * 2006-05-19 2008-04-03 Richard Dandliker Method and apparatus for controlling access to network resources based on reputation
US20070282723A1 (en) * 2006-05-31 2007-12-06 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Monitoring a status of a database by placing a false identifier in the database
US8209755B2 (en) * 2006-05-31 2012-06-26 The Invention Science Fund I, Llc Signaling a security breach of a protected set of files
US8640247B2 (en) * 2006-05-31 2014-01-28 The Invention Science Fund I, Llc Receiving an indication of a security breach of a protected set of files
US8191140B2 (en) * 2006-05-31 2012-05-29 The Invention Science Fund I, Llc Indicating a security breach of a protected set of files
US20070294767A1 (en) * 2006-06-20 2007-12-20 Paul Piccard Method and system for accurate detection and removal of pestware
US20080005249A1 (en) * 2006-07-03 2008-01-03 Hart Matt E Method and apparatus for determining the importance of email messages
US8615800B2 (en) 2006-07-10 2013-12-24 Websense, Inc. System and method for analyzing web content
US8020206B2 (en) 2006-07-10 2011-09-13 Websense, Inc. System and method of analyzing web content
US9003056B2 (en) 2006-07-11 2015-04-07 Napo Enterprises, Llc Maintaining a minimum level of real time media recommendations in the absence of online friends
US7970922B2 (en) * 2006-07-11 2011-06-28 Napo Enterprises, Llc P2P real time media recommendations
US8327266B2 (en) 2006-07-11 2012-12-04 Napo Enterprises, Llc Graphical user interface system for allowing management of a media item playlist based on a preference scoring system
US8059646B2 (en) 2006-07-11 2011-11-15 Napo Enterprises, Llc System and method for identifying music content in a P2P real time recommendation network
US8001603B1 (en) * 2006-07-24 2011-08-16 Symantec Corporation Variable scan of files based on file context
US8082587B2 (en) * 2006-08-02 2011-12-20 Lycos, Inc. Detecting content in files
US7971257B2 (en) * 2006-08-03 2011-06-28 Symantec Corporation Obtaining network origins of potential software threats
US8190868B2 (en) 2006-08-07 2012-05-29 Webroot Inc. Malware management through kernel detection
US8620699B2 (en) 2006-08-08 2013-12-31 Napo Enterprises, Llc Heavy influencer media recommendations
US8090606B2 (en) 2006-08-08 2012-01-03 Napo Enterprises, Llc Embedded media recommendations
US8533822B2 (en) * 2006-08-23 2013-09-10 Threatstop, Inc. Method and system for propagating network policy
US20160248813A1 (en) * 2006-08-23 2016-08-25 Threatstop, Inc. Method and system for propagating network policy
US20080077704A1 (en) * 2006-09-24 2008-03-27 Void Communications, Inc. Variable Electronic Communication Ping Time System and Method
US8087088B1 (en) * 2006-09-28 2011-12-27 Whitehat Security, Inc. Using fuzzy classification models to perform matching operations in a web application security scanner
CN101155182A (en) * 2006-09-30 2008-04-02 阿里巴巴公司 Garbage information filtering method and apparatus based on network
US20080086555A1 (en) * 2006-10-09 2008-04-10 David Alexander Feinleib System and Method for Search and Web Spam Filtering
US7882187B2 (en) 2006-10-12 2011-02-01 Watchguard Technologies, Inc. Method and system for detecting undesired email containing image-based messages
US8306199B2 (en) * 2006-10-20 2012-11-06 Nokia Corporation Accounting in a transit network
AU2007315843B2 (en) * 2006-11-03 2013-01-17 Network Box Corporation Limited An administration portal
US8484733B2 (en) * 2006-11-28 2013-07-09 Cisco Technology, Inc. Messaging security device
US7962460B2 (en) 2006-12-01 2011-06-14 Scenera Technologies, Llc Methods, systems, and computer program products for determining availability of presentable content via a subscription service
US9654495B2 (en) * 2006-12-01 2017-05-16 Websense, Llc System and method of analyzing web addresses
GB2444514A (en) 2006-12-04 2008-06-11 Glasswall Electronic file re-generation
US9729513B2 (en) 2007-11-08 2017-08-08 Glasswall (Ip) Limited Using multiple layers of policy management to manage risk
US8312536B2 (en) * 2006-12-29 2012-11-13 Symantec Corporation Hygiene-based computer security
GB2458094A (en) * 2007-01-09 2009-09-09 Surfcontrol On Demand Ltd URL interception and categorization in firewalls
US20090070185A1 (en) * 2007-01-17 2009-03-12 Concert Technology Corporation System and method for recommending a digital media subscription service
KR100850911B1 (en) * 2007-01-19 2008-08-07 삼성전자주식회사 Apparatus and method for message transmission
US20080177843A1 (en) * 2007-01-22 2008-07-24 Microsoft Corporation Inferring email action based on user input
US8763114B2 (en) 2007-01-24 2014-06-24 Mcafee, Inc. Detecting image spam
US8214497B2 (en) 2007-01-24 2012-07-03 Mcafee, Inc. Multi-dimensional reputation scoring
US8027975B2 (en) * 2007-01-31 2011-09-27 Reputation.Com, Inc. Identifying and changing personal information
US20080201722A1 (en) * 2007-02-20 2008-08-21 Gurusamy Sarathy Method and System For Unsafe Content Tracking
US7904958B2 (en) * 2007-02-27 2011-03-08 Symantec Corporation Spam honeypot domain identification
US9224427B2 (en) 2007-04-02 2015-12-29 Napo Enterprises LLC Rating media item recommendations using recommendation paths and/or media item usage
US8112720B2 (en) * 2007-04-05 2012-02-07 Napo Enterprises, Llc System and method for automatically and graphically associating programmatically-generated media item recommendations related to a user's socially recommended media items
US9246938B2 (en) * 2007-04-23 2016-01-26 Mcafee, Inc. System and method for detecting malicious mobile program code
US20080281606A1 (en) * 2007-05-07 2008-11-13 Microsoft Corporation Identifying automated click fraud programs
US8230023B2 (en) 2007-05-17 2012-07-24 International Business Machines Corporation Managing email disk usage based on user specified conditions
GB0709527D0 (en) 2007-05-18 2007-06-27 Surfcontrol Plc Electronic messaging system, message processing apparatus and message processing method
US8613092B2 (en) * 2007-05-21 2013-12-17 Mcafee, Inc. System, method and computer program product for updating a security system definition database based on prioritized instances of known unwanted data
US9083556B2 (en) * 2007-05-31 2015-07-14 Rpx Clearinghouse Llc System and method for detecting malicious mail from spam zombies
US9164993B2 (en) 2007-06-01 2015-10-20 Napo Enterprises, Llc System and method for propagating a media item recommendation message comprising recommender presence information
US20090049045A1 (en) 2007-06-01 2009-02-19 Concert Technology Corporation Method and system for sorting media items in a playlist on a media device
US9037632B2 (en) 2007-06-01 2015-05-19 Napo Enterprises, Llc System and method of generating a media item recommendation message with recommender presence information
US8285776B2 (en) 2007-06-01 2012-10-09 Napo Enterprises, Llc System and method for processing a received media item recommendation message comprising recommender presence information
US7865965B2 (en) * 2007-06-15 2011-01-04 Microsoft Corporation Optimization of distributed anti-virus scanning
WO2009003059A1 (en) * 2007-06-25 2008-12-31 Google Inc. Zero-hour quarantine of suspect electronic messages
US8849921B2 (en) * 2007-06-28 2014-09-30 Symantec Corporation Method and apparatus for creating predictive filters for messages
US20090012965A1 (en) * 2007-07-01 2009-01-08 Decisionmark Corp. Network Content Objection Handling System and Method
US20090006211A1 (en) * 2007-07-01 2009-01-01 Decisionmark Corp. Network Content And Advertisement Distribution System and Method
JP4945344B2 (en) * 2007-07-02 2012-06-06 日本電信電話株式会社 Packet filtering method and packet filtering system
US8849909B2 (en) * 2007-07-06 2014-09-30 Yahoo! Inc. Real-time asynchronous event aggregation systems
US20090019041A1 (en) * 2007-07-11 2009-01-15 Marc Colando Filename Parser and Identifier of Alternative Sources for File
JP4943278B2 (en) 2007-09-06 2012-05-30 株式会社日立製作所 Virus scanning method and computer system using the method
US8219686B2 (en) 2007-09-17 2012-07-10 Mcafee, Inc. Method and computer program product utilizing multiple UDP data packets to transfer a quantity of data otherwise in excess of a single UDP packet
US10606901B1 (en) * 2007-09-28 2020-03-31 Emc Corporation Data disposition services orchestrated in an information management infrastructure
US8730946B2 (en) * 2007-10-18 2014-05-20 Redshift Internetworking, Inc. System and method to precisely learn and abstract the positive flow behavior of a unified communication (UC) application and endpoints
US20100306856A1 (en) * 2007-10-23 2010-12-02 Gecad Technologies Sa System and method for filtering email data
US8959624B2 (en) * 2007-10-31 2015-02-17 Bank Of America Corporation Executable download tracking system
US9060034B2 (en) * 2007-11-09 2015-06-16 Napo Enterprises, Llc System and method of filtering recommenders in a media item recommendation system
US8037536B2 (en) * 2007-11-14 2011-10-11 Bank Of America Corporation Risk scoring system for the prevention of malware
US8590039B1 (en) 2007-11-28 2013-11-19 Mcafee, Inc. System, method and computer program product for sending information extracted from a potentially unwanted data sample to generate a signature
US8144841B2 (en) * 2007-12-05 2012-03-27 Microsoft Corporation Multimedia spam determination using speech conversion
US10318730B2 (en) * 2007-12-20 2019-06-11 Bank Of America Corporation Detection and prevention of malicious code execution using risk scoring
US8396951B2 (en) 2007-12-20 2013-03-12 Napo Enterprises, Llc Method and system for populating a content repository for an internet radio service based on a recommendation network
US9734507B2 (en) 2007-12-20 2017-08-15 Napo Enterprises, Llc Method and system for simulating recommendations in a social network for an offline user
US8060525B2 (en) 2007-12-21 2011-11-15 Napo Enterprises, Llc Method and system for generating media recommendations in a distributed environment based on tagging play history information with location information
US8316015B2 (en) 2007-12-21 2012-11-20 Lemi Technology, Llc Tunersphere
US8117193B2 (en) 2007-12-21 2012-02-14 Lemi Technology, Llc Tunersphere
US8296245B2 (en) * 2008-01-03 2012-10-23 Kount Inc. Method and system for creation and validation of anonymous digital credentials
US9183368B2 (en) * 2008-01-24 2015-11-10 Go Daddy Operating Company, LLC Validating control of domain zone
US8433747B2 (en) * 2008-02-01 2013-04-30 Microsoft Corporation Graphics remoting architecture
US8706820B2 (en) * 2008-02-08 2014-04-22 Microsoft Corporation Rules extensibility engine
US20110225244A1 (en) * 2008-02-13 2011-09-15 Barracuda Networks Inc. Tracing domains to authoritative servers associated with spam
US9306796B1 (en) 2008-03-18 2016-04-05 Mcafee, Inc. System, method, and computer program product for dynamically configuring a virtual environment for identifying unwanted data
US8266672B2 (en) * 2008-03-21 2012-09-11 Sophos Plc Method and system for network identification via DNS
US9123027B2 (en) * 2010-10-19 2015-09-01 QinetiQ North America, Inc. Social engineering protection appliance
US9985978B2 (en) * 2008-05-07 2018-05-29 Lookingglass Cyber Solutions Method and system for misuse detection
US8028030B2 (en) * 2008-05-22 2011-09-27 International Business Machines Corporation Method and system for supervising electronic text communications of an enterprise
US20090300012A1 (en) * 2008-05-28 2009-12-03 Barracuda Inc. Multilevel intent analysis method for email filtration
US8301904B1 (en) * 2008-06-24 2012-10-30 Mcafee, Inc. System, method, and computer program product for automatically identifying potentially unwanted data as unwanted
EP2318955A1 (en) * 2008-06-30 2011-05-11 Websense, Inc. System and method for dynamic and real-time categorization of webpages
US20100011420A1 (en) * 2008-07-02 2010-01-14 Barracuda Networks Inc. Operating a service on a network as a domain name system server
US8219644B2 (en) * 2008-07-03 2012-07-10 Barracuda Networks, Inc. Requesting a service or transmitting content as a domain name system resolver
US8676903B2 (en) * 2008-07-17 2014-03-18 International Business Machines Corporation System and method to control email whitelists
US9641537B2 (en) * 2008-08-14 2017-05-02 Invention Science Fund I, Llc Conditionally releasing a communiqué determined to be affiliated with a particular source entity in response to detecting occurrence of one or more environmental aspects
US20100057895A1 (en) * 2008-08-29 2010-03-04 AT&T Intellectual Property I, L.P. Methods of Providing Reputation Information with an Address and Related Devices and Computer Program Products
CN101378407B (en) * 2008-09-26 2012-10-17 成都市华为赛门铁克科技有限公司 Method, system and equipment for pushing information
US20100125663A1 (en) * 2008-11-17 2010-05-20 Donovan John J Systems, methods, and devices for detecting security vulnerabilities in ip networks
US8181251B2 (en) * 2008-12-18 2012-05-15 Symantec Corporation Methods and systems for detecting malware
US8375435B2 (en) * 2008-12-19 2013-02-12 International Business Machines Corporation Host trust report based filtering mechanism in a reverse firewall
US8424075B1 (en) * 2008-12-31 2013-04-16 Qurio Holdings, Inc. Collaborative firewall for a distributed virtual environment
US8265658B2 (en) * 2009-02-02 2012-09-11 Waldeck Technology, Llc System and method for automated location-based widgets
US8200602B2 (en) 2009-02-02 2012-06-12 Napo Enterprises, Llc System and method for creating thematic listening experiences in a networked peer media recommendation environment
US8627461B2 (en) 2009-03-04 2014-01-07 Mcafee, Inc. System, method, and computer program product for verifying an identification of program information as unwanted
US9141794B1 (en) * 2009-03-10 2015-09-22 Trend Micro Incorporated Preemptive and/or reduced-intrusion malware scanning
US8904520B1 (en) 2009-03-19 2014-12-02 Symantec Corporation Communication-based reputation system
US9350755B1 (en) * 2009-03-20 2016-05-24 Symantec Corporation Method and apparatus for detecting malicious software transmission through a web portal
US11489857B2 (en) 2009-04-21 2022-11-01 Webroot Inc. System and method for developing a risk profile for an internet resource
CN101582887B (en) * 2009-05-20 2014-02-26 华为技术有限公司 Safety protection method, gateway device and safety protection system
US8176069B2 (en) 2009-06-01 2012-05-08 Aol Inc. Systems and methods for improved web searching
CN101576947B (en) * 2009-06-05 2012-08-08 成都市华为赛门铁克科技有限公司 Method, device and system for file protection treatment
CN101600207A (en) * 2009-06-18 2009-12-09 中兴通讯股份有限公司 A kind of SP access control method and system based on WAP
JP5147078B2 (en) * 2009-07-01 2013-02-20 日本電信電話株式会社 Address list construction method, address list construction system, and program therefor
US8347394B1 (en) * 2009-07-15 2013-01-01 Trend Micro, Inc. Detection of downloaded malware using DNS information
US8271650B2 (en) * 2009-08-25 2012-09-18 Vizibility Inc. Systems and method of identifying and managing abusive requests
US8510835B1 (en) * 2009-09-18 2013-08-13 Trend Micro Incorporated Techniques for protecting data in cloud computing environments
US8302194B2 (en) * 2009-10-26 2012-10-30 Symantec Corporation Using file prevalence to inform aggressiveness of behavioral heuristics
US8539583B2 (en) * 2009-11-03 2013-09-17 Mcafee, Inc. Rollback feature
US8356354B2 (en) * 2009-11-23 2013-01-15 Kaspersky Lab, Zao Silent-mode signature testing in anti-malware processing
US20110136542A1 (en) * 2009-12-09 2011-06-09 Nokia Corporation Method and apparatus for suggesting information resources based on context and preferences
US20110144567A1 (en) * 2009-12-15 2011-06-16 Alcon Research, Ltd. Phacoemulsification Hand Piece With Integrated Aspiration Pump and Cartridge
US8479286B2 (en) 2009-12-15 2013-07-02 Mcafee, Inc. Systems and methods for behavioral sandboxing
US8719939B2 (en) * 2009-12-31 2014-05-06 Mcafee, Inc. Malware detection via reputation system
US8782209B2 (en) * 2010-01-26 2014-07-15 Bank Of America Corporation Insider threat correlation tool
US9038187B2 (en) * 2010-01-26 2015-05-19 Bank Of America Corporation Insider threat correlation tool
US8800034B2 (en) 2010-01-26 2014-08-05 Bank Of America Corporation Insider threat correlation tool
US8793789B2 (en) 2010-07-22 2014-07-29 Bank Of America Corporation Insider threat correlation tool
US8443452B2 (en) * 2010-01-28 2013-05-14 Microsoft Corporation URL filtering based on user browser history
US8719352B2 (en) * 2010-01-29 2014-05-06 Mcafee, Inc. Reputation management for network content classification
US8516100B1 (en) * 2010-02-04 2013-08-20 Symantec Corporation Method and apparatus for detecting system message misrepresentation using a keyword analysis
US8606792B1 (en) 2010-02-08 2013-12-10 Google Inc. Scoring authors of posts
US20110209207A1 (en) * 2010-02-25 2011-08-25 Oto Technologies, Llc System and method for generating a threat assessment
US8910279B2 (en) * 2010-03-10 2014-12-09 Sonicwall, Inc. Reputation-based threat protection
CN101789105B (en) * 2010-03-15 2013-01-30 北京安天电子设备有限公司 Packet-level dynamic mail attachment virus detection method
US8544100B2 (en) 2010-04-16 2013-09-24 Bank Of America Corporation Detecting secure or encrypted tunneling in a computer network
US8782794B2 (en) 2010-04-16 2014-07-15 Bank Of America Corporation Detecting secure or encrypted tunneling in a computer network
CN101827104B (en) * 2010-04-27 2013-01-02 南京邮电大学 Multi anti-virus engine-based network virus joint defense method
US8719900B2 (en) * 2010-05-18 2014-05-06 Amazon Technologies, Inc. Validating updates to domain name system records
US8601114B1 (en) 2010-05-21 2013-12-03 Socialware, Inc. Method, system and computer program product for interception, quarantine and moderation of internal communications of uncontrolled systems
US8244818B2 (en) 2010-05-28 2012-08-14 Research In Motion Limited System and method for visual representation of spam probability
US8464342B2 (en) * 2010-08-31 2013-06-11 Microsoft Corporation Adaptively selecting electronic message scanning rules
US9021043B2 (en) * 2010-09-28 2015-04-28 Microsoft Technology Licensing Llc Message gateway with hybrid proxy/store-and-forward logic
US9148432B2 (en) * 2010-10-12 2015-09-29 Microsoft Technology Licensing, Llc Range weighted internet protocol address blacklist
US8990316B1 (en) * 2010-11-05 2015-03-24 Amazon Technologies, Inc. Identifying message deliverability problems using grouped message characteristics
US20120123778A1 (en) * 2010-11-11 2012-05-17 AT&T Intellectual Property I, L.P. Security Control for SMS and MMS Support Using Unified Messaging System
US8819816B2 (en) * 2010-11-15 2014-08-26 Facebook, Inc. Differentiating between good and bad content in a user-provided content system
US8826437B2 (en) * 2010-12-14 2014-09-02 General Electric Company Intelligent system and method for mitigating cyber attacks in critical systems through controlling latency of messages in a communications network
US8769060B2 (en) 2011-01-28 2014-07-01 Nominum, Inc. Systems and methods for providing DNS services
US8667592B2 (en) * 2011-03-15 2014-03-04 Symantec Corporation Systems and methods for looking up anti-malware metadata
US9473527B1 (en) * 2011-05-05 2016-10-18 Trend Micro Inc. Automatically generated and shared white list
US20130018965A1 (en) * 2011-07-12 2013-01-17 Microsoft Corporation Reputational and behavioral spam mitigation
US9087324B2 (en) 2011-07-12 2015-07-21 Microsoft Technology Licensing, Llc Message categorization
US8650649B1 (en) * 2011-08-22 2014-02-11 Symantec Corporation Systems and methods for determining whether to evaluate the trustworthiness of digitally signed files based on signer reputation
US9432378B1 (en) 2011-09-23 2016-08-30 Jerome Svigals Internet of things security
US9319404B2 (en) 2011-09-23 2016-04-19 Jerome Svigals Security for the internet of things
US9344437B2 (en) 2011-09-23 2016-05-17 Jerome Svigals Internet of things security
US8997188B2 (en) * 2012-04-11 2015-03-31 Jerome Svigals System for enabling a smart device to securely accept unsolicited transactions
JP5667957B2 (en) * 2011-09-30 2015-02-12 Kddi株式会社 Malware detection device and program
US8726385B2 (en) * 2011-10-05 2014-05-13 Mcafee, Inc. Distributed system and method for tracking and blocking malicious internet hosts
GB201117262D0 (en) * 2011-10-06 2011-11-16 Clark Steven D Electronic mail system
WO2013077983A1 (en) 2011-11-01 2013-05-30 Lemi Technology, Llc Adaptive media recommendation systems, methods, and computer readable media
WO2013067404A1 (en) * 2011-11-03 2013-05-10 Raytheon Company Intrusion prevention system (ips) mode for a malware detection system
US9832221B1 (en) * 2011-11-08 2017-11-28 Symantec Corporation Systems and methods for monitoring the activity of devices within an organization by leveraging data generated by an existing security solution deployed within the organization
US8549612B2 (en) * 2011-11-28 2013-10-01 Dell Products, Lp System and method for incorporating quality-of-service and reputation in an intrusion detection and prevention system
US20130159497A1 (en) * 2011-12-16 2013-06-20 Microsoft Corporation Heuristic-Based Rejection of Computing Resource Requests
US20130198203A1 (en) * 2011-12-22 2013-08-01 John Bates Bot detection using profile-based filtration
US8886651B1 (en) 2011-12-22 2014-11-11 Reputation.Com, Inc. Thematic clustering
US9001699B2 (en) * 2011-12-26 2015-04-07 Jaya MEGHANI Systems and methods for communication setup via reconciliation of internet protocol addresses
US9270638B2 (en) 2012-01-20 2016-02-23 Cisco Technology, Inc. Managing address validation states in switches snooping IPv6
CN104137501B (en) * 2012-01-26 2017-10-20 惠普发展公司,有限责任合伙企业 For recognizing the system and method for pushing communication pattern
US10636041B1 (en) 2012-03-05 2020-04-28 Reputation.Com, Inc. Enterprise reputation evaluation
US8676596B1 (en) 2012-03-05 2014-03-18 Reputation.Com, Inc. Stimulating reviews at a point of sale
RU2510982C2 (en) * 2012-04-06 2014-04-10 Закрытое акционерное общество "Лаборатория Касперского" User evaluation system and method for message filtering
US8782793B2 (en) * 2012-05-22 2014-07-15 Kaspersky Lab Zao System and method for detection and treatment of malware on data storage devices
US11093984B1 (en) 2012-06-29 2021-08-17 Reputation.Com, Inc. Determining themes
US9876742B2 (en) * 2012-06-29 2018-01-23 Microsoft Technology Licensing, Llc Techniques to select and prioritize application of junk email filtering rules
US9432401B2 (en) * 2012-07-06 2016-08-30 Microsoft Technology Licensing, Llc Providing consistent security information
US9049235B2 (en) * 2012-07-16 2015-06-02 Mcafee, Inc. Cloud email message scanning with local policy application in a network environment
US9124472B1 (en) 2012-07-25 2015-09-01 Symantec Corporation Providing file information to a client responsive to a file download stability prediction
US9461897B1 (en) 2012-07-31 2016-10-04 United Services Automobile Association (Usaa) Monitoring and analysis of social network traffic
US9363133B2 (en) 2012-09-28 2016-06-07 Avaya Inc. Distributed application of enterprise policies to Web Real-Time Communications (WebRTC) interactive sessions, and related methods, systems, and computer-readable media
US10164929B2 (en) 2012-09-28 2018-12-25 Avaya Inc. Intelligent notification of requests for real-time online interaction via real-time communications and/or markup protocols, and related methods, systems, and computer-readable media
RU2514140C1 (en) * 2012-09-28 2014-04-27 Закрытое акционерное общество "Лаборатория Касперского" System and method for improving quality of detecting malicious objects using rules and priorities
US8918473B1 (en) * 2012-10-09 2014-12-23 Whatsapp Inc. System and method for detecting unwanted content
CN103824018B (en) * 2012-11-19 2017-11-14 腾讯科技(深圳)有限公司 Executable file processing method and executable file monitoring method
US8904526B2 (en) * 2012-11-20 2014-12-02 Bank Of America Corporation Enhanced network security
US8869275B2 (en) * 2012-11-28 2014-10-21 Verisign, Inc. Systems and methods to detect and respond to distributed denial of service (DDoS) attacks
US9258263B2 (en) 2012-11-29 2016-02-09 International Business Machines Corporation Dynamic granular messaging persistence
US9560069B1 (en) * 2012-12-02 2017-01-31 Symantec Corporation Method and system for protection of messages in an electronic messaging system
US9106681B2 (en) * 2012-12-17 2015-08-11 Hewlett-Packard Development Company, L.P. Reputation of network address
US8805699B1 (en) 2012-12-21 2014-08-12 Reputation.Com, Inc. Reputation report with score
US8744866B1 (en) 2012-12-21 2014-06-03 Reputation.Com, Inc. Reputation report with recommendation
US8955137B2 (en) * 2012-12-21 2015-02-10 State Farm Mutual Automobile Insurance Company System and method for uploading and verifying a document
US9398038B2 (en) 2013-02-08 2016-07-19 PhishMe, Inc. Collaborative phishing attack detection
US8966637B2 (en) 2013-02-08 2015-02-24 PhishMe, Inc. Performance benchmarking for simulated phishing attacks
US9356948B2 (en) 2013-02-08 2016-05-31 PhishMe, Inc. Collaborative phishing attack detection
US9137049B2 (en) * 2013-02-28 2015-09-15 Apple Inc. Dynamically overriding alert suppressions based on prior actions
US10659480B2 (en) * 2013-03-07 2020-05-19 Inquest, Llc Integrated network threat analysis
US9294458B2 (en) 2013-03-14 2016-03-22 Avaya Inc. Managing identity provider (IdP) identifiers for web real-time communications (WebRTC) interactive flows, and related methods, systems, and computer-readable media
US8925099B1 (en) 2013-03-14 2014-12-30 Reputation.Com, Inc. Privacy scoring
US10164989B2 (en) * 2013-03-15 2018-12-25 Nominum, Inc. Distinguishing human-driven DNS queries from machine-to-machine DNS queries
US9722918B2 (en) 2013-03-15 2017-08-01 A10 Networks, Inc. System and method for customizing the identification of application or content type
US9244903B2 (en) 2013-04-15 2016-01-26 Vmware, Inc. Efficient data pattern matching
US10318397B2 (en) * 2013-04-15 2019-06-11 Vmware, Inc. Efficient data pattern matching
WO2014176461A1 (en) * 2013-04-25 2014-10-30 A10 Networks, Inc. Systems and methods for network access control
US10205624B2 (en) 2013-06-07 2019-02-12 Avaya Inc. Bandwidth-efficient archiving of real-time interactive flows, and related methods, systems, and computer-readable media
US9065969B2 (en) 2013-06-30 2015-06-23 Avaya Inc. Scalable web real-time communications (WebRTC) media engines, and related methods, systems, and computer-readable media
US9525718B2 (en) 2013-06-30 2016-12-20 Avaya Inc. Back-to-back virtual web real-time communications (WebRTC) agents, and related methods, systems, and computer-readable media
US9112840B2 (en) 2013-07-17 2015-08-18 Avaya Inc. Verifying privacy of web real-time communications (WebRTC) media channels via corresponding WebRTC data channels, and related methods, systems, and computer-readable media
CN103338211A (en) * 2013-07-19 2013-10-02 腾讯科技(深圳)有限公司 Malicious URL (uniform resource locator) authentication method and device
US9614890B2 (en) 2013-07-31 2017-04-04 Avaya Inc. Acquiring and correlating web real-time communications (WEBRTC) interactive flow characteristics, and related methods, systems, and computer-readable media
US9531808B2 (en) 2013-08-22 2016-12-27 Avaya Inc. Providing data resource services within enterprise systems for resource level sharing among multiple applications, and related methods, systems, and computer-readable media
KR101480903B1 (en) * 2013-09-03 2015-01-13 한국전자통신연구원 Method for multiple checking a mobile malicious code
US10225212B2 (en) 2013-09-26 2019-03-05 Avaya Inc. Providing network management based on monitoring quality of service (QOS) characteristics of web real-time communications (WEBRTC) interactive flows, and related methods, systems, and computer-readable media
RU2013144681A (en) 2013-10-03 2015-04-10 Общество С Ограниченной Ответственностью "Яндекс" ELECTRONIC MESSAGE PROCESSING SYSTEM FOR DETERMINING ITS CLASSIFICATION
GB2518880A (en) 2013-10-04 2015-04-08 Glasswall Ip Ltd Anti-Malware mobile content data management apparatus and method
GB2519516B (en) * 2013-10-21 2017-05-10 Openwave Mobility Inc A method, apparatus and computer program for modifying messages in a communications network
US10263952B2 (en) 2013-10-31 2019-04-16 Avaya Inc. Providing origin insight for web applications via session traversal utilities for network address translation (STUN) messages, and related methods, systems, and computer-readable media
US9319423B2 (en) 2013-11-04 2016-04-19 At&T Intellectual Property I, L.P. Malware and anomaly detection via activity recognition based on sensor data
US9769214B2 (en) 2013-11-05 2017-09-19 Avaya Inc. Providing reliable session initiation protocol (SIP) signaling for web real-time communications (WEBRTC) interactive flows, and related methods, systems, and computer-readable media
US10694029B1 (en) 2013-11-07 2020-06-23 Rightquestion, Llc Validating automatic number identification data
GB2520972A (en) 2013-12-05 2015-06-10 Ibm Workload management
US10129243B2 (en) 2013-12-27 2018-11-13 Avaya Inc. Controlling access to traversal using relays around network address translation (TURN) servers using trusted single-use credentials
US9288221B2 (en) * 2014-01-14 2016-03-15 Pfu Limited Information processing apparatus, method for determining unauthorized activity and computer-readable medium
US9264418B1 (en) * 2014-02-20 2016-02-16 Amazon Technologies, Inc. Client-side spam detection and prevention
WO2015126410A1 (en) * 2014-02-21 2015-08-27 Hewlett-Packard Development Company, L.P. Scoring for threat observables
CN103823761B (en) * 2014-03-09 2017-01-25 林虎 Method for increasing blacklist terminal capacity and retrieval speed
US9749363B2 (en) 2014-04-17 2017-08-29 Avaya Inc. Application of enterprise policies to web real-time communications (WebRTC) interactive sessions using an enterprise session initiation protocol (SIP) engine, and related methods, systems, and computer-readable media
US10581927B2 (en) 2014-04-17 2020-03-03 Avaya Inc. Providing web real-time communications (WebRTC) media services via WebRTC-enabled media servers, and related methods, systems, and computer-readable media
US9245123B1 (en) 2014-05-07 2016-01-26 Symantec Corporation Systems and methods for identifying malicious files
US9906422B2 (en) 2014-05-16 2018-02-27 A10 Networks, Inc. Distributed system to determine a server's health
US9912705B2 (en) 2014-06-24 2018-03-06 Avaya Inc. Enhancing media characteristics during web real-time communications (WebRTC) interactive sessions by using session initiation protocol (SIP) endpoints, and related methods, systems, and computer-readable media
US9652615B1 (en) 2014-06-25 2017-05-16 Symantec Corporation Systems and methods for analyzing suspected malware
US20150381533A1 (en) * 2014-06-29 2015-12-31 Avaya Inc. System and Method for Email Management Through Detection and Analysis of Dynamically Variable Behavior and Activity Patterns
CN105338126B (en) * 2014-07-17 2018-10-23 阿里巴巴集团控股有限公司 Method and server for remotely querying information
US9654484B2 (en) * 2014-07-31 2017-05-16 Cisco Technology, Inc. Detecting DGA-based malicious software using network flow information
US9548988B1 (en) 2014-08-18 2017-01-17 Symantec Corporation Systems and methods for attributing potentially malicious email campaigns to known threat groups
US10666676B1 (en) * 2014-08-18 2020-05-26 Trend Micro Incorporated Detection of targeted email attacks
US9754106B2 (en) * 2014-10-14 2017-09-05 Symantec Corporation Systems and methods for classifying security events as targeted attacks
KR102295664B1 (en) * 2014-10-21 2021-08-27 삼성에스디에스 주식회사 Global server load balancer apparatus and method for dynamically controlling time-to-live
US9571510B1 (en) 2014-10-21 2017-02-14 Symantec Corporation Systems and methods for identifying security threat sources responsible for security events
US9870534B1 (en) 2014-11-06 2018-01-16 Nominum, Inc. Predicting network activities associated with a given site
US9374385B1 (en) 2014-11-07 2016-06-21 Area 1 Security, Inc. Remediating computer security threats using distributed sensor computers
WO2016073793A1 (en) * 2014-11-07 2016-05-12 Area 1 Security, Inc. Remediating computer security threats using distributed sensor computers
US9398047B2 (en) * 2014-11-17 2016-07-19 Vade Retro Technology, Inc. Methods and systems for phishing detection
US9330264B1 (en) * 2014-11-26 2016-05-03 Glasswall (Ip) Limited Statistical analytic method for the determination of the risk posed by file based content
US10318728B2 (en) 2014-12-16 2019-06-11 Entit Software Llc Determining permissible activity based on permissible activity rules
US9378364B1 (en) * 2014-12-27 2016-06-28 Intel Corporation Technologies for managing security threats to a computing system utilizing user interactions
US9621575B1 (en) * 2014-12-29 2017-04-11 A10 Networks, Inc. Context aware threat protection
US10164927B2 (en) 2015-01-14 2018-12-25 Vade Secure, Inc. Safe unsubscribe
US9674053B2 (en) * 2015-01-30 2017-06-06 Gigamon Inc. Automatic target selection
MA41502A (en) 2015-02-14 2017-12-19 Valimail Inc CENTRALIZED VALIDATION OF EMAIL SENDERS BY TARGETING EHLO NAMES AND IP ADDRESSES
US9906539B2 (en) 2015-04-10 2018-02-27 PhishMe, Inc. Suspicious message processing and incident response
US10298602B2 (en) 2015-04-10 2019-05-21 Cofense Inc. Suspicious message processing and incident response
WO2016164844A1 (en) * 2015-04-10 2016-10-13 PhishMe, Inc. Message report processing and threat prioritization
US20160337394A1 (en) * 2015-05-11 2016-11-17 The Boeing Company Newborn domain screening of electronic mail messages
US11363035B2 (en) * 2015-05-22 2022-06-14 Fisher-Rosemount Systems, Inc. Configurable robustness agent in a plant security system
US9961090B2 (en) * 2015-06-18 2018-05-01 Bank Of America Corporation Message quarantine
US9521157B1 (en) * 2015-06-24 2016-12-13 Bank Of America Corporation Identifying and assessing malicious resources
KR101666614B1 (en) * 2015-07-06 2016-10-14 (주)다우기술 Detection system and method for Advanced Persistent Threat using record
US9954804B2 (en) * 2015-07-30 2018-04-24 International Business Machines Corporation Method and system for preemptive harvesting of spam messages
CN105187408A (en) 2015-08-17 2015-12-23 北京神州绿盟信息安全科技股份有限公司 Network attack detection method and equipment
CN105743876B (en) * 2015-08-28 2019-09-13 哈尔滨安天科技股份有限公司 Method and system for discovering targeted attacks based on email source data
US9467435B1 (en) 2015-09-15 2016-10-11 Mimecast North America, Inc. Electronic message threat protection system for authorized users
US10728239B2 (en) 2015-09-15 2020-07-28 Mimecast Services Ltd. Mediated access to resources
US11595417B2 (en) 2015-09-15 2023-02-28 Mimecast Services Ltd. Systems and methods for mediating access to resources
US10536449B2 (en) 2015-09-15 2020-01-14 Mimecast Services Ltd. User login credential warning system
US9654492B2 (en) * 2015-09-15 2017-05-16 Mimecast North America, Inc. Malware detection system based on stored data
US10686817B2 (en) 2015-09-21 2020-06-16 Hewlett Packard Enterprise Development Lp Identification of a DNS packet as malicious based on a value
US9787581B2 (en) 2015-09-21 2017-10-10 A10 Networks, Inc. Secure data flow open information analytics
FR3043807B1 (en) * 2015-11-18 2017-12-08 Bull Sas COMMUNICATION VALIDATION DEVICE
EP3171567B1 (en) * 2015-11-23 2018-10-24 Alcatel Lucent Advanced persistent threat detection
US9954877B2 (en) * 2015-12-21 2018-04-24 Ebay Inc. Automatic detection of hidden link mismatches with spoofed metadata
US10706368B2 (en) * 2015-12-30 2020-07-07 Veritas Technologies Llc Systems and methods for efficiently classifying data objects
US10049193B2 (en) * 2016-01-04 2018-08-14 Bank Of America Corporation System for neutralizing misappropriated electronic files
US10154056B2 (en) * 2016-02-10 2018-12-11 Agari Data, Inc. Message authenticity and risk assessment
US10218656B2 (en) 2016-03-08 2019-02-26 International Business Machines Corporation Smart message delivery based on transaction processing status
JP5982597B1 (en) * 2016-03-10 2016-08-31 株式会社Ffri Information processing apparatus, information processing method, program, and computer-readable recording medium recording the program
US10142366B2 (en) 2016-03-15 2018-11-27 Vade Secure, Inc. Methods, systems and devices to mitigate the effects of side effect URLs in legitimate and phishing electronic messages
US10432661B2 (en) * 2016-03-24 2019-10-01 Cisco Technology, Inc. Score boosting strategies for capturing domain-specific biases in anomaly detection systems
US9591012B1 (en) 2016-03-31 2017-03-07 Viewpost Ip Holdings, Llc Systems and methods for detecting fraudulent electronic communication
US11277416B2 (en) * 2016-04-22 2022-03-15 Sophos Limited Labeling network flows according to source applications
US11165797B2 (en) 2016-04-22 2021-11-02 Sophos Limited Detecting endpoint compromise based on network usage history
US10938781B2 (en) 2016-04-22 2021-03-02 Sophos Limited Secure labeling of network flows
US11102238B2 (en) 2016-04-22 2021-08-24 Sophos Limited Detecting triggering events for distributed denial of service attacks
US10986109B2 (en) 2016-04-22 2021-04-20 Sophos Limited Local proxy detection
US10073968B1 (en) * 2016-06-24 2018-09-11 Symantec Corporation Systems and methods for classifying files
WO2018004600A1 (en) 2016-06-30 2018-01-04 Sophos Limited Proactive network security using a health heartbeat
US10812348B2 (en) 2016-07-15 2020-10-20 A10 Networks, Inc. Automatic capture of network data for a detected anomaly
US10938844B2 (en) 2016-07-22 2021-03-02 At&T Intellectual Property I, L.P. Providing security through characterizing mobile traffic by domain names
US10341118B2 (en) 2016-08-01 2019-07-02 A10 Networks, Inc. SSL gateway with integrated hardware security module
RU2649793C2 (en) 2016-08-03 2018-04-04 ООО "Группа АйБи" Method and system of detecting remote connection when working on web resource pages
WO2018039792A1 (en) 2016-08-31 2018-03-08 Wedge Networks Inc. Apparatus and methods for network-based line-rate detection of unknown malware
US11182476B2 (en) * 2016-09-07 2021-11-23 Micro Focus Llc Enhanced intelligence for a security information sharing platform
RU2634209C1 (en) 2016-09-19 2017-10-24 Общество с ограниченной ответственностью "Группа АйБи ТДС" System and method of autogeneration of decision rules for intrusion detection systems with feedback
US9847973B1 (en) * 2016-09-26 2017-12-19 Agari Data, Inc. Mitigating communication risk by detecting similarity to a trusted message contact
US10880322B1 (en) 2016-09-26 2020-12-29 Agari Data, Inc. Automated tracking of interaction with a resource of a message
US10805314B2 (en) 2017-05-19 2020-10-13 Agari Data, Inc. Using message context to evaluate security of requested data
US10218716B2 (en) * 2016-10-01 2019-02-26 Intel Corporation Technologies for analyzing uniform resource locators
US10382562B2 (en) 2016-11-04 2019-08-13 A10 Networks, Inc. Verification of server certificates using hash codes
GB2555858B (en) * 2016-11-15 2021-06-23 F Secure Corp Remote malware scanning method and apparatus
US11722513B2 (en) 2016-11-30 2023-08-08 Agari Data, Inc. Using a measure of influence of sender in determining a security risk associated with an electronic message
US11044267B2 (en) 2016-11-30 2021-06-22 Agari Data, Inc. Using a measure of influence of sender in determining a security risk associated with an electronic message
US10250475B2 (en) 2016-12-08 2019-04-02 A10 Networks, Inc. Measurement of application response delay time
CN111541673A (en) * 2016-12-23 2020-08-14 新东网科技有限公司 Efficient method and system for detecting HTTP request security
RU2637477C1 (en) 2016-12-29 2017-12-04 Общество с ограниченной ответственностью "Траст" System and method for detecting phishing web pages
RU2671991C2 (en) 2016-12-29 2018-11-08 Общество с ограниченной ответственностью "Траст" System and method for collecting information for detecting phishing
US10397270B2 (en) 2017-01-04 2019-08-27 A10 Networks, Inc. Dynamic session rate limiter
US10187377B2 (en) 2017-02-08 2019-01-22 A10 Networks, Inc. Caching network generated security certificates
US11019076B1 (en) 2017-04-26 2021-05-25 Agari Data, Inc. Message security assessment using sender identity profiles
JP6533823B2 (en) * 2017-05-08 2019-06-19 デジタルア−ツ株式会社 INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, PROGRAM, RECORDING MEDIUM, AND INFORMATION PROCESSING METHOD
US11102244B1 (en) * 2017-06-07 2021-08-24 Agari Data, Inc. Automated intelligence gathering
US11757914B1 (en) * 2017-06-07 2023-09-12 Agari Data, Inc. Automated responsive message to determine a security risk of a message sender
JP6378808B2 (en) * 2017-06-28 2018-08-22 エヌ・ティ・ティ・コミュニケーションズ株式会社 Connection destination information determination device, connection destination information determination method, and program
US10742669B2 (en) * 2017-08-09 2020-08-11 NTT Security Corporation Malware host netflow analysis system and method
RU2666644C1 (en) * 2017-08-10 2018-09-11 Акционерное общество "Лаборатория Касперского" System and method of identifying potentially hazardous devices during user interaction with banking services
US10891373B2 (en) * 2017-08-31 2021-01-12 Micro Focus Llc Quarantining electronic messages based on relationships among associated addresses
US10983602B2 (en) * 2017-09-05 2021-04-20 Microsoft Technology Licensing, Llc Identifying an input device
US10708308B2 (en) * 2017-10-02 2020-07-07 Servicenow, Inc. Automated mitigation of electronic message based security threats
US11470029B2 (en) * 2017-10-31 2022-10-11 Edgewave, Inc. Analysis and reporting of suspicious email
RU2689816C2 (en) 2017-11-21 2019-05-29 ООО "Группа АйБи" Method for classifying sequence of user actions (embodiments)
US11349873B2 (en) 2017-11-27 2022-05-31 ArmorBlox, Inc. User model-based data loss prevention
RU2668710C1 (en) 2018-01-17 2018-10-02 Общество с ограниченной ответственностью "Группа АйБи ТДС" Computing device and method for detecting malicious domain names in network traffic
RU2680736C1 (en) 2018-01-17 2019-02-26 Общество с ограниченной ответственностью "Группа АйБи ТДС" Server and method for detecting malware files in network traffic
RU2676247C1 (en) 2018-01-17 2018-12-26 Общество С Ограниченной Ответственностью "Группа Айби" Web resources clustering method and computer device
RU2677361C1 (en) 2018-01-17 2019-01-16 Общество с ограниченной ответственностью "Траст" Method and system of decentralized identification of malware programs
RU2677368C1 (en) 2018-01-17 2019-01-16 Общество С Ограниченной Ответственностью "Группа Айби" Method and system for automatic determination of fuzzy duplicates of video content
RU2681699C1 (en) 2018-02-13 2019-03-12 Общество с ограниченной ответственностью "Траст" Method and server for searching related network resources
JP6768732B2 (en) * 2018-04-05 2020-10-14 デジタルア−ツ株式会社 Information processing equipment, information processing programs, recording media and information processing methods
US10880319B2 (en) * 2018-04-26 2020-12-29 Micro Focus Llc Determining potentially malware generated domain names
US11431745B2 (en) * 2018-04-30 2022-08-30 Microsoft Technology Licensing, Llc Techniques for curating threat intelligence data
US10785188B2 (en) * 2018-05-22 2020-09-22 Proofpoint, Inc. Domain name processing systems and methods
US10839353B2 (en) * 2018-05-24 2020-11-17 Mxtoolbox, Inc. Systems and methods for improved email security by linking customer domains to outbound sources
US11372893B2 (en) 2018-06-01 2022-06-28 Ntt Security Holdings Corporation Ensemble-based data curation pipeline for efficient label propagation
US11374977B2 (en) * 2018-09-20 2022-06-28 Forcepoint Llc Endpoint risk-based network protection
US11025651B2 (en) 2018-12-06 2021-06-01 Saudi Arabian Oil Company System and method for enhanced security analysis for quarantined email messages
RU2708508C1 (en) 2018-12-17 2019-12-09 Общество с ограниченной ответственностью "Траст" Method and a computing device for detecting suspicious users in messaging systems
US11176251B1 (en) 2018-12-21 2021-11-16 Fireeye, Inc. Determining malware via symbolic function hash analysis
US11743290B2 (en) 2018-12-21 2023-08-29 Fireeye Security Holdings Us Llc System and method for detecting cyberattacks impersonating legitimate sources
RU2701040C1 (en) 2018-12-28 2019-09-24 Общество с ограниченной ответственностью "Траст" Method and computing device for reporting malicious web resources
US11601444B1 (en) 2018-12-31 2023-03-07 Fireeye Security Holdings Us Llc Automated system for triage of customer issues
US11411990B2 (en) * 2019-02-15 2022-08-09 Forcepoint Llc Early detection of potentially-compromised email accounts
US11063897B2 (en) 2019-03-01 2021-07-13 Cdw Llc Method and system for analyzing electronic communications and customer information to recognize and mitigate message-based attacks
US11310238B1 (en) 2019-03-26 2022-04-19 FireEye Security Holdings, Inc. System and method for retrieval and analysis of operational data from customer, cloud-hosted virtual resources
US10686826B1 (en) 2019-03-28 2020-06-16 Vade Secure Inc. Optical scanning parameters computation methods, devices and systems for malicious URL detection
RU2710739C1 (en) * 2019-03-29 2020-01-10 Акционерное общество "Лаборатория Касперского" System and method of generating heuristic rules for detecting messages containing spam
US11677786B1 (en) 2019-03-29 2023-06-13 Fireeye Security Holdings Us Llc System and method for detecting and protecting against cybersecurity attacks on servers
US11636198B1 (en) 2019-03-30 2023-04-25 Fireeye Security Holdings Us Llc System and method for cybersecurity analyzer update and concurrent management system
US11582120B2 (en) 2019-05-30 2023-02-14 Vmware, Inc. Partitioning health monitoring in a global server load balancing system
US11405363B2 (en) 2019-06-26 2022-08-02 Microsoft Technology Licensing, Llc File upload control for client-side applications in proxy solutions
US11178178B2 (en) * 2019-07-29 2021-11-16 Material Security Inc. Secure communications service for intercepting suspicious messages and performing backchannel verification thereon
CN110443051B (en) * 2019-07-30 2022-12-27 空气动力学国家重点实验室 Method for preventing confidential documents from spreading on the Internet
KR102300193B1 (en) * 2019-09-02 2021-09-08 주식회사 엘지유플러스 Method and apparatus for preventing erroneous remittance
EP3808049B1 (en) * 2019-09-03 2022-02-23 Google LLC Systems and methods for authenticated control of content delivery
RU2728498C1 (en) 2019-12-05 2020-07-29 Общество с ограниченной ответственностью "Группа АйБи ТДС" Method and system for determining software belonging by its source code
RU2728497C1 (en) 2019-12-05 2020-07-29 Общество с ограниченной ответственностью "Группа АйБи ТДС" Method and system for determining belonging of software by its machine code
RU2743974C1 (en) 2019-12-19 2021-03-01 Общество с ограниченной ответственностью "Группа АйБи ТДС" System and method for scanning security of elements of network architecture
US11838300B1 (en) 2019-12-24 2023-12-05 Musarubra Us Llc Run-time configurable cybersecurity system
US11522884B1 (en) 2019-12-24 2022-12-06 Fireeye Security Holdings Us Llc Subscription and key management system
US11436327B1 (en) 2019-12-24 2022-09-06 Fireeye Security Holdings Us Llc System and method for circumventing evasive code for cyberthreat detection
US11582190B2 (en) * 2020-02-10 2023-02-14 Proofpoint, Inc. Electronic message processing systems and methods
SG10202001963TA (en) 2020-03-04 2021-10-28 Group Ib Global Private Ltd System and method for brand protection based on the search results
EP4144063A1 (en) * 2020-04-29 2023-03-08 Knowbe4, Inc. Systems and methods for reporting based simulated phishing campaign
US11475090B2 (en) 2020-07-15 2022-10-18 Group-Ib Global Private Limited Method and system for identifying clusters of affiliated web resources
US11483314B2 (en) * 2020-08-04 2022-10-25 Mastercard Technologies Canada ULC Distributed evaluation list updating
RU2743619C1 (en) 2020-08-06 2021-02-20 Общество с ограниченной ответственностью "Группа АйБи ТДС" Method and system for generating the list of compromise indicators
US11050698B1 (en) * 2020-09-18 2021-06-29 Area 1 Security, Inc. Message processing system with business email compromise detection
US20220116406A1 (en) * 2020-10-12 2022-04-14 Microsoft Technology Licensing, Llc Malware detection and mitigation via a forward proxy server
US11588848B2 (en) 2021-01-05 2023-02-21 Bank Of America Corporation System and method for suspending a computing device suspected of being infected by a malicious code using a kill switch button
US11748680B2 (en) 2021-02-22 2023-09-05 Intone Networks India Pvt. Ltd System for internal audit and internal control management and related methods
US11882112B2 (en) 2021-05-26 2024-01-23 Bank Of America Corporation Information security system and method for phishing threat prevention using tokens
US11792155B2 (en) * 2021-06-14 2023-10-17 Vmware, Inc. Method and apparatus for enhanced client persistence in multi-site GSLB deployments
US20230004638A1 (en) * 2021-06-30 2023-01-05 Citrix Systems, Inc. Redirection of attachments based on risk and context
US20230037564A1 (en) * 2021-08-06 2023-02-09 Bank Of America Corporation System and method for generating optimized data queries to improve hardware efficiency and utilization
US20230041397A1 (en) * 2021-08-06 2023-02-09 Vmware, Inc. System and method for checking reputations of executable files using file origin analysis
US20230205878A1 (en) * 2021-12-28 2023-06-29 Uab 360 It Systems and methods for detecting malware using static and dynamic malware models
CN115348234B (en) * 2022-08-10 2023-11-03 山石网科通信技术股份有限公司 Server detection method and device and electronic equipment
CN115632878B (en) * 2022-12-06 2023-03-31 中海油能源发展股份有限公司采油服务分公司 Data transmission method, device, equipment and storage medium based on network isolation

Citations (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933416A (en) * 1995-11-16 1999-08-03 Loran Network Systems, Llc Method of determining the topology of a network of objects
US6006329A (en) * 1997-08-11 1999-12-21 Symantec Corporation Detection of computer viruses spanning multiple data streams
US6052709A (en) * 1997-12-23 2000-04-18 Bright Light Technologies, Inc. Apparatus and method for controlling delivery of unsolicited electronic mail
US6067619A (en) * 1998-09-24 2000-05-23 Hewlett-Packard Company Apparatus and method for configuring a computer networking device
US6072942A (en) * 1996-09-18 2000-06-06 Secure Computing Corporation System and method of electronic mail filtering using interconnected nodes
US6119236A (en) * 1996-10-07 2000-09-12 Shipley; Peter M. Intelligent network security device and method
US6161185A (en) * 1998-03-06 2000-12-12 Mci Communications Corporation Personal authentication system and method for multiple computer platform
US20020004908A1 (en) * 2000-07-05 2002-01-10 Nicholas Paul Andrew Galea Electronic mail message anti-virus system and method
US20020199095A1 (en) * 1997-07-24 2002-12-26 Jean-Christophe Bandini Method and system for filtering communication
US20030023692A1 (en) * 2001-07-27 2003-01-30 Fujitsu Limited Electronic message delivery system, electronic message delivery management server, and recording medium in which electronic message delivery management program is recorded
US20030023875A1 (en) * 2001-07-26 2003-01-30 Hursey Neil John Detecting e-mail propagated malware
US6546416B1 (en) * 1998-12-09 2003-04-08 Infoseek Corporation Method and system for selectively blocking delivery of bulk electronic mail
US20030115485A1 (en) * 2001-12-14 2003-06-19 Milliken Walter Clark Hash-based systems and methods for detecting, preventing, and tracing network worms and viruses
US6615242B1 (en) * 1998-12-28 2003-09-02 At&T Corp. Automatic uniform resource locator-based message filter
US20030185391A1 (en) * 2002-03-28 2003-10-02 Broadcom Corporation Methods and apparatus for performing hash operations in a cryptography accelerator
US6654787B1 (en) * 1998-12-31 2003-11-25 Brightmail, Incorporated Method and apparatus for filtering e-mail
US20040006747A1 (en) * 2000-03-13 2004-01-08 Tyler Joseph C. Electronic publishing system and method
US20040019651A1 (en) * 2002-07-29 2004-01-29 Andaker Kristian L. M. Categorizing electronic messages based on collaborative feedback
US6701440B1 (en) * 2000-01-06 2004-03-02 Networks Associates Technology, Inc. Method and system for protecting a computer using a remote e-mail scanning device
US20040054917A1 (en) * 2002-08-30 2004-03-18 Wholesecurity, Inc. Method and apparatus for detecting malicious code in the form of a trojan horse in an information handling system
US6728690B1 (en) * 1999-11-23 2004-04-27 Microsoft Corporation Classification system trainer employing maximum margin back-propagation with probabilistic outputs
US20040083270A1 (en) * 2002-10-23 2004-04-29 David Heckerman Method and system for identifying junk e-mail
US20040083408A1 (en) * 2002-10-24 2004-04-29 Mark Spiegel Heuristic detection and termination of fast spreading network worm attacks
US20040117648A1 (en) * 2002-12-16 2004-06-17 Kissel Timo S. Proactive protection against e-mail worms and spam
US6757830B1 (en) * 2000-10-03 2004-06-29 Networks Associates Technology, Inc. Detecting unwanted properties in received email messages
US20040250134A1 (en) * 2002-11-04 2004-12-09 Kohler Edward W. Data collectors in connection-based intrusion detection
US20040260776A1 (en) * 2003-06-23 2004-12-23 Starbuck Bryan T. Advanced spam detection techniques
US20050022008A1 (en) * 2003-06-04 2005-01-27 Goodman Joshua T. Origination/destination features and lists for spam prevention
US20050060643A1 (en) * 2003-08-25 2005-03-17 Miavia, Inc. Document similarity detection and classification system
US20050060295A1 (en) * 2003-09-12 2005-03-17 Sensory Networks, Inc. Statistical classification of high-speed network data through content inspection
US20050080856A1 (en) * 2003-10-09 2005-04-14 Kirsch Steven T. Method and system for categorizing and processing e-mails
US6886099B1 (en) * 2000-09-12 2005-04-26 Networks Associates Technology, Inc. Computer virus detection
US20050188036A1 (en) * 2004-01-21 2005-08-25 Nec Corporation E-mail filtering system and method
US6941348B2 (en) * 2002-02-19 2005-09-06 Postini, Inc. Systems and methods for managing the transmission of electronic messages through active message date updating
US6941466B2 (en) * 2001-02-22 2005-09-06 International Business Machines Corporation Method and apparatus for providing automatic e-mail filtering based on message semantics, sender's e-mail ID, and user's identity
US6944616B2 (en) * 2001-11-28 2005-09-13 Pavilion Technologies, Inc. System and method for historical database training of support vector machines
US20050246440A1 (en) * 2004-03-09 2005-11-03 Mailshell, Inc. Suppression of undesirable network messages
US20050259644A1 (en) * 2004-05-18 2005-11-24 Microsoft Corporation System and method for defeating SYN attacks
US20050262209A1 (en) * 2004-03-09 2005-11-24 Mailshell, Inc. System for email processing and analysis
US20050283837A1 (en) * 2004-06-16 2005-12-22 Michael Olivier Method and apparatus for managing computer virus outbreaks
US20060095410A1 (en) * 2004-10-29 2006-05-04 Ostrover Lewis S Personal video recorder for home network providing filtering and format conversion of content
US20060123083A1 (en) * 2004-12-03 2006-06-08 Xerox Corporation Adaptive spam message detector
US20060149820A1 (en) * 2005-01-04 2006-07-06 International Business Machines Corporation Detecting spam e-mail using similarity calculations
US7076527B2 (en) * 2001-06-14 2006-07-11 Apple Computer, Inc. Method and apparatus for filtering email
US20060161988A1 (en) * 2005-01-14 2006-07-20 Microsoft Corporation Privacy friendly malware quarantines
US20060168024A1 (en) * 2004-12-13 2006-07-27 Microsoft Corporation Sender reputations for spam prevention
US20060167971A1 (en) * 2004-12-30 2006-07-27 Sheldon Breiner System and method for collecting and disseminating human-observable data
US20070028301A1 (en) * 2005-07-01 2007-02-01 Markmonitor Inc. Enhanced fraud monitoring systems
US7181498B2 (en) * 2003-10-31 2007-02-20 Yahoo! Inc. Community-based green list for antispam
US7219148B2 (en) * 2003-03-03 2007-05-15 Microsoft Corporation Feedback loop for spam prevention
US7305709B1 (en) * 2002-12-13 2007-12-04 Mcafee, Inc. System, method, and computer program product for conveying a status of a plurality of security applications
US7331061B1 (en) * 2001-09-07 2008-02-12 Secureworks, Inc. Integrated computer security management system and method
US7342906B1 (en) * 2003-04-04 2008-03-11 Airespace, Inc. Distributed wireless network security system
US7366761B2 (en) * 2003-10-09 2008-04-29 Abaca Technology Corporation Method for creating a whitelist for processing e-mails
US20080104187A1 (en) * 2002-07-16 2008-05-01 Mailfrontier, Inc. Message Testing
US20080104186A1 (en) * 2003-05-29 2008-05-01 Mailfrontier, Inc. Automated Whitelist
US7418733B2 (en) * 2002-08-26 2008-08-26 International Business Machines Corporation Determining threat level associated with network activity
US20080256072A1 (en) * 2001-06-01 2008-10-16 James D. Logan Methods and apparatus for controlling the transmission and receipt of email messages
US20080270540A1 (en) * 2004-03-30 2008-10-30 Martin Wahlers Larsen Filter and a Method of Filtering Electronic Messages
US7457823B2 (en) * 2004-05-02 2008-11-25 Markmonitor Inc. Methods and systems for analyzing data related to possible online fraud
US7475118B2 (en) * 2006-02-03 2009-01-06 International Business Machines Corporation Method for recognizing spam email
US20090019126A1 (en) * 2001-10-03 2009-01-15 Reginald Adkins Authorized email control system
US7523168B2 (en) * 2003-04-17 2009-04-21 The Go Daddy Group, Inc. Mail server probability spam filter
US7627670B2 (en) * 2004-04-29 2009-12-01 International Business Machines Corporation Method and apparatus for scoring unsolicited e-mail
US7634810B2 (en) * 2004-12-02 2009-12-15 Microsoft Corporation Phishing detection, prevention, and notification

Family Cites Families (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4956769A (en) * 1988-05-16 1990-09-11 Sysmith, Inc. Occurence and value based security system for computer databases
US5715466A (en) 1995-02-14 1998-02-03 Compuserve Incorporated System for parallel foreign language communication over a computer network
US5623600A (en) * 1995-09-26 1997-04-22 Trend Micro, Incorporated Virus detection and removal apparatus for computer networks
US6453327B1 (en) 1996-06-10 2002-09-17 Sun Microsystems, Inc. Method and apparatus for identifying and discarding junk electronic mail
US5970149A (en) * 1996-11-19 1999-10-19 Johnson; R. Brent Combined remote access and security system
US6334193B1 (en) * 1997-05-29 2001-12-25 Oracle Corporation Method and apparatus for implementing user-definable error handling processes
US7778877B2 (en) 2001-07-09 2010-08-17 Linkshare Corporation Enhanced network based promotional tracking system
US6073165A (en) * 1997-07-29 2000-06-06 Jfax Communications, Inc. Filtering computer network messages directed to a user's e-mail box based on user defined filters, and forwarding a filtered message to the user's receiver
US6393465B2 (en) 1997-11-25 2002-05-21 Nixmail Corporation Junk electronic mail detector and eliminator
JP3225924B2 (en) 1998-07-09 2001-11-05 日本電気株式会社 Communication quality control device
US6507866B1 (en) 1999-07-19 2003-01-14 At&T Wireless Services, Inc. E-mail usage pattern detection
US7184971B1 (en) 1999-11-20 2007-02-27 Advertising.Com Method and apparatus for an E-mail affiliate program
CA2392397A1 (en) * 1999-11-23 2001-05-31 Escom Corporation Electronic message filter having a whitelist database and a quarantining mechanism
US7822977B2 (en) 2000-02-08 2010-10-26 Katsikas Peter L System for eliminating unauthorized electronic mail
JP2001222480A (en) * 2000-02-14 2001-08-17 Fujitsu Ltd Electronic mail operation management system
US6931437B2 (en) * 2000-04-27 2005-08-16 Nippon Telegraph And Telephone Corporation Concentrated system for controlling network interconnections
US7428576B2 (en) 2000-05-16 2008-09-23 Hoshiko Llc Addressee-defined mail addressing system and method
US6732153B1 (en) 2000-05-23 2004-05-04 Verizon Laboratories Inc. Unified message parser apparatus and system for real-time event correlation
US8972717B2 (en) 2000-06-15 2015-03-03 Zixcorp Systems, Inc. Automatic delivery selection for electronic content
US20020059418A1 (en) * 2000-07-17 2002-05-16 Alan Bird Method of and system for recording and displaying electronic mail statistics
TW569106B (en) 2000-07-29 2004-01-01 Hai Lin A method preventing spam
US7149778B1 (en) 2000-08-24 2006-12-12 Yahoo! Inc. Unsolicited electronic mail reduction
GB2366706B (en) 2000-08-31 2004-11-03 Content Technologies Ltd Monitoring electronic mail messages digests
US6785712B1 (en) * 2000-09-21 2004-08-31 Rockwell Collins, Inc. Airborne e-mail data transfer protocol
JP2002123469A (en) * 2000-10-13 2002-04-26 Nec System Technologies Ltd Electronic mail transmitter-receiver, electronic mail system, electronic mail processing method and recording medium
US6748422B2 (en) 2000-10-19 2004-06-08 Ebay Inc. System and method to control sending of unsolicited communications relating to a plurality of listings in a network-based commerce facility
GB2371711B (en) * 2000-11-27 2004-07-07 Nokia Mobile Phones Ltd A Server
CA2437726A1 (en) 2001-02-15 2002-08-22 Suffix Mail Inc. E-mail messaging system
US8219620B2 (en) 2001-02-20 2012-07-10 Mcafee, Inc. Unwanted e-mail filtering system including voting feedback
US20020120600A1 (en) * 2001-02-26 2002-08-29 Schiavone Vincent J. System and method for rule-based processing of electronic mail messages
GB2373130B (en) 2001-03-05 2004-09-22 Messagelabs Ltd Method of,and system for,processing email in particular to detect unsolicited bulk email
US7249195B2 (en) 2001-03-30 2007-07-24 Minor Ventures, Llc Apparatus and methods for correlating messages sent between services
US7340505B2 (en) * 2001-04-02 2008-03-04 Akamai Technologies, Inc. Content storage and replication in a managed internet content storage environment
WO2002097629A1 (en) 2001-05-30 2002-12-05 Fox Paul D System and method for providing network security policy enforcement
US7657935B2 (en) * 2001-08-16 2010-02-02 The Trustees Of Columbia University In The City Of New York System and methods for detecting malicious email transmission
US7146402B2 (en) 2001-08-31 2006-12-05 Sendmail, Inc. E-mail system providing filtering methodology on a per-domain basis
JP3717829B2 (en) 2001-10-05 2005-11-16 日本デジタル株式会社 Junk mail repelling system
US20030096605A1 (en) 2001-11-16 2003-05-22 Schlieben Karl J. System for handling proprietary files
US20030095555A1 (en) 2001-11-16 2003-05-22 Mcnamara Justin System for the validation and routing of messages
US7319858B2 (en) 2001-11-16 2008-01-15 Cingular Wireless Ii, Llc System and method for querying message information
US20030149726A1 (en) 2002-02-05 2003-08-07 At&T Corp. Automating the reduction of unsolicited email in real time
EP1482696A4 (en) * 2002-02-22 2006-03-15 Access Co Ltd Method and device for processing electronic mail undesirable for user
AUPS193202A0 (en) 2002-04-23 2002-05-30 Pickup, Robert Barkley Mr A method and system for authorising electronic mail
US7249262B2 (en) * 2002-05-06 2007-07-24 Browserkey, Inc. Method for restricting access to a web site by remote users
US20030225850A1 (en) * 2002-05-28 2003-12-04 Teague Alan H. Message processing based on address patterns
US20040003255A1 (en) 2002-06-28 2004-01-01 Storage Technology Corporation Secure email time stamping
US20040024632A1 (en) 2002-08-05 2004-02-05 Avenue A, Inc. Method of determining the effect of internet advertisement on offline commercial activity
US7072944B2 (en) * 2002-10-07 2006-07-04 Ebay Inc. Method and apparatus for authenticating electronic mail
US7533148B2 (en) 2003-01-09 2009-05-12 Microsoft Corporation Framework to enable integration of anti-spam technologies
US7171450B2 (en) 2003-01-09 2007-01-30 Microsoft Corporation Framework to enable integration of anti-spam technologies
US8595495B2 (en) 2003-01-12 2013-11-26 Yaron Mayer System and method for secure communications
JP4344922B2 (en) * 2003-01-27 2009-10-14 富士ゼロックス株式会社 Evaluation apparatus and method
JP2004254034A (en) * 2003-02-19 2004-09-09 Fujitsu Ltd System and method for controlling spam mail suppression policy
US7249162B2 (en) 2003-02-25 2007-07-24 Microsoft Corporation Adaptive junk message filtering system
US20050091320A1 (en) 2003-10-09 2005-04-28 Kirsch Steven T. Method and system for categorizing and processing e-mails
US20040177120A1 (en) 2003-03-07 2004-09-09 Kirsch Steven T. Method for filtering e-mail messages
US20050091319A1 (en) * 2003-10-09 2005-04-28 Kirsch Steven T. Database for receiving, storing and compiling information about email messages
US20040181581A1 (en) 2003-03-11 2004-09-16 Michael Thomas Kosco Authentication method for preventing delivery of junk electronic mail
US20060168006A1 (en) * 2003-03-24 2006-07-27 Mr. Marvin Shannon System and method for the classification of electronic communication
US7346700B2 (en) * 2003-04-07 2008-03-18 Time Warner Cable, A Division Of Time Warner Entertainment Company, L.P. System and method for managing e-mail message traffic
US7366919B1 (en) 2003-04-25 2008-04-29 Symantec Corporation Use of geo-location data for spam detection
JP4013835B2 (en) * 2003-06-11 2007-11-28 日本電気株式会社 E-mail relay device and e-mail relay method used therefor
US20040254990A1 (en) 2003-06-13 2004-12-16 Nokia, Inc. System and method for knock notification to an unsolicited message
US7051077B2 (en) * 2003-06-30 2006-05-23 Mx Logic, Inc. Fuzzy logic voting method and system for classifying e-mail using inputs from multiple spam classifiers
DE602004022817D1 (en) * 2003-07-11 2009-10-08 Computer Ass Think Inc PROCESS AND SYSTEM FOR PROTECTION FROM COMPUTER VIRUSES
JP2005056048A (en) * 2003-08-01 2005-03-03 Fact-Real:Kk Electronic mail monitoring system, electronic mail monitoring program and electronic mail monitoring method
GB2405229B (en) * 2003-08-19 2006-01-11 Sophos Plc Method and apparatus for filtering electronic mail
US20050071432A1 (en) 2003-09-29 2005-03-31 Royston Clifton W. Probabilistic email intrusion identification methods and systems
US7257564B2 (en) * 2003-10-03 2007-08-14 Tumbleweed Communications Corp. Dynamic message filtering
US20050080858A1 (en) 2003-10-10 2005-04-14 Microsoft Corporation System and method for searching a peer-to-peer network
US7554974B2 (en) 2004-03-09 2009-06-30 Tekelec Systems and methods of performing stateful signaling transactions in a distributed processing environment
US20050204005A1 (en) 2004-03-12 2005-09-15 Purcell Sean E. Selective treatment of messages based on junk rating
JP4128975B2 (en) * 2004-04-02 2008-07-30 株式会社古河テクノマテリアル Superelastic titanium alloy for living body

Patent Citations (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933416A (en) * 1995-11-16 1999-08-03 Loran Network Systems, Llc Method of determining the topology of a network of objects
US6072942A (en) * 1996-09-18 2000-06-06 Secure Computing Corporation System and method of electronic mail filtering using interconnected nodes
US6119236A (en) * 1996-10-07 2000-09-12 Shipley; Peter M. Intelligent network security device and method
US20020199095A1 (en) * 1997-07-24 2002-12-26 Jean-Christophe Bandini Method and system for filtering communication
US6006329A (en) * 1997-08-11 1999-12-21 Symantec Corporation Detection of computer viruses spanning multiple data streams
US6052709A (en) * 1997-12-23 2000-04-18 Bright Light Technologies, Inc. Apparatus and method for controlling delivery of unsolicited electronic mail
US6161185A (en) * 1998-03-06 2000-12-12 Mci Communications Corporation Personal authentication system and method for multiple computer platform
US6067619A (en) * 1998-09-24 2000-05-23 Hewlett-Packard Company Apparatus and method for configuring a computer networking device
US6546416B1 (en) * 1998-12-09 2003-04-08 Infoseek Corporation Method and system for selectively blocking delivery of bulk electronic mail
US6615242B1 (en) * 1998-12-28 2003-09-02 At&T Corp. Automatic uniform resource locator-based message filter
US6654787B1 (en) * 1998-12-31 2003-11-25 Brightmail, Incorporated Method and apparatus for filtering e-mail
US6728690B1 (en) * 1999-11-23 2004-04-27 Microsoft Corporation Classification system trainer employing maximum margin back-propagation with probabilistic outputs
US6701440B1 (en) * 2000-01-06 2004-03-02 Networks Associates Technology, Inc. Method and system for protecting a computer using a remote e-mail scanning device
US20040006747A1 (en) * 2000-03-13 2004-01-08 Tyler Joseph C. Electronic publishing system and method
US20020004908A1 (en) * 2000-07-05 2002-01-10 Nicholas Paul Andrew Galea Electronic mail message anti-virus system and method
US6886099B1 (en) * 2000-09-12 2005-04-26 Networks Associates Technology, Inc. Computer virus detection
US6757830B1 (en) * 2000-10-03 2004-06-29 Networks Associates Technology, Inc. Detecting unwanted properties in received email messages
US6941466B2 (en) * 2001-02-22 2005-09-06 International Business Machines Corporation Method and apparatus for providing automatic e-mail filtering based on message semantics, sender's e-mail ID, and user's identity
US20080256072A1 (en) * 2001-06-01 2008-10-16 James D. Logan Methods and apparatus for controlling the transmission and receipt of email messages
US7076527B2 (en) * 2001-06-14 2006-07-11 Apple Computer, Inc. Method and apparatus for filtering email
US20030023875A1 (en) * 2001-07-26 2003-01-30 Hursey Neil John Detecting e-mail propagated malware
US20030023692A1 (en) * 2001-07-27 2003-01-30 Fujitsu Limited Electronic message delivery system, electronic message delivery managment server, and recording medium in which electronic message delivery management program is recorded
US7331061B1 (en) * 2001-09-07 2008-02-12 Secureworks, Inc. Integrated computer security management system and method
US20090019126A1 (en) * 2001-10-03 2009-01-15 Reginald Adkins Authorized email control system
US6944616B2 (en) * 2001-11-28 2005-09-13 Pavilion Technologies, Inc. System and method for historical database training of support vector machines
US20030115485A1 (en) * 2001-12-14 2003-06-19 Milliken Walter Clark Hash-based systems and methods for detecting, preventing, and tracing network worms and viruses
US6941348B2 (en) * 2002-02-19 2005-09-06 Postini, Inc. Systems and methods for managing the transmission of electronic messages through active message date updating
US20030185391A1 (en) * 2002-03-28 2003-10-02 Broadcom Corporation Methods and apparatus for performing hash operations in a cryptography accelerator
US20080104187A1 (en) * 2002-07-16 2008-05-01 Mailfrontier, Inc. Message Testing
US20040019651A1 (en) * 2002-07-29 2004-01-29 Andaker Kristian L. M. Categorizing electronic messages based on collaborative feedback
US7418733B2 (en) * 2002-08-26 2008-08-26 International Business Machines Corporation Determining threat level associated with network activity
US20040054917A1 (en) * 2002-08-30 2004-03-18 Wholesecurity, Inc. Method and apparatus for detecting malicious code in the form of a trojan horse in an information handling system
US20040083270A1 (en) * 2002-10-23 2004-04-29 David Heckerman Method and system for identifying junk e-mail
US20040083408A1 (en) * 2002-10-24 2004-04-29 Mark Spiegel Heuristic detection and termination of fast spreading network worm attacks
US20040250134A1 (en) * 2002-11-04 2004-12-09 Kohler Edward W. Data collectors in connection-based intrusion detection
US7305709B1 (en) * 2002-12-13 2007-12-04 Mcafee, Inc. System, method, and computer program product for conveying a status of a plurality of security applications
US20040117648A1 (en) * 2002-12-16 2004-06-17 Kissel Timo S. Proactive protection against e-mail worms and spam
US7219148B2 (en) * 2003-03-03 2007-05-15 Microsoft Corporation Feedback loop for spam prevention
US7342906B1 (en) * 2003-04-04 2008-03-11 Airespace, Inc. Distributed wireless network security system
US7523168B2 (en) * 2003-04-17 2009-04-21 The Go Daddy Group, Inc. Mail server probability spam filter
US20080104186A1 (en) * 2003-05-29 2008-05-01 Mailfrontier, Inc. Automated Whitelist
US7464264B2 (en) * 2003-06-04 2008-12-09 Microsoft Corporation Training filters for detecting spasm based on IP addresses and text-related features
US7409708B2 (en) * 2003-06-04 2008-08-05 Microsoft Corporation Advanced URL and IP features
US20050022008A1 (en) * 2003-06-04 2005-01-27 Goodman Joshua T. Origination/destination features and lists for spam prevention
US7272853B2 (en) * 2003-06-04 2007-09-18 Microsoft Corporation Origination/destination features and lists for spam prevention
US20040260776A1 (en) * 2003-06-23 2004-12-23 Starbuck Bryan T. Advanced spam detection techniques
US20050060643A1 (en) * 2003-08-25 2005-03-17 Miavia, Inc. Document similarity detection and classification system
US20050060295A1 (en) * 2003-09-12 2005-03-17 Sensory Networks, Inc. Statistical classification of high-speed network data through content inspection
US7366761B2 (en) * 2003-10-09 2008-04-29 Abaca Technology Corporation Method for creating a whitelist for processing e-mails
US7206814B2 (en) * 2003-10-09 2007-04-17 Propel Software Corporation Method and system for categorizing and processing e-mails
US20050080856A1 (en) * 2003-10-09 2005-04-14 Kirsch Steven T. Method and system for categorizing and processing e-mails
US7181498B2 (en) * 2003-10-31 2007-02-20 Yahoo! Inc. Community-based green list for antispam
US20050188036A1 (en) * 2004-01-21 2005-08-25 Nec Corporation E-mail filtering system and method
US20050246440A1 (en) * 2004-03-09 2005-11-03 Mailshell, Inc. Suppression of undesirable network messages
US20050262209A1 (en) * 2004-03-09 2005-11-24 Mailshell, Inc. System for email processing and analysis
US20080270540A1 (en) * 2004-03-30 2008-10-30 Martin Wahlers Larsen Filter and a Method of Filtering Electronic Messages
US7627670B2 (en) * 2004-04-29 2009-12-01 International Business Machines Corporation Method and apparatus for scoring unsolicited e-mail
US7457823B2 (en) * 2004-05-02 2008-11-25 Markmonitor Inc. Methods and systems for analyzing data related to possible online fraud
US20050259644A1 (en) * 2004-05-18 2005-11-24 Microsoft Corporation System and method for defeating SYN attacks
US20050283837A1 (en) * 2004-06-16 2005-12-22 Michael Olivier Method and apparatus for managing computer virus outbreaks
US20060095410A1 (en) * 2004-10-29 2006-05-04 Ostrover Lewis S Personal video recorder for home network providing filtering and format conversion of content
US7634810B2 (en) * 2004-12-02 2009-12-15 Microsoft Corporation Phishing detection, prevention, and notification
US20060123083A1 (en) * 2004-12-03 2006-06-08 Xerox Corporation Adaptive spam message detector
US20060168024A1 (en) * 2004-12-13 2006-07-27 Microsoft Corporation Sender reputations for spam prevention
US7610344B2 (en) * 2004-12-13 2009-10-27 Microsoft Corporation Sender reputations for spam prevention
US20060167971A1 (en) * 2004-12-30 2006-07-27 Sheldon Breiner System and method for collecting and disseminating human-observable data
US20060149820A1 (en) * 2005-01-04 2006-07-06 International Business Machines Corporation Detecting spam e-mail using similarity calculations
US20060161988A1 (en) * 2005-01-14 2006-07-20 Microsoft Corporation Privacy friendly malware quarantines
US20070028301A1 (en) * 2005-07-01 2007-02-01 Markmonitor Inc. Enhanced fraud monitoring systems
US7475118B2 (en) * 2006-02-03 2009-01-06 International Business Machines Corporation Method for recognizing spam email

Cited By (382)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8219620B2 (en) 2001-02-20 2012-07-10 Mcafee, Inc. Unwanted e-mail filtering system including voting feedback
US8838714B2 (en) 2001-02-20 2014-09-16 Mcafee, Inc. Unwanted e-mail filtering system including voting feedback
US20020116463A1 (en) * 2001-02-20 2002-08-22 Hart Matthew Thomas Unwanted e-mail filtering
US20040123242A1 (en) * 2002-12-11 2004-06-24 Mckibben Michael T. Context instantiated application protocol
US8195714B2 (en) * 2002-12-11 2012-06-05 Leaper Technologies, Inc. Context instantiated application protocol
US8762386B2 (en) 2003-12-10 2014-06-24 Mcafee, Inc. Method and apparatus for data capture and analysis system
US8656039B2 (en) 2003-12-10 2014-02-18 Mcafee, Inc. Rule parser
US9374225B2 (en) 2003-12-10 2016-06-21 Mcafee, Inc. Document de-registration
US8548170B2 (en) 2003-12-10 2013-10-01 Mcafee, Inc. Document de-registration
US9092471B2 (en) 2003-12-10 2015-07-28 Mcafee, Inc. Rule parser
US8214438B2 (en) 2004-03-01 2012-07-03 Microsoft Corporation (More) advanced spam detection features
US9306960B1 (en) 2004-04-01 2016-04-05 Fireeye, Inc. Systems and methods for unauthorized activity defense
US9628498B1 (en) 2004-04-01 2017-04-18 Fireeye, Inc. System and method for bot detection
US9516057B2 (en) 2004-04-01 2016-12-06 Fireeye, Inc. Systems and methods for computer worm defense
US11153341B1 (en) 2004-04-01 2021-10-19 Fireeye, Inc. System and method for detecting malicious network content using virtual environment components
US9591020B1 (en) 2004-04-01 2017-03-07 Fireeye, Inc. System and method for signature generation
US11082435B1 (en) 2004-04-01 2021-08-03 Fireeye, Inc. System and method for threat detection and identification
US10623434B1 (en) 2004-04-01 2020-04-14 Fireeye, Inc. System and method for virtual analysis of network data
US10284574B1 (en) 2004-04-01 2019-05-07 Fireeye, Inc. System and method for threat detection and identification
US10757120B1 (en) 2004-04-01 2020-08-25 Fireeye, Inc. Malicious network content detection
US10587636B1 (en) 2004-04-01 2020-03-10 Fireeye, Inc. System and method for bot detection
US10097573B1 (en) 2004-04-01 2018-10-09 Fireeye, Inc. Systems and methods for malware defense
US10027690B2 (en) 2004-04-01 2018-07-17 Fireeye, Inc. Electronic message analysis for malware detection
US10068091B1 (en) 2004-04-01 2018-09-04 Fireeye, Inc. System and method for malware containment
US9661018B1 (en) 2004-04-01 2017-05-23 Fireeye, Inc. System and method for detecting anomalous behaviors using a virtual machine environment
US9838411B1 (en) 2004-04-01 2017-12-05 Fireeye, Inc. Subscriber based protection system
US10567405B1 (en) 2004-04-01 2020-02-18 Fireeye, Inc. System for detecting a presence of malware from behavioral analysis
US11637857B1 (en) 2004-04-01 2023-04-25 Fireeye Security Holdings Us Llc System and method for detecting malicious traffic using a virtual machine configured with a select software environment
US10511614B1 (en) 2004-04-01 2019-12-17 Fireeye, Inc. Subscription based malware detection under management system control
US9912684B1 (en) 2004-04-01 2018-03-06 Fireeye, Inc. System and method for virtual analysis of network data
US9282109B1 (en) 2004-04-01 2016-03-08 Fireeye, Inc. System and method for analyzing packets
US10165000B1 (en) 2004-04-01 2018-12-25 Fireeye, Inc. Systems and methods for malware attack prevention by intercepting flows of information
US9838416B1 (en) 2004-06-14 2017-12-05 Fireeye, Inc. System and method of detecting malicious content
US8560534B2 (en) 2004-08-23 2013-10-15 Mcafee, Inc. Database for a capture system
US8707008B2 (en) 2004-08-24 2014-04-22 Mcafee, Inc. File system for a capture system
US20060200528A1 (en) * 2005-01-25 2006-09-07 Krishna Pathiyal Method and system for processing data messages
US8767549B2 (en) * 2005-04-27 2014-07-01 Extreme Networks, Inc. Integrated methods of performing network switch functions
US20110149736A1 (en) * 2005-04-27 2011-06-23 Extreme Networks, Inc. Integrated methods of performing network switch functions
US20060277259A1 (en) * 2005-06-07 2006-12-07 Microsoft Corporation Distributed sender reputations
US8730955B2 (en) 2005-08-12 2014-05-20 Mcafee, Inc. High speed packet capture
US8554774B2 (en) 2005-08-31 2013-10-08 Mcafee, Inc. System and method for word indexing in a capture system and querying thereof
US20070061402A1 (en) * 2005-09-15 2007-03-15 Microsoft Corporation Multipurpose internet mail extension (MIME) analysis
US8463800B2 (en) 2005-10-19 2013-06-11 Mcafee, Inc. Attributes of captured objects in a capture system
US8615785B2 (en) 2005-12-30 2013-12-24 Extreme Network, Inc. Network threat detection and mitigation
US9246860B2 (en) 2006-02-09 2016-01-26 Mcafee, Inc. System, method and computer program product for gathering information relating to electronic content utilizing a DNS server
US8601160B1 (en) * 2006-02-09 2013-12-03 Mcafee, Inc. System, method and computer program product for gathering information relating to electronic content utilizing a DNS server
US8504537B2 (en) 2006-03-24 2013-08-06 Mcafee, Inc. Signature distribution in a document registration system
US8205261B1 (en) 2006-03-31 2012-06-19 Emc Corporation Incremental virus scan
US7854006B1 (en) 2006-03-31 2010-12-14 Emc Corporation Differential virus scan
US8443445B1 (en) * 2006-03-31 2013-05-14 Emc Corporation Risk-aware scanning of objects
US8739285B1 (en) 2006-03-31 2014-05-27 Emc Corporation Differential virus scan
US7752274B2 (en) * 2006-04-03 2010-07-06 International Business Machines Corporation Apparatus and method for filtering and selectively inspecting e-mail
US20070233787A1 (en) * 2006-04-03 2007-10-04 Pagan William G Apparatus and method for filtering and selectively inspecting e-mail
US8849760B2 (en) * 2006-05-02 2014-09-30 International Business Machines Corporation Determining whether predefined data controlled by a server is replicated to a client machine
US20070260649A1 (en) * 2006-05-02 2007-11-08 International Business Machines Corporation Determining whether predefined data controlled by a server is replicated to a client machine
US8683035B2 (en) 2006-05-22 2014-03-25 Mcafee, Inc. Attributes of captured objects in a capture system
US9094338B2 (en) 2006-05-22 2015-07-28 Mcafee, Inc. Attributes of captured objects in a capture system
US8087084B1 (en) 2006-06-28 2011-12-27 Emc Corporation Security for scanning objects
US8375451B1 (en) 2006-06-28 2013-02-12 Emc Corporation Security for scanning objects
US8122507B1 (en) 2006-06-28 2012-02-21 Emc Corporation Efficient scanning of objects
US20080005315A1 (en) * 2006-06-29 2008-01-03 Po-Ching Lin Apparatus, system and method for stream-based data filtering
US20090257434A1 (en) * 2006-12-29 2009-10-15 Huawei Technologies Co., Ltd. Packet access control method, forwarding engine, and communication apparatus
US8380795B2 (en) * 2007-01-18 2013-02-19 Roke Manor Research Limited Method of filtering sections of a data stream
US20090282119A1 (en) * 2007-01-18 2009-11-12 Roke Manor Research Limited Method of filtering sections of a data stream
US8938773B2 (en) 2007-02-02 2015-01-20 Websense, Inc. System and method for adding context to prevent data leakage over a computer network
US9609001B2 (en) 2007-02-02 2017-03-28 Websense, Llc System and method for adding context to prevent data leakage over a computer network
US8984133B2 (en) 2007-06-19 2015-03-17 The Invention Science Fund I, Llc Providing treatment-indicative feedback dependent on putative content treatment
US20080320088A1 (en) * 2007-06-19 2008-12-25 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Helping valuable message content pass apparent message filtering
US20080320095A1 (en) * 2007-06-25 2008-12-25 Microsoft Corporation Determination Of Participation In A Malicious Software Campaign
US7899870B2 (en) * 2007-06-25 2011-03-01 Microsoft Corporation Determination of participation in a malicious software campaign
US8584094B2 (en) * 2007-06-29 2013-11-12 Microsoft Corporation Dynamically computing reputation scores for objects
US20090007102A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation Dynamically Computing Reputation Scores for Objects
US20090030993A1 (en) * 2007-07-26 2009-01-29 Mxtoolbox Simultaneous synchronous split-domain email routing with conflict resolution
US7818384B2 (en) * 2007-07-26 2010-10-19 Rachal Eric M Simultaneous synchronous split-domain email routing with conflict resolution
US20090055818A1 (en) * 2007-08-22 2009-02-26 International Business Machines Corporation Method for supporting, software support agent and computer system
US20090063631A1 (en) * 2007-08-31 2009-03-05 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Message-reply-dependent update decisions
US20090063632A1 (en) * 2007-08-31 2009-03-05 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Layering prospective activity information
US20090063585A1 (en) * 2007-08-31 2009-03-05 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Using party classifiability to inform message versioning
US20090083758A1 (en) * 2007-09-20 2009-03-26 Research In Motion Limited System and method for delivering variable size messages based on spam probability
US8230025B2 (en) * 2007-09-20 2012-07-24 Research In Motion Limited System and method for delivering variable size messages based on spam probability
US8738717B2 (en) 2007-09-20 2014-05-27 Blackberry Limited System and method for delivering variable size messages based on spam probability
US20100049848A1 (en) * 2007-09-24 2010-02-25 Barracuda Networks, Inc Distributed frequency data collection via indicator embedded with dns request
US8775604B2 (en) * 2007-09-24 2014-07-08 Barracuda Networks, Inc. Distributed frequency data collection via indicator embedded with DNS request
US9374242B2 (en) 2007-11-08 2016-06-21 Invention Science Fund I, Llc Using evaluations of tentative message content
US20090150497A1 (en) * 2007-12-06 2009-06-11 Mcafee Randolph Preston Electronic mail message handling and presentation methods and systems
US9058468B2 (en) * 2007-12-21 2015-06-16 Google Technology Holdings LLC System and method for preventing unauthorised use of digital media
US20110030069A1 (en) * 2007-12-21 2011-02-03 General Instrument Corporation System and method for preventing unauthorised use of digital media
US20090204613A1 (en) * 2008-02-13 2009-08-13 Yasuyuki Muroi Pattern detection apparatus, pattern detection system, pattern detection program and pattern detection method
US9455981B2 (en) 2008-03-19 2016-09-27 Forcepoint, LLC Method and system for protection against information stealing software
US9130986B2 (en) * 2008-03-19 2015-09-08 Websense, Inc. Method and system for protection against information stealing software
US9015842B2 (en) 2008-03-19 2015-04-21 Websense, Inc. Method and system for protection against information stealing software
US20090241173A1 (en) * 2008-03-19 2009-09-24 Websense, Inc. Method and system for protection against information stealing software
US9495539B2 (en) 2008-03-19 2016-11-15 Websense, Llc Method and system for protection against information stealing software
US8959634B2 (en) 2008-03-19 2015-02-17 Websense, Inc. Method and system for protection against information stealing software
US7865561B2 (en) * 2008-04-01 2011-01-04 Mcafee, Inc. Increasing spam scanning accuracy by rescanning with updated detection rules
US20090248814A1 (en) * 2008-04-01 2009-10-01 Mcafee, Inc. Increasing spam scanning accuracy by rescanning with updated detection rules
US8244752B2 (en) * 2008-04-21 2012-08-14 Microsoft Corporation Classifying search query traffic
US20090265317A1 (en) * 2008-04-21 2009-10-22 Microsoft Corporation Classifying search query traffic
US9208291B1 (en) * 2008-04-30 2015-12-08 Netapp, Inc. Integrating anti-virus in a clustered storage system
US20090282075A1 (en) * 2008-05-06 2009-11-12 Dawson Christopher J System and method for identifying and blocking avatar-based unsolicited advertising in a virtual universe
US8490185B2 (en) * 2008-06-27 2013-07-16 Microsoft Corporation Dynamic spam view settings
US20100251362A1 (en) * 2008-06-27 2010-09-30 Microsoft Corporation Dynamic spam view settings
US20090328221A1 (en) * 2008-06-30 2009-12-31 Microsoft Corporation Malware detention for suspected malware
US8381298B2 (en) * 2008-06-30 2013-02-19 Microsoft Corporation Malware detention for suspected malware
US8635706B2 (en) 2008-07-10 2014-01-21 Mcafee, Inc. System and method for data mining and security policy management
US8601537B2 (en) 2008-07-10 2013-12-03 Mcafee, Inc. System and method for data mining and security policy management
WO2010011238A1 (en) * 2008-07-25 2010-01-28 Zumobi, Inc. Methods and systems providing an interactive social ticker
US20100023871A1 (en) * 2008-07-25 2010-01-28 Zumobi, Inc. Methods and Systems Providing an Interactive Social Ticker
US10367786B2 (en) 2008-08-12 2019-07-30 Mcafee, Llc Configuration management for a capture/registration system
US9253154B2 (en) 2008-08-12 2016-02-02 Mcafee, Inc. Configuration management for a capture/registration system
EP2169897A1 (en) * 2008-09-25 2010-03-31 Avira GmbH Computer-based method for the prioritization of potential malware sample messages
US9954890B1 (en) 2008-11-03 2018-04-24 Fireeye, Inc. Systems and methods for analyzing PDF documents
US20110247072A1 (en) * 2008-11-03 2011-10-06 Stuart Gresley Staniford Systems and Methods for Detecting Malicious PDF Network Content
US9438622B1 (en) 2008-11-03 2016-09-06 Fireeye, Inc. Systems and methods for analyzing malicious PDF network content
US8990939B2 (en) 2008-11-03 2015-03-24 Fireeye, Inc. Systems and methods for scheduling analysis of network content for malware
US20120222121A1 (en) * 2008-11-03 2012-08-30 Stuart Gresley Staniford Systems and Methods for Detecting Malicious PDF Network Content
US8997219B2 (en) * 2008-11-03 2015-03-31 Fireeye, Inc. Systems and methods for detecting malicious PDF network content
US9118715B2 (en) * 2008-11-03 2015-08-25 Fireeye, Inc. Systems and methods for detecting malicious PDF network content
US9704177B2 (en) 2008-12-23 2017-07-11 International Business Machines Corporation Identifying spam avatars in a virtual universe (VU) based upon turing tests
US10915922B2 (en) 2008-12-23 2021-02-09 International Business Machines Corporation System and method in a virtual universe for identifying spam avatars based upon avatar multimedia characteristics
US10922714B2 (en) 2008-12-23 2021-02-16 International Business Machines Corporation Identifying spam avatars in a virtual universe based upon turing tests
US20100162403A1 (en) * 2008-12-23 2010-06-24 International Business Machines Corporation System and method in a virtual universe for identifying spam avatars based upon avatar multimedia characteristics
US9697535B2 (en) 2008-12-23 2017-07-04 International Business Machines Corporation System and method in a virtual universe for identifying spam avatars based upon avatar multimedia characteristics
US8850591B2 (en) * 2009-01-13 2014-09-30 Mcafee, Inc. System and method for concept building
US20130246371A1 (en) * 2009-01-13 2013-09-19 Mcafee, Inc. System and Method for Concept Building
US8706709B2 (en) 2009-01-15 2014-04-22 Mcafee, Inc. System and method for intelligent term grouping
US8473442B1 (en) 2009-02-25 2013-06-25 Mcafee, Inc. System and method for intelligent state management
US9602548B2 (en) 2009-02-25 2017-03-21 Mcafee, Inc. System and method for intelligent state management
US9195937B2 (en) 2009-02-25 2015-11-24 Mcafee, Inc. System and method for intelligent state management
US20110047192A1 (en) * 2009-03-19 2011-02-24 Hitachi, Ltd. Data processing system, data processing method, and program
US8667121B2 (en) 2009-03-25 2014-03-04 Mcafee, Inc. System and method for managing data and policies
US8447722B1 (en) 2009-03-25 2013-05-21 Mcafee, Inc. System and method for data mining and security policy management
US8918359B2 (en) 2009-03-25 2014-12-23 Mcafee, Inc. System and method for data mining and security policy management
US9313232B2 (en) 2009-03-25 2016-04-12 Mcafee, Inc. System and method for data mining and security policy management
US20100287182A1 (en) * 2009-05-08 2010-11-11 Raytheon Company Method and System for Adjudicating Text Against a Defined Policy
US8234259B2 (en) * 2009-05-08 2012-07-31 Raytheon Company Method and system for adjudicating text against a defined policy
US9692762B2 (en) 2009-05-26 2017-06-27 Websense, Llc Systems and methods for efficient detection of fingerprinted data and information
US20100306845A1 (en) * 2009-05-26 2010-12-02 Microsoft Corporation Managing potentially phishing messages in a non-web mail client context
WO2010138339A3 (en) * 2009-05-26 2011-02-24 Microsoft Corporation Managing potentially phishing messages in a non-web mail client context
US9130972B2 (en) 2009-05-26 2015-09-08 Websense, Inc. Systems and methods for efficient detection of fingerprinted data and information
US8621614B2 (en) 2009-05-26 2013-12-31 Microsoft Corporation Managing potentially phishing messages in a non-web mail client context
US20100306853A1 (en) * 2009-05-28 2010-12-02 International Business Machines Corporation Providing notification of spam avatars
US8656476B2 (en) 2009-05-28 2014-02-18 International Business Machines Corporation Providing notification of spam avatars
US9338132B2 (en) 2009-05-28 2016-05-10 International Business Machines Corporation Providing notification of spam avatars
US8800030B2 (en) * 2009-09-15 2014-08-05 Symantec Corporation Individualized time-to-live for reputation scores of computer files
US20110067101A1 (en) * 2009-09-15 2011-03-17 Symantec Corporation Individualized Time-to-Live for Reputation Scores of Computer Files
US11381578B1 (en) 2009-09-30 2022-07-05 Fireeye Security Holdings Us Llc Network-based binary file extraction and analysis for malware detection
US20110153035A1 (en) * 2009-12-22 2011-06-23 Caterpillar Inc. Sensor Failure Detection System And Method
US20110179487A1 (en) * 2010-01-20 2011-07-21 Martin Lee Method and system for using spam e-mail honeypots to identify potential malware containing e-mails
US8549642B2 (en) * 2010-01-20 2013-10-01 Symantec Corporation Method and system for using spam e-mail honeypots to identify potential malware containing e-mails
US8863279B2 (en) * 2010-03-08 2014-10-14 Raytheon Company System and method for malware detection
US20110219450A1 (en) * 2010-03-08 2011-09-08 Raytheon Company System And Method For Malware Detection
US9009820B1 (en) 2010-03-08 2015-04-14 Raytheon Company System and method for malware detection using multiple techniques
US8856165B1 (en) * 2010-03-26 2014-10-07 Google Inc. Ranking of users who report abuse
US9361130B2 (en) 2010-05-03 2016-06-07 Apple Inc. Systems, methods, and computer program products providing an integrated user interface for reading content
US8627476B1 (en) * 2010-07-05 2014-01-07 Symantec Corporation Altering application behavior based on content provider reputation
US20140325655A1 (en) * 2010-07-13 2014-10-30 Huawei Technologies Co., Ltd. Proxy gateway anti-virus method, pre-classifier, and proxy gateway
US8769694B2 (en) * 2010-07-13 2014-07-01 Huawei Technologies Co., Ltd. Proxy gateway anti-virus method, pre-classifier, and proxy gateway
US20130097666A1 (en) * 2010-07-13 2013-04-18 Huawei Technologies Co., Ltd. Proxy gateway anti-virus method, pre-classifier, and proxy gateway
US9313220B2 (en) * 2010-07-13 2016-04-12 Huawei Technologies Co., Ltd. Proxy gateway anti-virus method, pre-classifier, and proxy gateway
US8595830B1 (en) 2010-07-27 2013-11-26 Symantec Corporation Method and system for detecting malware containing E-mails based on inconsistencies in public sector “From” addresses and a sending IP address
US9794254B2 (en) 2010-11-04 2017-10-17 Mcafee, Inc. System and method for protecting specified data combinations
US8806615B2 (en) 2010-11-04 2014-08-12 Mcafee, Inc. System and method for protecting specified data combinations
US11316848B2 (en) 2010-11-04 2022-04-26 Mcafee, Llc System and method for protecting specified data combinations
US10313337B2 (en) 2010-11-04 2019-06-04 Mcafee, Llc System and method for protecting specified data combinations
US10666646B2 (en) 2010-11-04 2020-05-26 Mcafee, Llc System and method for protecting specified data combinations
US8554907B1 (en) * 2011-02-15 2013-10-08 Trend Micro, Inc. Reputation prediction of IP addresses
US9122877B2 (en) * 2011-03-21 2015-09-01 Mcafee, Inc. System and method for malware and network reputation correlation
US20120324579A1 (en) * 2011-06-16 2012-12-20 Microsoft Corporation Cloud malware false positive recovery
US9858415B2 (en) * 2011-06-16 2018-01-02 Microsoft Technology Licensing, Llc Cloud malware false positive recovery
US9106680B2 (en) 2011-06-27 2015-08-11 Mcafee, Inc. System and method for protocol fingerprinting and reputation correlation
US10365911B2 (en) * 2011-12-18 2019-07-30 International Business Machines Corporation Determining optimal update frequency for software application updates
US20130159985A1 (en) * 2011-12-18 2013-06-20 International Business Machines Corporation Determining optimal update frequency for software application updates
US8700561B2 (en) 2011-12-27 2014-04-15 Mcafee, Inc. System and method for providing data protection workflows in a network environment
US9430564B2 (en) 2011-12-27 2016-08-30 Mcafee, Inc. System and method for providing data protection workflows in a network environment
US9516062B2 (en) 2012-04-10 2016-12-06 Mcafee, Inc. System and method for determining and using local reputations of users and hosts to protect information in a network environment
US8931043B2 (en) 2012-04-10 2015-01-06 Mcafee, Inc. System and method for determining and using local reputations of users and hosts to protect information in a network environment
US9241259B2 (en) 2012-11-30 2016-01-19 Websense, Inc. Method and apparatus for managing the transfer of sensitive information to mobile devices
US10135783B2 (en) 2012-11-30 2018-11-20 Forcepoint Llc Method and apparatus for maintaining network communication during email data transfer
US10572665B2 (en) 2012-12-28 2020-02-25 Fireeye, Inc. System and method to create a number of breakpoints in a virtual machine via virtual machine trapping events
US8990944B1 (en) 2013-02-23 2015-03-24 Fireeye, Inc. Systems and methods for automatically detecting backdoors
US9792196B1 (en) 2013-02-23 2017-10-17 Fireeye, Inc. Framework for efficient security coverage of mobile software applications
US9367681B1 (en) 2013-02-23 2016-06-14 Fireeye, Inc. Framework for efficient security coverage of mobile software applications using symbolic execution to reach regions of interest within an application
US9009823B1 (en) 2013-02-23 2015-04-14 Fireeye, Inc. Framework for efficient security coverage of mobile software applications installed on mobile devices
US10296437B2 (en) 2013-02-23 2019-05-21 Fireeye, Inc. Framework for efficient security coverage of mobile software applications
US10929266B1 (en) 2013-02-23 2021-02-23 Fireeye, Inc. Real-time visual playback with synchronous textual analysis log display and event/time indexing
US9176843B1 (en) 2013-02-23 2015-11-03 Fireeye, Inc. Framework for efficient security coverage of mobile software applications
US9225740B1 (en) 2013-02-23 2015-12-29 Fireeye, Inc. Framework for iterative analysis of mobile software applications
US10848521B1 (en) 2013-03-13 2020-11-24 Fireeye, Inc. Malicious content analysis using simulated user interaction without user involvement
US9626509B1 (en) 2013-03-13 2017-04-18 Fireeye, Inc. Malicious content analysis with multi-version application support within single operating environment
US11210390B1 (en) 2013-03-13 2021-12-28 Fireeye Security Holdings Us Llc Multi-version application support and registration within a single operating system environment
US9355247B1 (en) 2013-03-13 2016-05-31 Fireeye, Inc. File extraction from memory dump for malicious content analysis
US10025927B1 (en) 2013-03-13 2018-07-17 Fireeye, Inc. Malicious content analysis with multi-version application support within single operating environment
US10198574B1 (en) 2013-03-13 2019-02-05 Fireeye, Inc. System and method for analysis of a memory dump associated with a potentially malicious content suspect
US10122746B1 (en) 2013-03-14 2018-11-06 Fireeye, Inc. Correlation and consolidation of analytic data for holistic view of malware attack
US9311479B1 (en) 2013-03-14 2016-04-12 Fireeye, Inc. Correlation and consolidation of analytic data for holistic view of a malware attack
US10200384B1 (en) 2013-03-14 2019-02-05 Fireeye, Inc. Distributed systems and methods for automatically detecting unknown bots and botnets
US9641546B1 (en) 2013-03-14 2017-05-02 Fireeye, Inc. Electronic device for aggregation, correlation and consolidation of analysis attributes
US10649970B1 (en) * 2013-03-14 2020-05-12 Invincea, Inc. Methods and apparatus for detection of functionality
US9430646B1 (en) 2013-03-14 2016-08-30 Fireeye, Inc. Distributed systems and methods for automatically detecting unknown bots and botnets
US10812513B1 (en) 2013-03-14 2020-10-20 Fireeye, Inc. Correlation and consolidation holistic views of analytic data pertaining to a malware attack
US10701091B1 (en) 2013-03-15 2020-06-30 Fireeye, Inc. System and method for verifying a cyberthreat
US10713358B2 (en) 2013-03-15 2020-07-14 Fireeye, Inc. System and method to extract and utilize disassembly features to classify software intent
US10447634B2 (en) 2013-04-30 2019-10-15 Proofpoint, Inc. Apparatus and method for augmenting a message to facilitate spam identification
US20140324985A1 (en) * 2013-04-30 2014-10-30 Cloudmark, Inc. Apparatus and Method for Augmenting a Message to Facilitate Spam Identification
US9634970B2 (en) * 2013-04-30 2017-04-25 Cloudmark, Inc. Apparatus and method for augmenting a message to facilitate spam identification
US10469512B1 (en) 2013-05-10 2019-11-05 Fireeye, Inc. Optimized resource allocation for virtual machines within a malware content detection system
US9495180B2 (en) 2013-05-10 2016-11-15 Fireeye, Inc. Optimized resource allocation for virtual machines within a malware content detection system
US10637880B1 (en) 2013-05-13 2020-04-28 Fireeye, Inc. Classifying sets of malicious indicators for detecting command and control communications associated with malware
US10133863B2 (en) 2013-06-24 2018-11-20 Fireeye, Inc. Zero-day discovery system
US9888019B1 (en) 2013-06-28 2018-02-06 Fireeye, Inc. System and method for detecting malicious links in electronic messages
US10505956B1 (en) 2013-06-28 2019-12-10 Fireeye, Inc. System and method for detecting malicious links in electronic messages
US9300686B2 (en) 2013-06-28 2016-03-29 Fireeye, Inc. System and method for detecting malicious links in electronic messages
WO2014210289A1 (en) * 2013-06-28 2014-12-31 Symantec Corporation Techniques for detecting a security vulnerability
US10515214B1 (en) 2013-09-30 2019-12-24 Fireeye, Inc. System and method for classifying malware within content created during analysis of a specimen
US10735458B1 (en) 2013-09-30 2020-08-04 Fireeye, Inc. Detection center to detect targeted malware
US9912691B2 (en) 2013-09-30 2018-03-06 Fireeye, Inc. Fuzzy hash of behavioral results
US10657251B1 (en) 2013-09-30 2020-05-19 Fireeye, Inc. Multistage system and method for analyzing obfuscated content for malware
US9910988B1 (en) 2013-09-30 2018-03-06 Fireeye, Inc. Malware analysis in accordance with an analysis plan
US9736179B2 (en) 2013-09-30 2017-08-15 Fireeye, Inc. System, apparatus and method for using malware analysis results to drive adaptive instrumentation of virtual machines to improve exploit detection
US9690936B1 (en) 2013-09-30 2017-06-27 Fireeye, Inc. Multistage system and method for analyzing obfuscated content for malware
US9294501B2 (en) 2013-09-30 2016-03-22 Fireeye, Inc. Fuzzy hash of behavioral results
US11075945B2 (en) 2013-09-30 2021-07-27 Fireeye, Inc. System, apparatus and method for reconfiguring virtual machines
US9628507B2 (en) 2013-09-30 2017-04-18 Fireeye, Inc. Advanced persistent threat (APT) detection center
US10218740B1 (en) 2013-09-30 2019-02-26 Fireeye, Inc. Fuzzy hash of behavioral results
US10713362B1 (en) 2013-09-30 2020-07-14 Fireeye, Inc. Dynamically adaptive framework and method for classifying malware using intelligent static, emulation, and dynamic analyses
US9578052B2 (en) 2013-10-24 2017-02-21 Mcafee, Inc. Agent assisted malicious application blocking in a network environment
US9921978B1 (en) 2013-11-08 2018-03-20 Fireeye, Inc. System and method for enhanced security of storage devices
US9747446B1 (en) 2013-12-26 2017-08-29 Fireeye, Inc. System and method for run-time object classification
US11089057B1 (en) 2013-12-26 2021-08-10 Fireeye, Inc. System, apparatus and method for automatically verifying exploits within suspect objects and highlighting the display information associated with the verified exploits
US10467411B1 (en) 2013-12-26 2019-11-05 Fireeye, Inc. System and method for generating a malware identifier
US9306974B1 (en) 2013-12-26 2016-04-05 Fireeye, Inc. System, apparatus and method for automatically verifying exploits within suspect objects and highlighting the display information associated with the verified exploits
US10476909B1 (en) 2013-12-26 2019-11-12 Fireeye, Inc. System, apparatus and method for automatically verifying exploits within suspect objects and highlighting the display information associated with the verified exploits
US9756074B2 (en) 2013-12-26 2017-09-05 Fireeye, Inc. System and method for IPS and VM-based detection of suspicious objects
US10740456B1 (en) 2014-01-16 2020-08-11 Fireeye, Inc. Threat-aware architecture
US9262635B2 (en) 2014-02-05 2016-02-16 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US9916440B1 (en) 2014-02-05 2018-03-13 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US10534906B1 (en) 2014-02-05 2020-01-14 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US10432649B1 (en) 2014-03-20 2019-10-01 Fireeye, Inc. System and method for classifying an object based on an aggregated behavior results
US10242185B1 (en) 2014-03-21 2019-03-26 Fireeye, Inc. Dynamic guest image creation and rollback
US11068587B1 (en) 2014-03-21 2021-07-20 Fireeye, Inc. Dynamic guest image creation and rollback
US9591015B1 (en) 2014-03-28 2017-03-07 Fireeye, Inc. System and method for offloading packet processing and static analysis operations
US9787700B1 (en) 2014-03-28 2017-10-10 Fireeye, Inc. System and method for offloading packet processing and static analysis operations
US10454953B1 (en) 2014-03-28 2019-10-22 Fireeye, Inc. System and method for separated packet processing and static analysis
US11082436B1 (en) 2014-03-28 2021-08-03 Fireeye, Inc. System and method for offloading packet processing and static analysis operations
US10341363B1 (en) 2014-03-31 2019-07-02 Fireeye, Inc. Dynamically remote tuning of a malware content detection system
US9223972B1 (en) 2014-03-31 2015-12-29 Fireeye, Inc. Dynamically remote tuning of a malware content detection system
US11297074B1 (en) 2014-03-31 2022-04-05 FireEye Security Holdings, Inc. Dynamically remote tuning of a malware content detection system
US9432389B1 (en) 2014-03-31 2016-08-30 Fireeye, Inc. System, apparatus and method for detecting a malicious attack based on static analysis of a multi-flow object
US9594912B1 (en) 2014-06-06 2017-03-14 Fireeye, Inc. Return-oriented programming detection
US9973531B1 (en) 2014-06-06 2018-05-15 Fireeye, Inc. Shellcode detection
US9438623B1 (en) 2014-06-06 2016-09-06 Fireeye, Inc. Computer exploit detection using heap spray pattern matching
US10474820B2 (en) 2014-06-17 2019-11-12 Hewlett Packard Enterprise Development Lp DNS based infection scores
US10084813B2 (en) 2014-06-24 2018-09-25 Fireeye, Inc. Intrusion prevention and remedy system
US10757134B1 (en) 2014-06-24 2020-08-25 Fireeye, Inc. System and method for detecting and remediating a cybersecurity attack
US9661009B1 (en) 2014-06-26 2017-05-23 Fireeye, Inc. Network-based malware detection
US10805340B1 (en) 2014-06-26 2020-10-13 Fireeye, Inc. Infection vector and malware tracking with an interactive user display
US9838408B1 (en) 2014-06-26 2017-12-05 Fireeye, Inc. System, device and method for detecting a malicious attack based on direct communications between remotely hosted virtual machines and malicious web servers
US9398028B1 (en) 2014-06-26 2016-07-19 Fireeye, Inc. System, device and method for detecting a malicious attack based on communcations between remotely hosted virtual machines and malicious web servers
US11244056B1 (en) 2014-07-01 2022-02-08 Fireeye Security Holdings Us Llc Verification of trusted threat-aware visualization layer
US9609007B1 (en) 2014-08-22 2017-03-28 Fireeye, Inc. System and method of detecting delivery of malware based on indicators of compromise from different sources
US9363280B1 (en) 2014-08-22 2016-06-07 Fireeye, Inc. System and method of detecting delivery of malware using cross-customer data
US10027696B1 (en) 2014-08-22 2018-07-17 Fireeye, Inc. System and method for determining a threat based on correlation of indicators of compromise from other sources
US10404725B1 (en) 2014-08-22 2019-09-03 Fireeye, Inc. System and method of detecting delivery of malware using cross-customer data
US9729565B2 (en) * 2014-09-17 2017-08-08 Cisco Technology, Inc. Provisional bot activity recognition
US10671726B1 (en) 2014-09-22 2020-06-02 Fireeye, Inc. System and method for malware analysis using thread-level event monitoring
US10868818B1 (en) 2014-09-29 2020-12-15 Fireeye, Inc. Systems and methods for generation of signature generation using interactive infection visualizations
US10027689B1 (en) 2014-09-29 2018-07-17 Fireeye, Inc. Interactive infection visualization for improved exploit detection and signature generation for malware and malware families
US9773112B1 (en) 2014-09-29 2017-09-26 Fireeye, Inc. Exploit detection of malware and malware families
US10902117B1 (en) 2014-12-22 2021-01-26 Fireeye, Inc. Framework for classifying an object as malicious with machine learning for deploying updated predictive models
US10366231B1 (en) 2014-12-22 2019-07-30 Fireeye, Inc. Framework for classifying an object as malicious with machine learning for deploying updated predictive models
US9690933B1 (en) 2014-12-22 2017-06-27 Fireeye, Inc. Framework for classifying an object as malicious with machine learning for deploying updated predictive models
US10075455B2 (en) 2014-12-26 2018-09-11 Fireeye, Inc. Zero-day rotating guest image profile
US10528726B1 (en) 2014-12-29 2020-01-07 Fireeye, Inc. Microvisor-based malware detection appliance architecture
US9838417B1 (en) 2014-12-30 2017-12-05 Fireeye, Inc. Intelligent context aware user interaction for malware detection
US10798121B1 (en) 2014-12-30 2020-10-06 Fireeye, Inc. Intelligent context aware user interaction for malware detection
US20160241575A1 (en) * 2015-02-12 2016-08-18 Fujitsu Limited Information processing system and information processing method
US10148693B2 (en) 2015-03-25 2018-12-04 Fireeye, Inc. Exploit detection system
US9690606B1 (en) 2015-03-25 2017-06-27 Fireeye, Inc. Selective system call monitoring
US10666686B1 (en) 2015-03-25 2020-05-26 Fireeye, Inc. Virtualized exploit detection system
US9438613B1 (en) 2015-03-30 2016-09-06 Fireeye, Inc. Dynamic content activation for automated analysis of embedded objects
US9483644B1 (en) 2015-03-31 2016-11-01 Fireeye, Inc. Methods for detecting file altering malware in VM based analysis
US10417031B2 (en) 2015-03-31 2019-09-17 Fireeye, Inc. Selective virtualization for security threat detection
US11294705B1 (en) 2015-03-31 2022-04-05 Fireeye Security Holdings Us Llc Selective virtualization for security threat detection
US9846776B1 (en) 2015-03-31 2017-12-19 Fireeye, Inc. System and method for detecting file altering behaviors pertaining to a malicious attack
US10474813B1 (en) 2015-03-31 2019-11-12 Fireeye, Inc. Code injection technique for remediation at an endpoint of a network
US11868795B1 (en) 2015-03-31 2024-01-09 Musarubra Us Llc Selective virtualization for security threat detection
US10728263B1 (en) 2015-04-13 2020-07-28 Fireeye, Inc. Analytic-based security monitoring system and method
US9594904B1 (en) 2015-04-23 2017-03-14 Fireeye, Inc. Detecting malware based on reflection
US11113086B1 (en) 2015-06-30 2021-09-07 Fireeye, Inc. Virtual system and method for securing external network connectivity
US10642753B1 (en) 2015-06-30 2020-05-05 Fireeye, Inc. System and method for protecting a software component running in virtual machine using a virtualization layer
US10454950B1 (en) 2015-06-30 2019-10-22 Fireeye, Inc. Centralized aggregation technique for detecting lateral movement of stealthy cyber-attacks
US10726127B1 (en) 2015-06-30 2020-07-28 Fireeye, Inc. System and method for protecting a software component running in a virtual machine through virtual interrupts by the virtualization layer
US11841947B1 (en) 2015-08-05 2023-12-12 Invincea, Inc. Methods and apparatus for machine learning based malware detection
US10715542B1 (en) 2015-08-14 2020-07-14 Fireeye, Inc. Mobile application risk analysis
US10536408B2 (en) * 2015-09-16 2020-01-14 Litéra Corporation Systems and methods for detecting, reporting and cleaning metadata from inbound attachments
US20170078234A1 (en) * 2015-09-16 2017-03-16 Litera Technologies, LLC. Systems and methods for detecting, reporting and cleaning metadata from inbound attachments
US10176321B2 (en) 2015-09-22 2019-01-08 Fireeye, Inc. Leveraging behavior-based rules for malware family classification
US10033747B1 (en) 2015-09-29 2018-07-24 Fireeye, Inc. System and method for detecting interpreter-based exploit attacks
US10887328B1 (en) 2015-09-29 2021-01-05 Fireeye, Inc. System and method for detecting interpreter-based exploit attacks
US9825989B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Cyber attack early warning system
US10817606B1 (en) 2015-09-30 2020-10-27 Fireeye, Inc. Detecting delayed activation malware using a run-time monitoring agent and time-dilation logic
US10706149B1 (en) 2015-09-30 2020-07-07 Fireeye, Inc. Detecting delayed activation malware using a primary controller and plural time controllers
US11244044B1 (en) 2015-09-30 2022-02-08 Fireeye Security Holdings Us Llc Method to detect application execution hijacking using memory protection
US10601865B1 (en) 2015-09-30 2020-03-24 Fireeye, Inc. Detection of credential spearphishing attacks using email analysis
US9825976B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Detection and classification of exploit kits
US10210329B1 (en) 2015-09-30 2019-02-19 Fireeye, Inc. Method to detect application execution hijacking using memory protection
US10873597B1 (en) * 2015-09-30 2020-12-22 Fireeye, Inc. Cyber attack early warning system
US10284575B2 (en) 2015-11-10 2019-05-07 Fireeye, Inc. Launcher for setting analysis environment variations for malware detection
US10834107B1 (en) 2015-11-10 2020-11-10 Fireeye, Inc. Launcher for setting analysis environment variations for malware detection
US10447728B1 (en) 2015-12-10 2019-10-15 Fireeye, Inc. Technique for protecting guest processes using a layered virtualization architecture
US10846117B1 (en) 2015-12-10 2020-11-24 Fireeye, Inc. Technique for establishing secure communication between host and guest processes of a virtualization architecture
US11200080B1 (en) 2015-12-11 2021-12-14 Fireeye Security Holdings Us Llc Late load technique for deploying a virtualization layer underneath a running operating system
US10133866B1 (en) 2015-12-30 2018-11-20 Fireeye, Inc. System and method for triggering analysis of an object for malware in response to modification of that object
US10872151B1 (en) 2015-12-30 2020-12-22 Fireeye, Inc. System and method for triggering analysis of an object for malware in response to modification of that object
US10565378B1 (en) 2015-12-30 2020-02-18 Fireeye, Inc. Exploit of privilege detection framework
US10050998B1 (en) 2015-12-30 2018-08-14 Fireeye, Inc. Malicious message analysis system
US10341365B1 (en) 2015-12-30 2019-07-02 Fireeye, Inc. Methods and system for hiding transition events for malware detection
US10581898B1 (en) 2015-12-30 2020-03-03 Fireeye, Inc. Malicious message analysis system
US10581874B1 (en) 2015-12-31 2020-03-03 Fireeye, Inc. Malware detection system with contextual analysis
US9824216B1 (en) 2015-12-31 2017-11-21 Fireeye, Inc. Susceptible environment detection system
US10445502B1 (en) * 2015-12-31 2019-10-15 Fireeye, Inc. Susceptible environment detection system
US11552986B1 (en) 2015-12-31 2023-01-10 Fireeye Security Holdings Us Llc Cyber-security framework for application of virtual features
US10601863B1 (en) 2016-03-25 2020-03-24 Fireeye, Inc. System and method for managing sensor enrollment
US10616266B1 (en) 2016-03-25 2020-04-07 Fireeye, Inc. Distributed malware detection system and submission workflow thereof
US10476906B1 (en) 2016-03-25 2019-11-12 Fireeye, Inc. System and method for managing formation and modification of a cluster within a malware detection system
US10671721B1 (en) 2016-03-25 2020-06-02 Fireeye, Inc. Timeout management services
US11632392B1 (en) 2016-03-25 2023-04-18 Fireeye Security Holdings Us Llc Distributed malware detection system and submission workflow thereof
US10785255B1 (en) 2016-03-25 2020-09-22 Fireeye, Inc. Cluster configuration within a scalable malware detection system
US10893059B1 (en) 2016-03-31 2021-01-12 Fireeye, Inc. Verification and enhancement using detection systems located at the network periphery and endpoint devices
US11853427B2 (en) 2016-06-22 2023-12-26 Invincea, Inc. Methods and apparatus for detecting whether a string of characters represents malicious activity using machine learning
US10169585B1 (en) 2016-06-22 2019-01-01 Fireeye, Inc. System and methods for advanced malware detection through placement of transition events
US11240262B1 (en) 2016-06-30 2022-02-01 Fireeye Security Holdings Us Llc Malware detection verification and enhancement by coordinating endpoint and malware detection systems
US10462173B1 (en) 2016-06-30 2019-10-29 Fireeye, Inc. Malware detection verification and enhancement by coordinating endpoint and malware detection systems
US10592678B1 (en) 2016-09-09 2020-03-17 Fireeye, Inc. Secure communications between peers using a verified virtual trusted platform module
US10491627B1 (en) 2016-09-29 2019-11-26 Fireeye, Inc. Advanced malware detection using similarity analysis
US10795991B1 (en) 2016-11-08 2020-10-06 Fireeye, Inc. Enterprise search
US10587647B1 (en) 2016-11-22 2020-03-10 Fireeye, Inc. Technique for malware detection capability comparison of network security devices
US10581879B1 (en) 2016-12-22 2020-03-03 Fireeye, Inc. Enhanced malware detection for generated objects
US10552610B1 (en) 2016-12-22 2020-02-04 Fireeye, Inc. Adaptive virtual machine snapshot update framework for malware behavioral analysis
US10523609B1 (en) 2016-12-27 2019-12-31 Fireeye, Inc. Multi-vector malware detection and analysis
US11570211B1 (en) 2017-03-24 2023-01-31 Fireeye Security Holdings Us Llc Detection of phishing attacks using similarity analysis
US10904286B1 (en) 2017-03-24 2021-01-26 Fireeye, Inc. Detection of phishing attacks using similarity analysis
US10791138B1 (en) 2017-03-30 2020-09-29 Fireeye, Inc. Subscription-based malware detection
US10902119B1 (en) 2017-03-30 2021-01-26 Fireeye, Inc. Data extraction system for malware analysis
US10848397B1 (en) 2017-03-30 2020-11-24 Fireeye, Inc. System and method for enforcing compliance with subscription requirements for cyber-attack detection service
US11863581B1 (en) 2017-03-30 2024-01-02 Musarubra Us Llc Subscription-based malware detection
US10554507B1 (en) 2017-03-30 2020-02-04 Fireeye, Inc. Multi-level control for enhanced resource and object evaluation management of malware detection system
US10798112B2 (en) 2017-03-30 2020-10-06 Fireeye, Inc. Attribute-controlled malware detection
US11399040B1 (en) 2017-03-30 2022-07-26 Fireeye Security Holdings Us Llc Subscription-based malware detection
US10601848B1 (en) 2017-06-29 2020-03-24 Fireeye, Inc. Cyber-security system and method for weak indicator detection and correlation to generate strong indicators
US10503904B1 (en) 2017-06-29 2019-12-10 Fireeye, Inc. Ransomware detection and mitigation
US10855700B1 (en) 2017-06-29 2020-12-01 Fireeye, Inc. Post-intrusion detection of cyber-attacks during lateral movement within networks
US10893068B1 (en) 2017-06-30 2021-01-12 Fireeye, Inc. Ransomware file modification prevention technique
US10747872B1 (en) 2017-09-27 2020-08-18 Fireeye, Inc. System and method for preventing malware evasion
US10805346B2 (en) 2017-10-01 2020-10-13 Fireeye, Inc. Phishing attack detection
US11637859B1 (en) 2017-10-27 2023-04-25 Mandiant, Inc. System and method for analyzing binary code for malware classification using artificial neural network techniques
US11108809B2 (en) 2017-10-27 2021-08-31 Fireeye, Inc. System and method for analyzing binary code for malware classification using artificial neural network techniques
US11271955B2 (en) 2017-12-28 2022-03-08 Fireeye Security Holdings Us Llc Platform and method for retroactive reclassification employing a cybersecurity-based global data store
US11005860B1 (en) 2017-12-28 2021-05-11 Fireeye, Inc. Method and system for efficient cybersecurity analysis of endpoint events
US11240275B1 (en) 2017-12-28 2022-02-01 Fireeye Security Holdings Us Llc Platform and method for performing cybersecurity analyses employing an intelligence hub with a modular architecture
US10826931B1 (en) 2018-03-29 2020-11-03 Fireeye, Inc. System and method for predicting and mitigating cybersecurity system misconfigurations
US10956477B1 (en) 2018-03-30 2021-03-23 Fireeye, Inc. System and method for detecting malicious scripts through natural language processing modeling
US11558401B1 (en) 2018-03-30 2023-01-17 Fireeye Security Holdings Us Llc Multi-vector malware detection data sharing system for improved detection
US11856011B1 (en) 2018-03-30 2023-12-26 Musarubra Us Llc Multi-vector malware detection data sharing system for improved detection
US11003773B1 (en) 2018-03-30 2021-05-11 Fireeye, Inc. System and method for automatically generating malware detection rule recommendations
US11075930B1 (en) 2018-06-27 2021-07-27 Fireeye, Inc. System and method for detecting repetitive cybersecurity attacks constituting an email campaign
US11314859B1 (en) 2018-06-27 2022-04-26 FireEye Security Holdings, Inc. Cyber-security system and method for detecting escalation of privileges within an access token
US11882140B1 (en) 2018-06-27 2024-01-23 Musarubra Us Llc System and method for detecting repetitive cybersecurity attacks constituting an email campaign
US11228491B1 (en) 2018-06-28 2022-01-18 Fireeye Security Holdings Us Llc System and method for distributed cluster configuration monitoring and management
US11316900B1 (en) 2018-06-29 2022-04-26 FireEye Security Holdings Inc. System and method for automatically prioritizing rules for cyber-threat detection and mitigation
US11182473B1 (en) 2018-09-13 2021-11-23 Fireeye Security Holdings Us Llc System and method for mitigating cyberattacks against processor operability by a guest process
US11763004B1 (en) 2018-09-27 2023-09-19 Fireeye Security Holdings Us Llc System and method for bootkit detection
US11368475B1 (en) 2018-12-21 2022-06-21 Fireeye Security Holdings Us Llc System and method for scanning remote services to locate stored objects with malware
US11258806B1 (en) 2019-06-24 2022-02-22 Mandiant, Inc. System and method for automatically associating cybersecurity intelligence to cyberthreat actors
US11556640B1 (en) 2019-06-27 2023-01-17 Mandiant, Inc. Systems and methods for automated cybersecurity analysis of extracted binary string sets
US11392700B1 (en) 2019-06-28 2022-07-19 Fireeye Security Holdings Us Llc System and method for supporting cross-platform data verification
US11159464B2 (en) * 2019-08-02 2021-10-26 Dell Products L.P. System and method for detecting and removing electronic mail storms
US11886585B1 (en) 2019-09-27 2024-01-30 Musarubra Us Llc System and method for identifying and mitigating cyberattacks through malicious position-independent code execution
US11637862B1 (en) 2019-09-30 2023-04-25 Mandiant, Inc. System and method for surfacing cyber-security threats with a self-learning recommendation engine
US11843639B2 (en) * 2020-05-29 2023-12-12 Siemens Ltd., China Industrial control system security analysis method and apparatus
US20230199029A1 (en) * 2020-05-29 2023-06-22 Siemens Ltd., China Industrial Control System Security Analysis Method and Apparatus

Also Published As

Publication number Publication date
JP5118020B2 (en) 2013-01-16
EP1877904B1 (en) 2015-12-30
WO2006119508A2 (en) 2006-11-09
WO2006119506A3 (en) 2009-04-16
JP2008545177A (en) 2008-12-11
JP2008547067A (en) 2008-12-25
EP1877904A4 (en) 2013-09-11
CN101495969A (en) 2009-07-29
CA2607005C (en) 2012-02-07
US7836133B2 (en) 2010-11-16
WO2006119508A3 (en) 2009-04-16
CN101558398B (en) 2012-11-28
CA2607005A1 (en) 2006-11-09
WO2006119506A2 (en) 2006-11-09
EP1877904A2 (en) 2008-01-16
WO2006122055A2 (en) 2006-11-16
US20070070921A1 (en) 2007-03-29
US20070079379A1 (en) 2007-04-05
US7548544B2 (en) 2009-06-16
US20070083929A1 (en) 2007-04-12
CA2606998C (en) 2014-09-09
CN101558398A (en) 2009-10-14
EP1877905B1 (en) 2014-10-22
US7854007B2 (en) 2010-12-14
WO2006119509A3 (en) 2009-04-16
WO2006122055A3 (en) 2009-04-30
WO2006119509A2 (en) 2006-11-09
US7712136B2 (en) 2010-05-04
EP1877905A4 (en) 2013-10-09
CN101495969B (en) 2012-10-10
CA2606998A1 (en) 2006-11-09
US7877493B2 (en) 2011-01-25
US20070073660A1 (en) 2007-03-29
EP1877905A2 (en) 2008-01-16
US20070078936A1 (en) 2007-04-05
JP4880675B2 (en) 2012-02-22

Similar Documents

Publication Publication Date Title
US7712136B2 (en) Controlling a message quarantine
US7748038B2 (en) Method and apparatus for managing computer virus outbreaks
US9992165B2 (en) Detection of undesired computer files using digital certificates
US7921063B1 (en) Evaluating electronic mail messages based on probabilistic analysis
US8583787B2 (en) Zero-minute virus and spam detection
US7877807B2 (en) Method of and system for, processing email
US6941348B2 (en) Systems and methods for managing the transmission of electronic messages through active message date updating
US8069481B2 (en) Systems and methods for message threat management
KR20080073301A (en) Electronic message authentication
US20060265459A1 (en) Systems and methods for managing the transmission of synchronous electronic messages
US7958187B2 (en) Systems and methods for managing directory harvest attacks via electronic messages

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION