US9781140B2 - High-yielding detection of remote abusive content - Google Patents


Info

Publication number
US9781140B2
US9781140B2
Authority
US
United States
Prior art keywords
web link
distributed server
server machines
test
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/827,494
Other versions
US20170054739A1 (en)
Inventor
Bradley Wardman
Blake Butler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PayPal Inc
Original Assignee
PayPal Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PayPal Inc filed Critical PayPal Inc
Priority to US14/827,494 priority Critical patent/US9781140B2/en
Assigned to PAYPAL, INC. reassignment PAYPAL, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BUTLER, BLAKE, WARDMAN, BRADLEY
Publication of US20170054739A1 publication Critical patent/US20170054739A1/en
Application granted granted Critical
Publication of US9781140B2 publication Critical patent/US9781140B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416 Event detection, e.g. attack signature detection
    • H04L63/1425 Traffic logging, e.g. anomaly detection
    • H04L63/1441 Countermeasures against malicious traffic

Definitions

  • the present disclosure generally relates to computer systems and, more particularly, to the security of computer systems and Internet services.
  • Cybercrime generally refers to criminal activity involving computer systems. Such criminal activity includes the use of computer systems to perpetrate crimes and illegally access private electronic data. Cybercriminals may gain access to private user account information in a number of ways. For example, cybercriminals may obtain user account credentials and information by exploiting weaknesses in centralized computer systems, by infiltrating local computer systems of users, by tricking users into providing account information, by stealing user account information directly from a company, and by intercepting user account information on a network.
  • Phishing is a type of cybercrime that criminals use to acquire sensitive information such as usernames, passwords, financial account details, social security numbers, and other private data from users.
  • cybercriminals masquerade as a trustworthy entity in an electronic communication, such as an e-mail, instant message, mistyped website, etc.
  • a cybercriminal may send a communication purporting to be from an e-mail provider, social networking website, auction website, bank, or an online payment processor to lure unsuspecting users into providing private and sensitive data.
  • cybercriminals use phishing and other fraudulent schemes to gain trust from users who are unable to determine that an electronic message or website is malicious.
  • Phishing and other fraudulent online schemes continue to increase both in number and in sophistication. Therefore, providing new and improved ways of identifying, mitigating, and blocking malicious content to protect users and organizations is of importance.
  • FIG. 1 is a block diagram illustrating a system architecture, in accordance with various examples of the present disclosure.
  • FIG. 2 is a flow diagram for providing high-yielding detection of remote abusive content, according to an example of the present disclosure.
  • FIG. 3 is a flow diagram for identifying targeted audiences of malicious actors using high-yielding detection of remote abusive content, according to an example of the present disclosure.
  • FIG. 4 is a diagram illustrating an example software application providing high-yielding detection of remote abusive content, in accordance with various examples of the present disclosure.
  • FIG. 5 is a block diagram of an exemplary computer system that may perform one or more of the operations described herein.
  • Systems, methods, and computer program products for providing high-yielding detection of remote abusive content are disclosed.
  • the amount of abusive content on the Internet and the sophistication by which cybercriminals deliver such content continue to increase.
  • cybercriminals have employed a new type of phishing scheme that delivers abusive content to targeted audiences while concealing the same content from service providers, security authorities, law enforcement, and other parties.
  • Cybercriminals conceal abusive content to make it more difficult to detect, identify, and shut down fraudulent schemes.
  • a security authority prevented from accessing abusive content cannot confirm the existence or nature of the content. Further, the security authority generally is unable to block access to or remove the abusive content without proper evidence or justification.
  • a malicious content detection system uses a plurality of distributed server machines to perform testing of a web link from different geographic locations using multiple client configurations. Such testing provides a more comprehensive picture of content returned by a web link under a variety of conditions, thereby allowing a security provider to access and gain visibility into fraudulent schemes concealed by cybercriminals.
  • a malicious content detection system provides a graphical user interface that allows a user to submit a web link for analysis to determine whether the web link is associated with malicious content.
  • the malicious content detection system receives the web link submitted via the graphical user interface and provides the web link to a plurality of geographically distributed server machines having different configurations to test the web link.
  • the malicious content detection system then receives and analyzes web link test results from each of the distributed server machines and displays test results and analysis to the user via the graphical user interface.
  • aspects of the present disclosure provide insight into the behavior of online content accessed from different geographic locations under various conditions to allow discovery, identification, and removal of abusive content that would otherwise remain undetected and unknown.
  • FIG. 1 illustrates an exemplary system architecture 100 in which examples of the present disclosure may be implemented.
  • System architecture 100 includes a plurality of server machines 110 , 110 A, 110 N, one or more data stores 180 , and one or more client devices 102 A, 102 N connected via one or more networks 104 .
  • Network 104 may be a public network (e.g., the Internet), a private network (e.g., local area network (LAN) or wide area network (WAN)), or any combination thereof.
  • network 104 may include the Internet, one or more intranets, wired networks, wireless networks, and/or other appropriate types of communication networks.
  • Network 104 also may comprise a wireless telecommunications network (e.g., cellular phone network) adapted to communicate with other communication networks, such as the Internet.
  • network 104 may include one or more short-range wireless networks or beacon networks.
  • Data store 180 generally refers to persistent storage capable of storing various types of data, such as text, audio, video, and image content.
  • data store 180 may include a network-attached file server, while in other examples data store 180 may include other forms of persistent storage such as an object-oriented database, a relational database, and so forth.
  • Client devices 102 A, 102 N generally may be a personal computer (PC), laptop, mobile phone, tablet computer, server computer, wearable computing device, or any other type of computing device (i.e., a client machine).
  • Client devices 102 A- 102 N may run an operating system (OS) that manages hardware and software of the client devices 102 A- 102 N.
  • a browser (not shown) may run on client devices 102 A- 102 N (e.g., on the OS of client devices 102 A- 102 N).
  • the browser may be a web browser that can access content and services provided by web server 120 , application server 122 , or a combination of web server 120 and application server 122 .
  • Other types of computer programs and computer scripts also may run on client devices 102 A- 102 N.
  • Server machines 110 , 110 A, 110 N each may include one or more web servers 120 , 120 A, 120 N and application servers 122 , 122 A, 122 N.
  • Web servers 120 - 120 N may provide text, audio, image, and video content to and from server machines 110 - 110 N or other sources (e.g., data store 180 ) and client devices 102 A- 102 N.
  • Web servers 120 - 120 N also may provide web-based application services, business logic, and updates to server machines 110 - 110 N and client devices 102 A- 102 N.
  • Server machines 110 - 110 N may locate, access, and consume various forms of content and services from various trusted (e.g., internal, known) web servers 120 - 120 N and application servers 122 - 122 N and various untrusted (e.g., external, unknown) web and application servers using applications, such as a web browser, web servers, various other types of computer applications, etc.
  • Web servers 120 - 120 N also may receive text, audio, video, and image content from client devices 102 A- 102 N, which may be stored in data store 180 for preservation and/or sharing of content.
  • web servers 120 - 120 N are coupled to one or more respective application servers 122 - 122 N that provide application services, data, business logic, and/or APIs to various server machines 110 - 110 N and client devices 102 A- 102 N.
  • application servers 122 - 122 N provide one or more such services independently, without use of web servers 120 - 120 N.
  • web servers 120 - 120 N may provide server machines 110 - 110 N and client devices 102 A- 102 N with access to one or more application server 122 - 122 N services associated with malicious content detection systems 130 - 130 N.
  • Such functionality also may be provided as part of one or more different web applications, standalone applications, systems, plug-ins, web browser extensions, and application programming interfaces (APIs), etc.
  • plug-ins and extensions generally may be referred to, individually or collectively, as “add-ons.”
  • client devices 102 A- 102 N may include an application associated with a service provided by one or more server machines 110 - 110 N (e.g., malicious content detection systems 130 - 130 N).
  • Server machines 110 - 110 N each include respective user interface manager 140 - 140 N modules, content distributor 150 - 150 N modules, profile generator 160 - 160 N modules, content analyzer 170 - 170 N modules, and report manager 180 - 180 N modules. In various examples, such modules may be combined, divided, and organized in various arrangements on one or more computing devices.
  • functionality of one or more server machines 110 A- 110 N may be performed by one or more other server machines 110 A- 110 N, in whole or in part.
  • functionality attributed to a particular component may be performed by different or multiple components operating together.
  • Server machines 110 - 110 N may be accessed as a service provided by systems or devices via appropriate application programming interfaces (APIs) and data feeds, and thus are not limited to use with websites. Further, server machines 110 - 110 N may be associated with and/or utilize one or more malicious content detection systems 130 - 130 N.
  • one or more server machines 110 - 110 N may be specialized security devices dedicated to providing malicious content detection system 130 - 130 N services.
  • server machines 110 - 110 N may include one or more of a server computer, router, a switch, a firewall, a dedicated computing device, a shared computing device, a virtual machine, virtual machine guests, etc.
  • server machines 110 - 110 N perform activities associated with malicious content detection systems 130 - 130 N in addition to other security activities, such as network security, application security, file security, data security, etc.
  • FIG. 2 is a flow diagram for providing high-yielding detection of remote abusive content, according to an example of the present disclosure.
  • the method 200 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a general purpose computer system, dedicated machine, or processing device), firmware, or a combination thereof. Examples of method 200 are described with respect to malicious content detection system 130 for the sake of consistency. In addition, such examples generally apply to other malicious content detection systems 130 A- 130 N, as described herein.
  • Method 200 begins at block 202 when user interface manager 140 of malicious content detection system 130 generates a graphical user interface that allows a user to submit a web link for determining whether the web link is associated with malicious content.
  • user interface manager 140 generates a user interface allowing a user to submit one or more pieces of content for analysis to determine whether the content is associated with potential or actual malicious activity.
  • user interface manager 140 may generate and display a graphical user interface that allows the user to input or paste one or more web links (e.g., uniform resource locators, web addresses, IP addresses, or other identifiers of a location where content is located) for analysis.
  • Web links generally may refer to a location of content that is accessible over any network 104 , such as web content, hyperlinked documents, cloud storage, network storage, etc.
  • user interface manager 140 provides a graphical user interface that allows a user to forward or upload an e-mail or other document comprising one or more web links for analysis.
  • the graphical user interface provided by user interface manager 140 may allow users to forward one or more e-mails comprising one or more web links, forward one or more documents comprising web links via e-mail, and/or upload e-mail or other documents comprising one or more web links for analysis.
  • user interface manager 140 of malicious content detection system 130 receives the web link from the user via the graphical user interface.
  • user interface manager 140 receives one or more web links for analysis from the user via the graphical user interface.
  • user interface manager 140 validates and formats the web links prior to analysis and testing.
  • User interface manager 140 then provides one or more of the web links received to content distributor 150 for testing across one or more distributed server machines 110 A- 110 N.
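The validate-and-format step mentioned above might look like the following standard-library sketch. The accepted schemes and the default-to-http behavior are assumptions for illustration; the patent does not specify them.

```python
# Hedged sketch of web link validation/normalization before testing.
from urllib.parse import urlsplit

def normalize_link(raw):
    raw = raw.strip()
    if "://" not in raw:
        raw = "http://" + raw  # assumption: default to http when no scheme given
    parts = urlsplit(raw)
    if parts.scheme not in ("http", "https") or not parts.netloc:
        raise ValueError(f"unsupported link: {raw!r}")
    return parts.geturl()

link = normalize_link("  example.test/login ")
```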
  • content distributor 150 of malicious content detection system 130 sends the web link to multiple distributed server machines 110 - 110 N to allow each of the distributed server machines 110 - 110 N to test the web link.
  • content distributor 150 determines one or more testing scenarios for testing web links provided by a user.
  • content distributor 150 may determine one or more geographic locations, one or more service providers, one or more networks, one or more network addresses (e.g., IP addresses), one or more network address ranges, one or more days, one or more times, one or more computer system versions and settings, one or more web browser versions and settings, one or more software application versions and settings, one or more installed web browsers, one or more installed applications, one or more installed character sets, one or more installed languages, one or more installed fonts, and/or one or more other user profile attributes to use for testing the web link.
  • content distributor 150 determines one or more of the above-noted user profile attributes to use for testing the web link based on available distributed server machines 110 A- 110 N, configurations available for current or future use on distributed server machines 110 A- 110 N, and one or more profile attributes available for current or future use on distributed server machines 110 A- 110 N, etc.
  • content distributor 150 searches for and identifies one or more distributed server machines 110 A- 110 N configured to test or capable of testing the web link based on one or more of the user profile attributes determined for testing the web link.
  • one or more of web link testing scenarios, user profile attributes to use for web link testing, and distributed server machine 110 A- 110 N configurations are determined, at least in part, based on one or more attributes or criteria provided by the user via the graphical user interface.
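One simple way to realize the scenario determination above is to enumerate combinations of profile attributes. The attribute names and values below are illustrative placeholders, not values from the patent.

```python
# Sketch: build test scenarios as the cross product of a few of the
# profile attributes listed above (locations, browsers, languages).
from itertools import product

locations = ["US", "BR", "JP"]
browsers = ["chrome-90", "firefox-88"]
languages = ["en-US", "pt-BR"]

scenarios = [
    {"location": loc, "browser": br, "language": lang}
    for loc, br, lang in product(locations, browsers, languages)
]
# 3 locations x 2 browsers x 2 languages = 12 scenarios
```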
  • content distributor 150 sends a web link to multiple distributed server machines 110 A- 110 N instructing each of the distributed server machines 110 A- 110 N to test the web link.
  • content distributor 150 may provide a general instruction for one or more of the distributed server machines 110 A- 110 N to test the web link.
  • Content distributor 150 also may provide specific instructions to each of one or more of the distributed server machines 110 A- 110 N to test the web link under one or more particular configurations or testing scenarios.
  • content distributor 150 sends the web link to distributed server machines 110 A- 110 N associated with a particular organization, such as a network service provider, a computer security provider, a financial services company, or another type of organization.
  • Content distributor 150 also may send the web link to distributed server machines 110 A- 110 N operated by a plurality of different organizations.
  • content distributor 150 may distribute the web link to at least one distributed server machine 110 A- 110 N in each of a plurality of different, unrelated companies participating in a community effort to identify and remove malicious content from the Internet for the benefit of users around the world.
  • various organizations may participate collectively in providing testing and services associated with malicious content detection systems 130 - 130 N.
  • content analyzer 170 of malicious content detection system 130 receives a test result for the web link from each of the distributed server machines 110 A- 110 N.
  • each of a plurality of distributed server machines 110 A- 110 N first receives the web link to test from content distributor 150 .
  • Content distributor 150 also may provide general or specific instructions for testing the web link to one or more of the distributed server machines 110 A- 110 N either together or separate from the web link.
  • one or more of the distributed server machines 110 A- 110 N adjusts or updates configuration settings to perform testing of the web link received from content distributor 150 .
  • each of one or more distributed server machines 110 A- 110 N may adjust or update various configuration settings including, but not limited to, a physical computer system configuration, a virtual computer system configuration, a web browser configuration, a software application configuration, a software installation, one or more user agent attributes, one or more user profile attributes, etc.
  • profile generators 160 A- 160 N of one or more of the distributed server machines 110 A- 110 N generate randomized or semi-randomized configuration settings and user profile attributes to use when testing the web link based on a geographic location associated with a respective distributed server machine.
  • Such configuration settings also may be generated by and provided to respective distributed server machines 110 A- 110 N, in whole or in part, by profile generator 160 of server machine 110 .
  • randomized configurations based on a statistical sampling of user profile attributes associated with a geographic location may be used, for example, to prevent malicious content providers and other cybercriminals from detecting, redirecting, and/or blocking testing of web links performed by distributed server machines 110 A- 110 N.
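Randomized configuration generation based on a statistical sampling of a location's user attributes could be sketched as weighted sampling. The geography, attribute names, and weights below are invented for illustration only.

```python
# Sketch: sample profile attributes with weights approximating their
# prevalence in a target geography, so test traffic resembles real
# local users and is harder for cybercriminals to fingerprint.
import random

def generate_profile(geo_stats, seed=None):
    rng = random.Random(seed)
    return {
        attr: rng.choices(list(dist), weights=list(dist.values()))[0]
        for attr, dist in geo_stats.items()
    }

brazil_stats = {  # hypothetical prevalence data
    "browser": {"chrome": 0.7, "firefox": 0.2, "edge": 0.1},
    "language": {"pt-BR": 0.9, "en-US": 0.1},
}
profile = generate_profile(brazil_stats, seed=42)
```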
  • distributed server machines 110 A- 110 N perform one or more tests using the web link.
  • one or more distributed server machines 110 A- 110 N update one or more respective configuration settings and/or user profile attributes for one or more different tests of the web link.
  • Each distributed server machine 110 A- 110 N then receives and records results of the web link testing.
  • each distributed server machine 110 A- 110 N may record configuration settings and user profile attributes used to test the web link along with results of testing the web link.
  • web link testing results may include, but are not limited to, a resolved network address (e.g., IP address) for the web link, a subnet range for the resolved network address, a network used to perform the test, a network service provider used to perform the test, a content length value received or determined based on testing the web link, full or partial hashed or unmodified content received when testing the web link, etc.
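A single test result with the fields enumerated above might be recorded as a plain dictionary. The use of SHA-256 for the content hash and the /24 subnet derivation are assumptions; the patent names neither.

```python
# Sketch: record one web link test result (resolved address, subnet,
# network, provider, content length, content hash).
import hashlib

def make_test_result(url, resolved_ip, network, provider, body):
    return {
        "url": url,
        "resolved_ip": resolved_ip,
        # assumption: derive a /24 subnet from the resolved IPv4 address
        "subnet": ".".join(resolved_ip.split(".")[:3]) + ".0/24",
        "network": network,
        "provider": provider,
        "content_length": len(body),
        "content_sha256": hashlib.sha256(body).hexdigest(),
    }

r = make_test_result("http://example.test", "203.0.113.7",
                     "net-a", "isp-x", b"<html>phish</html>")
```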
  • one or more distributed server machines 110 A- 110 N capture an image or copy of content retrieved when testing the web link.
  • the image or copy of the retrieved content then may be provided to a security authority (e.g., a blacklist operator or maintainer) that is unable to access or confirm existence of abusive content associated with the web link, thus providing the security authority with tangible evidence to justify blocking and/or taking down the content.
  • the security authority also may perform independent testing of the web link (or a copy of the content) using one or more configurations and/or user profiles, for example, in a sandbox environment to determine behavior of the web link and associated content.
  • content analyzer 170 then receives and analyzes test results for the web link from each of the distributed server machines 110 A- 110 N. In one example, content analyzer 170 compares test results received for the web link to determine whether distributed server machines 110 A- 110 N from a particular location and/or having certain configuration or user profile attributes are being allowed access or are being directly or indirectly (e.g., via subtle redirection) denied access to malicious or safe content associated with a web link. For example, content analyzer 170 may analyze respective client profiles used by different distributed server machines 110 A- 110 N to test the web link based on respective test results received from each of the distributed server machines 110 A- 110 N. Content analyzer 170 then may determine similarities between client profiles used by the different distributed server machines 110 A- 110 N that were allowed or denied access to one or more different forms of content associated with the web link.
  • content analyzer 170 may compare the respective test results received for the web link in view of the client profiles to determine one or more client profile attributes being targeted by malicious content delivered via the web link. In addition, content analyzer 170 may determine based on the test results that the web link is associated with malicious content when the web link returns different, non-similar, or unrelated content to various distributed server machines 110 A- 110 N associated with different geographic locations, having different configuration settings, and/or having different user profile attributes.
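The comparison step above can be sketched by grouping results on the content each node received and intersecting the profiles of the nodes that were served the suspicious payload. The data and field names are illustrative assumptions.

```python
# Sketch: infer targeted profile attributes by keeping only the
# attribute/value pairs shared by every node that received the payload.

def targeted_attributes(results, payload_hash):
    served = [r for r in results if r["content_hash"] == payload_hash]
    if not served:
        return {}
    common = set(served[0]["profile"].items())
    for r in served[1:]:
        common &= set(r["profile"].items())
    return dict(common)

results = [
    {"content_hash": "bad", "profile": {"geo": "BR", "lang": "pt-BR"}},
    {"content_hash": "bad", "profile": {"geo": "BR", "lang": "en-US"}},
    {"content_hash": "ok", "profile": {"geo": "US", "lang": "en-US"}},
]
# targeted_attributes(results, "bad") -> {"geo": "BR"}
```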
  • report manager 180 of malicious content detection system 130 generates a report comprising the test results for the web link received from the distributed server machines 110 A- 110 N.
  • report manager 180 generates a report comprising the test results received for the web link and analysis performed by content analyzer 170 .
  • report manager 180 generates a web link test report based on one or more user configuration settings, query criteria, and/or reporting criteria to provide and format web link test results, web link test result analysis, and/or web link test results based on user preferences.
  • report manager 180 of malicious content detection system 130 provides the report comprising the test results for the web link to the user.
  • report manager 180 sends the test results for the web link to a user device 102 A- 102 N for display to the user.
  • report manager 180 may display a new or updated version of a graphical user interface comprising the test results for the web link.
  • report manager 180 may refresh an existing graphical user interface to provide the test results.
  • Report manager 180 also may store the results in a particular location of data store 180 (e.g., local storage, cloud storage, etc.) that is communicated to the user via an electronic message.
  • FIG. 3 is a flow diagram for identifying targeted audiences of malicious actors using high-yielding detection of remote abusive content, according to an example of the present disclosure.
  • the method 300 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a general purpose computer system, dedicated machine, or processing device), firmware, or a combination thereof. Examples of method 300 are described with respect to malicious content detection system 130 for the sake of consistency. In addition, such examples generally apply to other malicious content detection systems 130 A- 130 N, as described herein.
  • Method 300 begins at block 302 when user interface manager 140 of malicious content detection system 130 displays an interactive graphical user interface on a computing device that allows a user to submit a web link for analysis to determine whether the web link is associated with malicious content.
  • user interface manager 140 generates and displays a user interface that allows a user to input one or more web links or to upload one or more web, e-mail, or other documents comprising at least one web link for analysis to determine whether the web links are associated with malicious activity, such as a phishing campaign targeting one or more user audiences.
  • content distributor 150 of malicious content detection system 130 sends the web link provided by the user via the graphical user interface to a plurality of distributed server machines 110 A- 110 N to test the web link under various conditions in different geographic locations.
  • content distributor 150 receives one or more web links or a document comprising one or more web links submitted by the user via the graphical user interface.
  • user interface manager 140 validates and formats the web links prior to testing and analyzing behavior of the web links.
  • User interface manager 140 then may provide the web links to content distributor 150 for distributed testing using one or more distributed server machines 110 A- 110 N situated in various geographic locations.
  • profile generator 160 of malicious content detection system 130 generates a plurality of test user profiles for testing the web link.
  • content distributor 150 determines one or more geographic locations, one or more service providers, one or more networks, one or more network addresses (e.g., IP addresses), one or more network address ranges, one or more days, one or more times, one or more computer system versions and settings, one or more web browser versions and settings, one or more software application versions and settings, one or more installed web browsers, one or more installed applications, one or more installed character sets, one or more installed languages, one or more installed fonts, and/or one or more other user profile attributes to use for testing the web link.
  • Profile generator 160 then generates one or more test user profiles for each of a plurality of distributed server machines 110 A- 110 N to use for testing the web link.
  • profile generator 160 of server machine 110 generates a high-level test plan and general test user profiles to use for testing a web link. For example, profile generator 160 may determine that a web link is to be tested using at least ten different IP addresses in each of 100 countries to identify targets of a suspected or known phishing campaign. Respective profile generators 160 A- 160 N of each distributed server machine 110 A- 110 N then may generate additional test user profiles based on, and to mimic, known or expected localized computer system configurations, web browser configurations, software application configurations, software installations, user agent attributes, user profile attributes, etc.
  • one or more of the server machine 110 and distributed server machines 110 A- 110 N may generate and use test user profile data based on a statistical sampling of user attributes corresponding to a location to test the web link.
  • test user profile data may be generated randomly or semi-randomly to avoid detection of the web link testing and/or to avoid concealment of malicious content from security authorities or other non-targeted enforcement organizations by cybercriminals.
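The high-level plan in the example above (N addresses per country) can be sketched as a nested expansion. The country codes and address-slot abstraction are placeholders, not details from the patent.

```python
# Sketch: expand a high-level plan into one test slot per
# (country, address) pair, to be filled in by local profile generators.

def build_test_plan(countries, addrs_per_country):
    return [
        {"country": c, "address_slot": i}
        for c in countries
        for i in range(addrs_per_country)
    ]

plan = build_test_plan(["US", "BR", "JP"], addrs_per_country=10)
# len(plan) == 30
```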
  • content analyzer 170 of malicious content detection system 130 receives test results for the web link from each of the respective distributed server machines.
  • content analyzer 170 receives at least one test result for the web link from each of a plurality of distributed server machines 110 A- 110 N.
  • Web link test results may comprise, for example, one or more of a resolved network address (e.g., IP address) for the web link, a subnet range for the resolved network address, a network used to perform the test, a network service provider used to perform the test, a content length value received or determined based on a test of the web link, full or partial hashed and/or unmodified web or other content received when testing the web link, etc.
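A test result carrying the fields listed above might be modeled as a simple record. The class and field names below are illustrative assumptions, not identifiers from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LinkTestResult:
    """One distributed server's observation of a tested web link."""
    server_id: str
    location: str
    resolved_ip: Optional[str]   # None when resolution was blocked
    subnet: Optional[str]        # subnet range for the resolved address
    provider: str                # network service provider used for the test
    content_length: int
    content_hash: Optional[str]  # full or partial hash of the fetched body
    profile: dict = field(default_factory=dict)  # test user profile used
```

Carrying the test user profile alongside the result supports the later analysis step, where results are compared in view of the client profiles that produced them.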
  • web link test results may comprise or be accompanied by one or more test user profiles, computer system configurations, software application configurations, user agent data, and/or other information pertaining to conditions that a respective distributed server machine 110 A- 110 N uses to test the web link.
  • one or more test results for the web link may comprise or be accompanied by an image or copy of content captured by a distributed server machine 110 A- 110 N when testing the web link.
  • the image or copy of the content then may be provided to a security authority (e.g., blacklist maintainer, spam filter, virus protection service, etc.) that is being directly or passively blocked from accessing and proving the existence of malicious content associated with the web link.
  • Such evidence of malicious content may be provided indirectly to give the security authority the power (e.g., authority or legal justification) to block or remove the malicious content, even when cybercriminals have directly or passively concealed the abusive content from an authority.
  • content analyzer 170 analyzes test results for the web link and instructs one or more of the same or different distributed server machines 110 A- 110 N to perform additional testing of the web link. For example, content analyzer 170 may determine based on a first round of web link testing using a few different distributed server machines 110 A- 110 N in a plurality of countries that access to content associated with a web link is being allowed in some countries and blocked in other countries. Based on the result, content analyzer may determine that the countries where content associated with the web link is accessible are countries potentially being targeted in a phishing campaign. Content analyzer 170 then may instruct various distributed server machines 110 A- 110 N in certain geographic locations to perform one or more rounds of additional and more focused testing of a web link to determine more specific details about one or more targeted or blocked audiences.
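The two-round strategy described above (broad probing, then focused follow-up in the countries where content was served) can be sketched roughly as below; the `plan_focused_round` helper and its result shape are hypothetical.

```python
def plan_focused_round(first_round):
    """Given first-round results keyed by country, select countries where
    the web link served content (potential phishing targets) for a deeper,
    more focused round of testing."""
    accessible = [c for c, r in first_round.items() if r.get("content_hash")]
    blocked = [c for c, r in first_round.items() if not r.get("content_hash")]
    # Escalate probing in the countries the campaign appears to target;
    # the probe count of 10 is an arbitrary illustrative value.
    return {
        "focus_countries": accessible,
        "skip_countries": blocked,
        "probes_per_country": 10 if accessible else 0,
    }
```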
  • content analyzer 170 of malicious content detection system 130 determines, based on the test results, that the web link is associated with malicious content.
  • content analyzer 170 compares a full or partial amount of raw or hashed content retrieved by various distributed server machines 110 A- 110 N that tested the web link.
  • content analyzer 170 determines that a web link delivers similar content or the same content to a plurality of distributed server machines 110 A- 110 N with different test user profiles and configuration settings in various geographic locations.
  • content analyzer 170 determines that a web link delivers different, altered, or modified content to groups of one or more distributed server machines 110 A- 110 N with different test user profiles and configuration settings in various geographic locations.
  • content analyzer 170 may determine that there is an elevated or actual risk that the web link is associated with malicious content.
  • content analyzer 170 compares web content or computer code gathered by respective distributed server machines 110 A- 110 N that performed testing of a web link using different user profile attributes and configuration settings. For example, a plurality of distributed server machines 110 A- 110 N may test a web link simultaneously, in groups, or separately during an assigned, scheduled, or unscheduled period of time.
  • content analyzer 170 compares web content or computer code associated with the same web link from a plurality of distributed server machines 110 A- 110 N. Content analyzer 170 then determines whether the web content or computer code is the same, is partially the same (or different), is substantially the same (or different), or has no similarities.
  • the web content or computer code may be identical, associated or derivative copies (e.g., translations), partial matches (e.g., include at least some of the same content or code), unrelated, etc.
  • content analyzer 170 determines whether web test results are the same or different based on a threshold, for example, to account for minor variations occurring across different technology environments or periodic updates of web content.
  • a user or an administrator specifies a threshold on a graphical user interface to indicate whether content analyzer 170 is to determine matching content based on 100% similarity, at least 90% similarity, at least 75% similarity, at least 50% similarity, or some other amount of similarity between test results received for a web link.
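One plausible way to implement such a configurable similarity threshold is a sequence-similarity ratio, sketched below with Python's standard `difflib`. The disclosure does not specify the comparison method used by content analyzer 170, so this is only an illustration.

```python
from difflib import SequenceMatcher

def results_match(content_a, content_b, threshold=0.90):
    """Treat two fetched bodies as 'matching' when their similarity ratio
    meets a configurable threshold, tolerating minor per-region variation
    (timestamps, ad rotation, localized footers)."""
    ratio = SequenceMatcher(None, content_a, content_b).ratio()
    return ratio >= threshold, ratio
```

A threshold of 1.0 corresponds to the "100% similarity" setting described above; lowering it to 0.90 or 0.75 relaxes the match accordingly.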
  • content analyzer 170 determines that web content or computer code returned by a web link to various distributed server machines 110 A- 110 N is different, meaning that the web content or code is unrelated semantically (e.g., based on words, phrases, sentences, paragraphs, organization, functions, procedures, computer code variables, computer code objects, computer code structures, etc.) and/or based on subject matter.
  • different web results based on subject matter may exist when test results from one distributed server machine 110 A discuss “vacations” and test results from another distributed server machine 110 N discuss “college tuition”.
  • different computer code based on subject matter may exist when test results from one distributed server machine 110 A use one computer language while test results from another distributed server machine 110 N (for the same web link) use a different computer language.
  • content analyzer 170 of malicious content detection system 130 compares the test results received for the web link in view of client profiles used by each distributed server machine 110 A- 110 N to determine one or more client profile attributes being targeted by the malicious content associated with the web link.
  • content analyzer 170 determines that web link test results indicate that the web link delivers different content to different simulated audiences or conceals the content from certain audiences (e.g., as tested and as reported by a plurality of distributed server machines 110 A- 110 N situated in different geographic locations).
  • Content analyzer 170 then may examine user profile attributes, configuration settings and other information associated with test user profiles to determine one or more representative user audience profiles being allowed and/or denied access to content associated with a web link.
  • representative audience profiles for targeted user audiences determined by content analyzer 170 are reported to a user via a graphical user interface with or independent of web link test results and analysis.
  • representative audience profiles for users that are unable to access the content due to concealment by a cybercriminal or other party may be reported to a user via a graphical user interface along with or separate from web link test results.
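Deriving a representative audience profile from the set of test user profiles that were served (or denied) the content might look like the following sketch, which simply takes the most common value of each attribute. The aggregation rule is an assumption; the disclosure does not describe a specific method.

```python
from collections import Counter

def representative_profile(profiles):
    """Summarize the most common value of each attribute across the test
    profiles in a group (e.g., those allowed access to the content),
    yielding a rough representative audience profile."""
    summary = {}
    keys = {k for p in profiles for k in p}
    for key in keys:
        values = [p[key] for p in profiles if key in p]
        summary[key] = Counter(values).most_common(1)[0][0]
    return summary
```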
  • report manager 180 of malicious content detection system 130 displays a report comprising the test results for the web link to the user via the interactive graphical user interface.
  • report manager 180 generates and displays a report comprising the test results for the web link received from the distributed server machines 110 A- 110 N and test result analysis performed by content analyzer 170 .
  • the report generated and displayed by report manager 180 includes a navigable two-dimensional or three-dimensional map that illustrates web link test results for each of various geographically distributed server machines 110 A- 110 N using one or more colors and informational annotations.
  • the map may illustrate one or more servers and/or geographic locations where access to content associated with a web link was allowed, blocked, hidden, redirected, altered, etc.
  • the report may indicate one or more distributed server machines 110 A- 110 N or geographic locations where testing of the web link resulted in a redirection to substitute content or another website to conceal malicious content directed at particular users, user profile attributes, software configurations, computer system configurations, etc.
  • FIG. 4 is a diagram illustrating an example software application providing high-yielding detection of remote abusive content, in accordance with various examples of the present disclosure.
  • Diagram 400 includes an example interactive graphical user interface associated with a malicious content detection system software application 402 .
  • Malicious content detection system software application 402 includes a content identifier control 404 with a web link for testing, a text size adjustment control 406 , a configuration settings control 408 , a search initiation control 410 , a server machine listing 412 comprising a plurality of server machines 110 A- 110 N used to test the web link, a listing of resolved IP addresses 414 received for the web link by each of the server machines 110 A- 110 N, a listing of corresponding subnet ranges 416 determined for the web link by each of the server machines 110 A- 110 N, a listing of corresponding service providers 418 used by each of the server machines 110 A- 110 N to test the web link, a listing of content length values 420 received or determined by each of the server machines when testing the web link, a listing of hashed content 422 received by each of the server machines when testing the web link, and a listing of quick indicators 424 summarizing the test results.
  • user interface manager 140 of malicious content detection system 130 displays the graphical user interface 402 for use on a client device 102 A- 102 N of a user.
  • the user may adjust text size using text size adjustment control 406 and configuration settings using configuration settings control 408 .
  • the user enters a web link into content identifier control 404 and selects the search initiation control 410 to submit the web link for analysis to determine whether the web link is associated with malicious content.
  • user interface manager 140 receives the web link submitted by the user via the graphical user interface 402 .
  • Content distributor 150 then provides the web link to a plurality of geographically distributed server machines 110 A- 110 N to test the web link for the purposes of determining whether the web link is associated with malicious content.
  • Content analyzers 170 A- 170 N of respective distributed server machines 110 A- 110 N then each generate test user profiles to use for testing the web link.
  • Content analyzers 170 A- 170 N then each test the web link, collect respective results from testing the web link, and return the respective test results to content analyzer 170 of server machine 110 .
  • content analyzer 170 collects and analyzes the web link test results received from the distributed server machines 110 A- 110 N.
  • Web link test results provided by each of the distributed server machines 110 A- 110 N may include identification and location information of a server machine, a resolved IP address received for the web link by the server machine, a subnet range for the resolved IP address of the web link, a network or service provider used to access the web link, a content length value received or determined when testing the web link, a full or partial listing of hashed content retrieved when testing the web link (e.g., where the hashing of the retrieved content was performed by a respective distributed server machine 110 A- 110 N), etc.
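The per-server hashing of retrieved content mentioned above can be illustrated with a standard cryptographic hash. SHA-256 is an assumption here, as the disclosure does not name a specific hash function.

```python
import hashlib

def hash_content(body, partial_bytes=None):
    """Hash fetched content on the distributed server before reporting it
    back, so large bodies can be compared centrally without shipping them.
    `partial_bytes` optionally hashes only a leading slice of the body."""
    data = body if partial_bytes is None else body[:partial_bytes]
    return hashlib.sha256(data).hexdigest()
```

Comparing the resulting digests is how the central content analyzer can cheaply decide whether two servers received identical content.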
  • report manager 180 displays the web link test results and associated analysis on the graphical user interface associated with the malicious content detection system software application 402 .
  • server machine listing 412 indicates that the web link entered in content identifier control 404 was tested using a plurality of distributed server machines 110 A- 110 N in various geographic locations.
  • the listing of resolved IP addresses 414 and the listing of corresponding subnet ranges 416 indicate that most of the distributed server machines 110 A- 110 N were directed to the same location by the web link. However, a server machine in Turkey was directed to a different location by the web link, and server machines in China and Malaysia were not used in testing the web link.
  • the listing of content length values 420 received or determined by each of the server machines 110 A- 110 N and the listing of hashed content 422 received by each of the server machines when testing the web link indicate that servers in the United States, Canada, Germany, Pakistan, and Australia were forbidden from accessing content associated with the web link.
  • server machines in Brazil, the United Kingdom, France, Italy, Russia, India, and Thailand received matching content when testing the web link.
  • a server machine in Turkey received different content compared to the other server machines 110 A- 110 N that received content when testing the web link (e.g., possibly because a phishing scheme is being targeted at users located in Turkey).
  • the listing of quick indicators 424 indicates server machines 110 A- 110 N that were unable to access or were denied access to content associated with the web link (e.g., “ ⁇ ”), that accessed content associated with the web link and received a result matching a majority of other server machines 110 A- 110 N (e.g., “+”), that accessed content associated with the web link and received a result that does not match a majority of other server machines 110 A- 110 N (e.g., “ ⁇ ”), and that were not used to test the web link or had other issues in testing the web link (e.g., “ ⁇ ”).
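The quick-indicator logic can be sketched as a simple classification against the majority result. The indicator characters below are placeholders, since FIG. 4 uses its own symbols.

```python
from collections import Counter

def quick_indicator(result, all_hashes):
    """Map one server's test result to a one-character status flag,
    relative to the majority content hash seen across all servers."""
    if result.get("skipped"):
        return "?"             # not used to test the link / other issue
    h = result.get("content_hash")
    if h is None:
        return "x"             # access denied or content unreachable
    majority, _count = Counter(all_hashes).most_common(1)[0]
    return "+" if h == majority else "-"   # matches / deviates from majority
```

A deviating "-" result, such as the Turkey example above, is exactly the signal that flags a potentially targeted audience.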
  • FIG. 5 illustrates a diagram of a machine in the exemplary form of a computer system 500 , within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed.
  • the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet.
  • the machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a wearable computing device, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the exemplary computer system 500 includes a processing device (processor) 502 , a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), double data rate (DDR SDRAM), or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 518 , which communicate with each other via a bus 530 .
  • Processor 502 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 502 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processor 502 also may be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processor 502 is configured to execute instructions 522 for performing the operations and steps discussed herein.
  • the computer system 500 also may include a network interface device 508 .
  • the computer system 500 may further include a video display unit 510 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 512 (e.g., a keyboard), a cursor control device 514 (e.g., a mouse), and a signal generation device 516 (e.g., a speaker).
  • the data storage device 518 may include a computer-readable storage medium 528 on which is stored one or more sets of instructions 522 (e.g., software computer instructions) embodying any one or more of the methodologies or functions described herein.
  • the instructions 522 also may reside, completely or at least partially, within the main memory 504 and/or within the processor 502 during execution thereof by the computer system 500 , the main memory 504 and the processor 502 also constituting computer-readable storage media.
  • the instructions 522 may be transmitted or received over a network 520 via the network interface device 508 .
  • the instructions 522 include instructions for one or more modules of a malicious content detection system (e.g., malicious content detection system 130 , 130 A, 130 N of FIG. 1 ) and/or a software library containing methods that call a malicious content detection system 130 , 130 A, 130 N.
  • While the computer-readable storage medium 528 is shown in an example to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • The term “computer-readable storage medium” also may include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
  • the term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.

Abstract

Methods, systems, and computer program products for providing high-yielding detection of remote abusive content are disclosed. A computer-implemented method may include generating a graphical user interface allowing users to submit a web link for analysis to determine whether the web link is associated with malicious content, receiving the web link from the user via the graphical user interface, sending the web link to a plurality of distributed server machines to allow each of the distributed server machines to test the web link, generating a plurality of test user profiles to test the web link, testing the web link by each of the distributed server machines using one or more of the test user profiles, receiving a test result for the web link from each of the distributed server machines, and displaying a report comprising the test results for the web link to the user via the graphical user interface.

Description

TECHNICAL FIELD
The present disclosure generally relates to computer systems and, more particularly, to the security of computer systems and Internet services.
BACKGROUND
Cybercrime generally refers to criminal activity involving computer systems. Such criminal activity includes the use of computer systems to perpetrate crimes and illegally access private electronic data. Cybercriminals may gain access to private user account information in a number of ways. For example, cybercriminals may obtain user account credentials and information by exploiting weaknesses in centralized computer systems, by infiltrating local computer systems of users, by tricking users into providing account information, by stealing user account information directly from a company, and by intercepting user account information on a network.
Phishing is a type of cybercrime that criminals use to acquire sensitive information such as usernames, passwords, financial account details, social security numbers, and other private data from users. In phishing, cybercriminals masquerade as a trustworthy entity in an electronic communication, such as an e-mail, instant message, mistyped website, etc. For example, a cybercriminal may send a communication purporting to be from an e-mail provider, social networking website, auction website, bank, or an online payment processor to lure unsuspecting users into providing private and sensitive data. Thus, cybercriminals use phishing and other fraudulent schemes to gain trust from users who are unable to determine that an electronic message or website is malicious.
Phishing and other fraudulent online schemes continue to increase both in number and in sophistication. Therefore, providing new and improved ways of identifying, mitigating, and blocking malicious content to protect users and organizations is of importance.
BRIEF DESCRIPTION OF THE DRAWINGS
Various examples of the present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various examples of the disclosure. In the drawings, like reference numbers may indicate identical or functionally similar elements. The drawing in which an element first appears is generally indicated by the left-most digit in the corresponding reference number.
FIG. 1 is a block diagram illustrating a system architecture, in accordance with various examples of the present disclosure.
FIG. 2 is a flow diagram for providing high-yielding detection of remote abusive content, according to an example of the present disclosure.
FIG. 3 is a flow diagram for identifying targeted audiences of malicious actors using high-yielding detection of remote abusive content, according to an example of the present disclosure.
FIG. 4 is a diagram illustrating an example software application providing high-yielding detection of remote abusive content, in accordance with various examples of the present disclosure.
FIG. 5 is a block diagram of an exemplary computer system that may perform one or more of the operations described herein.
DETAILED DESCRIPTION
Systems, methods, and computer program products for providing high-yielding detection of remote abusive content are disclosed. The amount of abusive content on the Internet and the sophistication by which cybercriminals deliver such content continue to increase. For example, cybercriminals have employed a new type of phishing scheme that delivers abusive content to targeted audiences while concealing the same content from service providers, security authorities, law enforcement, and other parties. Cybercriminals conceal abusive content to make it more difficult to detect, identify, and shut down fraudulent schemes. For example, a security authority prevented from accessing abusive content cannot confirm the existence or nature of the content. Further, the security authority generally is unable to block access to or remove the abusive content without proper evidence or justification.
In examples of the present disclosure, a malicious content detection system uses a plurality of distributed server machines to perform testing of a web link from different geographic locations using multiple client configurations. Such testing provides a more comprehensive picture of content returned by a web link under a variety of conditions, thereby allowing a security provider to access and gain visibility into fraudulent schemes concealed by cybercriminals.
In an example, a malicious content detection system provides a graphical user interface that allows a user to submit a web link for analysis to determine whether the web link is associated with malicious content. The malicious content detection system receives the web link submitted via the graphical user interface and provides the web link to a plurality of geographically distributed server machines having different configurations to test the web link. The malicious content detection system then receives and analyzes web link test results from each of the distributed server machines and displays test results and analysis to the user via the graphical user interface.
Accordingly, aspects of the present disclosure provide insight into the behavior of online content accessed from different geographic locations under various conditions to allow discovery, identification, and removal of abusive content that would otherwise remain undetected and unknown.
FIG. 1 illustrates an exemplary system architecture 100 in which examples of the present disclosure may be implemented. System architecture 100 includes a plurality of server machines 110, 110A, 110N, one or more data stores 180, and one or more client devices 102A, 102N connected via one or more networks 104.
Network 104 may be a public network (e.g., the Internet), a private network (e.g., local area network (LAN) or wide area network (WAN)), or any combination thereof. In an example, network 104 may include the Internet, one or more intranets, wired networks, wireless networks, and/or other appropriate types of communication networks. Network 104 also may comprise a wireless telecommunications network (e.g., cellular phone network) adapted to communicate with other communication networks, such as the Internet. In addition, network 104 may include one or more short-range wireless networks or beacon networks.
Data store 180 generally refers to persistent storage capable of storing various types of data, such as text, audio, video, and image content. In some examples, data store 180 may include a network-attached file server, while in other examples data store 180 may include other forms of persistent storage such as an object-oriented database, a relational database, and so forth.
Client devices 102A, 102N generally may be a personal computer (PC), laptop, mobile phone, tablet computer, server computer, wearable computing device, or any other type of computing device (i.e., a client machine). Client devices 102A-102N may run an operating system (OS) that manages hardware and software of the client devices 102A-102N. A browser (not shown) may run on client devices 102A-102N (e.g., on the OS of client devices 102A-102N). The browser may be a web browser that can access content and services provided by web server 120, application server 122, or a combination of web server 120 and application server 122. Other types of computer programs and computer scripts also may run on client devices 102A-102N.
Server machines 110, 110A, 110N each may include one or more web servers 120, 120A, 120N and application servers 122, 122A, 122N. Web servers 120-120N may provide text, audio, image, and video content to and from server machines 110-110N or other sources (e.g., data store 180) and client devices 102A-102N. Web servers 120-120N also may provide web-based application services, business logic, and updates to server machines 110-110N and client devices 102A-102N. Server machines 110-110N may locate, access, and consume various forms of content and services from various trusted (e.g., internal, known) web servers 120-120N and application servers 122-122N and various untrusted (e.g., external, unknown) web and application servers using applications, such as a web browser, web servers, various other types of computer applications, etc. Web servers 120-120N also may receive text, audio, video, and image content from client devices 102A-102N, which may be stored in data store 180 for preservation and/or sharing of content.
In an example, web servers 120-120N are coupled to one or more respective application servers 122-122N that provide application services, data, business logic, and/or APIs to various server machines 110-110N and client devices 102A-102N. In some examples, application servers 122-122N provide one or more such services independently, without use of web servers 120-120N.
In an example, web servers 120-120N may provide server machines 110-110N and client devices 102A-102N with access to one or more application server 122-122N services associated with malicious content detection systems 130-130N. Such functionality also may be provided as part of one or more different web applications, standalone applications, systems, plug-ins, web browser extensions, and application programming interfaces (APIs), etc. In some examples, plug-ins and extensions generally may be referred to, individually or collectively, as “add-ons.”
In an example, client devices 102A-102N may include an application associated with a service provided by one or more server machines 110-110N (e.g., malicious content detection systems 130-130N). For example, various types of computing devices (e.g., smart phones, smart televisions, tablet computers, smart wearable devices, smart home computer systems, etc.) may use specialized applications to access services provided by server machines 110-110N, to issue commands to server machines 110-110N, and/or to receive content from server machines 110-110N without visiting or using web pages.
Server machines 110-110N each include respective user interface manager 140-140N modules, content distributor 150-150N modules, profile generator 160-160N modules, content analyzer 170-170N modules, and report manager 180-180N modules. In various examples, such modules may be combined, divided, and organized in various arrangements on one or more computing devices.
In an example, functions performed by one or more of the server machines 110A-110N also may be performed by one or more other server machines 110A-110N, in whole or in part. In addition, the functionality attributed to a particular component may be performed by different or multiple components operating together. Server machines 110-110N may be accessed as a service provided by systems or devices via appropriate application programming interfaces (APIs) and data feeds, and thus are not limited to use with websites. Further, server machines 110-110N may be associated with and/or utilize one or more malicious content detection systems 130-130N.
In an example, one or more server machines 110-110N may be specialized security devices dedicated to providing malicious content detection system 130-130N services. In an example, server machines 110-110N may include one or more of a server computer, router, a switch, a firewall, a dedicated computing device, a shared computing device, a virtual machine, virtual machine guests, etc. In one example, server machines 110-110N perform activities associated with malicious content detection systems 130-130N in addition to other security activities, such as network security, application security, file security, data security, etc.
FIG. 2 is a flow diagram for providing high-yielding detection of remote abusive content, according to an example of the present disclosure. The method 200 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a general purpose computer system, dedicated machine, or processing device), firmware, or a combination thereof. Examples of method 200 are described with respect to malicious content detection system 130 for the sake of consistency. In addition, such examples generally apply to other malicious content detection systems 130A-130N, as described herein.
Method 200 begins at block 202 when user interface manager 140 of malicious content detection system 130 generates a graphical user interface that allows a user to submit a web link for determining whether the web link is associated with malicious content. In an example, user interface manager 140 generates a user interface allowing a user to submit one or more pieces of content for analysis to determine whether the content is associated with potential or actual malicious activity. For example, user interface manager 140 may generate and display a graphical user interface that allows the user to input or paste one or more web links (e.g., uniform resource locators, web addresses, IP addresses, or other identifiers of a location where content is located) for analysis. Web links generally may refer to a location of content that is accessible over any network 104, such as web content, hyperlinked documents, cloud storage, network storage, etc.
In an example, user interface manager 140 provides a graphical user interface that allows a user to forward or upload an e-mail or other document comprising one or more web links for analysis. For example, the graphical user interface provided by user interface manager 140 may allow users to forward one or more e-mails comprising one or more web links, forward one or more documents comprising web links via e-mail, and/or upload e-mail or other documents comprising one or more web links for analysis.
At block 204, user interface manager 140 of malicious content detection system 130 receives the web link from the user via the graphical user interface. In an example, user interface manager 140 receives one or more web links for analysis from the user via the graphical user interface. In one example, user interface manager 140 validates and formats the web links prior to analysis and testing. User interface manager 140 then provides one or more of the web links received to content distributor 150 for testing across one or more distributed server machines 110A-110N.
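The validate-and-format step above can be sketched as follows. This is a minimal Python illustration, not part of the disclosure; the helper name `validate_and_format` and the choice to default to an `http://` scheme are assumptions for illustration only:

```python
from urllib.parse import urlparse

def validate_and_format(web_link: str) -> str:
    """Normalize a user-submitted web link before distributing it for testing."""
    link = web_link.strip()
    if "://" not in link:
        # Assume a plain host/path submission refers to an HTTP resource.
        link = "http://" + link
    parsed = urlparse(link)
    if not parsed.netloc:
        raise ValueError(f"not a usable web link: {web_link!r}")
    return parsed.geturl()
```

Under these assumptions, a submission such as `example.com/login` would be normalized to `http://example.com/login` before being handed to content distributor 150.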
At block 206, content distributor 150 of malicious content detection system 130 sends the web link to multiple distributed server machines 110-110N to allow each of the distributed server machines 110-110N to test the web link. In an example, content distributor 150 determines one or more testing scenarios for testing web links provided by a user. For example, content distributor 150 may determine one or more geographic locations, one or more service providers, one or more networks, one or more network addresses (e.g., IP addresses), one or more network address ranges, one or more days, one or more times, one or more computer system versions and settings, one or more web browser versions and settings, one or more software application versions and settings, one or more installed web browsers, one or more installed applications, one or more installed character sets, one or more installed languages, one or more installed fonts, and/or one or more other user profile attributes to use for testing the web link.
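Crossing the attributes enumerated above into concrete testing scenarios might look like the following sketch. The `TestScenario` structure and the particular attributes chosen are hypothetical; a real system would cover many more of the listed attributes:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class TestScenario:
    geo_location: str
    service_provider: str
    browser: str
    language: str

def build_scenarios(locations, providers, browsers, languages):
    # Take the cross product of the selected profile attributes so each
    # combination becomes one concrete testing scenario for the web link.
    return [TestScenario(*combo)
            for combo in product(locations, providers, browsers, languages)]
```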
In an example, content distributor 150 determines one or more of the above-noted user profile attributes to use for testing the web link based on available distributed server machines 110A-110N, configurations available for current or future use on distributed server machines 110A-110N, and one or more profile attributes available for current or future use on distributed server machines 110A-110N, etc. In some examples, content distributor 150 searches for and identifies one or more distributed server machines 110A-110N configured to test or capable of testing the web link based on one or more of the user profile attributes determined for testing the web link. In one example, one or more of web link testing scenarios, user profile attributes to use for web link testing, and distributed server machine 110A-110N configurations are determined, at least in part, based on one or more attributes or criteria provided by the user via the graphical user interface.
In an example, content distributor 150 sends a web link to multiple distributed server machines 110A-110N instructing each of the distributed server machines 110A-110N to test the web link. For example, content distributor 150 may provide a general instruction for one or more of the distributed server machines 110A-110N to test the web link. Content distributor 150 also may provide specific instructions to each of one or more of the distributed server machines 110A-110N to test the web link under one or more particular configurations or testing scenarios.
In an example, content distributor 150 sends the web link to distributed server machines 110A-110N associated with a particular organization, such as a network service provider, a computer security provider, a financial services company, or another type of organization. Content distributor 150 also may send the web link to distributed server machines 110A-110N operated by a plurality of different organizations. For example, content distributor 150 may distribute the web link to at least one distributed server machine 110A-110N in each of the plurality of different, unrelated companies participating in a community effort to identify and remove malicious content from the Internet for the benefit of users around the world. Thus, various organizations may participate collectively in providing testing and services associated with malicious content detection systems 130-130N.
At block 208, content analyzer 170 of malicious content detection system 130 receives a test result for the web link from each of the distributed server machines 110A-110N. In an example, each of a plurality of distributed server machines 110A-110N first receives the web link to test from content distributor 150. Content distributor 150 also may provide general or specific instructions for testing the web link to one or more of the distributed server machines 110A-110N, either together with or separately from the web link.

In an example, one or more of the distributed server machines 110A-110N adjusts or updates configuration settings to perform testing of the web link received from content distributor 150. For example, each of one or more distributed server machines 110A-110N may adjust or update various configuration settings including, but not limited to, a physical computer system configuration, a virtual computer system configuration, a web browser configuration, a software application configuration, a software installation, one or more user agent attributes, one or more user profile attributes, etc.
In an example, profile generators 160A-160N of one or more of the distributed server machines 110A-110N generate randomized or semi-randomized configuration settings and user profile attributes to use when testing the web link based on a geographic location associated with a respective distributed server machine. Such configuration settings also may be generated by and provided to respective distributed server machines 110A-110N, in whole or in part, by profile generator 160 of server machine 110. In one example, randomized configurations based on a statistical sampling of user profile attributes associated with a geographic location may be used, for example, to prevent malicious content providers and other cybercriminals from detecting, redirecting, and/or blocking testing of web links performed by distributed server machines 110A-110N.
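Semi-randomized profile generation weighted by location statistics, as described above, might be sketched like this. The attribute frequencies shown are invented placeholders standing in for the statistical sampling the disclosure mentions:

```python
import random

# Hypothetical per-location attribute frequencies standing in for a real
# statistical sampling of user attributes in each geographic region.
LOCATION_STATS = {
    "TR": {"browser": {"chrome": 0.7, "firefox": 0.3},
           "language": {"tr-TR": 0.9, "en-US": 0.1}},
}

def generate_profile(location: str, rng: random.Random) -> dict:
    """Draw a semi-randomized test profile resembling typical users at a location."""
    profile = {"location": location}
    for attribute, freq in LOCATION_STATS[location].items():
        values, weights = zip(*freq.items())
        profile[attribute] = rng.choices(values, weights=weights)[0]
    return profile
```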
In an example, once configured, distributed server machines 110A-110N perform one or more tests using the web link. In some examples, one or more distributed server machines 110A-110N update one or more respective configuration settings and/or user profile attributes for one or more different tests of the web link. Each distributed server machine 110A-110N then receives and records results of the web link testing. For example, each distributed server machine 110A-110N may record configuration settings and user profile attributes used to test the web link along with results of testing the web link.
In an example, web link testing results may include, but are not limited to, a resolved network address (e.g., IP address) for the web link, a subnet range for the resolved network address, a network used to perform the test, a network service provider used to perform the test, a content length value received or determined based on testing the web link, full or partial hashed or unmodified content received when testing the web link, etc. In one example, one or more distributed server machines 110A-110N capture an image or copy of content retrieved when testing the web link. The image or copy of the retrieved content then may be provided to a security authority (e.g., a blacklist operator or maintainer) that is unable to access or confirm existence of abusive content associated with the web link, thus providing the security authority with tangible evidence to justify blocking and/or taking down the content. The security authority also may perform independent testing of the web link (or a copy of the content) using one or more configurations and/or user profiles, for example, in a sandbox environment to determine behavior of the web link and associated content.
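A per-server test result record carrying the fields listed above (resolved address, subnet range, provider, content length, hashed content) might be structured as in this sketch; the field and function names are illustrative assumptions:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class WebLinkTestResult:
    server_location: str
    resolved_ip: str
    subnet_range: str
    service_provider: str
    content_length: int
    content_hash: str  # SHA-256 of the retrieved content

def record_result(server_location, resolved_ip, subnet_range,
                  service_provider, content: bytes) -> WebLinkTestResult:
    # Hashing the retrieved content lets machines report comparable results
    # without shipping the full page back to the central analyzer.
    return WebLinkTestResult(server_location, resolved_ip, subnet_range,
                             service_provider, len(content),
                             hashlib.sha256(content).hexdigest())
```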
In an example, content analyzer 170 then receives and analyzes test results for the web link from each of the distributed server machines 110A-110N. In one example, content analyzer 170 compares test results received for the web link to determine whether distributed server machines 110A-110N from a particular location and/or having certain configuration or user profile attributes are being allowed access or are being directly or indirectly (e.g., via subtle redirection) denied access to malicious or safe content associated with a web link. For example, content analyzer 170 may analyze respective client profiles used by different distributed server machines 110A-110N to test the web link based on respective test results received from each of the distributed server machines 110A-110N. Content analyzer 170 then may determine similarities between client profiles used by the different distributed server machines 110A-110N that were allowed or denied access to one or more different forms of content associated with the web link.
Thus, content analyzer 170 may compare the respective test results received for the web link in view of the client profiles to determine one or more client profile attributes being targeted by malicious content delivered via the web link. In addition, content analyzer 170 may determine based on the test results that the web link is associated with malicious content when the web link returns different, non-similar, or unrelated content to various distributed server machines 110A-110N associated with different geographic locations, having different configuration settings, and/or having different user profile attributes.
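The comparison above can be illustrated by grouping per-server results on their content hashes; more than one group across locations suggests audience-specific content. A minimal sketch, where the result dictionaries are a stand-in for whatever record format the machines report:

```python
from collections import defaultdict

def flag_targeting(results):
    """Group per-server results by content hash; multiple groups suggest the
    web link serves different content to different simulated audiences."""
    groups = defaultdict(list)
    for result in results:
        groups[result["content_hash"]].append(result["server_location"])
    suspicious = len(groups) > 1
    return suspicious, dict(groups)
```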
At block 210, report manager 180 of malicious content detection system 130 generates a report comprising the test results for the web link received from the distributed server machines 110A-110N. In an example, report manager 180 generates a report comprising the test results received for the web link and analysis performed by content analyzer 170. In some examples, report manager 180 generates a web link test report based on one or more user configuration settings, query criteria, and/or reporting criteria to provide and format web link test results, web link test result analysis, and/or web link test reports based on user preferences.
At block 212, report manager 180 of malicious content detection system 130 provides the report comprising the test results for the web link to the user. In an example, report manager 180 sends the test results for the web link to a user device 102A-102N for display to the user. For example, report manager 180 may display a new or updated version of a graphical user interface comprising the test results for the web link. In some examples, report manager 180 may refresh an existing graphical user interface to provide the test results. Report manager 180 also may store the results in a particular location of data store 180 (e.g., local storage, cloud storage, etc.) that is communicated to the user via an electronic message.
FIG. 3 is a flow diagram for identifying targeted audiences of malicious actors using high-yielding detection of remote abusive content, according to an example of the present disclosure. The method 300 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a general purpose computer system, dedicated machine, or processing device), firmware, or a combination thereof. Examples of method 300 are described with respect to malicious content detection system 130 for the sake of consistency. In addition, such examples generally apply to other malicious content detection systems 130A-130N, as described herein.
Method 300 begins at block 302 when user interface manager 140 of malicious content detection system 130 displays an interactive graphical user interface on a computing device that allows a user to submit a web link for analysis to determine whether the web link is associated with malicious content. In an example, user interface manager 140 generates and displays a user interface that allows a user to input one or more web links or to upload one or more web, e-mail, or other documents comprising at least one web link for analysis to determine whether the web links are associated with malicious activity, such as a phishing campaign targeting one or more user audiences.
At block 304, content distributor 150 of malicious content detection system 130 sends the web link provided by the user via the graphical user interface to a plurality of distributed server machines 110A-110N to test the web link under various conditions in different geographic locations. In an example, content distributor 150 receives one or more web links or a document comprising one or more web links submitted by the user via the graphical user interface. In some examples, user interface manager 140 validates and formats the web links prior to testing and analyzing behavior of the web links. User interface manager 140 then may provide the web links to content distributor 150 for distributed testing using one or more distributed server machines 110A-110N situated in various geographic locations.
At block 306, profile generator 160 of malicious content detection system 130 generates a plurality of test user profiles for testing the web link. In an example, content distributor 150 determines one or more geographic locations, one or more service providers, one or more networks, one or more network addresses (e.g., IP addresses), one or more network address ranges, one or more days, one or more times, one or more computer system versions and settings, one or more web browser versions and settings, one or more software application versions and settings, one or more installed web browsers, one or more installed applications, one or more installed character sets, one or more installed languages, one or more installed fonts, and/or one or more other user profile attributes to use for testing the web link. Profile generator 160 then generates one or more test user profiles for each of a plurality of distributed server machines 110A-110N to use for testing the web link.
In an example, profile generator 160 of server machine 110 generates a high-level test plan and general test user profiles to use for testing a web link. For example, profile generator 160 may determine that a web link is to be tested using at least ten different IP addresses in each of 100 countries to identify targets of a suspected or known phishing campaign. Respective profile generators 160A-160N of each distributed server machine 110A-110N then may generate additional test user profiles based on, and to mimic, known or expected localized computer system configurations, web browser configurations, software application configurations, software installations, user agent attributes, user profile attributes, etc. For example, one or more of the server machine 110 and distributed server machines 110A-110N may generate and use test user profile data based on a statistical sampling of user attributes corresponding to a location to test the web link. Such test user profile data may be generated randomly or semi-randomly to avoid detection of the web link testing and/or to avoid concealment of malicious content from security authorities or other non-targeted enforcement organizations by cybercriminals.
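A high-level plan like the one in the example (ten IP addresses in each of many countries) could be represented as in the following sketch; the country codes and per-country address pools are placeholders, since real address pools would come from the deployed server fleet:

```python
def build_test_plan(countries, ip_pools, ips_per_country=10):
    """Select up to N test addresses per country from hypothetical per-country pools."""
    return {country: ip_pools.get(country, [])[:ips_per_country]
            for country in countries}
```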
At block 308, content analyzer 170 of malicious content detection system 130 receives test results for the web link from each of the respective distributed server machines. In an example, content analyzer 170 receives at least one test result for the web link from each of a plurality of distributed server machines 110A-110N. Web link test results may comprise, for example, one or more of a resolved network address (e.g., IP address) for the web link, a subnet range for the resolved network address, a network used to perform the test, a network service provider used to perform the test, a content length value received or determined based on a test of the web link, full or partial hashed and/or unmodified web or other content received when testing the web link, etc. In addition, web link test results may comprise or be accompanied by one or more test user profiles, computer system configurations, software application configurations, user agent data, and/or other information pertaining to conditions that a respective distributed server machine 110A-110N uses to test the web link.
In an example, one or more test results for the web link may comprise or be accompanied by an image or copy of content captured by a distributed server machine 110A-110N when testing the web link. The image or copy of the content then may be provided to a security authority (e.g., blacklist maintainer, spam filter, virus protection service, etc.) that is being directly or passively blocked from accessing and proving the existence of malicious content associated with the web link. Such evidence of malicious content may be provided indirectly to give the security authority the power (e.g., authority or legal justification) to block or remove the malicious content, even when cybercriminals have directly or passively concealed the abusive content from an authority.
In an example, content analyzer 170 analyzes test results for the web link and instructs one or more of the same or different distributed server machines 110A-110N to perform additional testing of the web link. For example, content analyzer 170 may determine based on a first round of web link testing using a few different distributed server machines 110A-110N in a plurality of countries that access to content associated with a web link is being allowed in some countries and blocked in other countries. Based on the result, content analyzer 170 may determine that the countries where content associated with the web link is accessible are countries potentially being targeted in a phishing campaign. Content analyzer 170 then may instruct various distributed server machines 110A-110N in certain geographic locations to perform one or more rounds of additional and more focused testing of a web link to determine more specific details about one or more targeted or blocked audiences.
At block 310, content analyzer 170 of malicious content detection system 130 determines, based on the test results, that the web link is associated with malicious content. In an example, content analyzer 170 compares a full or partial amount of raw or hashed content retrieved by various distributed server machines 110A-110N that tested the web link. In one example, content analyzer 170 determines that a web link delivers similar content or the same content to a plurality of distributed server machines 110A-110N with different test user profiles and configuration settings in various geographic locations. In another example, content analyzer 170 determines that a web link delivers different, altered, or modified content to groups of one or more distributed server machines 110A-110N with different test user profiles and configuration settings in various geographic locations. In some examples, where the web link test results indicate that the web link delivers different content to different simulated audiences or actively/passively prevents certain audiences from accessing content, content analyzer 170 may determine that there is an elevated or actual risk that the web link is associated with malicious content.
In an example, content analyzer 170 compares web content or computer code gathered by respective distributed server machines 110A-110N that performed testing of a web link using different user profile attributes and configuration settings. For example, a plurality of distributed server machines 110A-110N may test a web link simultaneously, in groups, or separately during an assigned, scheduled, or unscheduled period of time.
In an example, content analyzer 170 compares web content or computer code associated with the same web link from a plurality of distributed server machines 110A-110N. Content analyzer 170 then determines whether the web content or computer code is the same, is partially the same (or different), is substantially the same (or different), or has no similarities. For example, the web content or computer code may be identical, associated or derivative copies (e.g., translations), partial matches (e.g., include at least some of the same content or code), unrelated, etc.
In an example, content analyzer 170 determines whether web test results are the same or different based on a threshold, for example, to account for minor variations occurring across different technology environments or periodic updates of web content. In one example, a user or an administrator specifies a threshold on a graphical user interface to indicate whether content analyzer 170 is to determine matching content based on 100% similarity, at least 90% similarity, at least 75% similarity, at least 50% similarity, or some other amount of similarity between test results received for a web link.
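One way to apply such a similarity threshold is a ratio comparison over the retrieved content, as in this sketch using Python's difflib; the disclosure does not specify a particular similarity measure, so this choice is an assumption:

```python
from difflib import SequenceMatcher

def results_match(content_a: str, content_b: str, threshold: float = 0.9) -> bool:
    # Treat two retrieved pages as matching when their similarity ratio meets
    # the configured threshold, tolerating minor regional or periodic variation.
    ratio = SequenceMatcher(None, content_a, content_b).ratio()
    return ratio >= threshold
```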
In an example, content analyzer 170 determines that web content or computer code returned by a web link to various distributed server machines 110A-110N is different, meaning that the web content or code is unrelated semantically (e.g., based on words, phrases, sentences, paragraphs, organization, functions, procedures, computer code variables, computer code objects, computer code structures, etc.) and/or based on subject matter. In one example, different web results based on subject matter may exist when test results from one distributed server machine 110A discuss “vacations” and test results from another distributed server machine 110N discuss “college tuition”. In another example, different computer code based on subject matter may exist when test results from one distributed server machine 110A use one computer language while test results from another distributed server machine 110N (for the same web link) use a different computer language.
At block 312, content analyzer 170 of malicious content detection system 130 compares the test results received for the web link in view of client profiles used by each distributed server machine 110A-110N to determine one or more client profile attributes being targeted by the malicious content associated with the web link. In an example, content analyzer 170 determines that web link test results indicate that the web link delivers different content to different simulated audiences or conceals the content from certain audiences (e.g., as tested and as reported by a plurality of distributed server machines 110A-110N situated in different geographic locations). Content analyzer 170 then may examine user profile attributes, configuration settings and other information associated with test user profiles to determine one or more representative user audience profiles being allowed and/or denied access to content associated with a web link.
In some examples, representative audience profiles for targeted user audiences determined by content analyzer 170 are reported to a user via a graphical user interface with or independent of web link test results and analysis. In addition, representative audience profiles for users that are unable to access the content due to concealment by a cybercriminal or other party may be reported to a user via a graphical user interface along with or separate from web link test results.
At block 314, report manager 180 of malicious content detection system 130 displays a report comprising the test results for the web link to the user via the interactive graphical user interface. In an example, report manager 180 generates and displays a report comprising the test results for the web link received from the distributed server machines 110A-110N and test result analysis performed by content analyzer 170.
In an example, the report generated and displayed by report manager 180 includes a navigable two-dimensional or three-dimensional map that illustrates web link test results for each of various geographically distributed server machines 110A-110N using one or more colors and informational annotations. For example, the map may illustrate one or more servers and/or geographic locations where access to content associated with a web link was allowed, blocked, hidden, redirected, altered, etc. In addition, the report may indicate one or more distributed server machines 110A-110N or geographic locations where testing of the web link resulted in a redirection to substitute content or another website to conceal malicious content directed at particular users, user profile attributes, software configurations, computer system configurations, etc.
FIG. 4 is a diagram illustrating an example software application providing high-yielding detection of remote abusive content, in accordance with various examples of the present disclosure. Diagram 400 includes an example interactive graphical user interface associated with a malicious content detection system software application 402. Malicious content detection system software application 402 includes a content identifier control 404 with a web link for testing, a text size adjustment control 406, a configuration settings control 408, a search initiation control 410, a server machine listing 412 comprising a plurality of server machines 110-110N used to test the web link, a listing of resolved IP addresses 414 received for the web link by each of the server machines 110-110N, a listing of corresponding subnet ranges 416 determined for the web link by each of the server machines 110-110N, a listing of corresponding service providers 418 used by each of the server machines 110-110N to test the web link, a listing of content length values 420 received or determined by each of the server machines when testing the web link, a full or partial listing of hashed content 422 received by each of the server machines when testing the web link, and corresponding quick indicators 424 or images describing and/or summarizing a testing result received from corresponding server machines 110-110N.
In an example, user interface manager 140 of malicious content detection system 130 displays the graphical user interface 402 for use on a client device 102A-102N of a user. The user may adjust text size using text size adjustment control 406 and configuration settings using configuration settings control 408. The user enters a web link into content identifier control 404 and selects the search initiation control 410 to submit the web link for analysis to determine whether the web link is associated with malicious content.
In an example, user interface manager 140 receives the web link submitted by the user via the graphical user interface 402. Content distributor 150 then provides the web link to a plurality of geographically distributed server machines 110A-110N to test the web link for the purposes of determining whether the web link is associated with malicious content. Profile generators 160A-160N of respective distributed server machines 110A-110N then each generate test user profiles to use for testing the web link. Content analyzers 170A-170N then each test the web link, collect respective results from testing the web link, and return the respective test results to content analyzer 170 of server machine 110.
In an example, content analyzer 170 collects and analyzes the web link test results received from the distributed server machines 110A-110N. Web link test results provided by each of the distributed server machines 110A-110N may include identification and location information of a server machine, a resolved IP address received for the web link by the server machine, a subnet range for the resolved IP address of the web link, a network or service provider used to access the web link, a content length value received or determined when testing the web link, a full or partial listing of hashed content retrieved when testing the web link (e.g., where the hashing of the retrieved content was performed by a respective distributed server machine 110A-110N), etc.
In an example, report manager 180 displays the web link test results and associated analysis on the graphical user interface associated with the malicious content detection system software application 402. In diagram 400, server machine listing 412 indicates that the web link entered in content identifier control 404 was tested using a plurality of distributed server machines 110A-110N in various geographic locations. The listing of resolved IP addresses 414 and the listing of corresponding subnet ranges 416 indicate that most of the distributed server machines 110A-110N were directed to the same location by the web link. However, a server machine in Turkey was directed to a different location by the web link, and server machines in China and Malaysia were not used in testing the web link.
Continuing with the example in diagram 400, the listing of content length values 420 received or determined by each of the server machines 110A-110N and the listing of hashed content 422 received by each of the server machines when testing the web link indicate that servers in the United States, Canada, Germany, Pakistan, and Australia were forbidden from accessing content associated with the web link. In addition, server machines in Brazil, the United Kingdom, France, Italy, Russia, India, and Thailand received matching content when testing the web link. However, a server machine in Turkey received different content compared to the other server machines 110A-110N that received content when testing the web link (e.g., possibly because a phishing scheme is being targeted at users located in Turkey). Further, the listing of quick indicators 424 indicates server machines 110A-110N that were unable to access or were denied access to content associated with the web link (e.g., “−”), that accessed content associated with the web link and received a result matching a majority of other server machines 110A-110N (e.g., “+”), that accessed content associated with the web link and received a result that does not match a majority of other server machines 110A-110N (e.g., “˜”), and that were not used to test the web link or had other issues in testing the web link (e.g., “×”).
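The quick-indicator assignment described above can be sketched as a majority comparison over the returned content hashes. This is an illustrative reading of diagram 400, not the patent's implementation; the input encoding (hash string, `None` for denied access, `"unused"` for untested servers) is an assumption.

```python
from collections import Counter

def quick_indicators(results: dict) -> dict:
    """Map location -> indicator: '-' denied access, '+' matches the
    majority result, '~' differs from the majority, 'x' not used.
    Input maps location -> content hash, None, or 'unused'."""
    hashes = [h for h in results.values() if h not in (None, "unused")]
    majority = Counter(hashes).most_common(1)[0][0] if hashes else None
    out = {}
    for loc, h in results.items():
        if h == "unused":
            out[loc] = "x"       # server not used to test the web link
        elif h is None:
            out[loc] = "-"       # access forbidden or unavailable
        elif h == majority:
            out[loc] = "+"       # content matches majority of servers
        else:
            out[loc] = "~"       # content differs (possible targeting)
    return out

sample = {"US": None, "BR": "abc", "UK": "abc", "TR": "fff", "CN": "unused"}
print(quick_indicators(sample))
```

Here the Turkish server's non-matching hash earns a “~”, mirroring the targeted-content scenario in the example above.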
FIG. 5 illustrates a diagram of a machine in the exemplary form of a computer system 500, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In other examples, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a wearable computing device, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The exemplary computer system 500 includes a processing device (processor) 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 518, which communicate with each other via a bus 530.
Processor 502 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 502 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processor 502 also may be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processor 502 is configured to execute instructions 522 for performing the operations and steps discussed herein.
The computer system 500 also may include a network interface device 508. The computer system 500 may further include a video display unit 510 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 512 (e.g., a keyboard), a cursor control device 514 (e.g., a mouse), and a signal generation device 516 (e.g., a speaker).
The data storage device 518 may include a computer-readable storage medium 528 on which is stored one or more sets of instructions 522 (e.g., software computer instructions) embodying any one or more of the methodologies or functions described herein. The instructions 522 also may reside, completely or at least partially, within the main memory 504 and/or within the processor 502 during execution thereof by the computer system 500, the main memory 504 and the processor 502 also constituting computer-readable storage media. The instructions 522 may be transmitted or received over a network 520 via the network interface device 508.
In one example, the instructions 522 include instructions for one or more modules of a malicious content detection system (e.g., malicious content detection system 130, 130A, 130N of FIG. 1) and/or a software library containing methods that call a malicious content detection system 130, 130A, 130N. While the computer-readable storage medium 528 (machine-readable storage medium) is shown as an example to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” also may include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
Numerous details are set forth in the foregoing description. However, it will be apparent to one of ordinary skill in the art having the benefit of this disclosure that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present disclosure.
Some portions of the detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. Here, an algorithm is generally conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “computing,” “comparing,” “associating,” “applying,” “transmitting,” “receiving,” “processing” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Certain examples of the present disclosure also relate to an apparatus for performing the operations herein. This apparatus may be constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other examples will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (20)

What is claimed is:
1. A computer system for providing anonymous inter-organizational data security, comprising:
a connection to a computer network;
a non-transitory memory comprising instructions; and
a processing device coupled to the non-transitory memory and configured to execute the instructions to cause the computer system to:
generate a graphical user interface that allows a user to submit a web link for analysis to determine whether the web link is associated with malicious content;
receive the web link provided by the user for analysis via the graphical user interface;
send, via the computer network, the web link provided by the user to a plurality of distributed server machines to allow each of the distributed server machines to test the web link;
receive, via the computer network, a test result for the web link from each of the distributed server machines;
compare the respective test results received for the web link in view of client profiles used by each distributed server machine to test the web link to determine one or more client profile attributes being targeted by malicious content associated with the web link;
generate a report comprising the test results for the web link received from the distributed server machines, the test results comprising an indication of profile attributes that cause the malicious content to be concealed; and
provide the report comprising the test results for the web link to the user.
2. The computer system of claim 1, wherein the processing device further is to:
compare the respective test results received for the web link to determine whether content associated with the web link is accessible from each of the distributed server machines.
3. The computer system of claim 1, wherein the processing device further is to:
determine differences in respective client profiles used by different ones of the distributed server machines for causes of different respective test results received from each of the distributed server machines.
4. The computer system of claim 1, wherein the processing device further is to: determine similarities between respective client profiles used by different ones of the distributed server machines that were denied access to content associated with the web link.
5. The computer system of claim 1, wherein the processing device further is to: determine, based on the test results for the web link, that the web link is associated with the malicious content.
6. The computer system of claim 1, wherein the report includes the one or more client profile attributes determined as being targeted by malicious content associated with the web link.
7. The computer system of claim 1, wherein respective test results received from each of the distributed server machines each comprise a hash of a response received from testing the web link.
8. The computer system of claim 1, wherein the processing device further is to: provide a list of one or more common client profile attributes from respective distributed server machines that were allowed access to content associated with the web link.
9. The computer system of claim 1, wherein the test result received from a respective distributed server machine used to test the web link comprises an image of malicious content retrieved when testing the web link.
10. The computer system of claim 1, wherein the processing device further is to:
provide proof of the malicious content to a security authority, wherein the proof is from one or more of the distributed server machines that retrieved the malicious content when testing the web link, and wherein the malicious content is being hidden from the security authority through denial of access.
11. The computer system of claim 10, wherein the proof of the malicious content is provided to the security authority to allow the security authority to blacklist the web link in response to the security authority being denied access to the malicious content.
12. The computer system of claim 1, wherein the processing device further is to: generate a plurality of test user profiles for testing the web link.
13. The computer system of claim 12, wherein the test user profiles are generated randomly based on a statistical sampling of user attributes for a location associated with one of the distributed server machines.
14. The computer system of claim 1, wherein the processing device further is to:
provide one or more test user profiles to each of the distributed server machines for use in testing the web link.
15. The computer system of claim 14, wherein the one or more test user profiles are generated based on a statistical sampling of user attributes corresponding to respective locations of distributed server machines.
16. The computer system of claim 1, wherein each of the distributed server machines generates one or more test user profiles to use when testing the web link.
17. The computer system of claim 1, wherein the report further comprises a map illustrating one or more locations where access to the web link is available and one or more other locations where access to the web link is blocked.
18. The computer system of claim 1, wherein the report indicates that at least one of the distributed server machines was able to access the malicious content associated with the web link while one or more other distributed server machines were redirected to another site to conceal the malicious content.
19. A non-transitory computer-readable medium comprising computer-readable instructions that, when executed by one or more processors of a computer system, cause the one or more processors to perform operations comprising:
displaying a graphical user interface that allows a user to submit a web link for analysis to determine whether the web link is associated with malicious content;
sending, via a computer network, the web link provided by the user via the graphical user interface to a plurality of distributed server machines in different countries to test the web link in various geographic locations;
receiving, via the computer network, a test result for the web link from each of the distributed server machines;
comparing the respective test results received for the web link in view of client profiles used by each distributed server machine to test the web link to determine one or more client profile attributes being targeted by malicious content associated with the web link;
generating a report comprising the test results for the web link received from the distributed server machines in the different countries, the test results comprising an indication of profile attributes that cause the malicious content to be concealed; and
displaying the report comprising the test results for the web link to the user via the graphical user interface.
20. A computer-implemented method, comprising:
displaying, by one or more processors, an interactive graphical user interface that allows a user to submit a web link for analysis to determine whether the web link is associated with malicious content;
sending, via a computer network by the one or more processors, the web link provided by the user via the interactive graphical user interface to a plurality of distributed server machines in different locations to test the web link using various user profile attributes;
receiving, via the computer network by the one or more processors, one or more test results for the web link from each of the distributed server machines;
comparing the respective test results received for the web link in view of client profiles used by each distributed server machine to test the web link to determine one or more client profile attributes being targeted by malicious content associated with the web link;
generating, by one or more processors, a report comprising the test results from the distributed server machines, the test results comprising an indication of profile attributes that cause the malicious content to be concealed; and
displaying, by one or more processors, the generated report comprising the test results for the web link via the interactive graphical user interface.
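The comparison step recited in the claims above — correlating per-server client profiles with whether the malicious content was concealed, to identify targeted profile attributes — can be sketched as a set intersection. The profile field names and the boolean "concealed" encoding are assumptions for illustration only.

```python
def targeted_attributes(results):
    """Given (client_profile, concealed) pairs, return profile attribute
    values shared by every server from which the content was concealed
    but by no server that retrieved it, as candidate targeted attributes."""
    concealed = [set(p.items()) for p, hidden in results if hidden]
    revealed = [set(p.items()) for p, hidden in results if not hidden]
    if not concealed:
        return {}
    common = set.intersection(*concealed)  # shared by all concealed cases
    for attrs in revealed:
        common -= attrs  # drop attributes also present where content showed
    return dict(common)

results = [
    ({"country": "TR", "browser": "Firefox"}, False),  # content retrieved
    ({"country": "US", "browser": "Chrome"}, True),    # content concealed
    ({"country": "CA", "browser": "Chrome"}, True),    # content concealed
]
print(targeted_attributes(results))  # {'browser': 'Chrome'}
```

In this toy input the content is hidden only from Chrome-profile testers, so the comparison surfaces the browser attribute as the candidate cause of concealment.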
US14/827,494 2015-08-17 2015-08-17 High-yielding detection of remote abusive content Active 2035-09-30 US9781140B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/827,494 US9781140B2 (en) 2015-08-17 2015-08-17 High-yielding detection of remote abusive content


Publications (2)

Publication Number Publication Date
US20170054739A1 US20170054739A1 (en) 2017-02-23
US9781140B2 true US9781140B2 (en) 2017-10-03

Family

ID=58158120

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/827,494 Active 2035-09-30 US9781140B2 (en) 2015-08-17 2015-08-17 High-yielding detection of remote abusive content

Country Status (1)

Country Link
US (1) US9781140B2 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10303898B2 (en) * 2015-05-11 2019-05-28 Finjan Mobile, Inc. Detection and blocking of web trackers for mobile browsers
US11132717B2 (en) 2016-02-22 2021-09-28 Ad Lightning Inc. Synthetic user profiles and monitoring online advertisements
US10826936B2 (en) 2017-05-10 2020-11-03 Ad Lightning, Inc. Detecting and attributing undesirable automatic redirects

Citations (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6167568A (en) * 1998-06-30 2000-12-26 Sun Microsystems, Inc. Method and apparatus for implementing electronic software distribution
US20050228860A1 (en) * 2004-04-12 2005-10-13 Kimmo Hamynen Methods and apparatus for geographically based Web services
US7013323B1 (en) * 2000-05-23 2006-03-14 Cyveillance, Inc. System and method for developing and interpreting e-commerce metrics by utilizing a list of rules wherein each rule contain at least one of entity-specific criteria
US20080250159A1 (en) * 2007-04-04 2008-10-09 Microsoft Corporation Cybersquatter Patrol
US20090125444A1 (en) * 2007-08-02 2009-05-14 William Cochran Graphical user interface and methods of ensuring legitimate pay-per-click advertising
US20090300768A1 (en) * 2008-05-30 2009-12-03 Balachander Krishnamurthy Method and apparatus for identifying phishing websites in network traffic using generated regular expressions
US7801868B1 (en) * 2006-04-20 2010-09-21 Datascout, Inc. Surrogate hashing
US20100281535A1 (en) * 2002-11-20 2010-11-04 Perry Jr George Thomas Electronic message delivery with estimation approaches
US20110107413A1 (en) * 2009-11-02 2011-05-05 At&T Intellectual Property I, L.P. Methods, systems, and computer program products for providing a virtual private gateway between user devices and various networks
US8041632B1 (en) * 1999-10-28 2011-10-18 Citibank, N.A. Method and system for using a Bayesian belief network to ensure data integrity
US20120166276A1 (en) * 2010-12-28 2012-06-28 Microsoft Corporation Framework that facilitates third party integration of applications into a search engine
US20120317622A1 (en) * 2011-06-13 2012-12-13 Uniloc Usa, Inc. Hardware identity in multi-factor authentication at the application layer
US20130073358A1 (en) * 2011-09-15 2013-03-21 Thomas E. Sandholm Recommending print locations
US20130072233A1 (en) * 2011-09-15 2013-03-21 Thomas E. Sandholm Geographically partitioned online content services
US20130073581A1 (en) * 2011-09-15 2013-03-21 Thomas E. Sandholm Geographically partitioned onlne search system
US8424091B1 (en) * 2010-01-12 2013-04-16 Trend Micro Incorporated Automatic local detection of computer security threats
US20130159826A1 (en) * 2011-12-20 2013-06-20 Hilary Mason Systems and methods for recommending a list of urls
US20130159233A1 (en) * 2011-12-20 2013-06-20 Hilary Mason Systems and methods for relevance scoring of a digital resource
US20130205370A1 (en) * 2012-02-07 2013-08-08 Avinash Kalgi Mobile human challenge-response test
US8516590B1 (en) * 2009-04-25 2013-08-20 Dasient, Inc. Malicious advertisement detection and remediation
US8543675B1 (en) * 2009-12-17 2013-09-24 Amazon Technologies, Inc. Consistent link sharing
US20130276129A1 (en) * 2012-04-13 2013-10-17 Daniel Nelson Methods, apparatus, and articles of manufacture to identify media delivery
US8666811B1 (en) * 2004-03-29 2014-03-04 Google Inc. Systems and methods for determining advertising activity
US20140096246A1 (en) * 2012-10-01 2014-04-03 Google Inc. Protecting users from undesirable content
US8763120B1 (en) * 2008-07-15 2014-06-24 Zscaler, Inc. Exploitation detection
US20140199664A1 (en) * 2011-04-08 2014-07-17 Wombat Security Technologies, Inc. Mock attack cybersecurity training system and methods
US8826426B1 (en) * 2011-05-05 2014-09-02 Symantec Corporation Systems and methods for generating reputation-based ratings for uniform resource locators
US20140279624A1 (en) * 2013-03-15 2014-09-18 Cybeye, Inc. Social campaign network and method for dynamic content delivery in same
US20140283078A1 (en) * 2013-03-15 2014-09-18 Go Daddy Operating Company, LLC Scanning and filtering of hosted content
US20150007312A1 (en) * 2013-06-28 2015-01-01 Vinay Pidathala System and method for detecting malicious links in electronic messages
US9178901B2 (en) * 2013-03-26 2015-11-03 Microsoft Technology Licensing, Llc Malicious uniform resource locator detection
US9356941B1 (en) * 2010-08-16 2016-05-31 Symantec Corporation Systems and methods for detecting suspicious web pages


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210067498A1 (en) * 2010-03-30 2021-03-04 Authentic8, Inc. Disposable Browsers and Authentication Techniques for a Secure Online User Environment
US11716315B2 (en) * 2010-03-30 2023-08-01 Authentic8, Inc. Disposable browsers and authentication techniques for a secure online user environment

Also Published As

Publication number Publication date
US20170054739A1 (en) 2017-02-23

Similar Documents

Publication Publication Date Title
US20210344711A1 (en) Phishing Detection Method And System
US11714906B2 (en) Reducing threat detection processing by applying similarity measures to entropy measures of files
US9292881B2 (en) Social sharing of security information in a group
US8495742B2 (en) Identifying malicious queries
Skopik et al. Under false flag: using technical artifacts for cyber attack attribution
Bottazzi et al. MP-shield: A framework for phishing detection in mobile devices
US9781140B2 (en) High-yielding detection of remote abusive content
US10021118B2 (en) Predicting account takeover tsunami using dump quakes
US9594903B1 (en) Reputation scoring of social networking applications
US10505736B1 (en) Remote cyber security validation system
Alani Big data in cybersecurity: a survey of applications and future trends
Siby et al. {WebGraph}: Capturing advertising and tracking information flows for robust blocking
US11611572B2 (en) System and method of processing information security events to detect cyberattacks
Serketzis et al. Actionable threat intelligence for digital forensics readiness
Lee et al. Fileless cyberattacks: Analysis and classification
Riadi et al. Vulnerability analysis of E-voting application using open web application security project (OWASP) framework
Mishra et al. Intelligent phishing detection system using similarity matching algorithms
Stivala et al. Deceptive previews: A study of the link preview trustworthiness in social platforms
Vecchiato et al. A security configuration assessment for android devices
Baker et al. Analyzing security threats as reported by the united states computer emergency readiness team (US-CERT)
US20230033919A1 (en) Information security system and method for phishing website classification based on image hashing
Etuh et al. Social Media Networks Attacks and their Preventive Mechanisms: A Review
EP3926501B1 (en) System and method of processing information security events to detect cyberattacks
Spataro Iranian Cyber Espionage
Bartoli et al. A security-oriented analysis of web inclusions in the italian public administration

Legal Events

Date Code Title Description
AS Assignment

Owner name: PAYPAL, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WARDMAN, BRADLEY;BUTLER, BLAKE;REEL/FRAME:036337/0512

Effective date: 20150811

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4