US20130211985A1 - System, method and computer program product for assessing risk of identity theft - Google Patents

System, method and computer program product for assessing risk of identity theft

Info

Publication number
US20130211985A1
Authority
US
United States
Prior art keywords
address
fraud
risk
identity theft
new
Prior art date
Legal status
Abandoned
Application number
US13/736,240
Inventor
Robert T. Clark
Adam Elliott
Current Assignee
ID Insight Inc
Original Assignee
ID Insight Inc
Priority date
Filing date
Publication date
Application filed by ID Insight Inc filed Critical ID Insight Inc
Priority to US13/736,240 priority Critical patent/US20130211985A1/en
Publication of US20130211985A1 publication Critical patent/US20130211985A1/en
Priority to US16/660,289 priority patent/US11610278B2/en
Priority to US18/097,572 priority patent/US20230177635A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • G06Q50/265Personal security, identity or safety
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/018Certifying business or products
    • G06Q30/0185Product, service or business identity fraud
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02Banking, e.g. interest calculation or account maintenance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03Credit; Loans; Processing thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M15/00Arrangements for metering, time-control or time indication ; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M15/00Arrangements for metering, time-control or time indication ; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • H04M15/47Fraud detection or prevention means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2215/00Metering arrangements; Time controlling arrangements; Time indicating arrangements
    • H04M2215/01Details of billing arrangements
    • H04M2215/0148Fraud detection or prevention means

Definitions

  • This invention relates, in general, to apparatuses and methods for identifying account fraud.
  • this invention relates to detecting fraud and assisting in fraud prevention due to identity theft including but not limited to address change, account takeover fraud, and new account application fraud.
  • this invention may be practiced using batch or real-time, online processing or using customer hosted software applications.
  • Such accounts may include instant credit or credit accounts with a department store or other retail outlet, or accounts involving checks, credit cards, debit cards, or ATM cards of a bank, credit or other financial institution.
  • Identity theft may include account takeover, wherein a thief steals the identity of an individual and then uses that information to take over ownership of that individual's account; or new account fraud, wherein the identity thief uses stolen information to open new accounts in another person's name.
  • embodiments of the present invention may utilize demographic data based on addresses associated with the account to determine whether an account request may involve identity theft fraud, and scores may be generated indicating the likelihood that the account request may involve identity theft fraud.
  • this invention analyzes demographic data that is associated with a specific street address when presented as an address change on an existing account or an address included on a new account application when that address is different from the reference address (e.g., credit bureau header data).
  • for the old or reference address and the new address (the new account application address or fulfillment address), demographic attributes are gathered, analyzed, compared for divergence and scaled to reflect the relative fraud risk.
  • Another embodiment of the present invention relates to a method for assessing a risk of fraud.
  • the method comprises receiving at least information relating to a first address relating to one of an account holder or an applicant; receiving information relating to a second address; and measuring demographic differences between the first and second addresses.
  • Another embodiment of the present invention relates to a method for assessing a risk of identity theft fraud with respect to new applications.
  • the method comprises receiving first address information relating to an applicant for an account; and using demographic data relating to the address information.
  • Another embodiment of the present invention relates to a method for detecting a risk of identity theft fraud.
  • the method comprises combining warm address information, known fraud address information, the USPS Deliverable Address File, and NCOA files with address-specific, single point demographic information; and coupling differential information relating to the addresses to predict a risk of fraud for at least one of account takeover, new account application, and fulfillment fraud.
  • the system includes a processor; memory; computer instructions operable by the processor to append data to at least one variable used in assessing a risk of identity theft fraud; computer instructions operable by the processor to analyze differences in demographic data for two different street addresses; computer instructions operable by the processor to calculate a score indicative of a level of risk of fraud; and computer instructions operable by the processor to output an assessment of a level of risk of fraud.
  • the formula used is of the form: Y = A + B1x1 + B2x2 + . . . + Bnxn, where:
  • Y is the dependent or outcome variable and is the result used to predict the risk of identity theft fraud
  • A is a constant value
  • B1 . . . Bn are the coefficients or weights assigned to the independent variables
  • x1 . . . xn are the independent variables.
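  • As an illustrative, non-limiting sketch (not the patented implementation), the linear scoring form above may be computed as follows; the constant, coefficients, and variable values shown are placeholder assumptions.
```python
# A minimal sketch of the linear scoring form Y = A + B1*x1 + ... + Bn*xn.
def compute_score(constant, coefficients, variables):
    """Combine a constant with weighted independent variables into a raw score."""
    if len(coefficients) != len(variables):
        raise ValueError("each independent variable needs exactly one coefficient")
    return constant + sum(b * x for b, x in zip(coefficients, variables))

# Hypothetical example: three independent variables with illustrative weights.
raw_score = compute_score(0.05, [0.30, -0.10, 0.25], [1.0, 0.0, 0.4])
```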
  • FIG. 1 is a block diagram of an overall process in accordance with an embodiment of the present invention.
  • FIG. 2 illustrates an example of logical operations for processing new account requests, in accordance with an embodiment of the present invention.
  • FIG. 3 is a block diagram showing the address information used in an embodiment of the present invention to detect identity theft via account takeover or via applications for new accounts.
  • FIG. 4 illustrates examples of logical operations for processing new account requests as illustrated in FIG. 2 , in accordance with an embodiment of the present invention.
  • FIG. 5 illustrates examples of logical operations for processing new account requests as illustrated in FIG. 2 , in accordance with an embodiment of the present invention.
  • FIG. 6 a illustrates an example of the logical operations for determining a risk of identity theft fraud, in accordance with an embodiment of the present invention.
  • FIG. 6 b is a block diagram showing logical operations for appending certain information to addresses in performing analysis for determining a risk of identity theft fraud, in accordance with an embodiment of the present invention.
  • FIG. 7 illustrates a Power of Segmentation summary chart.
  • FIG. 8 illustrates another example for processing new account requests, in accordance with an embodiment of the present invention.
  • FIG. 9 illustrates an example of logical operations for processing requests to take over an account, in accordance with an embodiment of the present invention.
  • FIG. 10 illustrates another example of logical operations for processing a request to take over an account, in accordance with an embodiment of the present invention.
  • FIG. 11 illustrates examples of operations of FIGS. 9-10 , in accordance with an embodiment of the present invention.
  • FIG. 12 illustrates examples of operations of FIGS. 9-10 , in accordance with an embodiment of the present invention.
  • FIG. 13 illustrates another example of logical operations for processing a request to take over an account, in accordance with an embodiment of the present invention.
  • FIG. 14 illustrates examples of logical operations for FIG. 13 , in accordance with an embodiment of the present invention.
  • FIG. 15 illustrates examples of logical operations for FIG. 13 , in accordance with an embodiment of the present invention.
  • the present invention generally relates to a system and method for detecting or assessing the risk of identity theft fraud.
  • the present invention will be described in the context of detecting or assessing the risk of identity theft fraud in two contexts: new account application fraud and account takeover fraud.
  • the present invention is not limited to only detecting these types of fraud schemes.
  • FIG. 1 shows the general steps used in an embodiment of the present invention for detecting fraud.
  • new address information is obtained. In the context of a new account application, this may be the address provided on the application and in the context of takeover of an account, this may be the new address provided that is to replace the current address on the account.
  • this new street address information is compared to a reference address (which may be an address obtained from a credit report for the person or the current address prior to the change of address).
  • demographic data associated with the addresses is gathered and analyzed.
  • an assessment of relative risk of identity theft fraud is made based on the analysis.
  • the present invention analyzes demographic data that is associated with a specific street address when presented as an address change on an existing account or an address included on a new account application when that address is different from a reference address (e.g., whether provided by the applicant or obtained from a credit bureau). For the two addresses, demographic attributes are gathered, analyzed, compared for divergence and scaled to reflect the relative risk of identity theft fraud. Risk may be expressed in a number of ways. In one embodiment, risk is expressed as an upper bound numerical score from 1 to 100 that is returned with reason codes to the customer for follow up.
  • an address is the one element that a criminal cannot manipulate. That is, when a criminal steals an identity, the criminal may be able to obtain identity information relating to the victim. However, the criminal cannot receive mail at the victim's house. Consequently, the criminal needs to use an address where he/she can receive mail (e.g., to obtain media or goods). As such, the present invention compares addresses. The present invention recognizes that there are demographic differences between addresses. For instance, one address may have an upscale socio-economic demographic as compared to the other address that has a more downscale socio-economic demographic.
  • the present invention uses elements that can be independently verified and analyzed to determine a risk of identity theft. Also, in addition to the demographic data, additional data elements such as warm address information or undeliverable address information may be used to assist in assessing the risk of identity theft fraud.
  • “Account” as used in this application includes its ordinary meaning and is also intended to cover any business relationship where there is financial risk on the part of the product or service provider, including but not limited to relationships of credit, debit, brokerage, retail, and non-face-to-face fulfillment activities (e.g., on-line sales).
  • the risk assessment is performed when a business or service user sends/transmits the old or reference address and the new (requested changed) or new account application address with other identifying information for use by the software application embodying the present invention.
  • Input data is matched to address specific demographic data which in turn is delivered to the decision engine to produce a risk score.
  • Data processing can occur in batch, real time online or on customer or processor hosted software application. Communications can occur through telephone, data line, internet or tape/disk or other commercially available method.
  • the application output may be returned to the service user via an internet accessed system, telephone, data line, or other commercially available method.
  • the present invention uses statistical modeling of negative and demographic/socio-economic data elements associated with a street address to identify suspected identity theft fraud activity when there is a change in address or an address on a new application that is different from a reference address (e.g., one provided by the applicant or one obtained from a third party such as a credit bureau).
  • this invention may be used to detect identity theft fraud in existing accounts, new credit account applications or other business risks associated with address manipulation.
  • the process generally analyzes the differences in demographic data between an old address or reference address and an address on a new application or an address change on an account to a new address.
  • a reference address may be a credit bureau header data or an address secured from a third party database.
  • other negative and logical data sources are used in the risk evaluation, such as warm address information, driver's license syntax specific to a state, or a comparison of the year a social security number was issued to the date of birth for rationality. Analysis may be performed through the use of regression models, neural networks and expert rules based technology. A score that scales risk is developed to identify the likelihood of identity theft fraud. The score is returned along with supportive investigative data to the customer/business for use in determining the level of risk it is willing to take in entering into a business relationship with the investigated person. Consequently, an embodiment of the present invention provides businesses with the opportunity to investigate a potential identity theft fraud and take steps to prevent economic loss. As will be discussed, in the preferred embodiment, the present invention is implemented in software.
  • FIG. 2 illustrates an example of logical operations for detecting fraud in the context of receiving a new account request.
  • a new request is received including the client data, and the received data is reformatted, normalized, or otherwise processed so that the data can be further processed.
  • An input data stream or data inputs from the client/customer are delivered to the host system for processing. Examples of the type of message elements or data inputs include the following:
  • New Account Application/Address Change (New Address) data inputs: Customer identifier; First name; Transaction type; Middle initial/name; Street directional; Last name; Street name; Surname; Unit number; Account or reference number; City name; Address type code; State name; Social Security Number; Zip Code plus 4; Date of birth; Driver's license information; Loss potential (for takeover only).
  • Account Access Device Requests, Normal or Emergency requests input file (address change process only): Transaction type; State name; Media type; Zip Code plus 4; Request type; First name; Account number; Middle initial/name; Street directional; Last name; Street name; Surname; Unit number; Address type code; City name; Loss potential (open to buy/balance); Driver's license information.
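  • As an illustrative sketch only, an inquiry record assembled from the data inputs listed above might be represented as follows; the field names and types are assumptions for illustration, not a defined message format.
```python
# Hypothetical representation of an inquiry record built from the inputs above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InquiryRecord:
    customer_identifier: str
    transaction_type: str            # e.g., "NEW_APPLICATION" or "ADDRESS_CHANGE"
    first_name: str
    last_name: str
    social_security_number: str
    date_of_birth: str
    street_name: str
    unit_number: Optional[str]
    city_name: str
    state_name: str
    zip_plus_4: str
    drivers_license: Optional[str] = None
    loss_potential: Optional[float] = None   # takeover inquiries only
```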
  • input data includes name and address listed on the new application.
  • input data includes name, current address or reference address, and the address to which it was changed.
  • a business that wants an assessment of the risk of identity theft may provide the type of transaction (e.g., new application, change of address, etc.), information to identify the person to be investigated (e.g., name, social security number, date of birth, etc.), address information as will be discussed with reference to FIG. 3 , account information, and whether there has been a media request (e.g., a request for checks, credit cards, PIN number, or other items of value).
  • FIG. 3 is a block diagram showing two forms of identity theft fraud (block 150 ): takeover identity theft fraud (block 152 ) and new application identity theft fraud (block 154 ).
  • account takeover occurs when a person (e.g., a criminal) poses as the customer of a business and changes the address from the customer's address to another address (i.e., the criminal's address).
  • the criminal then has media, such as checks, credit cards, a PIN number, or other items of value (including other goods), sent to the new/criminal address.
  • the criminal then may commit fraud from the unauthorized use of the financial instrument or benefits from the illegally obtained goods.
  • New application identity theft fraud involves a criminal submitting a new application that includes information of another person in an attempt to obtain media or other goods and services from the business.
  • the current address prior to the address change may be referred to as the old address, the reference address, or the FROM address.
  • the new address (i.e., the address that the reference address was changed to) is sometimes referred to as the TO address.
  • the reference address is the old address or FROM address. It may be provided by the applicant or it may be obtained from a third party such as a credit bureau. Also, in the new application situation, the address provided on the new application may be referred to as the new address or TO address.
  • the business that is going to have an assessment made of the risk of identity theft fraud has an old address or reference address and a new address.
  • a business that is going to have an assessment made will have the address stated on the application but may not have a reference address.
  • it is more common to use a third party source to obtain a reference address for analysis of a risk of identity theft in a new application situation.
  • the present invention may be used when, in a new application situation, a reference address is provided by the business that wants to analyze the applicant for identity theft fraud.
  • Some of the information provided by the business in requesting an analysis of the risk of identity theft, such as a social security number, is used to assist in obtaining a reference address for the person named on the application from a third party source.
  • An embodiment of the present invention uses an input data stream from the client/customer in a processing scenario or delivers required data inputs to the customer hosted software application.
  • data inputs for account takeover may include a customer name, account number and the old or FROM address, and new or TO address.
  • new account application input data may include name, institutional reference number, reference address and application addresses. If the reference address is not available, a third party address database will be consulted.
  • Emergency "overnight" replacement processing inputs may include name, address, account or reference number, account type and open to buy/available credit balance.
  • input data is compared against the warm address, known fraud data, USPS Deliverable Address File and the NCOA files.
  • the outcomes of these comparisons are appended to the inquiry record.
  • the inquiry is then matched to the demographic data file and appended to the inquiry record.
  • the inquiry record is written to the inquiry log.
  • the inquiry will be logged to an inquiry database and no further action will be taken.
  • the inquiry will be logged to an inquiry database and the address will be checked to make sure it is not a warm address or an undeliverable address. Also, when the address on a new application matches the reference address, then the business may not want the analysis conducted.
  • the information appended includes demographic data (block 30 ), U.S. postal service data (block 40 ), other data ( 50 ), previous history file data (block 60 ), client fraud data (block 70 ) received from a particular client, and address velocity data (block 80 ).
  • The following is a brief description of the logical operations performed in determining the data appended from demographic data (block 30 ).
  • In selecting demographic data to append to an address, first an attempt is made to match the name and address (block 27 ). If there is a match, then the demographic data is appended from that file. However, if there is not a match for both name and address, then an attempt is made to match the address only. If a match is made, then the demographic data for the address is appended. Also, for the area defined by a Zip+4 or Zip code+4, demographic data for that area is appended.
  • a search would be made to match the name and address to the file containing such information. If a match is made, the length of residence data from that file would be appended. If such a match is not made, then an attempt would be made to match the address only. If there is a match, then the length of residence for the last person at the address would be appended. Also, the length of residence for the residences in the Zip+4 would be appended (or an average of the lengths of residence for the residences in the Zip+4 would be appended).
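  • The cascading match logic described above (name and address, then address only, then the Zip+4 area average, then the overall mean) can be sketched as follows; the lookup tables and function name are hypothetical stand-ins for the demographic files.
```python
# A minimal sketch of the cascading append logic: match by name and address,
# then by address only, then fall back to the Zip+4 area average, and finally
# to the overall population mean.
def append_attribute(name, address, zip4,
                     by_name_address, by_address, by_zip4, population_mean):
    if (name, address) in by_name_address:
        return by_name_address[(name, address)]
    if address in by_address:
        return by_address[address]
    if zip4 in by_zip4:
        return by_zip4[zip4]          # area-level average for the Zip+4
    return population_mean            # last resort: mean across all individuals
```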
  • Demographic data may come from a number of national databases. Such data is compiled by companies such as Experian, Equifax, InfoUSA, and Acxiom. These databases include publicly available demographic data from sources such as vehicle registration data, county assessor information, warranty cards, and department of motor vehicle data among other sources. These databases may be accessed to obtain demographic data information. As shown in FIG. 3 , demographic data appended to the addresses as shown in block 30 may include appending demographic data related to income (block 32 ), demographic census data (block 34 ), demographic data relating to housing characteristics (block 36 ) and data relating to household membership characteristics. Examples of such data include:
  • demographic data related to income include:
  • United States Postal Service data appended to each address as shown in block 40 may include application of Zip code+4 address standardization programs (block 42 ), national change of address (block 44 ), delivery point validation and service (block 46 ), locatable address conversion system (block 48 ), NES/Nixie (block 52 ), delivery sequence file (block 54 ), and deceased, pandering and suppression files (block 56 ).
  • the deliverable address file and the national change of address file are searched to match the address. Examples of the delivery validation file and the national change of address file are as follows:
  • Warm address file components may include:
  • Warm Address File Components. Address type: Mail receiving agent, Other high risk, Hotel/Motel. Address elements: Street number, Street directional, Street name, Unit number, City name, State name, Zip Code plus 4.
  • an attempt is made to match the address to an address in the warm address file. If there is a match, then in one embodiment, the type of address (e.g., a description of the place where the mail would be delivered, such as a prison) would be appended.
  • Other data may include non-client fraud address files comprising third party sourced fraud address records (block 60 ).
  • Other data may further include Department of Justice county level crime statistics that scale the geographic propensity to crime frequency.
  • Other similar information may be appended to the addresses. This information may be searched to match an address, and the information appended if there is a match.
  • any data from a client fraud file may be appended to the addresses (block 70 ).
  • This data may be contributed by the business making the request (block 66 ). That is, the business provides fraud address data records.
  • An example of such a record is as follows:
  • These records may be from an on-line case management system that has stored accessible addresses for confirmed fraud incidents. This information will be used in the process for determining a risk of fraud, which may be indicated by a score.
  • information is derived relating to inquiry activity for both the new address and the reference addresses. This information is stored and updated in an address velocity file. Information is appended to the addresses relating to frequency of inquiries (block 80 ). Also, a previous history file is reviewed for information relating to the new application and reference addresses. This information may be appended to the addresses (block 60 ). This previous history file includes previously scored addresses. This file may include date of scoring, address scored, and the score. This file may be updated to reflect any scoring performed on an address. False positive rates are improved through the use of warm address data and a customer-maintained known fraud address file coupled with the U.S. Postal Service National Change of Address Database. These data sources will be used in the score development process.
  • FIG. 6 a shows the logical operations for determining a score in accordance with one embodiment of the present invention.
  • the first step is to analyze the demographic data appended to each of the addresses and derive information used to predict the risk of fraud.
  • a score is calculated based on the weights placed for each of the selected variables.
  • the following variables have been selected to be used in the model to predict the risk of fraud: (1) a variable that is based on the change in the financial make-up of the two addresses; (2) a variable that identifies records that were confirmed through third party data to match the name at a given address; (3) a variable that is based on the home value between the two addresses; (4) a variable that is based on the distance of the move for the change of address; (5) a variable that is based on whether the type of housing (e.g., apartment, non-apartment, single family home) has changed for the current address in comparison with the reference address or old address; (6) a variable that is based on whether the application address or the new address is a building (i.e., not an apartment or a home, rather something other than an apartment or a home); (7) a variable based on whether the new application address, the new address or current address is a warm address; (8) a variable that is based on the difference in Internet usage for the Zipcode+4 areas of the two addresses; and (9) a variable that is based on the average length of residence for the Zip+4 area of the reference address or old address.
  • the first variable is based on the change in the financial make-up of the two addresses.
  • this variable is called “Value1.”
  • This variable analyzes the change in the financial make-up of the reference address, the old address (e.g., in address change or account takeover situations), or FROM address (e.g., old address) and new application address, the new address, or the TO address (e.g., the address to which it has been changed). It is a composite of three demographic variables: Income, Net Worth and Home Ownership. In one embodiment, to derive the composite information the following steps are used. First, the difference in income is determined. As described with respect to FIG.
  • income for the respective address is appended by matching name and address to the appropriate demographic file. If there is not a match by both name and address, then a search is made to match by address only to find income. If there is not a match by address only, then the Zip+4 for an address is used and the average income for that Zip+4 is appended to the address. If there is still not a match, then the mean income for all individuals is assigned. For instance, the mean income for all individuals may be assigned when a Zip+4 for a particular address cannot be determined or when demographic data cannot be located for the address of a Zip+4 area.
  • DF_INCOME refers to the difference in income between the two addresses
  • INCOME(FROM) refers to income appended to the reference address or old address
  • INCOME(TO) refers to income appended to new application address or the new address.
  • net worth ranking is appended by first trying to match by name and address to the demographic file. If a match is not found, then a match by address only is attempted to find the net worth ranking. If there is still no match, then a match is made to the Zip+4 of the address and the average net worth ranking for that Zip+4 is appended. If there is still no match, then the mean net worth ranking for all individuals is appended to the address. For instance, as with income, the mean net worth ranking for all individuals may be appended when a Zip+4 for a particular address cannot be determined or when demographic data cannot be located for the address of a Zip+4 area.
  • DF_NETWR refers to the difference in net worth.
  • NETWR(FROM) refers to the net worth of the reference address or old address and NETWR(TO) refers to the net worth of the new application address or the new address.
  • the difference in homeownership is constructed.
  • a homeowner indicator is appended to both addresses by matching name and address to the appropriate demographic file. If there is no match, then a homeowner indicator is appended by matching by address only. If there is still no match, the average homeownership percentage for that Zip+4 is appended. If there is still no match, the mean homeowner percentage for all individuals is assigned. For instance, as with income, the mean homeowner percentage for all individuals may be appended when a Zip+4 for a particular address cannot be determined or when demographic data cannot be located for the address of a Zip+4 area.
  • DF_HOMEON refers to the difference in homeownership
  • HOMEON(FROM) refers to homeownership for reference address or old address
  • HOMEON(TO) refers to homeownership for the new application address or new address.
  • the numerical values are derived from a statistical analysis, using known methods, of the actual identity theft fraud data that was used to build this model.
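  • A heavily hedged sketch of the VALUE1 composite follows; the actual scaling and weights are derived from the statistical analysis of confirmed fraud data and are not reproduced here, so the weights below are placeholders only.
```python
# Placeholder sketch of combining the three demographic differences into the
# VALUE1 composite; the weights are illustrative, not the model's coefficients.
def value1(df_income, df_networth, df_homeown,
           w_income=1.0, w_networth=1.0, w_homeown=1.0):
    """Combine the income, net worth, and homeownership differences between
    the FROM and TO addresses into a single financial make-up change measure."""
    return w_income * df_income + w_networth * df_networth + w_homeown * df_homeown
```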
  • the next variable identifies records that were confirmed through third party data to match the name at a given address. This variable is titled “MATCH.” If a match is found to the third party database (demographics) via name and address, this variable is coded as a value of 1. If it is not confirmed, it is coded as a 0.
  • the next variable is based on the home value between the two addresses. To determine the value for this variable an analysis of the change in the home value is performed. This variable is named “DF_HOMVL.”
  • This variable is the difference between the home value of the FROM address (e.g., the reference address in a new application situation or the old address in takeover situations) and the home value of the TO address (e.g., the new application address in a new application situation or the new address in takeover situations).
  • a home value is appended by matching by name and address to the appropriate demographic file. If there is not a match, then the home value is appended based on a match by address only. If there is still no match, then the average home value for that Zip+4 of the address is appended. If there is still no match, then the mean home value for all individuals is appended.
  • HOMEVAL(FROM) refers to the home value of the address prior in time to the one reflected as the address in a new application or in a change of address
  • HOMEVAL(TO) refers to the home value of the address provided on the new application form as the current address, or of the new address provided in changing the address.
  • the next variable in the model is based on the distance of the move for the change of address.
  • This variable is named “DF_DISTN.” In one embodiment, this variable measures the distance of the move for the change of address. Using the delivery point for both the FROM and TO address, we then determine the longitude and latitude of the delivery point. We then calculate the distance of the move as follows:
  • DF_DISTN = SQRT[(DF_DISTX * DF_DISTX) + (DF_DISTY * DF_DISTY)]
  • DF_DISTX refers to the change in latitude from the TO and FROM addresses multiplied by 69.1
  • DF_DISTY refers to the change in longitude from the TO and FROM addresses multiplied by the cosine of the latitude of the FROM address divided by 57.3, all of which is multiplied by 69.1
  • DF_DISTN is calculated by the square root of the sum of the squares of DF_DISTX and DF_DISTY.
  • the mathematical calculation is a known formula for converting latitudinal and longitudinal information into a distance.
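  • The distance calculation described above may be sketched as follows, directly following the stated formula (69.1 miles per degree of latitude, with longitude scaled by the cosine of the FROM latitude; 57.3 converts degrees to radians).
```python
import math

# Approximate move distance in miles between the FROM and TO delivery points.
def df_distn(from_lat, from_lon, to_lat, to_lon):
    df_distx = (to_lat - from_lat) * 69.1
    df_disty = (to_lon - from_lon) * math.cos(from_lat / 57.3) * 69.1
    return math.sqrt(df_distx * df_distx + df_disty * df_disty)
```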
  • the next variable is based on whether the type of housing (e.g., apartment, non-apartment, single family home) has changed for the current address in comparison with the reference address or old address.
  • This variable is called “HOMAPT.”
  • this variable indicates whether or not a person has moved from a non-apartment to an apartment.
  • if the FROM address is not an apartment and the TO address is an apartment, this variable is coded as a 1. Otherwise this variable is coded as a 0.
  • the next variable is based on whether the new application address or the new address is a building. This variable is named "BLDNG." This variable indicates whether or not the TO address is a building. In the model, if the TO address is a building, this variable is coded as a 1. Otherwise this variable is coded as a 0.
  • the next variable is based on whether the new application address, the new address or current address is a warm address. In short, this variable indicates if the second address is “warm”.
  • Warm addresses are addresses that are non-standard delivery addresses. This type of address includes addresses such as UPS Stores, Mail Boxes Etc., hotels/motels, and the like.
  • the variable is named "WARMADD." In the model, if a match is made by the TO address to the Warm Address file, this variable is coded as a 1. Otherwise this variable is coded as a 0.
  • the next variable is based on the difference in internet usage for the Zipcode+4 areas (sometimes also referred to as Zip+4) for the two addresses.
  • this variable measures the difference in internet usage for the area defined by Zip+4 for the FROM address to the area defined by the Zip+4 for the TO address.
  • This variable is named “Z4_WEB.”
  • this information is derived as follows. First, the average internet usage is calculated for the Zip+4 area for both the FROM address and the TO address. This data is resident on the demographic file, where a value of 1 indicates lowest likelihood of internet usage and 9 indicates the highest. Then, the average value for all addresses in the specific Zip+4 area is calculated. Once the value for each of the FROM and TO addresses is determined, the difference variable is coded as follows:
  • Z4_WEB refers to the difference in web usage between the areas defined by the Zip+4 for each of the addresses
  • WEBUSE(FROM) refers to the average internet usage for area defined by the Zip+4 for the FROM address (e.g., the reference address in a new application situation or the old address in a takeover situation)
  • WEBUSE(TO) refers to the average internet usage for the Zip+4 for the area defined by the TO address (e.g., the new application address or the new address in the takeover situation). While average internet usage is used as the measure, other measures such as median internet usage may be used in the appropriate model.
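  • A minimal sketch of the Z4_WEB derivation follows; averaging the 1-9 usage rankings over each Zip+4 area is taken from the description above, while the sign convention (FROM minus TO) is an assumption, since the exact formula is not reproduced here.
```python
# Sketch of the Z4_WEB variable: average the 1-9 internet-usage rankings over
# each Zip+4 area and take the difference (FROM minus TO is an assumed sign).
def z4_web(from_zip4_usages, to_zip4_usages):
    """Each argument is a list of 1-9 usage rankings for all addresses in the
    Zip+4 area of the FROM and TO addresses, respectively."""
    webuse_from = sum(from_zip4_usages) / len(from_zip4_usages)
    webuse_to = sum(to_zip4_usages) / len(to_zip4_usages)
    return webuse_from - webuse_to
```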
  • the last variable used in this embodiment of the model is based on the average length of stay at the residence at the Zip+4 area code for the reference address or the old address (when there is an address change requested).
  • This variable is named "Z4_LORF."
  • this variable measures the average length of residence for the area defined by the Zip+4 for the FROM address.
  • this information is derived as follows. First, the average length of residence for the area defined by the Zip+4 is calculated for the FROM address. This data is resident on the demographic file, where the values indicate the number of years a person has resided at that residence. Then, the average value for all addresses in that Zip+4 area is calculated. The variable then indicates the average length of residence for people living in the area defined by the Zip+4 for the FROM address.
  • the model used to predict has nine variables.
  • the model used to predict may have any number of variables.
  • the variables used may evolve based on information collected on the characteristics of confirmed fraud accounts. Another factor that may change the variables used relates to the evolution of methods used by the people committing the fraud. As the methods change, the variables may have to be varied.
  • the present invention is not limited to the number of factors used or the types of factors used in the model to predict the risk of identity theft fraud.
  • Y is the dependent or outcome variable and is the result used to predict the risk of identity theft fraud
  • A is a constant value
  • B1 . . . Bn are the coefficients or weights assigned to the independent variables
  • x1 . . . xn are the independent variables themselves.
  • the independent variables include VALUE1, MATCH, DF_HOMVL, DF_DISTN, HOMAPT, BLDNG, WARMADD, Z4_WEB, and Z4_LORF.
  • COMPUTE SCORE refers to the score that will be used, at least in part, to predict a risk of identity fraud.
  • the coefficients were determined using ordinary least squares regression.
  • other known statistical methods such as logistic regression, CHAID, CART, discriminant analysis, neural networks or the like may be used.
  • the score is between 0 and 1 with 1 being most likely to be fraud.
  • the scale may be any range.
  • the score may be in a range of 1 to 100.
  • the score may be converted to a description. So depending on the risk tolerance of the institution making the inquiry, ranges may be provided that would indicate likelihood of identity theft fraud. For instance, on a scale of 0 to 1, a 0.8 or above may be designated as a high risk for fraud and the report to the company making the inquiry may be a descriptive assessment based on a numerical score rather than the score itself.
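  • As an illustrative sketch only, converting the 0-to-1 model output into a descriptive assessment might look like the following; the 0.8 high-risk cut-off comes from the example above, while the other band boundary is an assumption that would be set by each institution's risk tolerance.
```python
# Sketch of mapping a 0-to-1 score onto descriptive risk bands.
def describe_risk(score):
    if not 0.0 <= score <= 1.0:
        raise ValueError("score is expected to fall between 0 and 1")
    if score >= 0.8:        # high-risk cut-off from the example above
        return "high risk of identity theft fraud"
    if score >= 0.5:        # illustrative boundary only
        return "elevated risk"
    return "low risk"
```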
  • the score itself shows some level of risk of identity theft fraud. Whether the level of risk is acceptable is a determination that must include input from the business as to its tolerance of this risk.
  • the score itself may be used to predict whether identity theft is being perpetrated
  • the score may be used with other data to assess a risk of fraud, such as, without limitation, warm address files, undeliverable mail addresses, the syntax of the driver's license for a particular state, or a comparison of the year the social security number was issued to the date of birth for rationality.
  • the model described for determining a score was developed using confirmed identity theft fraud data. However, while the variables selected are based on an analysis of this confirmed fraud data, other variables may be selected. Because the model described herein is based on a statistical analysis of confirmed fraud data, the model takes what is known about the past and applies it to future events. Over time, however, behaviors and relationships change. This is especially true in the area of identity theft fraud. As fraud models and tools are effectively deployed, the fraud migrates, creating new behaviors and relationships. Because of this, the model may be modified by using the same methods described herein to emphasize certain variables or add other variables from the information sources described herein. The model described herein was tested to understand how well the model "performs" or segments the entire population of applications. The effectiveness of the model described here is shown by the segmentation table and the ROC curve.
  • the confirmed fraud data is scored.
  • the scored data was categorized into equal sized buckets or categories from lowest to highest.
  • the identity theft fraud rate present within each bucket is shown by categorizing the worst 5% into the first bucket, the next worst 5% into the second bucket, etc.
  • the following chart shows the performance of the model.
  • segment 1 is the worst 5% of scored records from the test data set. As shown by the chart, this segment has a fraud rate that is over 9 times the average fraud rate for the entire population. (Note: the Indexed fraud rate is calculated by taking the segment level fraud rate divided by the overall population fraud rate*100.)
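  • The segmentation check described above can be sketched as follows: records are sorted by score, cut into equal 5% buckets, and each bucket's fraud rate is indexed to the overall population fraud rate (segment rate divided by overall rate, times 100). The function is illustrative, not part of the patented system.
```python
# Sketch of the indexed fraud rate per score bucket (worst bucket first).
def indexed_fraud_rates(records, n_buckets=20):
    """records: list of (score, is_fraud) pairs; returns one index per bucket."""
    ordered = sorted(records, key=lambda r: r[0], reverse=True)
    overall_rate = sum(is_fraud for _, is_fraud in ordered) / len(ordered)
    size = max(1, len(ordered) // n_buckets)
    indexes = []
    for i in range(n_buckets):
        bucket = ordered[i * size:(i + 1) * size]
        if not bucket:
            break
        bucket_rate = sum(is_fraud for _, is_fraud in bucket) / len(bucket)
        indexes.append(bucket_rate / overall_rate * 100)
    return indexes
```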
  • Another way to look at the performance of the model is to look at a Power of Segmentation summary chart ( FIG. 7 ). This is sometimes also referred to as a ROC curve or Lorenz Diagram. This view shows how many cumulative fraud records are identified for each level of screening.
  • this curve indicates that the model is able to identify approximately 60% of the total frauds (y-axis) by only looking at the worst 10% of records as identified by the model (x-axis). Similarly, the curve shows that the worst 5% account for approximately 45% of the total fraud.
  • the top line shows how well the model performs, whereas the lower line shows how a randomly generated model performs (i.e., If one looked at 10% of the records, one would expect to identify about 10% of the fraud.)
  • the address velocity file is updated with the score.
  • These business rules ensure that regardless of the score, certain data elements are checked (e.g., whether the address is a warm address, whether the address is an undeliverable mail address, whether the social security number is valid, etc.). That is, a file is created on the analyzed case that includes data relating to whether a warm address was present, whether it was a reported fraud address, or whether the address was an undeliverable mailing address. Such information may be used in analysis of other inquiries in the future. Moreover, regardless of the score, if the new address or the address on an application is a warm address, then the rule may be to report that as a high risk of identity theft.
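  • An illustrative sketch of such score-independent business rules follows; the specific conditions and the 0.8 threshold are assumptions used only to show how overrides might be layered on top of the score.
```python
# Sketch of "regardless of the score" rules: certain conditions force flags or
# a high-risk outcome no matter what the model score says.
def apply_business_rules(score, is_warm_address, is_undeliverable, ssn_valid):
    flags = []
    if is_warm_address:
        flags.append("WARM_ADDRESS")
    if is_undeliverable:
        flags.append("UNDELIVERABLE_ADDRESS")
    if not ssn_valid:
        flags.append("INVALID_SSN")
    # Per the rule above, a warm address is reported as high risk regardless of score.
    high_risk = is_warm_address or score >= 0.8   # 0.8 threshold is illustrative
    return high_risk, flags
```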
  • data relating to undeliverable mailing addresses would be useful information for the customer making the inquiry because sending media (e.g., checks, credit cards or the like) to an undeliverable address is an expense to the business and creates a risk for fraud to be committed.
  • Informing the customer making the inquiry that the address is an undeliverable mailing address would be useful to the customer and would save the customer the expense of mailing media to an undeliverable mailing address.
  • the customer would reduce the risk of fraud being committed with the media.
  • the business making the request may have some criteria (e.g., verify syntax of the driver's license).
  • Each business may provide information related to score thresholds based on its tolerance for risk. Those requirements are applied and that information is appended, with the score and the other information discussed with respect to business rules, to create an output for sharing with the business.
  • fraud alerts may be created with reason codes and transmitted to the business entity through a user interface at block 92 or a web server at block 98 .
  • the reason codes may be based on user defined criteria or codes based on the variables used in the analysis or data considered in the analysis.
  • the previous history file for this account may also be updated.
  • a case management system provides display screen functionality for the fraud alerts, management queuing functionality with operator and pending case tracking.
  • the output message content includes the following:
  • the output may be provided in other ways.
  • the output may be provided by simply stating a level of risk or providing a statement of the level of risk of fraud in addition to the score.
  • the information related to the level of risk of fraud may be communicated via a data line, the internet, a facsimile or by voice (including an operator simply calling the customer with an oral report of the risk analysis).
  • the web server (block 98 ) may be used by the customer to provide confirmed fraud data, which would be used to update the client fraud data file for future use.
  • the business/customer makes an inquiry to assess a level of risk of fraud on a new application.
  • Data is appended to the address provided on the new application and the reference address (from a third party source such as a credit report or this information may be on the application).
  • a score is derived using the model described above.
  • the result may be provided real-time or via batch processing. In either case, the results may be provided to the customer in any commercially practicable method including, but not limited to, a data line, the internet, a facsimile or by voice (electronic or human voice).
  • Customers may establish internal policies and procedures for handling accounts based on the score.
  • The system described with reference to FIG. 2 is a client-server system.
  • the client transmits the request and input information to a remote server for processing.
  • FIG. 8 shows the logical operations used in a system that is hosted at the client site. That is, the customer hosts the system for determining the risk of fraud in a new application process or on an account takeover situation.
  • the client hosts the software to perform the analysis to create the score.
  • the client hosted solution may be limited to Zip code+4 data variables.
  • the model may not be as rigorous as the model described with respect to FIG. 2 . This type of system may provide a risk analysis that, while less rigorous, is useful in some situations.
  • FIGS. 8-15 show alternatives to the basic method described with respect to FIG. 2 for use in account takeover situations. That is, the basic logical operations of appending information to the addresses and calculating a score as described with references to FIGS. 2-6 b would be used.
  • the reference address is usually linked to the applicant's identity, not necessarily the address on the new application form.
  • the reference address is obtained from a credit bureau.
  • the old address or the FROM address would be the reference address and the address to which it is changed is the new address (e.g., the TO address).
  • a customer may want each change of address analyzed to determine a risk of fraud and matched to subsequent media requests, a customer may want the change of address analyzed only when such a request is matched to a media request, or a client may want each change of address analyzed for risk of fraud. Each of these situations will be discussed in turn with reference to FIGS. 6 b - 13 .
  • FIGS. 9 , 11 , and 12 show the logical operations for an embodiment in which an analysis is performed for each address change and a match is made for subsequent media requests.
  • the logical operations for analyzing the risk of fraud are the same as those described and shown in FIGS. 2-6 b. That is, information is appended to the old address (the address before the change of address request)—which for a takeover situation would be considered a reference address—and to the new address (i.e., the address it was changed to). Then, a score would be derived using the model described with reference to FIG. 6 a. However, as shown in block 300 , there is an address change file that maintains the change in address for a particular account.
  • a media request file is maintained.
  • a media request may include a request for financial instruments such as checks or credit cards.
  • a scored history file is maintained to store the score based on the analysis done (consistent with the analysis as described in FIG. 2 ) for an account in which there was a change in address.
  • business rules, which may be supplied by the customer, are used to determine whether to honor the media request. Some factors that may be used include the time lapse between the media request and the address change and the risk of identity theft fraud as determined by the scoring.
  • customers may only want an address change analyzed for risk of fraud if it is followed by a media request within a period of time of the address change.
  • the media request may be prior to the address change request.
  • a media request file is maintained storing media request information on accounts.
  • a 90 day rolling address change file is maintained. While in one embodiment the rolling address change file has a 90 day window, the rolling address file is not limited to a 90 day window but rather may be constructed to any length of time.
  • a determination is made as to whether a media request matches a change in address request. If so, then the analysis to score the change in address as described with respect to FIG. 2-6 b is performed (as shown in FIGS. 10-12 ).
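  • A minimal sketch of matching media requests against a rolling address change file follows; the record layout and function name are assumptions, with the 90 day window taken from the example above.
```python
from datetime import date

# Sketch of deciding whether a media request should trigger scoring: only
# address changes on the same account within the rolling window qualify.
def should_score(media_request, address_changes, window_days=90):
    """media_request: dict with 'account' and 'date'; address_changes: list of
    dicts with 'account' and 'date'. Returns True if the request should be scored."""
    for change in address_changes:
        if change["account"] != media_request["account"]:
            continue
        # The media request may precede or follow the address change.
        if abs((media_request["date"] - change["date"]).days) <= window_days:
            return True
    return False

# Hypothetical usage with illustrative account numbers and dates.
trigger = should_score({"account": "123", "date": date(2003, 6, 1)},
                       [{"account": "123", "date": date(2003, 5, 15)}])
```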
  • FIGS. 13-15 show the process described with respect to FIGS. 2-6 b being applied in the case when each address change is scored, but no additional steps are performed with respect to media requests.
  • a numerical score derived from this process may be used to assess risk.
  • the score may be considered along with data analyzed based on the business rules and client-defined parameters to make an assessment of the risk of identity theft.
  • This information may be provided in any number of ways including voice, data line, or facsimile.
  • the processing for takeover accounts may be done in batch, real-time, and in a client-server structure where the server is in a remote location or in a structure where the system is hosted at the client site.
  • a purpose of this invention is to prevent fraud losses associated with account takeover.
  • An additional purpose of the invention is to prevent fraud losses that accrue from criminals submitting fraudulent credit account applications to financial institutions where the criminal assumes the credit identity of an unknowing person/victim. If the account is approved, the criminal receives the credit card, debit card, checks or merchandise or services at a street address other than that of the victim.
  • An additional purpose of this invention is to reduce fraud losses in a form of account takeover that is associated with “over night” emergency requests for the replacement of items such as credit/debit cards, personal checks, traveler check replacements.
  • criminals can effect an account takeover by exploiting the emergency replacement process by requesting that an unauthorized replacement be sent to an address to which they have access. The criminal receives the replacement and commits unauthorized use fraud.
  • Emergency type credit and debit card replacements are often requested to be sent to an address other than the address of record.
  • a financial institution has a short processing window to establish the legitimacy of these requests. This invention would help to identify potentially fraudulent requests using the analysis described above.
  • Another purpose of this invention is to reduce fraud losses where product or service fulfillment or billing activities involve a street address and the effects of fraudulent addresses that would be negative to business interests. This can occur in the retail environment particularly in non-face to face transactions.
  • a further purpose of this invention is to reduce overhead and infrastructure expenses through low false positive rates, reducing the infrastructure expense necessary to process fraudulent claims and improving the customer experience.
  • embodiments of the present invention provide various systems and methods that can be used for detecting fraud in account requests.
  • Embodiments of the invention can be embodied in a computer program product. It will be understood that a computer program product including one or more features or operations of the present invention may be created in a computer usable medium (such as a CD-ROM or other medium) having computer readable code embodied therein.
  • the computer usable medium preferably contains a number of computer readable program code devices configured to cause a computer to effect one or more of the various functions or operations described herein.

Abstract

In one embodiment, this invention analyzes demographic data that is associated with a specific street address when presented as an address change on an existing account or an address included on a new account application when that address is different from the reference address (e.g., credit bureau header data). For the old or reference address and the new address (the new account application address or fulfillment address), demographic attributes are gathered, analyzed, compared for divergence and scaled to reflect the relative fraud risk.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 12/758,171, filed on Apr. 12, 2010, issued as U.S. Pat. No. 8,352,281, which is a continuation of U.S. application Ser. No. 10/697,076, filed Oct. 30, 2003, issued as U.S. Pat. No. 7,870,078, which claims priority from U.S. Provisional Application No. 60/423,298, filed on Nov. 1, 2002, all of which are hereby incorporated by reference in their entirety.
  • TECHNICAL FIELD
  • This invention relates, in general, to apparatuses and methods for identifying account fraud. In particular, this invention relates to detecting fraud, and assisting in fraud prevention, due to identity theft, including but not limited to address change fraud, account takeover fraud, and new account application fraud. In addition, this invention may be practiced using batch or real-time, online processing, or using customer hosted software applications.
  • BACKGROUND OF THE INVENTION
  • Numerous businesses, such as financial institutions, department stores, fulfillment businesses, on-line businesses, and businesses making sales over the telephone face the challenge of protecting themselves from customers attempting to defraud them. These businesses regularly handle thousands of accounts from their users or consumers. Such accounts may include instant credit or credit accounts with a department store or other retail outlet, or accounts involving checks, credit cards, debit cards, or ATM cards of a bank, credit or other financial institution.
  • Identity theft may include account takeover, wherein a thief steals the identity of an individual and then uses that information to take over ownership of that individual's account; or new account fraud, wherein the identity thief uses stolen information to open new accounts in another person's name.
  • Conventional methods for detecting identity theft when opening new accounts or for modifying existing accounts may be problematic. Currently, to detect identity theft type fraud, businesses have used negative databases of suspicious addresses like mail receiving agents or known fraud addresses. This method is useful only if there is known negative information. Often, delivery addresses are not included in a negative database.
  • In the case of new account application fraud, contemporary detection methods focus on the verification of data elements that are ascertainable by the criminal. These approaches seek to verify the identity of the new account applicant based on the information that is provided in the application process. There are typically three methodologies used in the new account verification process. First, businesses check negative file resources to see whether there is negative information associated with a data element (e.g., the provided social security number belongs to a deceased person). Second, businesses attempt to verify the applicant's identity by matching those application data elements to independent data sources, which often only serve to corroborate the stolen information that the crook is using. Third, there are logical checks, for example: does the driver's license number fit the format from the issuing state? These techniques are generally used for both "in wallet" and "out of wallet" solutions. "Out of wallet" verification adds a level of complexity to the criminal enterprise through the presentation of questions based on data not typically stored in a wallet or purse, for instance, asking a person to provide the maiden name of his/her mother.
  • As recognized by the present inventors, what is needed is a system, method, and computer program product for detecting identity theft fraud using a method that may either supplant or complement some of the methods discussed above. There is a further need for a system, method and computer program that identifies both account takeover identity theft and new account identity theft.
  • SUMMARY OF THE INVENTION
  • In light of the above and according to one broad aspect of one embodiment of the invention, disclosed herein is a system and methods for detecting fraud in account requests such as requests for new accounts, requests for change of address of existing accounts, and requests for media such as bank checks, duplicate credit cards, ATM cards, debit cards, past financial statements, and the like. In one example, embodiments of the present invention may utilize demographic data based on addresses associated with the account to determine whether an account request may involve identity theft fraud, and scores may be generated indicating the likelihood that the account request may involve identity theft fraud.
  • In one embodiment, this invention analyzes demographic data that is associated with a specific street address when that address is presented as an address change on an existing account, or is included on a new account application and differs from the reference address (e.g., credit bureau header data). Demographic attributes for the old or reference address and for the new address, new account application address, or fulfillment address are gathered, analyzed, compared for divergence, and scaled to reflect the relative fraud risk.
  • Another embodiment of the present invention relates to a method for assessing a risk of fraud. The method comprises receiving at least information relating to a first address relating to one of an account holder or an applicant; receiving information relating to a second address; and measuring demographic differences between the first and second addresses.
  • Another embodiment of the present invention relates to a method for assessing a risk of identity theft fraud with respect to new applications. The method comprises receiving first address information relating to an applicant for an account; and using demographic data relating to the address information.
  • Another embodiment of the present invention relates to a method for detecting a risk of identity theft fraud. The method comprises combining warm address information, known fraud address information, the USPS Deliverable Address File, and NCOA files with address specific, single point, demographic information; and coupling differential information relating to the addresses to predict a risk of fraud for at least one of account takeover, new account application, and fulfillment fraud.
  • Another embodiment of the present invention relates to a system for assessing a risk of fraud. The system includes a processor and memory; computer instructions operable by the processor to append data to at least one variable used in assessing a risk of identity theft fraud; computer instructions operable by the processor to analyze differences in demographic data for two different street addresses; computer instructions operable by the processor to calculate a score indicative of a level of risk of fraud; and computer instructions operable by the processor to output an assessment of a level of risk of fraud. In calculating the score, the formula used is of the form:

  • Y=A+B1*x1+B2*x2+B3*x3 . . . +Bn*xn
  • where Y, the dependent or outcome variable, is the result used to predict the risk of identity theft fraud, A is a constant value, B1 . . . Bn are the coefficients or weights assigned to the independent variables, and x1 . . . xn are the independent variables.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an overall process in accordance with an embodiment of the present invention.
  • FIG. 2 illustrates an example of logical operations for processing new account requests, in accordance with an embodiment of the present invention.
  • FIG. 3 is a block diagram showing the address information used in an embodiment of the present invention to detect identity theft via account takeover or via applications for new accounts.
  • FIG. 4 illustrates examples of logical operations for processing new account requests as illustrated in FIG. 2, in accordance with an embodiment of the present invention.
  • FIG. 5 illustrates examples of logical operations for processing new account requests as illustrated in FIG. 2, in accordance with an embodiment of the present invention.
  • FIG. 6 a illustrates an example of the logical operations for determining a risk of identity theft fraud, in accordance with an embodiment of the present invention.
  • FIG. 6 b is a block diagram showing logical operations for appending certain information to addresses in performing analysis for determining a risk of identity theft fraud, in accordance with an embodiment of the present invention.
  • FIG. 7 illustrates a Power of Segmentation summary chart.
  • FIG. 8 illustrates another example for processing new account request, in accordance with an embodiment of the present invention.
  • FIG. 9 illustrates an example of logical operations for processing requests to take over an account, in accordance with an embodiment of the present invention.
  • FIG. 10 illustrates another example of logical operations for processing a request to take over an account, in accordance with an embodiment of the present invention.
  • FIG. 11 illustrates examples of operations of FIGS. 9-10, in accordance with an embodiment of the present invention.
  • FIG. 12 illustrates examples of operations of FIGS. 9-10, in accordance with an embodiment of the present invention.
  • FIG. 13 illustrates another example of logical operations for processing a request to take over an account, in accordance with an embodiment of the present invention.
  • FIG. 14 illustrates examples of logical operations for FIG. 13, in accordance with an embodiment of the present invention.
  • FIG. 15 illustrates examples of logical operations for FIG. 13, in accordance with an embodiment of the present invention.
  • DETAILED DESCRIPTION
  • The present invention generally relates to a system and method for detecting or assessing the risk of identity theft fraud. The present invention will be described in the context of detecting or assessing the risk of identity theft fraud in two contexts: new account application fraud and account takeover fraud. However, the present invention is not limited to only detecting these types of fraud schemes.
  • FIG. 1 shows the general steps used in an embodiment of the present invention for detecting fraud. As shown in block 10, new address information is obtained. In the context of a new account application, this may be the address provided on the application and in the context of takeover of an account, this may be the new address provided that is to replace the current address on the account. As shown in block 12, this new street address information is compared to a reference address (which may be an address obtained from a credit report for the person or the current address prior to the change of address). Next, as shown in block 14, demographic data associated with the addresses is gathered and analyzed. As shown in block 16, an assessment of relative risk of identity theft fraud is made based on the analysis. As such, the present invention analyzes demographic data that is associated with a specific street address when presented as an address change on an existing account or an address included on a new account application when that address is different from a reference address (e.g., whether provided by the applicant or obtained from a credit bureau). For the two addresses, demographic attributes are gathered, analyzed, compared for divergence and scaled to reflect the relative risk of identity theft fraud. Risk may be expressed in a number of ways. In one embodiment, risk is expressed as an upper bound numerical score from 1 to 100 that is returned with reason codes to the customer for follow up.
  • One advantage to the present invention's use of address information is that an address is the one element that a criminal cannot manipulate. That is, when a criminal steals an identity, the criminal may be able to obtain identity information relating to the victim. However, the criminal cannot receive mail at the victim's house. Consequently, the criminal needs to use an address where he/she can receive mail (e.g., to obtain media or goods). As such, the present invention compares addresses. The present invention recognizes that there are demographic differences between addresses. For instance, one address may have an upscale socio-economic demographic as compared to the other address that has a more downscale socio-economic demographic. By using street address information as the basis for gathering, comparing and analyzing demographic data, the present invention uses elements that can be independently verified and analyzed to determine a risk of identity theft. Also, in addition to the demographic data, additional data elements such as warm address information or undeliverable address information may be used to assist in assessing the risk of identity theft fraud. Within the context of this document, "Account" includes its ordinary meaning and is also intended to cover any business relationship where there is financial risk on the part of the product or service provider, including but not limited to relationships of credit, debit, brokerage, retail, and non-face-to-face fulfillment activities (e.g., on-line sales).
  • In general, the risk assessment is performed when a business or service user sends/transmits the old or reference address and the new (requested changed) or new account application address with other identifying information for use by the software application embodying the present invention. Input data is matched to address specific demographic data which in turn is delivered to the decision engine to produce a risk score. Data processing can occur in batch, real time online or on customer or processor hosted software application. Communications can occur through telephone, data line, internet or tape/disk or other commercially available method. The application output may be returned to the service user via an internet accessed system, telephone, data line, or other commercially available method.
  • In general, the present invention uses statistical modeling of negative and demographic/socio-economic data elements associated with a street address to identify suspected identity theft fraud activity when there is a change in address or an address on a new application that is different from a reference address (e.g., one provided by the applicant or one obtained from a third party such as a credit bureau). As such, this invention may be used to detect identity theft fraud in existing accounts, new credit account applications or other business risks associated with address manipulation. The process generally analyzes the differences in demographic data between an old address or reference address and an address on a new application or an address change on an account to a new address. If a reference address is not provided by the new applicant or is not the address that was changed to a new address, then a reference address may be credit bureau header data or an address secured from a third party database. Additionally, other negative and logical data sources are used in the risk evaluation, such as warm address information, driver's license syntax specific to a state, or a comparison of the year a social security number was issued to the date of birth for rationality. Analysis may be performed through the use of regression models, neural networks and expert rules based technology. A score that scales risk is developed to identify the likelihood of identity theft fraud. The score is returned along with supportive investigative data to the customer/business for use in determining the level of risk it is willing to take in entering into a business relationship with the investigated person. Consequently, an embodiment of the present invention provides businesses with the opportunity to investigate a potential identity theft fraud and take steps to prevent economic loss. As will be discussed, in the preferred embodiment, the present invention is implemented in software.
  • Referring to FIG. 2, the method for detecting new account application fraud will be described. FIG. 2 illustrates an example of logical operations for detecting fraud in the context of receiving a new account request. As shown in block 20, a new request is received including the client data, and the received data is reformatted, normalized, or otherwise processed so that the data can be further processed. An input data stream or data inputs from the client/customer are delivered to the host system for processing. Examples of the type of message elements or data inputs include the following:
  • New Account Application/Address Change (New Address) data inputs:
    Customer identifier
    Transaction type
    Street directional
    Street name
    Unit number
    City name
    State name
    Zip Code plus 4
    Driver's license information
    First name
    Middle initial/name
    Last name
    Surname
    Account or reference number
    Address type code
    Social Security Number
    Date of Birth
    Loss potential (for takeover only)
  • New Account Reference Address/Address Change Old Address data inputs:
    Customer identifier
    Transaction type
    Street directional
    Street name
    Unit number
    City name
    State name
    Zip Code plus 4
    Driver's license information
    First name
    Middle initial/name
    Last name
    Surname
    Account or reference number
    Address type code
    Social Security Number
    Date of Birth
  • Account Access Device Requests, Normal or Emergency (Credit/Debit Cards, Checks, PIN) request input file (Address change process only):
    Transaction type
    Media type
    Request type
    Account number
    Street directional
    Street name
    Unit number
    City name
    State name
    Zip Code plus 4
    Driver's license information
    First name
    Middle initial/name
    Last name
    Surname
    Address type code
    Loss potential - Open to buy/balance
  • However, depending on the implementation, not all of the data elements need to be sent by the client. In one embodiment, for assessing risk of new account application identity theft fraud, input data includes name and address listed on the new application. In one embodiment, for assessing risk of account takeover identity theft fraud, input data includes name, current address or reference address, and the address to which it was changed.
  • In general, a business that wants an assessment of the risk of identity theft may provide the type of transaction (e.g., new application, change of address, etc.), information to identify the person that is to be investigated (e.g., name, social security number, date of birth, etc.), address information as will be discussed with reference to FIG. 3, account information, and whether there has been a media request (e.g., a request for checks, credit cards, a PIN number, or other items of value).
  • FIG. 3 is a block diagram showing two forms of identity theft fraud (block 150): takeover identity theft fraud (block 152) and new application identity theft fraud (block 154). In general, account takeover occurs when a person (e.g., a criminal) poses as the customer of a business and changes the address from the customer's address to another address (i.e., the criminal's address). The criminal then has media, such as checks, credit cards, a PIN number, or other items of value (including other goods), sent to the new/criminal address. The criminal then may commit fraud from the unauthorized use of the financial instrument or benefit from the illegally obtained goods. New application identity theft fraud involves a criminal submitting a new application that includes another person's information and attempting to obtain media or other goods and services from the business.
  • In the account takeover situation, usually there is an address change to a new address. The current address prior to the address change may be referred to as the old address, the reference address, or the FROM address. The new address (i.e., the address that the reference address was changed to) is sometimes referred to as the TO address. Similarly, in the new application situation, the reference address is the old address or FROM address. It may be provided by the applicant or it may be obtained from a third party such as a credit bureau. Also, in the new application situation, the address provided on the new application may be referred to as the new address or TO address.
  • Usually in the takeover situation, because of the address change, the business that is going to have an assessment made of the risk of identity theft fraud has an old address or reference address and a new address. In the new application situation, usually, a business that is going to have an assessment made will have the address stated on the application but may not have a reference address. It is more common to use a third party source to obtain a reference address for analysis of a risk of identity theft in a new application situation. However, the present invention may be used when, in a new application situation, a reference address is provided by the business that wants to analyze the applicant for identity theft fraud. Some of the information provided by the business in requesting an analysis for the risk of identity theft, such as a social security number, assists in obtaining a reference address for the person named on the application from a third party source.
  • An embodiment of the present invention uses an input data stream from the client/customer in a processing scenario or delivers required data inputs to the customer hosted software application. As shown above, data inputs for account takeover may include a customer name, account number and the old or FROM address, and new or TO address. As shown above, new account application input data may include name, institutional reference number, reference address and application addresses. If the reference address is not available, a third party address database will be consulted. Emergency "overnight" replacement processing inputs may include name, address, account or reference number, account type and open to buy/available credit balance.
  • As will be described, input data is compared against the warm address, known fraud data, USPS deliverable Address File and the NCOA files. The outcomes of these comparisons are appended to the inquiry record. The inquiry is then matched to the demographic data file and appended to the inquiry record. The inquiry record is written to the inquiry log.
  • At block 22, a determination is made as to whether a reference address is present. If a reference address is provided in the client data, then such address is also standardized (block 26). Otherwise, a reference address is appended to the data received (block 24). If the reference address is not available, a third party address database may be consulted. For instance, the reference address may be obtained from a credit bureau and appended to the data received. Then, the appended reference address is standardized (block 26).
  • In one embodiment, if the reference address and the new account application address are the same, the inquiry will be logged to an inquiry database and no further action will be taken. In another embodiment, if the reference address and the new account application address are the same, the inquiry will be logged to an inquiry database and the address will be checked to make sure it is not a warm address or an undeliverable address. Also, when the address on a new application matches the reference address, the business may not want the analysis conducted.
  • If there is a difference between the new account application address and the reference address, then additional information such as the information that will be described with respect to blocks 30, 40, 50, 60, 70, and 80 will be appended to both addresses (block 28). All information is appended to both the reference address and to the address provided in the application (block 28). In one embodiment, the information appended includes demographic data (block 30), U.S. postal service data (block 40), other data (block 50), previous history file data (block 60), client fraud data (block 70) received from a particular client, and address velocity data (block 80).
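  • As an illustration only, the following minimal Python sketch shows how the reference-address handling of blocks 22-28 might be organized: standardize both addresses, append a third-party reference address when the client did not supply one, stop after logging when the addresses match, and otherwise hand both addresses off for data appending and scoring. The helper names, toy addresses, and the lookup_reference callback are hypothetical and not part of the claimed system.

    def standardize(addr: str) -> str:
        # Very rough normalization stand-in; a production system would use a
        # USPS-certified address standardization step (block 26).
        return " ".join(addr.upper().replace(",", " ").split())

    def process_inquiry(new_address, reference_address, lookup_reference, warm_addresses):
        # Blocks 22-28: establish a reference address, short-circuit when the
        # addresses match, otherwise return both addresses ready for appends.
        new_addr = standardize(new_address)
        ref_addr = reference_address or lookup_reference()  # block 24: third-party source
        ref_addr = standardize(ref_addr)                    # block 26
        if ref_addr == new_addr:
            # Log only; optionally still flag warm/undeliverable addresses.
            return {"action": "log_only", "warm": new_addr in warm_addresses}
        return {"action": "append_and_score", "from": ref_addr, "to": new_addr}

    # Hypothetical usage:
    print(process_inquiry(
        "123 Main St, Springfield, MN 55999",
        None,
        lookup_reference=lambda: "77 Oak Ave, Springfield, MN 55999",
        warm_addresses={"500 MAILBOX PLAZA #12 SPRINGFIELD MN 55999"},
    ))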
  • With respect to FIG. 6 b, a brief description is provided of the logical operations performed in determining the data appended from demographic data (block 30). In selecting demographic data to append to an address, first an attempt is made to match the name and address (block 27). If there is a match, then the demographic data is appended from that file. However, if there is not a match for both name and address, then there is an attempt made to match the address. If a match is made, then the demographic data for the address is appended. Also, for the area defined by a Zip+4 or Zip code+4, demographic data for that area is appended. For instance, if information related to length of residence were being appended to each address, then first, a search would be made to match the name and address to the file containing such information. If a match is made, the length of residence data from that file would be appended. If such a match is not made, then an attempt would be made to match the address only. If there is a match, then the length of residence for the last person at the address would be appended. Also, the length of residence for the residences in the Zip+4 would be appended (or an average of the length of residences for the residences in the Zip+4 would be appended).
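  • The fallback hierarchy just described (name-and-address match, then address-only match, then the Zip+4 area average, then an overall mean) can be expressed compactly in code. The sketch below is illustrative only; the dictionary lookups stand in for the demographic files, and the field values are made up.

    def append_attribute(name, address, zip4, person_file, address_file, zip4_file, overall_mean):
        # Cascade: name+address -> address only -> Zip+4 average -> overall mean.
        if (name, address) in person_file:
            return person_file[(name, address)]   # exact name-and-address match
        if address in address_file:
            return address_file[address]          # last known value for the address
        if zip4 in zip4_file:
            return zip4_file[zip4]                # average for the Zip+4 area
        return overall_mean                       # population-level fallback

    # Hypothetical usage for a length-of-residence attribute:
    length_of_residence = append_attribute(
        name="JANE DOE",
        address="77 OAK AVE",
        zip4="55999-1234",
        person_file={},                           # no name+address record
        address_file={"77 OAK AVE": 6.0},         # address-only match found
        zip4_file={"55999-1234": 4.2},
        overall_mean=5.1,
    )
    print(length_of_residence)  # 6.0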
  • Demographic data may come from a number of national databases. Such data is compiled by companies such as Experian, Equifax, InfoUSA, and Acxiom. These databases include publicly available demographic data from sources such as vehicle registration data, county assessor information, warranty cards, and department of motor vehicle data among other sources. These databases may be accessed to obtain demographic data information. As shown in FIG. 3, demographic data appended to the addresses as shown in block 30 may include appending demographic data related to income (block 32), demographic census data (block 34), demographic data relating to housing characteristics (block 36) and data relating to household membership characteristics. Examples of such data include:
  • Census/demographic data for reference/application/change address:
    Address type (residence, single family, apartment, business)
    Owner/renter
    Single family/renter
    Primary and secondary names
    Age, primary and secondary
    Gender, primary and secondary
    Occupation, primary and secondary
    Marital status
    Number of adults
    Household income
    Length of residence
    Number of children
    Deliverable address
    Longitude/latitude
    Neighbor wealth
    Single family dwelling value
    Relocation velocity
    Education
    Vehicles
  • Further examples of demographic data related to income include:
    • RESEARCH—INCOME ESTIMATES
    • EXPENDABLE INCOME RANK
    • NET WORTH RANK
    • WEALTHFINDER CODE
    • POTENTIAL INVESTOR CONSUMER SCORE
    • REVOLVER MINIMUM PAYMENT MODEL
    • BUYER BEHAVIOR CLUSTER CODE
    • INTERNET USAGE MODEL
    • HIGH TECH HOUSEHOLD INDICATOR
    • HOUSEHOLD OWNS STOCKS OR BONDS
  • Examples of demographic data related to housing characteristics include:
    • LIKELIHOOD HOME IS OWNED OR RENTED
    • DELIVERY UNIT SIZE
    • HOMEOWNER INDICATOR
    • AGE OF HOME SOURCE CODE
    • AGE OF HOME
    • ESTIMATED HOME VALUE CODE
    • LOAN-TO-VALUE RATIO RANGE CODE
    • HOME LOAN AMOUNT
    • MORTGAGE AMOUNT SOURCE CODE
    • MORTGAGE BALANCE CODE
    • HOME EQUITY ESTIMATE
    • HOMEOWNER SOURCE CODE
    • HOUSEHOLD HAS MOVED FROM ADDRESS
    • RESEARCH—ADDRESS VERIFICATION
    • ADDRESS VERIFIED BY ANY DICTIONARY
    • PRIMARY SOURCE OF NAME AND ADDRESS
    • RESEARCH—SOURCE FLAGS/RECENCY DATE
    • LENGTH OF RESIDENCE IN YEARS
      Examples of demographic data related to household membership characteristics include:
    • HEAD OF HOUSEHOLD AGE CODE
    • HOUSEHOLD MEMBER 1 GENDER CODE
    • HOUSEHOLD MEMBER 1 TITLE CODE
    • HOUSEHOLD MEMBER 1 GIVEN NAME
    • HOUSEHOLD MEMBER 1 MIDDLE INITIAL
    • HOUSEHOLD MEMBER 1 SURNAME
    • HOUSEHOLD MEMBER 1 SURNAME SUFFIX
  • Also, similar information about other members of the household may be included. Similarly, as shown in FIG. 3, United States Postal Service data appended to each address as shown in block 40 may include application of Zip code+4 address standardization programs (block 42), national change of address (block 44), delivery point validation and service (block 46), locatable address conversion system (block 48), NES/Nixie (block 52), delivery sequence file (block 54), and deceased, pandering and suppression files (block 56). The deliverable address file and the national change of address file are searched to match the address. Examples of the delivery validation file and the national change of address file are as follows:
  • U.S. Postal Service Deliverable Address File:
    Street number
    Street directional
    Street name
    Unit number
    City name
    State name
    Zip Code plus 4
  • National Change of Address - USPS:
    Street number
    Street directional
    Street name
    Unit number
    City name
    State name
    Zip Code plus 4
    Confirmed change of address by USPS
    Move date
  • The following additional information may be gathered from the United States Postal Service data:
    • STREET DESIGNATOR
    • POST DIRECTION
    • UNIT TYPE
    • UNIT NUMBER
    • ZIP CODE
    • ZIP+4 CODE
    • DELIVERY POINT AND CHECK DIGIT
    • CARRIER ROUTE
    • ZIP+4 MATCH LEVEL
    • PRIMARY NUMBER IS A BOX
    • ZIP CODE STANDARDIZATION
    • CITY CHANGE INDICATOR
    • LOT
    • STATE CODE
    • COUNTY CODE
    • LACS INDICATOR
    • FINALIST UNIT RETURN CODE
    • VENDOR SOURCE
    • CITY TYPE INDICATOR
    • RECORD TYPE FROM ZIP+4 FILE
    Appendage
    • MATCH LEVEL
    • MOVE TYPE
    • EFFECTIVE MOVE DATE (YYYYMM)
    • UNIT TYPE
    • UNIT NUMBER
    • CITY NAME
    • STATE ABBREVIATION
    • ZIP CODE
    • ZIP+4 ADD-ON CODE
    • DELIVERY POINT AND CHECK DIGIT
    • CARRIER ROUTE
    • ZIP+4 MATCH LEVEL
    • PRIMARY NUMBER IS A BOX
    • LACS RECORD TYPE
    • MULTI SOURCE LEVEL
    • NCOA MATCH FOOTNOTES
    • INDIVIDUAL MATCH LOGIC REQUIRED
    • NIXIE MATCH
    • HOUSE NUMBER MISSING
    • CLIENT RECORD MISSING BOX
    • ADDRESSES DO NOT MATCH
    • STREET NAME DOES NOT MATCH
    • UNIT NUMBER MISSING IN CLIENT
    • UNIT NUMBER TRANSPOSITION
    • UNIT NUMBER MISMATCH
    • CLIENT MISSING 1ST NAME
    • 1ST NAME MATCHES 1ST INITIAL
    • MIDDLE NAME/INITIAL MISMATCH
    • GENDER MISMATCH
    • TITLE/SUFFIXES DO NOT MATCH
    • INDIVIDUAL MOVE AND 1ST NAMES DO NOT MATCH
    • INDIVIDUAL MATCH LOGIC AND 1ST NAMES DO NOT MATCH
    • SURNAME MATCH TO GEN. DELIVERY
    Appendage
    • MATCHED TO ZIP+4 FILE
    • NOT MATCHED TO ZIP+4 FILE
    • ALL COMPONENTS MATCHED TO DPV
    • DPV MATCHED BUT SECONDARY NUMBER INVALID
    • DPV MATCHED HIGHRISE DEFAULT (MISSING SECONDARY)
    • PRIMARY NUMBER MISSING
    • PRIMARY NUMBER INVALID
    • MISSING PO, RR, HC BOX NUMBER
    • MATCHED TO CMRA AND PMB,
    • DESIGNATOR PRESENT
    • MATCHED TO CMRA AND PMB,
    • DESIGNATOR NOT PRESENT
    • DPV CONFIRMATION INDICATOR
    • INVALID ADDRESS PO, RR, OR HC
    • BOX NUMBER INVALID
    • FUTURE EXPANSION
    • ZIP+4 MATCH LEVEL
    • ADDRESS SORT SEQUENCE NUMBER
    • VACANT INDICATOR
    • SEASONAL INDICATOR
    • RESIDENTIAL/BUSINESS INDICATOR
    • THROWBACK INDICATOR
    • DELIVERY TYPE CODE
    • DELIVERY POINT DROP INDICATOR
    • NUMBER OF DELIVERIES AT THE DROP
    • LOCATION ADDRESS CONVERSION
    • INDICATOR
    • NO STATISTICS INDICATOR
    Appendage
    • ADDRESS SOURCE CODE
    • ADDRESS DELIVERY CODES
    • PANDER CODE
    • LOCAL ADDRESS LINE
    • UNIT INFORMATION LINE
    • SECONDARY ADDRESS LINE/URBANIZATION CODE
    • LONG CITY NAME
    • ZIP CODE
    • ZIP+4 CODE
    • MAILABILITY CODE
    • MILITARY ZIP CODE
    • OPAC MATCH INDICATOR
    • NDI AFFIRMED APT INDICATOR
    • SECONDARY ADDRESS INDICATOR
    • POSTAL COUNTY CODE
    • LONG CITY NAME INDICATOR
    • CARRIER ROUTE CODE
    • LINE OF TRAVEL INFORMATION
    • LOT SORTATION NUMBER
    • PRESTIGE CITY NAME USED
    • ZIP/ADD-ON/DELIVERY POINT
    Appendage
    • MATCH CODE
    Appendage
    • MATCH CODE
    • ZIP PLUS FOUR CODE (4 DIGITS)
    • ZIP+4 MATCH LEVEL
    • ADDRESS DSF GROUP CODE
    • USPS DELIVERY SERVICE TYPE
    • CARRIER ROUTE CODE
    • DELIVERY POINT
    • 1990 CENSUS CODES
    • ADDRESS LOCATION TYPE
    • LOCATION (DWELLING UNIT) ID
    • ADDRESS TYPE
    • ROUTE TYPE
    • ROUTE NUMBER
    • BOX TYPE
    • BOX NUMBER
    • UNIT TYPE
    • UNIT NUMBER.
  • Continuing to refer to FIG. 3, other data may be appended to these addresses (block 50). Other data may include information from warm address files comprising high risk addresses like mail receiving agents, jails, prisons, hotels and the like (block 58). Warm address file components may include:
  • Warm Address File Components:
    Address type (Mail receiving agent, Other high risk, Hotel/Motel)
    Street number
    Street directional
    Street name
    Unit number
    City name
    State name
    Zip Code plus 4
  • Usually, an attempt is made to match the address to an address in the warm address file. If there is a match, then in one embodiment, the address type (e.g., a description of the place where the mail would be delivered, such as a prison) would be appended.
  • Other data may include non-client fraud address files comprising third party sourced fraud address records (block 60). Other data may further include Department of Justice county level crime statistics that scale the geographic propensity to crime frequency. Other similar information may be appended to the addresses. This information may be searched to match an address, and the information appended if there is a match.
  • Also, as shown in FIGS. 2 and 5, any data from a client fraud file may be appended to the addresses (block 70). This data may be contributed by the business making the request (block 66). That is, the business provides fraud address data records. An example of such a record is as follows:
  • Customer/Business Maintained Fraud/High Risk Address File:
    First name
    Middle initial/name
    Last name
    Surname
    Street number
    Street directional
    Street name
    Unit number
    City name
    State name
    Zip Code plus 4
  • These records may be from an on-line case management system that has stored accessible addresses for confirmed fraud incidents. This information will be used in the process for determining a risk of fraud, which may be indicated by a score.
  • Also, information is derived relating to inquiry activity for both the new address and the reference address. This information is stored and updated in an address velocity file. Information relating to frequency of inquiries is appended to the addresses (block 80). Also, a previous history file is reviewed for information relating to the new application and reference addresses. This information may be appended to the addresses (block 60). This previous history file includes previously scored addresses. This file may include date of scoring, address scored, and the score. This file may be updated to reflect any scoring performed on an address. False positive rates are improved through the use of warm address data and a customer maintained known fraud address file, coupled with the U.S. Postal Service National Change of Address Database. These data sources will be used in the score development process.
  • As shown in FIG. 2, once information has been appended to the addresses, then a score is created based on all the data (block 82). Generally, statistical models are used to derive a score, which is used to predict the risk of fraud. At block 82, a score is created based on the data associated with the request and the appended data. FIG. 6 a shows the logical operations for determining a score in accordance with one embodiment of the present invention. As shown in FIG. 6 a, at block 180, the first step is to analyze the demographic data appended to each of the addresses and derive information used to predict the risk of fraud. Next, as shown in block 182, a score is calculated based on the weights placed on each of the selected variables. In one embodiment of the present invention the following variables have been selected to be used in the model to predict the risk of fraud: (1) a variable that is based on the change in the financial make-up of the two addresses; (2) a variable that identifies records that were confirmed through third party data to match the name at a given address; (3) a variable that is based on the home value between the two addresses; (4) a variable that is based on the distance of the move for the change of address; (5) a variable that is based on whether the type of housing (e.g., apartment, non-apartment, single family home) has changed for the current address in comparison with the reference address or old address; (6) a variable that is based on whether the application address or the new address is a building (i.e., not an apartment or a home, rather something other than an apartment or a home); (7) a variable based on whether the new application address, the new address or current address is a warm address; (8) a variable that is based on the difference in Internet usage for the Zipcode+4 area for the two addresses; and (9) a variable that is based on the average length of stay at the residence in the Zip+4 area for the reference address or the old address (when there is an address change requested). Then, the second step is to use the model to obtain a score to predict the risk of fraud. Each of these variables will be discussed in turn.
  • The first variable is based on the change in the financial make-up of the two addresses. In one embodiment of this model, this variable is called "Value1." This variable analyzes the change in the financial make-up between the reference address, the old address (e.g., in address change or account takeover situations), or FROM address and the new application address, the new address, or the TO address (e.g., the address to which it has been changed). It is a composite of three demographic variables: Income, Net Worth and Home Ownership. In one embodiment, to derive the composite information the following steps are used. First, the difference in income is determined. As described with respect to FIG. 6 b, to determine the difference in income, for both addresses (e.g., new application address and reference address in risk of fraud relating to a new application or, as will be described later, reference or old and new addresses in a takeover situation), income for the respective address is appended by matching name and address to the appropriate demographic file. If there is not a match by both name and address, then a search is made to match by address only to find income. If there is not a match by address only, then the Zip+4 for an address is used and the average income for that Zip+4 is appended to the address. If there is still not a match, then the mean income for all individuals is assigned. For instance, the mean income for all individuals may be assigned when a Zip+4 for a particular address cannot be determined or when demographic data cannot be located for the address of a Zip+4 area.
  • Once a value has been appended to each address for income, then the difference in income between the two addresses is calculated using the following formula:

  • DF_INCOME=INCOME(FROM)−INCOME(TO)
  • Where DF_INCOME refers to the difference in income between the two addresses, INCOME(FROM) refers to the income appended to the reference address or old address, and INCOME(TO) refers to the income appended to the new application address or the new address.
  • Next, the difference in net worth ranking is constructed. To determine the difference in net worth, for both addresses, net worth ranking is appended by first trying to match by name and address to the demographic file. If a match is not found, then a match by address only is attempted to find net worth ranking. If there is still no match, then a match is made to the Zip+4 of the address and the average net worth ranking for that Zip+4 is appended. If there is still no match, then the mean net worth ranking for all individuals is appended to the address. For instance, as with income, the mean net worth ranking for all individuals may be appended when a Zip+4 for a particular address cannot be determined or when demographic data cannot be located for the address of a Zip+4 area.
  • Once a net worth value has been appended for both addresses, then the difference in net worth between the two addresses is calculated as follows:

  • DF_NETWR=NETWR(FROM)−NETWR(TO)
  • DF_NETWR refers to the difference in net worth. NETWR(FROM) refers to the net worth of the reference address or old address and NETWR(TO) refers to the net worth of the new application address or the new address.
  • Next, the difference in homeownership is constructed. In order to determine the difference in homeownership, for both addresses, a homeowner indicator is appended by matching name and address to the appropriate demographic file. If there is no match, then a homeowner indicator is appended by matching by address only. If there is still no match, the average homeownership percentage for that Zip+4 is appended. If there is still no match, the mean homeowner percentage for all individuals is assigned. For instance, as with income, the mean homeowner percentage for all individuals may be appended when a Zip+4 for a particular address cannot be determined or when demographic data cannot be located for the address of a Zip+4 area.
  • Once we have a value for both the FROM and TO address, we then calculate the difference between the FROM and TO address as follows:

  • DF_HOMEON=HOMEON(FROM)−HOMEON(TO)
  • Where DF_HOMEON refers to the difference in homeownership, HOMEON(FROM) refers to homeownership for reference address or old address, and HOMEON(TO) refers to homeownership for the new application address or new address.
  • Once the three differences for income, net worth and homeownership have been calculated, then a variable that is a combination of the three is created:
  • IF DF_HOMEON <= −1,
    THEN VALUE1 = 0.00056
    IF DF_HOMEON > −1 and DF_HOMEON <= 0
    AND DF_NETWR <= −4.7
    THEN VALUE1 = 0.00701
    IF DF_HOMEON > −1 and DF_HOMEON <= 0
    AND DF_NETWR > −4.7 and DF_NETWR <= −2.7
    THEN VALUE1 = 0.00131
    IF DF_HOMEON > −1 and DF_HOMEON <= 0
    AND DF_NETWR > −2.7 and DF_NETWR <= −1.7
    THEN VALUE1 = 0.00191
    IF DF_HOMEON > −1 and DF_HOMEON <= 0
    AND DF_NETWR > −1.7 and DF_NETWR <= −0.7
    AND DF_INCOME <= −11,000
    THEN VALUE1 = 0.00056
    IF DF_HOMEON > −1 and DF_HOMEON <= 0
    AND DF_NETWR > −1.7 and DF_NETWR <= −0.7
    AND DF_INCOME > −11,000
    THEN VALUE1 = 0.00565
    IF DF_HOMEON > −1 and DF_HOMEON <= 0
    AND DF_NETWR > −0.7 and DF_NETWR <= 0.3
    THEN VALUE1 = 0.00066
    IF DF_HOMEON > −1 and DF_HOMEON <= 0
    AND DF_NETWR > 0.3 and DF_NETWR <= 2.3
    THEN VALUE1 = 0.00131
    IF DF_HOMEON > −1 and DF_HOMEON <= 0
    AND DF_NETWR > 2.3
    THEN VALUE1 = 0.00297
    IF DF_HOMEON > 0
    AND DF_NETWR <= 5.3
    THEN VALUE1 = 0.01894
    IF DF_HOMEON > 0
    AND DF_NETWR > 5.3
    AND DF_INCOME <= 37,000
    THEN VALUE1 = 0.00275
    IF DF_HOMEON > 0
    AND DF_NETWR > 5.3
    AND DF_INCOME > 37,000
    THEN VALUE1 = 0.01095
  • The numerical values are derived from a statistical analysis using known methods of actual identity theft fraud data, which was used to build this model.
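  • The rule table above is, in effect, a small decision tree over the three differences. A possible transcription into Python follows; it simply restates the published cut-points and values and is not an independent implementation of the model.

    def value1(df_homeon, df_netwr, df_income):
        # VALUE1 composite from the homeownership, net worth and income differences.
        if df_homeon <= -1:
            return 0.00056
        if df_homeon <= 0:                      # -1 < DF_HOMEON <= 0
            if df_netwr <= -4.7:
                return 0.00701
            if df_netwr <= -2.7:
                return 0.00131
            if df_netwr <= -1.7:
                return 0.00191
            if df_netwr <= -0.7:
                return 0.00056 if df_income <= -11000 else 0.00565
            if df_netwr <= 0.3:
                return 0.00066
            if df_netwr <= 2.3:
                return 0.00131
            return 0.00297
        if df_netwr <= 5.3:                     # DF_HOMEON > 0
            return 0.01894
        return 0.00275 if df_income <= 37000 else 0.01095

    print(value1(df_homeon=1, df_netwr=2.0, df_income=0))  # 0.01894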
  • The next variable identifies records that were confirmed through third party data to match the name at a given address. This variable is titled “MATCH.” If a match is found to the third party database (demographics) via name and address, this variable is coded as a value of 1. If it is not confirmed, it is coded as a 0.
  • The next variable is based on the home value between the two addresses. To determine the value for this variable an analysis of the change in the home value is performed. This variable is named "DF_HOMVL." In one embodiment, the difference is taken between the home value of the FROM address (e.g., the reference address in a new application situation or the old address in takeover situations) and the TO address (e.g., the new application address in a new application or a new address in takeover situations). For both the FROM and TO address, a home value is appended by matching by name and address to the appropriate demographic file. If there is not a match, then the home value is appended based on a match by address only. If there is still no match, then the average home value for the Zip+4 of the address is appended. If there is still no match, then the mean home value for all individuals is appended. Once we have a home value for both the FROM and TO address, we calculate the difference between the FROM and TO address as follows:

  • DF_HOMVL=HOMEVAL(FROM)−HOMEVAL(TO)
  • Where DF_HOMVL is the difference in home value, HOMEVAL(FROM) refers to the home value of the address prior in time to the one reflected as the address in a new application or in a change of address, and HOMEVAL(TO) refers to the home value of the address provided on the new application as the current address or of the new address provided in changing the address.
  • The next variable in the model is based on the distance of the move for the change of address. This variable is named “DF_DISTN.” In one embodiment, this variable measures the distance of the move for the change of address. Using the delivery point for both the FROM and TO address, we then determine the longitude and latitude of the delivery point. We then calculate the distance of the move as follows:

  • DF_DISTX=69.1*[TO(Latitude)−FROM(Latitude)]

  • DF_DISTY=69.1*[TO(Longitude)−FROM(Longitude)]*COS[FROM(Latitude)/57.3]

  • DF_DISTN=SQRT[(DF_DISTX*DF_DISTX)+(DF_DISTY*DF_DISTY)]
  • Where DF_DISTX refers to the change in latitude from the TO and FROM addresses multiplied by 69.1, DF_DISTY refers to the change in longitude from the TO and FROM addresses multiplied by the cosine of the latitude of the FROM address divided by 57.3, all of which is multiplied by 69.1, and DF_DISTN is calculated as the square root of the sum of the squares of DF_DISTX and DF_DISTY. The mathematical calculation is a known formula for converting latitudinal and longitudinal information into a distance.
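  • The distance calculation is the familiar flat-earth approximation: roughly 69.1 miles per degree of latitude, with longitude degrees shortened by the cosine of the latitude (57.3 is approximately the number of degrees per radian). A minimal Python sketch, with made-up coordinates, is shown below.

    import math

    def move_distance_miles(from_lat, from_lon, to_lat, to_lon):
        # DF_DISTX / DF_DISTY / DF_DISTN as defined above.
        dist_x = 69.1 * (to_lat - from_lat)
        dist_y = 69.1 * (to_lon - from_lon) * math.cos(from_lat / 57.3)
        return math.sqrt(dist_x * dist_x + dist_y * dist_y)

    # Hypothetical usage: two delivery points a few miles apart.
    print(round(move_distance_miles(44.98, -93.27, 44.95, -93.10), 1))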
  • The next variable is based on whether the type of housing (e.g., apartment, non-apartment, single family home) has changed for the current address in comparison with the reference address or old address. This variable is called “HOMAPT.” In one embodiment, this variable indicates whether or not a person has moved from a non-apartment to an apartment. In one embodiment, if the FROM address is not an apartment and the TO address is an apartment, this variable is coded as a 1. Otherwise this variable is coded as a 0.
  • The next variable is based on whether the new application address or the new address is a building. This variable is named "BLDNG." This variable indicates whether or not the TO address is a building. In the model, if the TO address is a building, this variable is coded as a 1. Otherwise this variable is coded as a 0.
  • The next variable is based on whether the new application address, the new address or current address is a warm address. In short, this variable indicates if the second address is “warm”. Warm addresses are addresses that are non-standard delivery addresses. This type of address includes addresses such as UPS Stores, Mail Boxes, Etc., hotels/motels, etc. The variable is named “WARMADD.” In the model, if a match is made by TO the address to the Warm Address file, this variable is coded as a 1. Otherwise this variable is coded as a 0.
  • The next variable is based on the difference in internet usage for the Zipcode+4 area (sometimes also referred to as Zip+4) for the two addresses. In one embodiment, this variable measures the difference in internet usage between the area defined by the Zip+4 for the FROM address and the area defined by the Zip+4 for the TO address. This variable is named "Z4_WEB." In one embodiment, this information is derived as follows. First, the average internet usage is calculated for the Zip+4 area for both the FROM address and the TO address. This data is resident on the demographic file, where a value of 1 indicates the lowest likelihood of internet usage and 9 indicates the highest. Then, the average value for all addresses in the specific Zip+4 area is calculated. Once the value for each of the FROM and TO addresses is determined, the difference variable is coded as follows:

  • Z4_WEB=WEBUSE(FROM)−WEBUSE(TO)
  • Where Z4_WEB refers to the difference in web usage for the area defined by the Zip+4 for each of the addresses, WEBUSE(FROM) refers to the average internet usage for the area defined by the Zip+4 for the FROM address (e.g., the reference address in a new application situation or the old address in a takeover situation), and WEBUSE(TO) refers to the average internet usage for the area defined by the Zip+4 for the TO address (e.g., the new application address or the new address in the takeover situation). While average internet usage is used as the measure, other measures such as median internet usage may be used in the appropriate model.
  • The last variable used in this embodiment of the model is based on the average length of stay at the residence in the Zip+4 area for the reference address or the old address (when there is an address change requested). This variable is named "Z4_LORF." In one embodiment, this variable measures the average length of residence for the area defined by the Zip+4 for the FROM address. In one embodiment, this information is derived as follows. First, the average length of residence for the area defined by the Zip+4 is calculated for the FROM address. This data is resident on the demographic file, where the values indicate the number of years a person has resided at that residence. Then, the average value for all addresses in that Zip+4 area is calculated. The variable then indicates the average length of residence for people living in the area defined by the Zip+4 for the FROM address.
  • In one embodiment, the model used to predict has nine variables. However, the model used to predict may have any number of variables. Also, the variables used may evolve based on information collected on the characteristics of confirmed fraud accounts. Another factor that may change the variables used relates to the evolution of methods used by the people committing the fraud. As the methods change, the variables may have to be varied. However, the present invention is not limited to the number of factors used or the types of factors used in the model to predict the risk of identity theft fraud.
  • Once the variables have been analyzed, the values for each of the variables are plugged into the model. The basic formula for the model is generalized as follows:

  • Y=A+B1*x1+B2*x2+B3*x3 . . . +Bn*xn,
  • Where Y, the dependent or outcome variable, is the result used to predict the risk of identity theft fraud, A is a constant value, B1 . . . Bn are the coefficients or weights assigned to the independent variables, and x1 . . . xn are the independent variables themselves. In the embodiment described above, the independent variables include VALUE1, MATCH, DF_HOMVL, DF_DISTN, HOMAPT, BLDNG, WARMADD, Z4_WEB, and Z4_LORF.
  • Using known statistical methods to analyze actual data from confirmed identity theft fraud cases, the following coefficients were determined for the model:
  • COMPUTE SCORE = 0.00154 + VALUE1 * 0.93061 + MATCH * (−0.00594) + DF_HOMVL * 2.12E−09 + DF_DISTN * 1.53E−06 + HOMAPT * 0.002093 + BLDNG * 0.002334 + WARMADD * 0.078844 + Z4_WEB * (−0.00021) + Z4_LORF * 0.000134
  • Where COMPUTE SCORE refers to the score that will be used, at least in part, to predict a risk of identity fraud. In this method, the coefficients were determined using ordinary least squares regression. However, other known statistical methods such as logistic regression, CHAID, CART, discriminant analysis, neural networks or the like may be used.
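  • Applying these example coefficients is a straightforward weighted sum. The sketch below assumes the nine independent variables have already been derived as described above; the input values in the usage lines are invented for illustration.

    def compute_score(value1, match, df_homvl, df_distn, homapt, bldng, warmadd, z4_web, z4_lorf):
        # Ordinary least squares model with the example coefficients from the text.
        return (0.00154
                + value1   * 0.93061
                + match    * -0.00594
                + df_homvl * 2.12e-09
                + df_distn * 1.53e-06
                + homapt   * 0.002093
                + bldng    * 0.002334
                + warmadd  * 0.078844
                + z4_web   * -0.00021
                + z4_lorf  * 0.000134)

    # Hypothetical usage: a long-distance move to a warm-address apartment.
    score = compute_score(value1=0.01894, match=0, df_homvl=250000, df_distn=850.0,
                          homapt=1, bldng=0, warmadd=1, z4_web=2, z4_lorf=3.5)
    print(round(score, 4))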
  • In one embodiment the score is between 0 and 1, with 1 being most likely to be fraud. However, the scale may be any range. For instance, the score may be in a range of 1 to 100. Similarly, the score may be converted to a description. So depending on the risk tolerance of the institution making the inquiry, ranges may be provided that would indicate the likelihood of identity theft fraud. For instance, on a scale of 0 to 1, a 0.8 or above may be designated as a high risk for fraud, and the report to the company making the inquiry may be a descriptive assessment based on a numerical score rather than the score itself. The score itself shows some level of risk of identity theft fraud. Whether the level of risk is acceptable is a determination that must include input from the business as to its tolerance of this risk. Also, while the score itself may be used to predict whether identity theft is being perpetrated, the score may be used with other data to assess a risk of fraud, such as, without limitation, warm address files, undeliverable mail addresses, the syntax of the driver's license for a particular state, or a comparison of the year the social security number was issued to the date of birth for rationality.
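  • As a sketch of the scaling and banding mentioned here (the 0.8 and 0.5 cut-offs are illustrative placeholders chosen for this example, not values prescribed by the invention):

    def describe_risk(raw_score, high=0.8, medium=0.5):
        # Map a raw 0-1 score onto a 1-100 scale and a descriptive band.
        scaled = max(1, min(100, round(raw_score * 100)))
        if raw_score >= high:
            band = "HIGH RISK"
        elif raw_score >= medium:
            band = "MEDIUM RISK"
        else:
            band = "LOW RISK"
        return scaled, band

    print(describe_risk(0.83))  # (83, 'HIGH RISK')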
  • The model described for determining a score was developed using confirmed identity theft fraud data. However, while the variables selected are based on an analysis of this confirmed fraud data, other variables may be selected. Because the model described herein is based on a statistical analysis of confirmed fraud data, the model takes what is known about the past and applies it to future events. Over time, however, behaviors and relationships change. This is especially true in the area of identity theft fraud. As fraud models and tools are effectively deployed, the fraud migrates, creating new behaviors and relationships. Because of this, the model may be modified by using the same methods described herein to emphasize certain variables or add other variables from the information sources described herein. The model described herein was tested to understand how well the model "performs" or segments the entire population of applications. The effectiveness of the model described here is shown by the segmentation table and the ROC curve.
  • In developing the model, the confirmed fraud data is scored. The scored data was categorized into equal sized buckets or categories from lowest to highest. Thus, the identity theft fraud rate present within each bucket is shown by categorizing the worst 5% into the first bucket, the next worst 5% into the second bucket, etc. The following chart shows the performance of the model.
  • Segment    Percent of Cases    Indexed Fraud Rate
    1          5%                  908
    2          5%                  279
    3          5%                  301
    4          5%                   93
    5          5%                   88
    6          5%                   88
    7          5%                   76
    8          5%                   59
    9          5%                   42
    10         5%                   17
    11         5%                   21
    12         5%                    4
    13         5%                    8
    14         5%                    0
    15         5%                    8
    16         5%                    4
    17         5%                    0
    18         5%                    0
    19         5%                    0
    20         5%                    0
    TOTAL      100%                100
  • In this example, segment 1 is the worst 5% of scored records from the test data set. As shown by the chart, this segment has a fraud rate that is over 9 times the average fraud rate for the entire population. (Note: the indexed fraud rate is calculated as the segment level fraud rate divided by the overall population fraud rate, multiplied by 100.)
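  • For readers who want to reproduce this style of validation on their own scored data, the following sketch computes indexed fraud rates for twenty equal-sized segments; the simulated records in the usage lines are purely illustrative and do not reflect the test data reported above.

    import random

    def indexed_fraud_rates(records, n_segments=20):
        # records: list of (score, is_fraud) pairs, higher score = riskier.
        ranked = sorted(records, key=lambda r: r[0], reverse=True)
        overall_rate = sum(fraud for _, fraud in ranked) / len(ranked)
        size = len(ranked) // n_segments
        indexes = []
        for i in range(n_segments):
            segment = ranked[i * size:(i + 1) * size]
            seg_rate = sum(fraud for _, fraud in segment) / len(segment)
            indexes.append(round(seg_rate / overall_rate * 100))
        return indexes

    # Hypothetical usage with simulated scores and a 2% base fraud rate:
    random.seed(0)
    data = [(random.random(), int(random.random() < 0.02)) for _ in range(10000)]
    print(indexed_fraud_rates(data)[:3])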
  • Another way to look at the performance of the model is to look at a Power of Segmentation summary chart (FIG. 7). This is sometimes also referred to as a ROC curve or Lorenz Diagram. This view shows how many cumulative fraud records are identified for each level of screening.
  • For example, this curve indicates that the model is able to identify approximately 60% of the total frauds (y-axis) by only looking at the worst 10% of records as identified by the model (x-axis). Similarly, the curve shows that the worst 5% account for approximately 45% of the total fraud. The top line shows how well the model performs, whereas the lower line shows how a randomly generated model performs (i.e., If one looked at 10% of the records, one would expect to identify about 10% of the fraud.)
  • Going back to FIG. 2, after the score is determined, at block 84, the address velocity file is updated with the score. Next, at block 86, business rules are applied to the data. These business rules ensure that regardless of the score, certain data elements are checked (e.g., whether the address is a warm address, whether the address is an undeliverable mail address, whether the social security number is valid, etc.). That is, a file is created on this analyzed case that includes data relating to whether a warm address was present, whether it was a reported fraud address, or whether the address was an undeliverable mailing address. Such information may be used in the analysis of other inquiries in the future. Moreover, regardless of the score, if the new address or the address on an application is a warm address, then the rule may be to report that as a high risk of identity theft.
  • Also, regardless of the fraud risk information, data relating to undeliverable mailing addresses would be useful to the customer making the inquiry, because sending media (e.g., checks, credit cards or the like) to an undeliverable address is an expense to the business and creates a risk that fraud will be committed. As such, informing the customer making the inquiry that the address is an undeliverable mailing address would save the customer the expense of mailing media to an undeliverable address. Also, by not mailing media to an undeliverable address, the customer reduces the risk of fraud being committed with the media.
  • Next, at block 88, user-defined parameters are applied. That is, the business making the request may have its own criteria (e.g., verify the syntax of the driver's license). Each business may also provide score thresholds based on its tolerance for risk. Those requirements are applied, and that information is appended to the score and the other information discussed with respect to the business rules to create an output for sharing with the business, as in the sketch below.
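  • A minimal sketch of mapping the score into client-defined risk bands is shown below; the threshold names and band labels are assumptions supplied for illustration only. For example, a client with a low tolerance for risk might pass thresholds such as {"high": 700, "review": 500}.

    def apply_client_parameters(result, thresholds):
        """Map the raw score to the client's own risk bands (threshold values are client-supplied)."""
        score = result["score"]
        if result.get("risk_override") == "HIGH" or score >= thresholds["high"]:
            band = "HIGH"
        elif score >= thresholds["review"]:
            band = "REVIEW"
        else:
            band = "LOW"
        return {**result, "risk_band": band}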
  • At block 90, fraud alerts may be created with reason codes and transmitted to the business entity through a user interface at block 92 or a web server at block 98. The reason codes may be based on user-defined criteria or on the variables and data considered in the analysis. At block 91, the previous history file for this account may also be updated. As shown in blocks 94 and 96, a case management system provides display screen functionality for the fraud alerts, along with management queuing functionality, operator tracking, and pending case tracking.
  • In terms of output to the customer who initiated the inquiry, in one embodiment, the output message content includes the following:
  • Output Message Content
    Score
    One or more reason codes
    Account or reference number
    First name
    Middle initial/name
    Last name
    Surname
  • However, the output may be provided in other ways. For instance, the output may simply state a level of risk, or provide a statement of the level of risk of fraud in addition to the score. Also, the information related to the level of risk of fraud may be communicated via a data line, the internet, a facsimile, or by voice (including an operator simply calling the customer with an oral report of the risk analysis).
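  • One possible rendering of the output message content listed above is sketched below as a simple dictionary; the field names are assumptions for illustration, since the content may be packaged and delivered in any of the ways just described.

    def build_output_message(score, reason_codes, account_number, name):
        """Assemble the output message content (field names are illustrative)."""
        return {
            "score": score,
            "reason_codes": reason_codes,                   # one or more reason codes
            "account_or_reference_number": account_number,
            "first_name": name.get("first"),
            "middle_initial_or_name": name.get("middle"),
            "last_name": name.get("last"),
            "surname": name.get("surname", name.get("last")),
        }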
  • Also, the web server (block 98) may be used by the customer to provide confirmed fraud data, which would be used to update the client fraud data file for future use.
  • In operation, the business/customer makes an inquiry to assess the level of risk of fraud on a new application. Data is appended to the address provided on the new application and to the reference address (obtained from a third party source such as a credit report, or provided on the application itself). A score is derived using the model described above. The result may be provided in real time or via batch processing. In either case, the results may be provided to the customer in any commercially practicable manner including, but not limited to, a data line, the internet, a facsimile, or voice (electronic or human). Customers may establish internal policies and procedures for handling accounts based on the score. The sketch below outlines one such inquiry end to end.
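  • The following is an end-to-end sketch of a single inquiry under stated assumptions: the helper names are hypothetical, the demographic lookup stands in for the appended data sources, and the model is treated as a black-box scoring function rather than the specific model described above.

    def assess_new_application(new_address, reference_address, lookup_demographics, model):
        """Score one inquiry: append demographic data to both addresses, measure the
        differences, and pass them to a scoring function (all names are hypothetical)."""
        new_attrs = lookup_demographics(new_address)          # demographic data for the new address
        ref_attrs = lookup_demographics(reference_address)    # demographic data for the reference address
        differences = {
            k: new_attrs[k] - ref_attrs[k]
            for k in new_attrs
            if isinstance(new_attrs.get(k), (int, float)) and isinstance(ref_attrs.get(k), (int, float))
        }
        return model(differences)                             # numeric identity theft risk score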
  • The system described with reference to FIG. 2 is a client-server system: the client transmits the request and input information to a remote server for processing. FIG. 8 shows the logical operations used in a system that is hosted at the client site. That is, the customer hosts the system for determining the risk of fraud in a new application process or in an account takeover situation.
  • As shown in FIG. 8, most of the logical operations are the same as the operations described in FIG. 2. One difference, however, is that the client hosts the software that performs the analysis to create the score. Also, depending on the level of resources committed, the client may not access all of the demographic data used in the process described with respect to FIG. 2. For instance, the client-hosted solution may be limited to ZIP+4 data variables. As such, the model may not be as rigorous as the model described with respect to FIG. 2. This type of system may provide a risk analysis that, while less rigorous, is useful in some situations.
  • FIGS. 8-15 show alternatives to the basic method described with respect to FIG. 2 for use in account takeover situations. That is, the basic logical operations of appending information to the addresses and calculating a score as described with reference to FIGS. 2-6 b would be used. As described with respect to FIG. 2, in determining fraud with respect to a new application the reference address is usually linked to the applicant's identity, not necessarily to the address on the new application form; usually, in a new application situation, the reference address is obtained from a credit bureau. In the takeover situation, however, the old address (the FROM address) is the reference address, and the address to which it is changed is the new address (e.g., the TO address). A customer may want each change of address analyzed to determine a risk of fraud and matched to subsequent media requests, a customer may want the change of address analyzed only when such a request is matched to a media request, or a client may want each change of address analyzed for risk of fraud without regard to media requests. Each of these situations will be discussed in turn with reference to FIGS. 6 b-13.
  • FIGS. 9, 11, and 12 show the logical operations for an embodiment in which an analysis is performed for each address change and a match is made for subsequent media requests. As shown in FIGS. 9, 11, and 12, the logical operations for analyzing the risk of fraud are the same as those described and shown in FIGS. 2-6 b. That is, information is appended to the old address (the address before the change of address request), which in a takeover situation would be considered the reference address, and to the new address (i.e., the address to which it was changed). Then, a score is derived using the model described with reference to FIG. 6 a. However, as shown in block 300, there is an address change file that maintains the change in address for a particular account. Also, as shown in block 302, a media request file is maintained. A media request may include a request for financial instruments such as checks or credit cards. In addition, as shown in block 304, a scored history file is maintained to store the score based on the analysis performed (consistent with the analysis described in FIG. 2) for an account in which there was a change in address. When a media request is made, it is checked against the scored history file. If there is a match in terms of an address change on the same account on which the media request is made, business rules, which may be supplied by the customer, are used to determine whether to honor the media request. Factors that may be used include the time lapse between the media request and the address change and the risk of identity theft fraud as determined by the scoring; a sketch of such a check appears below.
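  • A minimal sketch of checking a media request against the scored history file is shown below; the account and date fields, the threshold values, and the two outcomes are illustrative assumptions, with the actual decision left to customer-supplied business rules. For example, with rules = {"score_threshold": 700, "max_days": 30} and dates given as datetime.date values, a request made shortly after a high-scoring address change on the same account would be held for review.

    def check_media_request(media_request, scored_history, rules):
        """Match a media request to a scored address change on the same account (illustrative)."""
        for change in scored_history:
            if change["account"] == media_request["account"]:
                days_between = abs((media_request["date"] - change["date"]).days)
                # Business rules may weigh both the time lapse and the identity theft risk score.
                if change["score"] >= rules["score_threshold"] and days_between <= rules["max_days"]:
                    return "HOLD_FOR_REVIEW"
        return "HONOR_REQUEST"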
  • As shown in FIGS. 10, 11, and 12, a customer may instead want an address change analyzed for risk of fraud only if it is followed by a media request within a period of time of the address change. It should be noted that the media request may also come prior to the address change request. In this situation, as shown in block 320, a media request file is maintained storing media request information on accounts. Also, as shown in block 322, a 90-day rolling address change file is maintained. While in one embodiment the rolling address change file has a 90-day window, the rolling address file is not limited to a 90-day window and may be constructed for any length of time. As shown in block 324, a determination is made as to whether a media request matches a change of address request. If so, then the analysis to score the change in address as described with respect to FIGS. 2-6 b is performed (as shown in FIGS. 10-12); a sketch of this match-then-score flow follows.
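  • The following sketch illustrates this match-then-score flow under stated assumptions: the field names and the 90-day default are illustrative, and score_fn stands in for the scoring analysis of FIGS. 2-6 b.

    def match_and_score(media_requests, address_changes, score_fn, window_days=90):
        """Score an address change only when a media request on the same account falls
        within the rolling window (before or after the change); names are illustrative."""
        results = []
        for req in media_requests:
            for chg in address_changes:
                same_account = req["account"] == chg["account"]
                within_window = abs((req["date"] - chg["date"]).days) <= window_days
                if same_account and within_window:
                    results.append((chg["account"], score_fn(chg["old_address"], chg["new_address"])))
        return results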
  • FIGS. 13-15 show the process described with respect to FIGS. 2-6 b being applied in the case when each address change is scored, but no additional steps are performed with respect to media requests.
  • As with the process described with respect to new applications, a numerical score derived from this process may be used to assess risk. However, in other embodiments, the score may be considered along with data analyzed under the business rules and client-defined parameters to make an assessment of the risk of identity theft. This information may be provided in any number of ways, including voice, data line, and facsimile. Also, the processing for takeover accounts may be done in batch or real time, and in a client-server structure where the server is in a remote location or in a structure where the system is hosted at the client site.
  • This invention serves several purposes. One purpose of this invention is to prevent fraud losses associated with account takeover. An additional purpose of the invention is to prevent fraud losses that accrue from criminals submitting fraudulent credit account applications to financial institutions, where the criminal assumes the credit identity of an unknowing person/victim. If the account is approved, the criminal receives the credit card, debit card, checks, merchandise, or services at a street address other than that of the victim.
  • An additional purpose of this invention is to reduce fraud losses in a form of account takeover that is associated with “overnight” emergency requests for the replacement of items such as credit/debit cards, personal checks, and traveler's checks. There is a business and competitive need for financial institutions to provide emergency replacement services. Criminals can effect an account takeover by exploiting the emergency replacement process, requesting that an unauthorized replacement be sent to an address to which they have access. The criminal receives the replacement and commits unauthorized use fraud. Emergency credit and debit card replacements are often requested to be sent to an address other than the address of record, and a financial institution has a short processing window in which to establish the legitimacy of these requests. This invention would help to identify potentially fraudulent requests using the analysis described above.
  • Another purpose of this invention is to reduce fraud losses where product or service fulfillment or billing activities involve a street address and fraudulent addresses would be detrimental to business interests. This can occur in the retail environment, particularly in non-face-to-face transactions. In addition to reducing direct fraud losses through superior detection, a purpose of this invention is to reduce overhead and infrastructure expenses through lower false positive rates, to reduce the infrastructure expense necessary to process fraudulent claims, and to improve the customer experience.
  • As can be seen from the above Figures, different factors may be considered depending upon the particular request that is received, and the factors to be considered for a given request may be determined dynamically. For instance, some requests may only utilize certain factors, while other requests may involve checks of all factors in providing a score.
  • Hence, it can be seen that embodiments of the present invention provide various systems and methods that can be used for detecting fraud in account requests.
  • Embodiments of the invention can be embodied in a computer program product. It will be understood that a computer program product including one or more features or operations of the present invention may be created in a computer usable medium (such as a CD-ROM or other medium) having computer readable code embodied therein. The computer usable medium preferably contains a number of computer readable program code devices configured to cause a computer to effect one or more of the various functions or operations herein described.
  • While the methods disclosed herein have been described and shown with reference to particular operations performed in a particular order, it will be understood that these operations may be combined, sub-divided, or re-ordered to form equivalent methods without departing from the teachings of the present invention. Accordingly, unless specifically indicated herein, the order and grouping of the operations is not a limitation of the present invention.
  • While the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope of the invention.

Claims (20)

1-33. (canceled)
34. A method for assessing a risk of identity theft fraud with respect to new applications, comprising:
receiving first address information relating to an applicant for an account; and
using demographic data relating to the address information.
35. The method of claim 34, further comprising receiving a reference address.
36. The method of claim 35, wherein the act of receiving a reference address includes receiving reference address information from a third party database.
37. The method of claim 35, wherein the act of receiving a reference address includes receiving reference address information as part of input data provided in making a request to assess a risk of identity theft fraud.
38. The method of claim 35, further comprising measuring at least one difference in demographic data appended to the first and reference address information.
39. The method of claim 38, further comprising calculating a score indicative of a risk of identity theft.
40. The method of claim 39, further comprising reporting an assessment of a risk of identity theft based at least in part on the score.
41. The method of claim 40, further comprising analyzing negative data for the first address.
42. The method of claim 41, wherein the act of assessing risk of identity theft is based on the score and analysis of the negative data.
43. A method for assessing a risk of fraud, comprising:
using demographic attributes of street addresses to predict the risk of fraud, wherein the act of using comprises analyzing differences between demographic attributes of the addresses.
44. The method of claim 43, further comprising reporting the assessment of fraud.
45. The method of claim 44, wherein the act of assessing a risk of fraud comprises assessing the risk of identity theft fraud due to account takeover.
46. The method of claim 44, wherein the act of assessing a risk of fraud comprises assessing the risk of identity theft fraud perpetrated through a new application.
47. The method of claim 45, further comprising receiving information relating to a media request.
48. The method of claim 47, further comprising assessing risk of identity theft when the request for media is made on an emergency basis.
49. The method of claim 43, wherein the act of assessing risk of fraud comprises assessing a risk of identity theft in fulfillment activities.
50. The method of claim 43, further comprising coupling negative and positive information with address demographic attributes to assess the risk of identity theft fraud.
51. A method for determining whether an account request for a change of address from an applicant involves fraud, comprising:
receiving a request to change an address of an account, said request including an old address and a new address of the applicant;
obtaining demographic data based on the old address of the applicant;
obtaining demographic data based on the new address of the applicant;
calculating a differential between the demographic data based on the old address of the applicant and the demographic data based on the new address of the applicant; and
calculating a score for the request based on the differential, the score indicating whether the request may involve fraud.
52. The method of claim 51, further comprising reporting an assessment of a risk of identity theft based at least in part on the score.
US13/736,240 2002-11-01 2013-01-08 System, method and computer program product for assessing risk of identity theft Abandoned US20130211985A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/736,240 US20130211985A1 (en) 2002-11-01 2013-01-08 System, method and computer program product for assessing risk of identity theft
US16/660,289 US11610278B2 (en) 2002-11-01 2019-10-22 System, method and computer program product for assessing risk of identity theft
US18/097,572 US20230177635A1 (en) 2002-11-01 2023-01-17 System, Method and Computer Program Product for Assessing Risk of Identity Theft

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US42329802P 2002-11-01 2002-11-01
US10/697,076 US7870078B2 (en) 2002-11-01 2003-10-30 System, method and computer program product for assessing risk of identity theft
US12/758,171 US8352281B2 (en) 2002-11-01 2010-04-12 System, method and computer program product for assessing risk of identity theft
US13/736,240 US20130211985A1 (en) 2002-11-01 2013-01-08 System, method and computer program product for assessing risk of identity theft

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/758,171 Continuation US8352281B2 (en) 2002-11-01 2010-04-12 System, method and computer program product for assessing risk of identity theft

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/660,289 Continuation US11610278B2 (en) 2002-11-01 2019-10-22 System, method and computer program product for assessing risk of identity theft

Publications (1)

Publication Number Publication Date
US20130211985A1 true US20130211985A1 (en) 2013-08-15

Family

ID=32775811

Family Applications (5)

Application Number Title Priority Date Filing Date
US10/697,076 Active 2028-12-29 US7870078B2 (en) 2002-11-01 2003-10-30 System, method and computer program product for assessing risk of identity theft
US12/758,171 Expired - Lifetime US8352281B2 (en) 2002-11-01 2010-04-12 System, method and computer program product for assessing risk of identity theft
US13/736,240 Abandoned US20130211985A1 (en) 2002-11-01 2013-01-08 System, method and computer program product for assessing risk of identity theft
US16/660,289 Expired - Lifetime US11610278B2 (en) 2002-11-01 2019-10-22 System, method and computer program product for assessing risk of identity theft
US18/097,572 Pending US20230177635A1 (en) 2002-11-01 2023-01-17 System, Method and Computer Program Product for Assessing Risk of Identity Theft

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/697,076 Active 2028-12-29 US7870078B2 (en) 2002-11-01 2003-10-30 System, method and computer program product for assessing risk of identity theft
US12/758,171 Expired - Lifetime US8352281B2 (en) 2002-11-01 2010-04-12 System, method and computer program product for assessing risk of identity theft

Family Applications After (2)

Application Number Title Priority Date Filing Date
US16/660,289 Expired - Lifetime US11610278B2 (en) 2002-11-01 2019-10-22 System, method and computer program product for assessing risk of identity theft
US18/097,572 Pending US20230177635A1 (en) 2002-11-01 2023-01-17 System, Method and Computer Program Product for Assessing Risk of Identity Theft

Country Status (1)

Country Link
US (5) US7870078B2 (en)

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120278249A1 (en) * 2011-04-29 2012-11-01 American Express Travel Related Services Company, Inc. Generating an Identity Theft Score
US8788405B1 (en) * 2013-03-15 2014-07-22 Palantir Technologies, Inc. Generating data clusters with customizable analysis strategies
US8855999B1 (en) 2013-03-15 2014-10-07 Palantir Technologies Inc. Method and system for generating a parser and parsing complex data
US8930897B2 (en) 2013-03-15 2015-01-06 Palantir Technologies Inc. Data integration tool
US9009827B1 (en) 2014-02-20 2015-04-14 Palantir Technologies Inc. Security sharing system
US9021260B1 (en) 2014-07-03 2015-04-28 Palantir Technologies Inc. Malware data item analysis
US9043894B1 (en) 2014-11-06 2015-05-26 Palantir Technologies Inc. Malicious software detection in a computing system
US9202249B1 (en) 2014-07-03 2015-12-01 Palantir Technologies Inc. Data item clustering and analysis
US9230280B1 (en) 2013-03-15 2016-01-05 Palantir Technologies Inc. Clustering data based on indications of financial malfeasance
CN105528383A (en) * 2014-10-27 2016-04-27 阿里巴巴集团控股有限公司 Method and device for indentifying risks of accounts
US9367872B1 (en) 2014-12-22 2016-06-14 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
US9454785B1 (en) 2015-07-30 2016-09-27 Palantir Technologies Inc. Systems and user interfaces for holistic, data-driven investigation of bad actor behavior based on clustering and scoring of related data
WO2016161892A1 (en) * 2015-04-09 2016-10-13 阿里巴巴集团控股有限公司 Address matching-based risk identification method and device
CN106156279A (en) * 2016-06-24 2016-11-23 深圳前海征信中心股份有限公司 Address based on longitude and latitude and text comparison similarity recognition method and system
US9535974B1 (en) 2014-06-30 2017-01-03 Palantir Technologies Inc. Systems and methods for identifying key phrase clusters within documents
US9552615B2 (en) 2013-12-20 2017-01-24 Palantir Technologies Inc. Automated database analysis to detect malfeasance
US9635046B2 (en) 2015-08-06 2017-04-25 Palantir Technologies Inc. Systems, methods, user interfaces, and computer-readable media for investigating potential malicious communications
US9785773B2 (en) 2014-07-03 2017-10-10 Palantir Technologies Inc. Malware data item analysis
US9817563B1 (en) 2014-12-29 2017-11-14 Palantir Technologies Inc. System and method of generating data points from one or more data stores of data items for chart creation and manipulation
US9875293B2 (en) 2014-07-03 2018-01-23 Palanter Technologies Inc. System and method for news events detection and visualization
US9898528B2 (en) 2014-12-22 2018-02-20 Palantir Technologies Inc. Concept indexing among database of documents using machine learning techniques
US9898509B2 (en) 2015-08-28 2018-02-20 Palantir Technologies Inc. Malicious activity detection system capable of efficiently processing data accessed from databases and generating alerts for display in interactive user interfaces
US9965937B2 (en) 2013-03-15 2018-05-08 Palantir Technologies Inc. External malware data item clustering and analysis
US10103953B1 (en) 2015-05-12 2018-10-16 Palantir Technologies Inc. Methods and systems for analyzing entity performance
US10120857B2 (en) 2013-03-15 2018-11-06 Palantir Technologies Inc. Method and system for generating a parser and parsing complex data
US10162887B2 (en) 2014-06-30 2018-12-25 Palantir Technologies Inc. Systems and methods for key phrase characterization of documents
US10230746B2 (en) 2014-01-03 2019-03-12 Palantir Technologies Inc. System and method for evaluating network threats and usage
US10235461B2 (en) 2017-05-02 2019-03-19 Palantir Technologies Inc. Automated assistance for generating relevant and valuable search results for an entity of interest
US10275778B1 (en) 2013-03-15 2019-04-30 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation based on automatic malfeasance clustering of related data in various data structures
US10318630B1 (en) 2016-11-21 2019-06-11 Palantir Technologies Inc. Analysis of large bodies of textual data
US10325224B1 (en) 2017-03-23 2019-06-18 Palantir Technologies Inc. Systems and methods for selecting machine learning training data
US10356032B2 (en) 2013-12-26 2019-07-16 Palantir Technologies Inc. System and method for detecting confidential information emails
US10362133B1 (en) 2014-12-22 2019-07-23 Palantir Technologies Inc. Communication data processing architecture
US10482382B2 (en) 2017-05-09 2019-11-19 Palantir Technologies Inc. Systems and methods for reducing manufacturing failure rates
US10484342B2 (en) 2015-04-14 2019-11-19 Alibaba Group Holding Limited Accuracy and security of data transfer to an online user account
US10489391B1 (en) 2015-08-17 2019-11-26 Palantir Technologies Inc. Systems and methods for grouping and enriching data items accessed from one or more databases for presentation in a user interface
US10552994B2 (en) 2014-12-22 2020-02-04 Palantir Technologies Inc. Systems and interactive user interfaces for dynamic retrieval, analysis, and triage of data items
US10572496B1 (en) 2014-07-03 2020-02-25 Palantir Technologies Inc. Distributed workflow system and database with access controls for city resiliency
US10572487B1 (en) 2015-10-30 2020-02-25 Palantir Technologies Inc. Periodic database search manager for multiple data sources
US10579647B1 (en) 2013-12-16 2020-03-03 Palantir Technologies Inc. Methods and systems for analyzing entity performance
WO2020057432A1 (en) * 2018-09-17 2020-03-26 阿里巴巴集团控股有限公司 Address standardization method and device, storage medium and computer terminal
US10606866B1 (en) 2017-03-30 2020-03-31 Palantir Technologies Inc. Framework for exposing network activities
US10620618B2 (en) 2016-12-20 2020-04-14 Palantir Technologies Inc. Systems and methods for determining relationships between defects
CN111178452A (en) * 2020-01-02 2020-05-19 平安科技(深圳)有限公司 Driving risk identification method, electronic device and readable storage medium
US10719527B2 (en) 2013-10-18 2020-07-21 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive simultaneous querying of multiple data stores
US10838987B1 (en) 2017-12-20 2020-11-17 Palantir Technologies Inc. Adaptive and transparent entity screening
US11119630B1 (en) 2018-06-19 2021-09-14 Palantir Technologies Inc. Artificial intelligence assisted evaluations and user interface for same
US11616809B1 (en) * 2020-08-18 2023-03-28 Wells Fargo Bank, N.A. Fuzzy logic modeling for detection and presentment of anomalous messaging

Families Citing this family (159)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6950519B2 (en) 2001-03-05 2005-09-27 Digimarc Corporation Geographically watermarked imagery and methods
US9710852B1 (en) 2002-05-30 2017-07-18 Consumerinfo.Com, Inc. Credit report timeline user interface
US9400589B1 (en) 2002-05-30 2016-07-26 Consumerinfo.Com, Inc. Circular rotational interface for display of consumer credit information
US20030233575A1 (en) * 2002-06-12 2003-12-18 Kimmo Syrjanen Method of analysing level of information security in an organization
WO2004088464A2 (en) * 2003-03-28 2004-10-14 Dun & Bradstreet, Inc. System and method for data cleansing
US7686214B1 (en) 2003-05-12 2010-03-30 Id Analytics, Inc. System and method for identity-based fraud detection using a plurality of historical identity records
US7458508B1 (en) 2003-05-12 2008-12-02 Id Analytics, Inc. System and method for identity-based fraud detection
US7562814B1 (en) * 2003-05-12 2009-07-21 Id Analytics, Inc. System and method for identity-based fraud detection through graph anomaly detection
US8386377B1 (en) 2003-05-12 2013-02-26 Id Analytics, Inc. System and method for credit scoring using an identity network connectivity
US10521857B1 (en) 2003-05-12 2019-12-31 Symantec Corporation System and method for identity-based fraud detection
US7840459B1 (en) * 2003-05-22 2010-11-23 Visa U.S.A. Inc. Method and apparatus for identity theft prevention
US8712919B1 (en) * 2003-10-03 2014-04-29 Ebay Inc. Methods and systems for determining the reliability of transaction
US7503488B2 (en) * 2003-10-17 2009-03-17 Davis Bruce L Fraud prevention in issuance of identification credentials
US7225977B2 (en) * 2003-10-17 2007-06-05 Digimarc Corporation Fraud deterrence in connection with identity documents
US7314162B2 (en) * 2003-10-17 2008-01-01 Digimore Corporation Method and system for reporting identity document usage
US7433864B2 (en) 2004-04-08 2008-10-07 At&T Intellectual Property I, L.P. Compiling information obtained by combinatorial searching
US9384348B2 (en) 2004-04-29 2016-07-05 James A. Roskind Identity theft countermeasures
US7971246B1 (en) * 2004-04-29 2011-06-28 James A. Roskind Identity theft countermeasures
WO2005114885A1 (en) * 2004-05-18 2005-12-01 Digimarc Corporation Methods and systems for issuing identity credentials, such as driver licenses
US8412837B1 (en) 2004-07-08 2013-04-02 James A. Roskind Data privacy
US8732004B1 (en) 2004-09-22 2014-05-20 Experian Information Solutions, Inc. Automated analysis of data to generate prospect notifications based on trigger events
US7818228B1 (en) 2004-12-16 2010-10-19 Coulter David B System and method for managing consumer information
US20060149674A1 (en) * 2004-12-30 2006-07-06 Mike Cook System and method for identity-based fraud detection for transactions using a plurality of historical identity records
US7630924B1 (en) * 2005-04-20 2009-12-08 Authorize.Net Llc Transaction velocity counting for fraud detection
US10643217B2 (en) * 2005-05-26 2020-05-05 Efunds Corporation Debit-based identity theft monitoring and prevention
US20060271457A1 (en) * 2005-05-26 2006-11-30 Romain Martin R Identity theft monitoring and prevention
WO2006130874A2 (en) * 2005-06-02 2006-12-07 Fair Isaac Corporation Comprehensive identity protection system
US20060282270A1 (en) * 2005-06-09 2006-12-14 First Data Corporation Identity verification noise filter systems and methods
US8290836B2 (en) * 2005-06-22 2012-10-16 Early Warning Services, Llc Identification and risk evaluation
US8109435B2 (en) * 2005-07-14 2012-02-07 Early Warning Services, Llc Identity verification switch
US20070016521A1 (en) * 2005-07-15 2007-01-18 Zhiping Wang Data processing system for a billing address-based credit watch
US20070043577A1 (en) * 2005-08-16 2007-02-22 Sheldon Kasower Apparatus and method of enabling a victim of identity theft to resolve and prevent fraud
US20070100656A1 (en) * 2005-10-17 2007-05-03 Brown Charles D System and method for sponsorship sourcing system
US20070162761A1 (en) 2005-12-23 2007-07-12 Davis Bruce L Methods and Systems to Help Detect Identity Fraud
US8676680B2 (en) 2006-02-03 2014-03-18 Zillow, Inc. Automatically determining a current value for a home
US8606669B2 (en) * 2006-06-01 2013-12-10 Broadridge Securities Processing Solutions, Inc. Systems and methods for client screening in the financial services industry
US8707459B2 (en) 2007-01-19 2014-04-22 Digimarc Corporation Determination of originality of content
US20080077458A1 (en) * 2006-09-19 2008-03-27 Andersen Timothy J Collecting and representing home attributes
US8935380B2 (en) * 2006-09-22 2015-01-13 Oracle America, Inc. Automated product knowledge catalog
US8036979B1 (en) 2006-10-05 2011-10-11 Experian Information Solutions, Inc. System and method for generating a finance attribute from tradeline data
US8359278B2 (en) * 2006-10-25 2013-01-22 IndentityTruth, Inc. Identity protection
US20080103800A1 (en) * 2006-10-25 2008-05-01 Domenikos Steven D Identity Protection
US8606626B1 (en) 2007-01-31 2013-12-10 Experian Information Solutions, Inc. Systems and methods for providing a direct marketing campaign planning environment
US8606666B1 (en) 2007-01-31 2013-12-10 Experian Information Solutions, Inc. System and method for providing an aggregation tool
WO2008147918A2 (en) 2007-05-25 2008-12-04 Experian Information Solutions, Inc. System and method for automated detection of never-pay data sets
US7958050B2 (en) * 2007-07-02 2011-06-07 Early Warning Services, Llc Payment account monitoring system and method
US7440915B1 (en) 2007-11-16 2008-10-21 U.S. Bancorp Licensing, Inc. Method, system, and computer-readable medium for reducing payee fraud
US8127986B1 (en) 2007-12-14 2012-03-06 Consumerinfo.Com, Inc. Card registry systems and methods
US9990674B1 (en) 2007-12-14 2018-06-05 Consumerinfo.Com, Inc. Card registry systems and methods
US8140421B1 (en) 2008-01-09 2012-03-20 Zillow, Inc. Automatically determining a current value for a home
US8078569B2 (en) * 2008-03-26 2011-12-13 Fair Isaac Corporation Estimating transaction risk using sub-models characterizing cross-interaction among categorical and non-categorical variables
US20090265198A1 (en) * 2008-04-22 2009-10-22 Plaxo, Inc. Reputation Evalution Using a contact Information Database
US20090307206A1 (en) * 2008-06-05 2009-12-10 Is Technologies, Llc Self verifying address update process and system
US8312033B1 (en) 2008-06-26 2012-11-13 Experian Marketing Solutions, Inc. Systems and methods for providing an integrated identifier
US9256904B1 (en) 2008-08-14 2016-02-09 Experian Information Solutions, Inc. Multi-bureau credit file freeze and unfreeze
US20100088338A1 (en) * 2008-10-03 2010-04-08 Pavoni Jr Donald Gordon Red flag identification verification system and method
US8280785B1 (en) * 2008-10-06 2012-10-02 Intuit Inc. Financial account manager
US8060424B2 (en) 2008-11-05 2011-11-15 Consumerinfo.Com, Inc. On-line method and system for monitoring and reporting unused available credit
US20100293090A1 (en) * 2009-05-14 2010-11-18 Domenikos Steven D Systems, methods, and apparatus for determining fraud probability scores and identity health scores
US8548910B1 (en) * 2009-08-19 2013-10-01 Bank Of America Corporation Address change notification
US8412605B2 (en) * 2009-12-01 2013-04-02 Bank Of America Corporation Comprehensive suspicious activity monitoring and alert system
US9652802B1 (en) 2010-03-24 2017-05-16 Consumerinfo.Com, Inc. Indirect monitoring and reporting of a user's credit data
US8375427B2 (en) * 2010-04-21 2013-02-12 International Business Machines Corporation Holistic risk-based identity establishment for eligibility determinations in context of an application
US8725613B1 (en) 2010-04-27 2014-05-13 Experian Information Solutions, Inc. Systems and methods for early account score and notification
US10380653B1 (en) 2010-09-16 2019-08-13 Trulia, Llc Valuation system
US8930262B1 (en) 2010-11-02 2015-01-06 Experian Technology Ltd. Systems and methods of assisted strategy design
US9147042B1 (en) 2010-11-22 2015-09-29 Experian Information Solutions, Inc. Systems and methods for data verification
EP2676197B1 (en) 2011-02-18 2018-11-28 CSidentity Corporation System and methods for identifying compromised personally identifiable information on the internet
US10460406B1 (en) 2011-03-09 2019-10-29 Zillow, Inc. Automatically determining market rental rates for properties
CA2830797A1 (en) * 2011-03-23 2012-09-27 Detica Patent Limited An automated fraud detection method and system
US9558519B1 (en) 2011-04-29 2017-01-31 Consumerinfo.Com, Inc. Exposing reporting cycle information
US9665854B1 (en) 2011-06-16 2017-05-30 Consumerinfo.Com, Inc. Authentication alerts
US9483606B1 (en) 2011-07-08 2016-11-01 Consumerinfo.Com, Inc. Lifescore
US9106691B1 (en) 2011-09-16 2015-08-11 Consumerinfo.Com, Inc. Systems and methods of identity protection and management
US8819793B2 (en) 2011-09-20 2014-08-26 Csidentity Corporation Systems and methods for secure and efficient enrollment into a federation which utilizes a biometric repository
US8738516B1 (en) 2011-10-13 2014-05-27 Consumerinfo.Com, Inc. Debt services candidate locator
US10366351B2 (en) * 2011-10-14 2019-07-30 Experian Health, Inc. Information standardization and verification
US8812337B2 (en) 2011-10-20 2014-08-19 Target Brands, Inc. Resource allocation based on retail incident information
US11030562B1 (en) 2011-10-31 2021-06-08 Consumerinfo.Com, Inc. Pre-data breach monitoring
US9853959B1 (en) 2012-05-07 2017-12-26 Consumerinfo.Com, Inc. Storage and maintenance of personal data
US20130325743A1 (en) * 2012-06-01 2013-12-05 Northwest Research, Inc. Systems and methods for tracking packages
US8918891B2 (en) 2012-06-12 2014-12-23 Id Analytics, Inc. Identity manipulation detection system and method
US9654541B1 (en) 2012-11-12 2017-05-16 Consumerinfo.Com, Inc. Aggregating user web browsing data
US9916621B1 (en) 2012-11-30 2018-03-13 Consumerinfo.Com, Inc. Presentation of credit score factors
US10255598B1 (en) 2012-12-06 2019-04-09 Consumerinfo.Com, Inc. Credit card account data extraction
US9928473B1 (en) 2013-01-30 2018-03-27 Target Brands, Inc. Booster centric resource allocation
US9697263B1 (en) 2013-03-04 2017-07-04 Experian Information Solutions, Inc. Consumer data request fulfillment system
US9406085B1 (en) 2013-03-14 2016-08-02 Consumerinfo.Com, Inc. System and methods for credit dispute processing, resolution, and reporting
US8812387B1 (en) 2013-03-14 2014-08-19 Csidentity Corporation System and method for identifying related credit inquiries
US10102570B1 (en) 2013-03-14 2018-10-16 Consumerinfo.Com, Inc. Account vulnerability alerts
US9870589B1 (en) 2013-03-14 2018-01-16 Consumerinfo.Com, Inc. Credit utilization tracking and reporting
US10664936B2 (en) 2013-03-15 2020-05-26 Csidentity Corporation Authentication systems and methods for on-demand products
US9633322B1 (en) 2013-03-15 2017-04-25 Consumerinfo.Com, Inc. Adjustment of knowledge-based authentication
US10685398B1 (en) 2013-04-23 2020-06-16 Consumerinfo.Com, Inc. Presenting credit score information
US9721147B1 (en) 2013-05-23 2017-08-01 Consumerinfo.Com, Inc. Digital identity
US9443268B1 (en) 2013-08-16 2016-09-13 Consumerinfo.Com, Inc. Bill payment and reporting
CN103577190A (en) * 2013-11-07 2014-02-12 中科信息安全共性技术国家工程研究中心有限公司 Dynamic extensible intelligent evaluation system based on reflection algorithm technology
US10754884B1 (en) 2013-11-12 2020-08-25 Zillow, Inc. Flexible real estate search
US10325314B1 (en) 2013-11-15 2019-06-18 Consumerinfo.Com, Inc. Payment reporting systems
US9477737B1 (en) 2013-11-20 2016-10-25 Consumerinfo.Com, Inc. Systems and user interfaces for dynamic access of multiple remote databases and synchronization of data based on user rules
US20150161664A1 (en) * 2013-12-06 2015-06-11 The Dun & Bradstreet Corporation Method and system for collecting data on businesses via mobile and geolocation communications
US10262362B1 (en) 2014-02-14 2019-04-16 Experian Information Solutions, Inc. Automatic generation of code for attributes
USD759689S1 (en) 2014-03-25 2016-06-21 Consumerinfo.Com, Inc. Display screen or portion thereof with graphical user interface
USD759690S1 (en) 2014-03-25 2016-06-21 Consumerinfo.Com, Inc. Display screen or portion thereof with graphical user interface
USD760256S1 (en) 2014-03-25 2016-06-28 Consumerinfo.Com, Inc. Display screen or portion thereof with graphical user interface
US9892457B1 (en) 2014-04-16 2018-02-13 Consumerinfo.Com, Inc. Providing credit data in search results
US10373240B1 (en) 2014-04-25 2019-08-06 Csidentity Corporation Systems, methods and computer-program products for eligibility verification
US11093982B1 (en) 2014-10-02 2021-08-17 Zillow, Inc. Determine regional rate of return on home improvements
US10339527B1 (en) 2014-10-31 2019-07-02 Experian Information Solutions, Inc. System and architecture for electronic fraud detection
US10445152B1 (en) 2014-12-19 2019-10-15 Experian Information Solutions, Inc. Systems and methods for dynamic report generation based on automatic modeling of complex data structures
US20160189277A1 (en) 2014-12-24 2016-06-30 Digimarc Corporation Self-checkout arrangements
US10937025B1 (en) 2015-01-15 2021-03-02 Wells Fargo Bank, N.A. Payment services via application programming interface
US10990974B1 (en) 2015-01-15 2021-04-27 Wells Fargo Bank, N.A. Identity verification services and user information provision via application programming interface
US10621658B1 (en) * 2015-01-15 2020-04-14 Wells Fargo Bank, N.A. Identity verification services with identity score through external entities via application programming interface
US10997654B1 (en) 2015-01-15 2021-05-04 Wells Fargo Bank, N.A. Identity verification services through external entities via application programming interface
US10643232B1 (en) 2015-03-18 2020-05-05 Zillow, Inc. Allocating electronic advertising opportunities
US11151468B1 (en) 2015-07-02 2021-10-19 Experian Information Solutions, Inc. Behavior analysis using distributed representations of event data
KR102146925B1 (en) * 2015-10-21 2020-08-21 노키아 솔루션스 앤드 네트웍스 오와이 How to detect billing fraud
US9900330B1 (en) * 2015-11-13 2018-02-20 Veritas Technologies Llc Systems and methods for identifying potentially risky data users within organizations
US10757154B1 (en) 2015-11-24 2020-08-25 Experian Information Solutions, Inc. Real-time event-based notification system
US20170221167A1 (en) * 2016-01-28 2017-08-03 Bank Of America Corporation System and Network for Detecting Unauthorized Activity
US10437778B2 (en) 2016-02-08 2019-10-08 Bank Of America Corporation Archive validation system with data purge triggering
US10437880B2 (en) 2016-02-08 2019-10-08 Bank Of America Corporation Archive validation system with data purge triggering
US9823958B2 (en) 2016-02-08 2017-11-21 Bank Of America Corporation System for processing data using different processing channels based on source error probability
US10460296B2 (en) 2016-02-08 2019-10-29 Bank Of America Corporation System for processing data using parameters associated with the data for auto-processing
US9952942B2 (en) 2016-02-12 2018-04-24 Bank Of America Corporation System for distributed data processing with auto-recovery
US10067869B2 (en) 2016-02-12 2018-09-04 Bank Of America Corporation System for distributed data processing with automatic caching at various system levels
US10789549B1 (en) 2016-02-25 2020-09-29 Zillow, Inc. Enforcing, with respect to changes in one or more distinguished independent variable values, monotonicity in the predictions produced by a statistical model
US11037159B1 (en) 2016-03-25 2021-06-15 State Farm Mutual Automobile Insurance Company Identifying chargeback scenarios based upon non-compliant merchant computer terminals
US10157078B2 (en) 2016-04-10 2018-12-18 Bank Of America Corporation System for transforming large scale electronic processing using application block chain
US10067994B2 (en) 2016-10-07 2018-09-04 Bank Of America Corporation Real time event capture and transformation of transient data for an information network
US10158737B2 (en) 2016-10-07 2018-12-18 Bank Of America Corporation Real time event capture and analysis of transient data for an information network
US10069672B2 (en) 2016-10-07 2018-09-04 Bank Of America Corporation Real time event capture, analysis and reporting system
US11227001B2 (en) 2017-01-31 2022-01-18 Experian Information Solutions, Inc. Massive scale heterogeneous data ingestion and user resolution
US11514517B2 (en) 2017-04-24 2022-11-29 Consumer Direct, Inc. Scenario gamification to provide improved mortgage and securitization
US11232489B2 (en) 2017-04-24 2022-01-25 Consumer Direct, Inc. Scenario gamification to provide actionable elements and temporally appropriate advertising
US10735183B1 (en) 2017-06-30 2020-08-04 Experian Information Solutions, Inc. Symmetric encryption for private smart contracts among multiple parties in a private peer-to-peer network
US20190005467A1 (en) * 2017-06-30 2019-01-03 Intuit Inc. System and method for risk assessment of a third party application for controlling competitive migration
US11861747B1 (en) 2017-09-07 2024-01-02 MFTB Holdco, Inc. Time on market and likelihood of sale prediction
US10699028B1 (en) 2017-09-28 2020-06-30 Csidentity Corporation Identity security architecture systems and methods
US10896472B1 (en) 2017-11-14 2021-01-19 Csidentity Corporation Security and identity verification system and architecture
US11676126B1 (en) 2017-12-28 2023-06-13 Wells Fargo Bank, N.A. Account open interfaces
US11106515B1 (en) 2017-12-28 2021-08-31 Wells Fargo Bank, N.A. Systems and methods for multi-platform product integration
US10586280B2 (en) 2018-01-30 2020-03-10 PointPredictive Inc. Risk-based machine learning classsifier
US10270808B1 (en) 2018-03-12 2019-04-23 Capital One Services, Llc Auto-generated synthetic identities for simulating population dynamics to detect fraudulent activity
US10911234B2 (en) 2018-06-22 2021-02-02 Experian Information Solutions, Inc. System and method for a token gateway environment
US20200074541A1 (en) 2018-09-05 2020-03-05 Consumerinfo.Com, Inc. Generation of data structures based on categories of matched data items
US11334832B2 (en) * 2018-10-03 2022-05-17 Verint Americas Inc. Risk assessment using Poisson Shelves
CN109584041A (en) * 2018-10-25 2019-04-05 深圳壹账通智能科技有限公司 Credit fraud analysis method, apparatus, equipment and computer readable storage medium
US11315179B1 (en) 2018-11-16 2022-04-26 Consumerinfo.Com, Inc. Methods and apparatuses for customized card recommendations
US11093912B1 (en) 2018-12-10 2021-08-17 Wells Fargo Bank, N.A. Third-party payment interfaces
WO2020146667A1 (en) 2019-01-11 2020-07-16 Experian Information Solutions, Inc. Systems and methods for secure data aggregation and computation
US11334877B2 (en) * 2019-02-11 2022-05-17 Bank Of America Corporation Security tool
US11238656B1 (en) 2019-02-22 2022-02-01 Consumerinfo.Com, Inc. System and method for an augmented reality experience via an artificial intelligence bot
WO2020176978A1 (en) 2019-03-01 2020-09-10 Mastercard Technologies Canada ULC Feature drift hardened online application origination (oao) service for fraud prevention systems
US11044246B1 (en) 2019-06-21 2021-06-22 Wells Fargo Bank, N.A. Secure communications via third-party systems through frames
CA3150122A1 (en) 2019-10-01 2021-04-08 Mastercard Technologies Canada ULC Feature encoding in online application origination (oao) service for a fraud prevention system
US11528261B2 (en) 2020-04-28 2022-12-13 Bank Of America Corporation Dynamic unauthorized activity detection and control system
US11290480B2 (en) 2020-05-26 2022-03-29 Bank Of America Corporation Network vulnerability assessment tool

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7337119B1 (en) * 1998-10-26 2008-02-26 First Data Corporation System and method for detecting purchasing card fraud
US6418436B1 (en) * 1999-12-20 2002-07-09 First Data Corporation Scoring methodology for purchasing card fraud detection
US7263506B2 (en) * 2000-04-06 2007-08-28 Fair Isaac Corporation Identification and management of fraudulent credit/debit card purchases at merchant ecommerce sites
US20020087460A1 (en) * 2001-01-04 2002-07-04 Hornung Katharine A. Method for identity theft protection
US8078524B2 (en) * 2001-02-22 2011-12-13 Fair Isaac Corporation Method and apparatus for explaining credit scores
US7089592B2 (en) * 2001-03-15 2006-08-08 Brighterion, Inc. Systems and methods for dynamic detection and prevention of electronic fraud
US20030009426A1 (en) * 2001-04-19 2003-01-09 Marcelo Ruiz-Sanchez Methods and apparatus for protecting against credit card fraud, check fraud, and identity theft
US7865427B2 (en) * 2001-05-30 2011-01-04 Cybersource Corporation Method and apparatus for evaluating fraud risk in an electronic commerce transaction
US7801828B2 (en) * 2001-07-06 2010-09-21 Candella George J Method and system for detecting identity theft in non-personal and personal transactions
US8224723B2 (en) * 2002-05-31 2012-07-17 Jpmorgan Chase Bank, N.A. Account opening system, method and computer program product
US7356506B2 (en) * 2002-09-18 2008-04-08 General Electric Capital Corporation Methods and apparatus for evaluating a credit application

Cited By (90)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120278249A1 (en) * 2011-04-29 2012-11-01 American Express Travel Related Services Company, Inc. Generating an Identity Theft Score
US9165299B1 (en) 2013-03-15 2015-10-20 Palantir Technologies Inc. User-agent data clustering
US9230280B1 (en) 2013-03-15 2016-01-05 Palantir Technologies Inc. Clustering data based on indications of financial malfeasance
US8818892B1 (en) * 2013-03-15 2014-08-26 Palantir Technologies, Inc. Prioritizing data clusters with customizable scoring strategies
US8855999B1 (en) 2013-03-15 2014-10-07 Palantir Technologies Inc. Method and system for generating a parser and parsing complex data
US8930897B2 (en) 2013-03-15 2015-01-06 Palantir Technologies Inc. Data integration tool
US9965937B2 (en) 2013-03-15 2018-05-08 Palantir Technologies Inc. External malware data item clustering and analysis
US10120857B2 (en) 2013-03-15 2018-11-06 Palantir Technologies Inc. Method and system for generating a parser and parsing complex data
US9171334B1 (en) 2013-03-15 2015-10-27 Palantir Technologies Inc. Tax data clustering
US10264014B2 (en) 2013-03-15 2019-04-16 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation based on automatic clustering of related data in various data structures
US9135658B2 (en) 2013-03-15 2015-09-15 Palantir Technologies Inc. Generating data clusters
US10834123B2 (en) 2013-03-15 2020-11-10 Palantir Technologies Inc. Generating data clusters
US9177344B1 (en) 2013-03-15 2015-11-03 Palantir Technologies Inc. Trend data clustering
US10216801B2 (en) 2013-03-15 2019-02-26 Palantir Technologies Inc. Generating data clusters
US8788407B1 (en) 2013-03-15 2014-07-22 Palantir Technologies Inc. Malware data clustering
US10275778B1 (en) 2013-03-15 2019-04-30 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation based on automatic malfeasance clustering of related data in various data structures
US10721268B2 (en) 2013-03-15 2020-07-21 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation based on automatic clustering of related data in various data structures
US8788405B1 (en) * 2013-03-15 2014-07-22 Palantir Technologies, Inc. Generating data clusters with customizable analysis strategies
US10719527B2 (en) 2013-10-18 2020-07-21 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive simultaneous querying of multiple data stores
US10579647B1 (en) 2013-12-16 2020-03-03 Palantir Technologies Inc. Methods and systems for analyzing entity performance
US9552615B2 (en) 2013-12-20 2017-01-24 Palantir Technologies Inc. Automated database analysis to detect malfeasance
US10356032B2 (en) 2013-12-26 2019-07-16 Palantir Technologies Inc. System and method for detecting confidential information emails
US10230746B2 (en) 2014-01-03 2019-03-12 Palantir Technologies Inc. System and method for evaluating network threats and usage
US10805321B2 (en) 2014-01-03 2020-10-13 Palantir Technologies Inc. System and method for evaluating network threats and usage
US9923925B2 (en) 2014-02-20 2018-03-20 Palantir Technologies Inc. Cyber security sharing and identification system
US10873603B2 (en) 2014-02-20 2020-12-22 Palantir Technologies Inc. Cyber security sharing and identification system
US9009827B1 (en) 2014-02-20 2015-04-14 Palantir Technologies Inc. Security sharing system
US9535974B1 (en) 2014-06-30 2017-01-03 Palantir Technologies Inc. Systems and methods for identifying key phrase clusters within documents
US10162887B2 (en) 2014-06-30 2018-12-25 Palantir Technologies Inc. Systems and methods for key phrase characterization of documents
US11341178B2 (en) 2014-06-30 2022-05-24 Palantir Technologies Inc. Systems and methods for key phrase characterization of documents
US10180929B1 (en) 2014-06-30 2019-01-15 Palantir Technologies, Inc. Systems and methods for identifying key phrase clusters within documents
US9202249B1 (en) 2014-07-03 2015-12-01 Palantir Technologies Inc. Data item clustering and analysis
US9881074B2 (en) 2014-07-03 2018-01-30 Palantir Technologies Inc. System and method for news events detection and visualization
US9021260B1 (en) 2014-07-03 2015-04-28 Palantir Technologies Inc. Malware data item analysis
US9785773B2 (en) 2014-07-03 2017-10-10 Palantir Technologies Inc. Malware data item analysis
US10798116B2 (en) 2014-07-03 2020-10-06 Palantir Technologies Inc. External malware data item clustering and analysis
US9998485B2 (en) 2014-07-03 2018-06-12 Palantir Technologies, Inc. Network intrusion data item clustering and analysis
US9344447B2 (en) 2014-07-03 2016-05-17 Palantir Technologies Inc. Internal malware data item clustering and analysis
US10572496B1 (en) 2014-07-03 2020-02-25 Palantir Technologies Inc. Distributed workflow system and database with access controls for city resiliency
US10929436B2 (en) 2014-07-03 2021-02-23 Palantir Technologies Inc. System and method for news events detection and visualization
US9875293B2 (en) 2014-07-03 2018-01-23 Palanter Technologies Inc. System and method for news events detection and visualization
CN110245487A (en) * 2014-10-27 2019-09-17 阿里巴巴集团控股有限公司 The recognition methods of account risk and device
CN105528383A (en) * 2014-10-27 2016-04-27 阿里巴巴集团控股有限公司 Method and device for indentifying risks of accounts
US9558352B1 (en) 2014-11-06 2017-01-31 Palantir Technologies Inc. Malicious software detection in a computing system
US10135863B2 (en) 2014-11-06 2018-11-20 Palantir Technologies Inc. Malicious software detection in a computing system
US9043894B1 (en) 2014-11-06 2015-05-26 Palantir Technologies Inc. Malicious software detection in a computing system
US10728277B2 (en) 2014-11-06 2020-07-28 Palantir Technologies Inc. Malicious software detection in a computing system
US9898528B2 (en) 2014-12-22 2018-02-20 Palantir Technologies Inc. Concept indexing among database of documents using machine learning techniques
US10552994B2 (en) 2014-12-22 2020-02-04 Palantir Technologies Inc. Systems and interactive user interfaces for dynamic retrieval, analysis, and triage of data items
US9589299B2 (en) 2014-12-22 2017-03-07 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
US9367872B1 (en) 2014-12-22 2016-06-14 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
US11252248B2 (en) 2014-12-22 2022-02-15 Palantir Technologies Inc. Communication data processing architecture
US10362133B1 (en) 2014-12-22 2019-07-23 Palantir Technologies Inc. Communication data processing architecture
US10447712B2 (en) 2014-12-22 2019-10-15 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
US10552998B2 (en) 2014-12-29 2020-02-04 Palantir Technologies Inc. System and method of generating data points from one or more data stores of data items for chart creation and manipulation
US9817563B1 (en) 2014-12-29 2017-11-14 Palantir Technologies Inc. System and method of generating data points from one or more data stores of data items for chart creation and manipulation
US20180024943A1 (en) * 2015-04-09 2018-01-25 Alibaba Group Holding Limited Risk identification based on address matching
WO2016161892A1 (en) * 2015-04-09 2016-10-13 阿里巴巴集团控股有限公司 Address matching-based risk identification method and device
CN106161372A (en) * 2015-04-09 2016-11-23 阿里巴巴集团控股有限公司 A kind of Risk Identification Method based on address coupling and device
KR102112913B1 (en) 2015-04-09 2020-05-19 알리바바 그룹 홀딩 리미티드 Method and device for risk identification based on address matching
KR20170137138A (en) * 2015-04-09 2017-12-12 알리바바 그룹 홀딩 리미티드 Address matching based risk identification method and apparatus
US10484342B2 (en) 2015-04-14 2019-11-19 Alibaba Group Holding Limited Accuracy and security of data transfer to an online user account
US10103953B1 (en) 2015-05-12 2018-10-16 Palantir Technologies Inc. Methods and systems for analyzing entity performance
US10223748B2 (en) 2015-07-30 2019-03-05 Palantir Technologies Inc. Systems and user interfaces for holistic, data-driven investigation of bad actor behavior based on clustering and scoring of related data
US9454785B1 (en) 2015-07-30 2016-09-27 Palantir Technologies Inc. Systems and user interfaces for holistic, data-driven investigation of bad actor behavior based on clustering and scoring of related data
US11501369B2 (en) 2015-07-30 2022-11-15 Palantir Technologies Inc. Systems and user interfaces for holistic, data-driven investigation of bad actor behavior based on clustering and scoring of related data
US10484407B2 (en) 2015-08-06 2019-11-19 Palantir Technologies Inc. Systems, methods, user interfaces, and computer-readable media for investigating potential malicious communications
US9635046B2 (en) 2015-08-06 2017-04-25 Palantir Technologies Inc. Systems, methods, user interfaces, and computer-readable media for investigating potential malicious communications
US10489391B1 (en) 2015-08-17 2019-11-26 Palantir Technologies Inc. Systems and methods for grouping and enriching data items accessed from one or more databases for presentation in a user interface
US10346410B2 (en) 2015-08-28 2019-07-09 Palantir Technologies Inc. Malicious activity detection system capable of efficiently processing data accessed from databases and generating alerts for display in interactive user interfaces
US9898509B2 (en) 2015-08-28 2018-02-20 Palantir Technologies Inc. Malicious activity detection system capable of efficiently processing data accessed from databases and generating alerts for display in interactive user interfaces
US11048706B2 (en) 2015-08-28 2021-06-29 Palantir Technologies Inc. Malicious activity detection system capable of efficiently processing data accessed from databases and generating alerts for display in interactive user interfaces
US10572487B1 (en) 2015-10-30 2020-02-25 Palantir Technologies Inc. Periodic database search manager for multiple data sources
CN106156279A (en) * 2016-06-24 2016-11-23 Shenzhen Qianhai Credit Reporting Center Co., Ltd. Address similarity recognition method and system based on latitude/longitude and text comparison
US10318630B1 (en) 2016-11-21 2019-06-11 Palantir Technologies Inc. Analysis of large bodies of textual data
US10620618B2 (en) 2016-12-20 2020-04-14 Palantir Technologies Inc. Systems and methods for determining relationships between defects
US11681282B2 (en) 2016-12-20 2023-06-20 Palantir Technologies Inc. Systems and methods for determining relationships between defects
US10325224B1 (en) 2017-03-23 2019-06-18 Palantir Technologies Inc. Systems and methods for selecting machine learning training data
US10606866B1 (en) 2017-03-30 2020-03-31 Palantir Technologies Inc. Framework for exposing network activities
US11481410B1 (en) 2017-03-30 2022-10-25 Palantir Technologies Inc. Framework for exposing network activities
US11210350B2 (en) 2017-05-02 2021-12-28 Palantir Technologies Inc. Automated assistance for generating relevant and valuable search results for an entity of interest
US10235461B2 (en) 2017-05-02 2019-03-19 Palantir Technologies Inc. Automated assistance for generating relevant and valuable search results for an entity of interest
US11714869B2 (en) 2017-05-02 2023-08-01 Palantir Technologies Inc. Automated assistance for generating relevant and valuable search results for an entity of interest
US10482382B2 (en) 2017-05-09 2019-11-19 Palantir Technologies Inc. Systems and methods for reducing manufacturing failure rates
US11537903B2 (en) 2017-05-09 2022-12-27 Palantir Technologies Inc. Systems and methods for reducing manufacturing failure rates
US10838987B1 (en) 2017-12-20 2020-11-17 Palantir Technologies Inc. Adaptive and transparent entity screening
US11119630B1 (en) 2018-06-19 2021-09-14 Palantir Technologies Inc. Artificial intelligence assisted evaluations and user interface for same
WO2020057432A1 (en) * 2018-09-17 2020-03-26 Alibaba Group Holding Limited Address standardization method and device, storage medium and computer terminal
CN111178452A (en) * 2020-01-02 2020-05-19 Ping An Technology (Shenzhen) Co., Ltd. Driving risk identification method, electronic device and readable storage medium
US11616809B1 (en) * 2020-08-18 2023-03-28 Wells Fargo Bank, N.A. Fuzzy logic modeling for detection and presentment of anomalous messaging

Also Published As

Publication number Publication date
US7870078B2 (en) 2011-01-11
US20110066547A1 (en) 2011-03-17
US11610278B2 (en) 2023-03-21
US8352281B2 (en) 2013-01-08
US20200118235A1 (en) 2020-04-16
US20230177635A1 (en) 2023-06-08
US20040153663A1 (en) 2004-08-05

Similar Documents

Publication Title
US11610278B2 (en) System, method and computer program product for assessing risk of identity theft
US20210224902A1 (en) Systems and methods for electronic account certification and enhanced credit reporting
US6418436B1 (en) Scoring methodology for purchasing card fraud detection
US10504174B2 (en) System and method to search and verify borrower information using banking and investment account data and process to systematically share information with lenders and government sponsored agencies for underwriting and securitization phases of the lending cycle
US7313545B2 (en) System and method for detecting fraudulent calls
US7546271B1 (en) Mortgage fraud detection systems and methods
LaCour-Little Discrimination in mortgage lending: A critical review of the literature
US20060080230A1 (en) Method and system for identity theft prevention, detection and victim assistance
US20120143649A1 (en) Method and system for dynamically detecting illegal activity
US20030009426A1 (en) Methods and apparatus for protecting against credit card fraud, check fraud, and identity theft
US20090248465A1 (en) Assessment of risk associated with doing business with a party
US20090171723A1 (en) Systems and methods for electronic account certification and enhanced credit reporting
KR20210125565A (en) Intelligent alarm system
CN113807974A (en) Transaction monitoring system
KR20180060044A (en) Security System for Cloud Computing Service
WO2012177786A1 (en) System and method for locating and accessing account data
US20090248559A1 (en) Assessment of risk associated with doing business with a party
KR20180060005A (en) Security System for Cloud Computing Service
US8566198B1 (en) Database structure for a consumer reporting agency
Furick Using neural networks to develop a new model to screen applicants for apartment rentals
AU2016201497A1 (en) Method and system for identity theft prevention, detection and victim assistance

Legal Events

Code Title Description

STCV Information on status: appeal procedure
Free format text: ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS

STCV Information on status: appeal procedure
Free format text: BOARD OF APPEALS DECISION RENDERED

STCB Information on status: application discontinuation
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION