US20100004982A1 - Quantifying trust in computing networks - Google Patents
- Publication number
- US20100004982A1 (application Ser. No. 12/264,253)
- Authority
- US
- United States
- Prior art keywords
- trust
- agent
- group
- discrepancy
- value
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06398—Performance of employee with respect to a job function
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
Definitions
- the trust characteristics may be established through logical rules of inference from the certification or revocation actions of one or more trusted third parties.
- the trusted third parties may determine that a user is trustworthy if he has a certain set of credentials and if he complies with a local security policy, but this method does not allow the third party to explain why the user is trusted or how much the third party trusts him.
- a computer program may be configured to establish a trust value for an agent (user) in a computing network.
- the trust value may be established by comparing an agent's expected behavior to his actual behavior in past transactions.
- the computer program may analyze the “gap” or discrepancy between the agent's expected behavior and his actual behavior to establish an initial trust value. Trust values for each agent in the computing network may be evaluated using a similar type of analysis.
- the computer program may form one or more trust groups, or cliques, containing agents with similar trust values in each other.
- Each trust group may be created such that the trust values of its agents are within a specified tolerance (“q”) of each other. If an agent's trust value is not within the specified tolerance (“q”) of the other agents in a trust group, the computer program may split the trust group into one or more sub-groups, such that the agents within each sub-group have similar trust values in each other but with a smaller tolerance than that of the larger trust group. If the agent's trust value is neither within the specified tolerance (“q”) nor similar to the trust values of the agents in any sub-group, he may be rejected from both the sub-groups and the larger trust group.
- new candidates, or new agents may be granted entry into a trust group or sub-group based on an evaluation performed by each agent in the trust group or sub-group.
- Each agent within the trust group or sub-group may then assess his trust in the new candidate and create his own trust value for the new candidate.
- the trust value assigned to the candidate may be quantified into a value between zero and one. If the candidate meets the trust group's trust requirement, i.e., the trust value of the candidate is within the trust group's tolerance “q”, he may be granted access into the trust group. If the candidate does not meet the trust requirement of each agent in a trust group, he may be accepted into a sub-group or he may be rejected from the trust group altogether.
- FIG. 1 illustrates a schematic diagram of a computing system in which the various techniques described herein may be incorporated and practiced.
- FIG. 2 illustrates a flow diagram of a method for initially quantifying trust and grouping agents with similar trust values in a computing network in accordance with one or more implementations of various techniques described herein.
- FIG. 3 illustrates a flow diagram of a method for evaluating the trust characteristics of a new agent in accordance with one or more implementations of various techniques described herein.
- one or more implementations described herein are directed to quantifying trust in an agent and grouping agents based on their trust values.
- One or more implementations of various techniques for quantifying trust will be described in more detail with reference to FIGS. 1-3.
- Implementations of various technologies described herein may be operational with numerous general purpose or special purpose computing system environments or configurations.
- Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the various technologies described herein include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- program modules may also be implemented in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network, e.g., by hardwired links, wireless links, or combinations thereof.
- program modules may be located in both local and remote computer storage media including memory storage devices.
- FIG. 1 illustrates a schematic diagram of a computing system 100 in which the various technologies described herein may be incorporated and practiced.
- the computing system 100 may be a conventional desktop or a server computer, as described above, other computer system configurations may be used.
- the computing system 100 may include a central processing unit (CPU) 21 , a system memory 22 and a system bus 23 that couples various system components including the system memory 22 to the CPU 21 . Although only one CPU is illustrated in FIG. 1 , it should be understood that in some implementations the computing system 100 may include more than one CPU.
- the system bus 23 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- the system memory 22 may include a read only memory (ROM) 24 and a random access memory (RAM) 25 .
- A basic input/output system (BIOS), containing the basic routines that help transfer information between elements within the computing system 100 , such as during start-up, may be stored in the ROM 24 .
- the computing system 100 may further include a hard disk drive 27 for reading from and writing to a hard disk, a magnetic disk drive 28 for reading from and writing to a removable magnetic disk 29 , and an optical disk drive 30 for reading from and writing to a removable optical disk 31 , such as a CD ROM or other optical media.
- the hard disk drive 27 , the magnetic disk drive 28 , and the optical disk drive 30 may be connected to the system bus 23 by a hard disk drive interface 32 , a magnetic disk drive interface 33 , and an optical drive interface 34 , respectively.
- the drives and their associated computer-readable media may provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computing system 100 .
- computing system 100 may also include other types of computer-readable media that may be accessed by a computer.
- computer-readable media may include computer storage media and communication media.
- Computer storage media may include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data.
- Computer storage media may further include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid state memory technology, CD-ROM, digital versatile disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing system 100 .
- Communication media may embody computer readable instructions, data structures, program modules or other data in a modulated data signal, such as a carrier wave or other transport mechanism and may include any information delivery media.
- modulated data signal may mean a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above may also be included within the scope of computer readable media.
- a number of program modules may be stored on the hard disk 27 , magnetic disk 29 , optical disk 31 , ROM 24 or RAM 25 , including an operating system 35 , one or more application programs 36 , a trust quantification application 60 , program data 38 , and a database system 55 .
- the operating system 35 may be any suitable operating system that may control the operation of a networked personal or server computer, such as Windows® XP, Mac OS® X, Unix-variants (e.g., Linux® and BSD®), and the like.
- the trust quantification application 60 will be described in more detail with reference to FIG. 2 in the paragraphs below.
- a user may enter commands and information into the computing system 100 through input devices such as a keyboard 40 and pointing device 42 .
- Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
- These and other input devices may be connected to the CPU 21 through a serial port interface 46 coupled to system bus 23 , but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB).
- a monitor 47 or other type of display device may also be connected to system bus 23 via an interface, such as a video adapter 48 .
- the computing system 100 may further include other peripheral output devices such as speakers and printers.
- the computing system 100 may operate in a networked environment using logical connections to one or more remote computers
- the logical connections may be any connection that is commonplace in offices, enterprise-wide computer networks, intranets, and the Internet, such as local area network (LAN) 51 and a wide area network (WAN) 52 .
- the computing system 100 may be connected to the local network 51 through a network interface or adapter 53 .
- the computing system 100 may include a modem 54 , wireless router or other means for establishing communication over a wide area network 52 , such as the Internet.
- the modem 54 which may be internal or external, may be connected to the system bus 23 via the serial port interface 46 .
- program modules depicted relative to the computing system 100 may be stored in a remote memory storage device 50 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
- various technologies described herein may be implemented in connection with hardware, software or a combination of both.
- various technologies, or certain aspects or portions thereof may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the various technologies.
- the computing device may include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
- One or more programs that may implement or utilize the various technologies described herein may use an application programming interface (API), reusable controls, and the like.
- Such programs may be implemented in a high level procedural or object oriented programming language to communicate with a computer system.
- the program(s) may be implemented in assembly or machine language, if desired.
- the language may be a compiled or interpreted language, and combined with hardware implementations.
- FIG. 2 illustrates a flow diagram of a method 200 for initially quantifying trust in an agent in a computing network in accordance with one or more implementations of various techniques described herein.
- the following description of flow diagram 200 is made with reference to computing system 100 of FIG. 1 in accordance with one or more implementations of various techniques described herein. It should be understood that while the operational flow diagram 200 indicates a particular order of execution of the operations, in some implementations, certain portions of the operations might be executed in a different order.
- the method for quantifying trust may be performed by the trust quantification program 60 .
- the trust quantification program 60 may calculate a trust value for each agent present in the computing network.
- the trust value, or a-priori trust value, may be generated based on previous knowledge about an agent. Such previous knowledge may include information about the agent's expected behavior and his corresponding actual behavior in past transactions.
- the “gap” or discrepancy between the expected and actual behavior of the agent may be quantified and normalized per bit to create a discrepancy value in the interval between 0 and 1.
- the discrepancy value may correspond to a conditional entropy that may be normalized to a value in the interval between 0 and 1.
- the discrepancy value represents the average uncertainty about the agent, based on another agent's previous knowledge of him.
- x, y, and z are random variables or random agents in a network.
- x is an ideal agent that does not lie or err
- y is an agent in a well defined role whose trustworthiness is being evaluated
- z is a random variable describing the evaluator's trust in x.
- the average uncertainty or discrepancy value of y's trustworthiness given x's trustworthiness may be represented as a normalized conditional entropy of y given x's previous knowledge of y.
- the average uncertainty or discrepancy value of y given x may be denoted as H_x(y).
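The discrepancy computation above can be sketched in code. The following is a minimal illustration, not the patent's implementation: it estimates H_x(y) from observed (expected behavior, actual behavior) pairs and normalizes by log2 of the number of distinct actual behaviors, which is one assumed reading of "normalized per bit"; the function name and data layout are hypothetical.

```python
from collections import Counter
from math import log2

def normalized_conditional_entropy(pairs):
    """Estimate H_x(y), the discrepancy value: the conditional entropy of
    the actual behavior y given the expected behavior x, normalized per
    bit into the interval [0, 1].

    pairs: observed (expected_behavior, actual_behavior) tuples.
    A value near 0 means the agent behaves as expected (high trust);
    a value near 1 means maximal uncertainty about the agent (low trust).
    """
    n = len(pairs)
    joint = Counter(pairs)                    # counts of (x, y) pairs
    marginal_x = Counter(x for x, _ in pairs) # counts of x alone
    y_values = {y for _, y in pairs}
    if len(y_values) < 2:
        return 0.0                            # behavior is constant: no uncertainty
    h = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n                      # joint probability p(x, y)
        p_y_given_x = count / marginal_x[x]   # conditional probability p(y | x)
        h -= p_xy * log2(p_y_given_x)         # accumulate H(Y | X)
    return h / log2(len(y_values))            # normalize per bit into [0, 1]
```

An agent whose actual behavior always matches the expected behavior yields a discrepancy of 0; one whose actual behavior is independent of the expected behavior yields 1.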
- previous knowledge pertaining to a purchaser may be used to generate an initial trust value for the purchaser.
- the purchaser's expected behavior and his corresponding actual behavior may correspond to his promise to pay a specified amount and the actual amount he paid in previous transactions.
- information pertaining to the purchaser's previous transactions may be provided by a credit card company (agent X).
- The discrepancy between the purchaser's promise to pay and his actual payment may be used to create a discrepancy value, or average uncertainty, that the credit card company may have in the purchaser.
- the average uncertainty of the credit card company in the purchaser may be defined as H_credit-card-co(purchaser).
- the trust quantification application 60 may help gather the input data for trust evaluation. For example, the trust quantification application 60 may detect truth-in-ads discrepancies (the ad promised price x, and the buyer was charged y>x) made by merchants. Furthermore, the trust quantification application 60 may detect discrepancies in a revocation list, such as complaints about truth-in-ads, and it may gather input data about the trustworthiness of the revocation authority.
- the gap value may be based on one or more other factors, such as information pertaining to the date on which the purchaser paid, the manner in which he paid, and/or the like.
- previous knowledge pertaining to a merchant may be used to generate an initial trust value for the merchant.
- the merchant's expected behavior and his corresponding actual behavior may correlate to his advertised price on a product and the actual amount he charged for the product in previous transactions.
- information pertaining to the purchaser's and/or the merchant's previous transactions may be provided by one or more credit card companies, banks, peer reviews, or the like.
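To make the purchaser example concrete, here is a hedged sketch with invented numbers and helper names: each past transaction is reduced to a binary outcome (promise kept or broken), the discrepancy value is the binary entropy of that outcome (already normalized for a two-valued variable), and trust is taken as one minus the discrepancy. The binary reduction and the final trust formula are illustrative assumptions, not prescribed by the description.

```python
from math import log2

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) outcome; 0 when p is 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def purchaser_trust(transactions):
    """transactions: list of (promised_amount, paid_amount) pairs.

    Each transaction is reduced to 'promise kept' (paid >= promised) or
    'promise broken'. The discrepancy value is the entropy of that
    binary outcome, and trust is taken as 1 - discrepancy.
    """
    kept = sum(1 for promised, paid in transactions if paid >= promised)
    p_kept = kept / len(transactions)
    discrepancy = binary_entropy(p_kept)  # average uncertainty in [0, 1]
    return 1.0 - discrepancy

# A purchaser who always pays in full is maximally predictable:
reliable = [(100, 100), (250, 250), (40, 40)]
# One who keeps only half of his promises is maximally uncertain:
erratic = [(100, 100), (250, 200), (40, 40), (90, 10)]
```

Under these assumptions the reliable purchaser's trust value is 1.0, while the erratic purchaser's is 0.0, matching the extremes of the 0-to-1 interval described above.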
- the trust quantification application 60 may group agents with similar trust values in each other into a trust group.
- the group of agents within a trust group may have trust values within a specified tolerance ‘q1’ of each other.
- the specified tolerance ‘q1’ may correspond to a high trust value.
- the trust values of the agents in each other may naturally converge to the extremes, resulting in the formation of maximal-trust groups among peers. For example, in extremely large trust groups, the trust values of the agents in each other may converge to the extremes such that each agent is deemed either trustworthy or not.
- the trust quantification application 60 may divide or split the trust group formed at step 220 into one or more sub-groups. Splits may occur when a new agent does not meet the trust value requirement (specified tolerance ‘q1’) for the whole group but does meet the trust requirement for a sub-group. In that case, the trust quantification application 60 may decide to split the group and accept the new agent into a sub-group rather than rejecting the new agent altogether.
- the sub-groups may contain agents with similar trust values in each other but with a smaller tolerance ‘q2’ than those agents in the trust group. The agents in the sub-group may be considered to “trust” each other more than those agents in the original trust group.
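The splitting step might be sketched as follows. This is an illustrative greedy procedure under assumed conventions — a dict of pairwise trust values, and "within tolerance q" read as a bounded spread of the mutual trust values — since the description does not specify the actual splitting algorithm.

```python
def within_tolerance(trust, members, q):
    """True if all mutual trust values among members lie within a spread of q."""
    values = [trust[(i, j)] for i in members for j in members if i != j]
    return (max(values) - min(values)) <= q if values else True

def split_group(trust, members, q_sub):
    """Greedily split a group into sub-groups whose mutual trust values fit
    the tighter tolerance q_sub; agents fitting no sub-group end up alone."""
    subgroups = []
    for agent in members:
        for sub in subgroups:
            if within_tolerance(trust, sub + [agent], q_sub):
                sub.append(agent)   # agent fits this tighter clique
                break
        else:
            subgroups.append([agent])  # start a new sub-group
    return subgroups

# Hypothetical pairwise trust values: c trusts a far less than a trusts c.
trust = {('a', 'b'): 0.90, ('b', 'a'): 0.90,
         ('a', 'c'): 0.90, ('c', 'a'): 0.50,
         ('b', 'c'): 0.88, ('c', 'b'): 0.86}
```

With a group tolerance q1 = 0.1 the trio fails the group test, but with the smaller tolerance q2 = 0.05 agents a and b still form a tighter sub-group while c is left out, mirroring the split-or-reject behavior described above.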
- the trust quantification application 60 may split the larger trust group into two or more sub-groups based on one or more economic utility functions.
- Economic utility functions may be used to maximize the utility or purpose for establishing trust groups.
- Economic utility functions may measure the relative satisfaction of an agent based on a correlation between an agent's economic behavior and his desire for consuming various goods and services. Examples of some economic utility functions may include a trust group of sellers and buyers that may wish to maximize overall market share, a user who may wish to maximize the number of features (plug-in modules) in his machine, assigning various non-uniform weights to various features, or other types of utility functions.
- not all economic utility functions may be able to exist in harmony with each other, because there may be interdependencies as well as conflicts between different economic utilities.
- an agent may switch opportunistically among its cliques when performing distinct tasks that may depend on his economic utility functions.
- trust values change as sub-groups grow; therefore, the trust quantification application 60 may have to re-verify each agent's acceptance into every sub-group.
- if the trust quantification application 60 gradually evaluates each agent's trustworthiness and allows some tolerance q>0 in the acceptance criteria, then a limit on the clique size may be quantified as n. If the trust quantification application 60 evaluates the agents and cliques instantaneously, there may be no limit on the clique size.
- the overall uncertainty may be represented as O(q·exp(n)); hence n may most likely be small, and q < exp(−n).
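Reading the bound above literally — overall uncertainty on the order of q·exp(n), which stays bounded only while q < exp(−n) — the largest clique a given tolerance supports can be computed directly. A small illustrative helper (the function name and unit uncertainty budget are assumptions):

```python
from math import log

def max_clique_size(q, budget=1.0):
    """Largest n with q * exp(n) <= budget, i.e. n <= ln(budget / q)."""
    return int(log(budget / q))

# Tighter tolerances support larger cliques, but only logarithmically,
# which is why n "may most likely be small".
```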
- FIG. 3 illustrates a flow diagram of a method 300 for evaluating the trust characteristics of a new agent in a computing network in accordance with one or more implementations of various techniques described herein.
- the following description of flow diagram 300 is made with reference to computing system 100 of FIG. 1 in accordance with one or more implementations of various techniques described herein. It should be understood that while the operational flow diagram 300 indicates a particular order of execution of the operations, in some implementations, certain portions of the operations might be executed in a different order.
- the method for evaluating the trust characteristics of a new agent may be performed by the trust quantification program 60 .
- the trust quantification application 60 may receive a request from a new candidate to gain entry into one or more trust groups or sub-groups.
- a new candidate may be a new purchaser or merchant in a computing network.
- the trust quantification application 60 may provide information pertaining to the new candidate to each agent in the trust group to receive a consensus trust evaluation on the new candidate.
- the new purchaser's initial trust value may be based on information obtained from a credit card company.
- Each agent of a trust group in the computing network may be provided the new purchaser's initial trust value which may be determined using information from the credit card company.
- Each agent may then evaluate the new purchaser's discrepancy value as described in step 210 and assess his own trust value for the new purchaser.
- each agent in a trust group or sub-group may assess his trust value in the new candidate based on his trust value in the credit card company that assigned the initial trust value to the candidate.
- masters or users may program their agents to evaluate new candidates based on their initial trust values and the trust values given by other agents within the trust group.
- agents may not have the space to hold trust data, so they may request trust values from their masters.
- agents may not even analyze the candidate's trust value; instead, they may request their masters to provide conclusions.
- the trust quantification application 60 may receive from each agent in the computing network a trust value assessment of the new candidate. Based on the received trust values of the agents within the trust group or sub-groups, the trust quantification application 60 may determine the consensus of all of the agents in the trust group at step 340 .
- the trust quantification application 60 may determine a consensus value for the new candidate.
- the trust values at time 0 may be defined as t_i0(0), where i may represent an agent within the trust group and the subscript 0 may denote the new candidate.
- the consensus values at time (τ+1) may then be computed as a weighted combination of the values at time τ, i.e., t_i0(τ+1) = Σ_j w_ij · t_j0(τ), where the weights w_ij sum to one.
- the process for calculating the consensus values may be iterated until a stable consensus is reached.
- the weights may be uniform for each value; however, in some implementations, the weights may be non-uniform, so long as the sum of all of the weights equals one.
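The iteration toward a stable consensus can be sketched as a DeGroot-style repeated weighted averaging, consistent with the weights described above; the stopping rule, tolerance, and variable names are assumptions for illustration.

```python
def consensus(initial, weights, tol=1e-9, max_iter=1000):
    """Iterate t_i(tau+1) = sum_j w_ij * t_j(tau) until the values stabilize.

    initial: each agent's trust value in the candidate at time 0.
    weights: row-stochastic matrix (each row sums to 1) giving how much
             agent i weighs agent j's opinion.
    """
    values = list(initial)
    for _ in range(max_iter):
        nxt = [sum(w * v for w, v in zip(row, values)) for row in weights]
        if max(abs(a - b) for a, b in zip(nxt, values)) < tol:
            return nxt          # stable consensus reached
        values = nxt
    return values

# With uniform weights, every agent's value converges to the plain average:
n = 4
uniform = [[1 / n] * n for _ in range(n)]
opinions = [0.9, 0.7, 0.8, 0.6]
```

Here the stable consensus for each agent is the average of the initial opinions, 0.75; non-uniform (but still normalized) weights would instead converge to a weighted average.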
- Each random variable may be defined over a message space. For example, when evaluating a merchant and his truth in ads, each data point (“message”) may be an advertisement and the actual price charged as reported by many users.
- the trust quantification application 60 may reject the new candidate's entry into a trust group (step 350 ), accept his entry into a trust group (step 360 ), or accept his entry into a sub-group (step 370 ).
- the trust quantification application 60 may reject the new candidate's entry into a trust group based on the consensus values received at step 340 .
- rejection from a trust group may indicate to agents in a network that the candidate is not trustworthy.
- the new candidate may be rejected from a trust group if one or more members of the trust group do not have trust values within a specified tolerance for the new candidate.
- the trust quantification application 60 may accept the new candidate's entry into a trust group.
- the candidate may be accepted into the trust group.
- Acceptance into the trust group may indicate to agents in a network that the candidate is trustworthy within a certain degree.
- the trust value between each member in a trust group may be within a specified tolerance.
- the trust quantification application 60 may accept the new candidate's entry into a sub-group. In one implementation, if the new candidate is accepted into a sub-group, then the trust quantification application 60 may split the larger trust group into two or more sub-groups based on one or more economic utility functions as described in FIG. 2 .
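The accept/sub-group/reject decision of steps 350-370 might be sketched as follows. Reading "within the group's tolerance q" as requiring every member's trust value in the candidate to lie in the band [1 − q, 1] is an illustrative assumption, as are the function names.

```python
def admit(member_values, q):
    """member_values: each member's trust value in the candidate.
    Admit only if every member's value sits within the band [1 - q, 1],
    i.e. close to full trust."""
    return all(v >= 1 - q for v in member_values)

def place_candidate(member_values, q_group, q_sub, subgroup_members):
    """Try the whole group first, then the tighter sub-group, else reject."""
    if admit(member_values, q_group):
        return "group"
    # Only the sub-group's members vote on sub-group entry, under the
    # smaller tolerance q_sub.
    sub_values = [member_values[i] for i in subgroup_members]
    if admit(sub_values, q_sub):
        return "sub-group"
    return "rejected"
```

A candidate distrusted by one group member may still clear the tighter band among the sub-group's members, matching the split-and-accept behavior described above.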
- the trust quantification application 60 may apply the same method 200 and method 300 to various scenarios, such as trust between humans and a certification authority, the tit-for-tat strategy in the iterated prisoner's dilemma game, the inter-relations among software modules in a system, the stock market, and other trust-oriented applications.
Abstract
Method for calculating a trust value of an agent in a computing network. In one implementation, the method may include receiving information pertaining to a first agent's previous actions, quantifying a discrepancy between an expected behavior and an actual behavior of the first agent during the first agent's previous actions, and determining the trust value of the first agent based on the quantified discrepancy.
Description
- This application claims priority to U.S. provisional patent application Ser. No. 61/078,068, filed Jul. 3, 2008, titled METHOD FOR QUANTIFYING TRUST, which is incorporated herein by reference.
- This application claims priority to U.S. provisional patent application Ser. No. 61/094,861, filed Sep. 5, 2008, titled TRUST AND COLLABORATION, which is incorporated herein by reference.
- As digital communications, networks, and transactions increased, the need became apparent for ways in which computer users could “trust” each other. Digital trust systems, such as Public Key Infrastructure (PKI), build trust hierarchies, such as the “Web of Trust”, so that users can securely communicate with each other, authenticate each other's identities, and determine the integrity of the messages received from each other. In order to establish the trust characteristic of each user, trust systems rely on the certification or revocation of a user by one or more trusted third parties. However, the certification or revocation of a user does not explain what “trust” exactly is or how to quantify it. Instead, each user's trust characteristic is defined in a binary form: 1 (trustworthy) or 0 (not trustworthy).
- The above referenced summary section is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description section. The summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
-
FIG. 1 illustrates a schematic diagram of a computing system in which the various techniques described herein may be incorporated and practiced. -
FIG. 2 illustrates a flow diagram of a method for initially quantifying trust and grouping agents with similar trust values in a computing network in accordance with one or more implementations of various techniques described herein. -
FIG. 3 illustrates a flow diagram of a method for evaluating the trust characteristics of a new agent in accordance with one or more implementations of various techniques described herein. - In general, one or more implementations described herein are directed to quantifying trust in an agent and grouping agents based on their trust values. One or more implementations of various techniques for quantifying trust will be described in more detail with reference to
FIGS. 1-3. - Implementations of various technologies described herein may be operational with numerous general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the various technologies described herein include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- The various technologies described herein may be implemented in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The various technologies described herein may also be implemented in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network, e.g., by hardwired links, wireless links, or combinations thereof. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
-
FIG. 1 illustrates a schematic diagram of a computing system 100 in which the various technologies described herein may be incorporated and practiced. Although the computing system 100 may be a conventional desktop or a server computer, as described above, other computer system configurations may be used. - The
computing system 100 may include a central processing unit (CPU) 21, a system memory 22 and a system bus 23 that couples various system components including the system memory 22 to the CPU 21. Although only one CPU is illustrated in FIG. 1, it should be understood that in some implementations the computing system 100 may include more than one CPU. The system bus 23 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, also known as Mezzanine bus. The system memory 22 may include a read only memory (ROM) 24 and a random access memory (RAM) 25. A basic input/output system (BIOS) 26, containing the basic routines that help transfer information between elements within the computing system 100, such as during start-up, may be stored in the ROM 24. - The
computing system 100 may further include a hard disk drive 27 for reading from and writing to a hard disk, a magnetic disk drive 28 for reading from and writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from and writing to a removable optical disk 31, such as a CD ROM or other optical media. The hard disk drive 27, the magnetic disk drive 28, and the optical disk drive 30 may be connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical drive interface 34, respectively. The drives and their associated computer-readable media may provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computing system 100. - Although the
computing system 100 is described herein as having a hard disk, a removable magnetic disk 29 and a removable optical disk 31, it should be appreciated by those skilled in the art that the computing system 100 may also include other types of computer-readable media that may be accessed by a computer. For example, such computer-readable media may include computer storage media and communication media. Computer storage media may include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data. Computer storage media may further include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid state memory technology, CD-ROM, digital versatile disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing system 100. Communication media may embody computer readable instructions, data structures, program modules or other data in a modulated data signal, such as a carrier wave or other transport mechanism and may include any information delivery media. The term “modulated data signal” may mean a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above may also be included within the scope of computer readable media. - A number of program modules may be stored on the
hard disk 27, magnetic disk 29, optical disk 31, ROM 24 or RAM 25, including an operating system 35, one or more application programs 36, a trust quantification application 60, program data 38, and a database system 55. The operating system 35 may be any suitable operating system that may control the operation of a networked personal or server computer, such as Windows® XP, Mac OS® X, Unix-variants (e.g., Linux® and BSD®), and the like. The trust quantification application 60 will be described in more detail with reference to FIG. 2 in the paragraphs below. - A user may enter commands and information into the
computing system 100 through input devices such as a keyboard 40 and pointing device 42. Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices may be connected to the CPU 21 through a serial port interface 46 coupled to the system bus 23, but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB). A monitor 47 or other type of display device may also be connected to the system bus 23 via an interface, such as a video adapter 48. In addition to the monitor 47, the computing system 100 may further include other peripheral output devices such as speakers and printers. - Further, the
computing system 100 may operate in a networked environment using logical connections to one or more remote computers. The logical connections may be any connection that is commonplace in offices, enterprise-wide computer networks, intranets, and the Internet, such as a local area network (LAN) 51 and a wide area network (WAN) 52. - When using a LAN networking environment, the
computing system 100 may be connected to the local network 51 through a network interface or adapter 53. When used in a WAN networking environment, the computing system 100 may include a modem 54, wireless router or other means for establishing communication over a wide area network 52, such as the Internet. The modem 54, which may be internal or external, may be connected to the system bus 23 via the serial port interface 46. In a networked environment, program modules depicted relative to the computing system 100, or portions thereof, may be stored in a remote memory storage device 50. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used. - It should be understood that the various technologies described herein may be implemented in connection with hardware, software or a combination of both. Thus, various technologies, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the various technologies. In the case of program code execution on programmable computers, the computing device may include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs that may implement or utilize the various technologies described herein may use an application programming interface (API), reusable controls, and the like. Such programs may be implemented in a high level procedural or object oriented programming language to communicate with a computer system.
However, the program(s) may be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language, and combined with hardware implementations.
-
FIG. 2 illustrates a flow diagram of a method 200 for initially quantifying trust in an agent in a computing network in accordance with one or more implementations of various techniques described herein. The following description of flow diagram 200 is made with reference to computing system 100 of FIG. 1 in accordance with one or more implementations of various techniques described herein. It should be understood that while the operational flow diagram 200 indicates a particular order of execution of the operations, in some implementations, certain portions of the operations might be executed in a different order. In one implementation, the method for quantifying trust may be performed by the trust quantification program 60. - At
step 210, the trust quantification program 60 may calculate a trust value for each agent present in the computing network. In one implementation, the trust value, or a-priori trust value, may be generated based on previous knowledge about an agent. Such previous knowledge may include information about the agent's expected behavior and his corresponding actual behavior in past transactions. The “gap” or discrepancy between the expected and actual behavior of the agent may be quantified and normalized per bit to create a discrepancy value in the interval between 0 and 1. In one implementation, the discrepancy value may correspond to a conditional entropy that may be normalized to a value in the interval between 0 and 1. The discrepancy value represents the average uncertainty about the agent's behavior based on another agent's previous knowledge about the agent. Alternatively, a symmetric measure based on conditional entropy may be used to determine the discrepancy value D(x,y), such that D(x,y) = H_x(y)/H(x,y) + H_y(x)/H(x,y). - For example, suppose that x, y, and z are random variables, or random agents, in a network, where x is an ideal agent that does not lie or err, y is an agent in a well defined role whose trustworthiness is being evaluated, and z is a random variable describing the evaluator's expectation of y's behavior. The average uncertainty or discrepancy value of y's trustworthiness given x's trustworthiness may be represented as a normalized conditional entropy of y given x's previous knowledge of y. The average uncertainty or discrepancy value of y given x may be denoted as H_x(y). Since x may be considered to be an ideal agent that does not lie or err, the absolute trustworthiness value of y may then be determined by subtracting the discrepancy value from 1, such that y's absolute trustworthiness, or t_y, may be defined as t_y = 1 − H_x(y). Since z is the evaluator's determination of how agent y should behave, agent z's trustworthiness in agent y, or t_zy, may be defined as t_zy = 1 − H_z(y).
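To make the discrepancy and trust computations concrete, the following Python sketch estimates a normalized conditional entropy H_x(y) from observed (expected, actual) behavior pairs and derives t = 1 − H_x(y). The function names and the frequency-count probability estimates are illustrative assumptions, not part of the disclosure:

```python
import math
from collections import Counter

def normalized_conditional_entropy(pairs):
    """Estimate H_x(y), the uncertainty in agent y's actual behavior
    given its expected (promised) behavior, normalized into [0, 1].
    `pairs` is a list of (expected, actual) observations."""
    n = len(pairs)
    joint = Counter(pairs)                   # counts of (expected, actual)
    expected = Counter(e for e, _ in pairs)  # counts of expected alone
    h = 0.0
    for (e, a), count in joint.items():
        p_joint = count / n
        p_cond = count / expected[e]         # p(actual | expected)
        h -= p_joint * math.log2(p_cond)
    # Normalize by the maximum entropy over the observed outcomes.
    outcomes = len({a for _, a in pairs})
    return h / math.log2(outcomes) if outcomes > 1 else 0.0

def trust_value(pairs):
    """Absolute trustworthiness t_y = 1 - H_x(y)."""
    return 1.0 - normalized_conditional_entropy(pairs)

# A consistent agent always does what it promised:
consistent = [("pay_100", "paid_100")] * 10
# An erratic agent's actual behavior is unrelated to its promise:
erratic = [("pay_100", "paid_100"), ("pay_100", "paid_50")] * 5
print(trust_value(consistent))  # 1.0
print(trust_value(erratic))     # 0.0
```

A perfectly consistent agent yields a trust value of 1, while an agent whose actual behavior is independent of its promises yields 0, matching the extremes of the interval described above.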
- For example, in an online marketplace where computer users may buy or sell merchandise on the Internet, previous knowledge pertaining to a purchaser (agent y) may be used to generate an initial trust value for the purchaser. The purchaser's expected behavior and his corresponding actual behavior may correspond to his promise to pay a specified amount and the actual amount he paid in previous transactions. In one implementation, information pertaining to the purchaser's previous transactions may be provided by a credit card company (agent x). The discrepancy between the purchaser's promise to pay and his actual payment may be used to create a discrepancy value, or average uncertainty, that the credit card company may have in the purchaser. The average uncertainty of the credit card company in the purchaser may be defined as H_credit-card-co(purchaser). The trust value of the purchaser may then be determined by subtracting the discrepancy value from 1, such that the credit card company's trustworthiness in the purchaser may be defined as t_credit-card-co,purchaser = 1 − H_credit-card-co(purchaser).
- In one implementation, the
trust quantification application 60 may help gather the input data for trust evaluation. For example, the trust quantification application 60 may detect truth-in-ads discrepancies (the ad promised price x, and the buyer was charged y>x) made by merchants. Furthermore, the trust quantification application 60 may detect discrepancies in a revocation list, such as complaints about truth-in-ads, and it may gather input data about the trustworthiness of the revocation authority. - Although the above example based the gap value of the purchaser on his promise to pay in previous transactions and his subsequent actual payment, it should be noted that the gap value may be based on one or more other factors, such as the date on which the purchaser paid, the manner in which he paid, and/or combinations of the like. Similarly, previous knowledge pertaining to a merchant (agent) may be used to generate an initial trust value for the merchant. The merchant's expected behavior and his corresponding actual behavior may correspond to his advertised price for a product and the actual amount he charged for the product in previous transactions. In one implementation, information pertaining to the purchaser's and/or the merchant's previous transactions may be provided by one or more credit card companies, banks, peer reviews, or the like.
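The truth-in-ads check described above reduces to flagging transactions whose charged amount exceeds the advertised price. A minimal sketch, in which the transaction fields and function name are illustrative assumptions:

```python
def find_truth_in_ads_discrepancies(transactions):
    """Return the transactions in which the buyer was charged more than
    the advertised price, i.e. truth-in-ads discrepancies (y > x)."""
    return [t for t in transactions if t["charged"] > t["advertised"]]

transactions = [
    {"merchant": "A", "advertised": 100, "charged": 100},  # honest
    {"merchant": "B", "advertised": 100, "charged": 120},  # discrepancy
]
flagged = find_truth_in_ads_discrepancies(transactions)
print([t["merchant"] for t in flagged])  # ['B']
```

Flagged transactions of this kind could supply the (expected, actual) pairs used to compute a merchant's discrepancy value, or populate a revocation list of complaints.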
- At
step 220, the trust quantification application 60 may group agents with trust values in each other into a trust group. The agents within a trust group may have trust values within a specified tolerance ‘q1’ of each other. In one implementation, the specified tolerance ‘q1’ may correspond to a high trust value. The trust values of the agents in each other may naturally converge to the extremes, resulting in the formation of maximal-trust trust groups among peers. For example, in extremely large trust groups, trust values of each agent in each other may converge to the extremes such that each agent may be deemed either trustworthy or not. At step 230, the trust quantification application 60 may divide or split the trust group formed at step 220 into one or more sub-groups. Splits may occur when a new agent does not meet the trust value requirement (specified tolerance ‘q1’) for the whole group but does meet the trust requirement for a subgroup. If the new agent meets the trust requirements for a subgroup, the trust quantification application 60 may decide to split the group and accept the new agent into a subgroup as opposed to rejecting the new agent altogether. In one implementation, the sub-groups may contain agents with similar trust values in each other but with a smaller tolerance ‘q2’ than those agents in the trust group. The agents in the sub-group may be considered to “trust” each other more than those agents in the original trust group. - In one implementation, the
trust quantification application 60 may split the larger trust group into two or more sub-groups based on one or more economic utility functions. Economic utility functions may be used to maximize the utility or purpose for establishing trust groups. Economic utility functions may measure the relative satisfaction of an agent based on a correlation between the agent's economic behavior and his desire for consuming various goods and services. Examples of economic utility functions include a trust group of sellers and buyers that may wish to maximize overall market share, a user who may wish to maximize the number of features (plug-in modules) in his machine, assigning various non-uniform weights to various features, or other types of utility functions. Unfortunately, not all economic utility functions may be able to exist in harmony with each other, because there may be interdependencies as well as conflicts between different economic utilities. Therefore, an agent may switch opportunistically among its cliques when performing distinct tasks that depend on his economic utility functions. In one implementation, the trust values change as sub-groups grow; therefore, the trust quantification application 60 may have to verify each agent's acceptance into every sub-group. In one implementation, the trust quantification application 60 may gradually evaluate each agent's trustworthiness, allow some tolerance q>0 in the acceptance criteria, and then quantify a limit n on the clique size. If the trust quantification application 60 evaluates the agents and cliques instantaneously, there may not be a limit on the clique size. The overall uncertainty may be represented as O(q·exp(n)); hence n may most likely be small, and q<<exp(−n).
In that case, the acceptance criteria may require that every pair of agents have close to maximal mutual trust (>1−q) and that the consensus among agents in the sub-clique be allowed to oscillate within a tolerance δ=O(q·exp(n)) away from the maximal trust. -
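The pairwise acceptance criterion (mutual trust above 1−q for every pair of agents in a clique) can be sketched as a greedy grouping pass. The patent does not prescribe a particular grouping algorithm, so the greedy strategy, data layout, and names below are assumptions:

```python
def form_trust_groups(trust, q):
    """Greedily place agents into trust groups (cliques) in which every
    pair has close-to-maximal mutual trust, i.e. both directions of
    trust exceed 1 - q.  trust[i][j] is agent i's trust value in agent
    j, in the interval [0, 1]."""
    groups = []
    for agent in trust:
        for group in groups:
            # Accept only if mutual trust with every member exceeds 1 - q.
            if all(trust[agent][m] > 1 - q and trust[m][agent] > 1 - q
                   for m in group):
                group.append(agent)
                break
        else:
            groups.append([agent])  # start a new group (clique of one)
    return groups

trust = {
    "x": {"x": 1.0, "y": 0.95, "z": 0.2},
    "y": {"x": 0.97, "y": 1.0, "z": 0.3},
    "z": {"x": 0.1, "y": 0.2, "z": 1.0},
}
print(form_trust_groups(trust, q=0.1))  # [['x', 'y'], ['z']]
```

Running the same pass again with a smaller tolerance (e.g. q2 < q1) over the members of one group would carve out the tighter sub-groups described above.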
FIG. 3 illustrates a flow diagram of a method 300 for evaluating the trust characteristics of a new agent in a computing network in accordance with one or more implementations of various techniques described herein. The following description of flow diagram 300 is made with reference to computing system 100 of FIG. 1 in accordance with one or more implementations of various techniques described herein. It should be understood that while the operational flow diagram 300 indicates a particular order of execution of the operations, in some implementations, certain portions of the operations might be executed in a different order. In one implementation, the method for evaluating the trust characteristics of a new agent may be performed by the trust quantification program 60. - At
step 310, the trust quantification application 60 may receive a request from a new candidate to gain entry into one or more trust groups or sub-groups. For example, with respect to the purchaser/merchant example described in FIG. 2, a new candidate may be a new purchaser or merchant in a computing network. - At
step 320, the trust quantification application 60 may provide information pertaining to the new candidate to each agent in the trust group to receive a consensus trust evaluation on the new candidate. In the preceding example, the new purchaser may base its initial trust value on information obtained from a credit card company. Each agent of a trust group in the computing network may be provided the new purchaser's initial trust value, which may be determined using information from the credit card company. Each agent may then evaluate the new purchaser's discrepancy value as described in step 210 and assess his own trust value for the new purchaser. In one implementation, each agent in a trust group or sub-group may assess his trust value in the new candidate based on his trust value in the credit card company that assigned the initial trust value to the candidate. - In another implementation, masters or users may program their agents to evaluate new candidates based on the candidates' initial trust values and the trust values given by other agents within the trust group. In another implementation, agents may not have the space to hold trust data, so they may request trust values from their masters. In some implementations, agents may not even analyze the candidate's trust value; instead, they may request their masters to provide conclusions.
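A minimal sketch of this evaluation, under two labeled assumptions: that an agent discounts the recommender's reported value by its own trust in the recommender (one plausible rule, not fixed by the text), and that the group consensus is a weighted average of the members' assessments with weights summing to one:

```python
def derive_trust(trust_in_recommender, recommended_value):
    """Assumed discounting rule: an agent's trust in a new candidate is
    its trust in the recommender (e.g. a credit card company) scaled by
    the trust value the recommender reports for the candidate."""
    return trust_in_recommender * recommended_value

def consensus(values, weights=None):
    """Weighted average of the agents' trust values in the candidate;
    with uniform weights this is the plain mean.  Weights must sum to 1."""
    if weights is None:
        weights = [1.0 / len(values)] * len(values)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, values))

# Three group members trust the credit card company at 0.9, 0.8 and 1.0;
# the company reports a 0.8 trust value for the new purchaser:
assessments = [derive_trust(t, 0.8) for t in (0.9, 0.8, 1.0)]
print(round(consensus(assessments), 3))  # 0.72
```

The consensus value can then be compared against the group's tolerance to accept the candidate into the trust group, a sub-group, or neither.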
- At
step 330, the trust quantification application 60 may receive from each agent in the computing network a trust value assessment of the new candidate. Based on the received trust values of the agents within the trust group or sub-groups, the trust quantification application 60 may determine the consensus of all of the agents in the trust group at step 340. - At
step 340, the trust quantification application 60 may determine a consensus value for the new candidate. In one implementation, at a discrete time (τ=0), the trust values may be defined as t_i(0), where i may represent an agent within the trust group. For each agent i, the consensus value at time (τ+1) may be computed as a weighted average of the trust values at time τ. In one implementation, the process for calculating the consensus values may be iterated until a stable consensus is reached. Here, the weights may be uniform for each value; however, in some implementations, the weights may not be uniform, yet the sum of all of the weights may be equal to one. - In one implementation, an estimation error in the values of a stable consensus trust may be defined as δ(n, N) = O(exp(n−N)), where N is the size of the message space and n is the total number of agents. Each random variable may be defined over a message space. For example, when evaluating a merchant and his truth in ads, each data point (“message”) may be an advertisement and the actual price charged as reported by many users. When a trust group grows gradually, adding one agent at a time, then the error in the
trust quantification application 60 estimations may become exp(n−N), where n is the number of agents in the group and N is the number of messages in the message space used to evaluate the gaps. However, when evaluations are instantaneous, the error may be much smaller, ~2·exp(−N). Based on the consensus, the trust quantification application 60 may reject the new candidate's entry into a trust group (step 350), accept his entry into a trust group (step 360), or accept his entry into a sub-group (step 370). - At
step 350, the trust quantification application 60 may reject the new candidate's entry into a trust group based on the consensus values received at step 340. In one implementation, rejection from a trust group may indicate to agents in a network that the candidate is not trustworthy. The new candidate may be rejected from a trust group if one or more members of the trust group do not have trust values within a specified tolerance for the new candidate. - At
step 360, the trust quantification application 60 may accept the new candidate's entry into a trust group. In one implementation, if each member of a trust group has a high trust value in the candidate, and the candidate likewise has a high trust value in each member of the trust group, then the candidate may be accepted into the trust group. Acceptance into the trust group may indicate to agents in a network that the candidate is trustworthy within a certain degree. The trust value between each member in a trust group may be within a specified tolerance. - At
step 370, the trust quantification application 60 may accept the new candidate's entry into a sub-group. In one implementation, if the new candidate is accepted into a sub-group, then the trust quantification application 60 may split the larger trust group into two or more sub-groups based on one or more economic utility functions as described in FIG. 2. - In addition to the example of quantifying trust in an online purchaser and merchant relationship, the
trust quantification application 60 may apply the same method 200 and method 300 to various scenarios, such as trust between humans and a certification authority, the Tit-for-tat strategy in the iterated prisoner's dilemma game, the inter-relations among software modules in a system, the stock market, and other trust oriented applications. - Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (20)
1. A method for calculating a trust value of an agent in a computing network, comprising:
receiving information pertaining to a first agent's previous actions;
quantifying a discrepancy between an expected behavior and an actual behavior of the first agent during the first agent's previous actions; and
determining the trust value of the first agent based on the quantified discrepancy.
2. The method of claim 1, wherein the discrepancy corresponds to a conditional entropy of the first agent based on a second agent's experience in dealing with the first agent.
3. The method of claim 1, wherein the discrepancy is characterized as a gap value.
4. The method of claim 1, wherein quantifying the discrepancy comprises normalizing the discrepancy in an interval between 0 and 1.
5. The method of claim 4, wherein determining the trust value of the first agent comprises subtracting the normalized discrepancy from 1.
6. The method of claim 1, wherein the expected behavior comprises the first agent's promise to pay a specified amount in a previous transaction.
7. The method of claim 6, wherein the actual behavior comprises an actual amount the first agent paid in the previous transaction.
8. The method of claim 1, wherein the information pertaining to the first agent's previous actions comprises information regarding previous transactions provided by a credit card company.
9. The method of claim 1, wherein the expected behavior comprises the first agent's promise to sell an item at a specified amount in a previous transaction.
10. The method of claim 9, wherein the actual behavior comprises an actual amount for which the first agent sold the item in the previous transaction.
11. A method for establishing trust groups in a computing network, comprising:
receiving one or more trust values for each agent in a computing network;
identifying one or more agents having trust values that differ within a first specified tolerance; and
grouping the one or more agents into a trust group.
12. The method of claim 11, wherein the trust group comprises a limit on the number of agents.
13. The method of claim 12, further comprising splitting the trust group into sub-groups if the limit is exceeded.
14. The method of claim 11, wherein the sub-groups comprise one or more agents having trust values that differ within a second specified tolerance.
15. The method of claim 11, wherein the second specified tolerance is smaller than the first specified tolerance.
16. A method for granting a new agent entry into a trust group within a computing network, comprising:
receiving a request for entry into the trust group from the new agent;
sending information pertaining to the new agent to each member of the trust group;
receiving a trust value from each member of the trust group; and
forming a consensus of the trust values received from each member of the trust group.
17. The method of claim 16, further comprising: rejecting the request of the new agent for entry into one or more trust cliques if the consensus is below a predetermined value.
18. The method of claim 16, further comprising: accepting the request of the new agent for entry into one or more trust cliques if the consensus is above a predetermined value.
19. The method of claim 16, further comprising:
splitting the trust group into two or more subgroups if the consensus is above a predetermined value and a limit on the number of members in the trust group has been exceeded; and
accepting the request of the new agent for entry into a subgroup.
20. The method of claim 16, wherein the trust group is split based on one or more economic utility functions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/264,253 US20100004982A1 (en) | 2008-07-03 | 2008-11-04 | Quantifying trust in computing networks |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US7806808P | 2008-07-03 | 2008-07-03 | |
US9486108P | 2008-09-05 | 2008-09-05 | |
US12/264,253 US20100004982A1 (en) | 2008-07-03 | 2008-11-04 | Quantifying trust in computing networks |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100004982A1 true US20100004982A1 (en) | 2010-01-07 |
Family
ID=41465102
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/264,253 Abandoned US20100004982A1 (en) | 2008-07-03 | 2008-11-04 | Quantifying trust in computing networks |
US12/272,801 Expired - Fee Related US8015177B2 (en) | 2008-07-03 | 2008-11-18 | Performing a collaborative search in a computing network |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/272,801 Expired - Fee Related US8015177B2 (en) | 2008-07-03 | 2008-11-18 | Performing a collaborative search in a computing network |
Country Status (1)
Country | Link |
---|---|
US (2) | US20100004982A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985895A (en) * | 2018-07-10 | 2018-12-11 | 西南科技大学 | A kind of method of businessman's credit value in acquisition e-commerce |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100004982A1 (en) * | 2008-07-03 | 2010-01-07 | Microsoft Corporation | Quantifying trust in computing networks |
US9268850B2 (en) | 2010-01-26 | 2016-02-23 | Rami El-Charif | Methods and systems for selecting an optimized scoring function for use in ranking item listings presented in search results |
US20130144868A1 (en) * | 2011-12-01 | 2013-06-06 | Microsoft Corporation | Post Building and Search Creation |
US10388177B2 (en) * | 2012-04-27 | 2019-08-20 | President And Fellows Of Harvard College | Cluster analysis of participant responses for test generation or teaching |
US9959348B2 (en) * | 2012-06-04 | 2018-05-01 | Google Llc | Applying social annotations to search results |
GB201306589D0 (en) | 2013-04-11 | 2013-05-29 | Abeterno Ltd | Live cell imaging |
US10909130B1 (en) * | 2016-07-01 | 2021-02-02 | Palantir Technologies Inc. | Graphical user interface for a database system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070123191A1 (en) * | 2005-11-03 | 2007-05-31 | Andrew Simpson | Human-machine interface for a portable electronic device |
2008
- 2008-11-04 US US12/264,253 patent/US20100004982A1/en not_active Abandoned
- 2008-11-18 US US12/272,801 patent/US8015177B2/en not_active Expired - Fee Related
Patent Citations (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7415617B2 (en) * | 1995-02-13 | 2008-08-19 | Intertrust Technologies Corp. | Trusted infrastructure support systems, methods and techniques for secure electronic commerce, electronic transactions, commerce process control and automation, distributed computing, and rights management |
US5903721A (en) * | 1997-03-13 | 1999-05-11 | Cha! Technologies Services, Inc. | Method and system for secure online transaction processing |
US6385725B1 (en) * | 1998-08-24 | 2002-05-07 | Entrust Technologies Limited | System and method for providing commitment security among users in a computer network |
US6856963B1 (en) * | 2000-01-11 | 2005-02-15 | Intel Corporation | Facilitating electronic commerce through automated data-based reputation characterization |
US7092821B2 (en) * | 2000-05-01 | 2006-08-15 | Invoke Solutions, Inc. | Large group interactions via mass communication network |
US7076655B2 (en) * | 2001-06-19 | 2006-07-11 | Hewlett-Packard Development Company, L.P. | Multiple trusted computing environments with verifiable environment identities |
US20030070070A1 (en) * | 2001-07-31 | 2003-04-10 | Yeager William J. | Trust spectrum for certificate distribution in distributed peer-to-peer networks |
US7383433B2 (en) * | 2001-07-31 | 2008-06-03 | Sun Microsystems, Inc. | Trust spectrum for certificate distribution in distributed peer-to-peer networks |
US7213047B2 (en) * | 2002-10-31 | 2007-05-01 | Sun Microsystems, Inc. | Peer trust evaluation using mobile agents in peer-to-peer networks |
US20060031510A1 (en) * | 2004-01-26 | 2006-02-09 | Forte Internet Software, Inc. | Methods and apparatus for enabling a dynamic network of interactors according to personal trust levels between interactors |
US20050197909A1 (en) * | 2004-03-05 | 2005-09-08 | Greg Klenske | Strategies for online marketplace sales channels |
US20050210285A1 (en) * | 2004-03-18 | 2005-09-22 | Microsoft Corporation | System and method for intelligent recommendation with experts for user trust decisions |
US20050261956A1 (en) * | 2004-05-20 | 2005-11-24 | Pa Co., Ltd. | Network-Employing System for Evaluating Anonymous Information in Providing Information on Positions/Help Wanted and Related Information |
US20060080224A1 (en) * | 2004-10-11 | 2006-04-13 | Nec Corporation | Method for dynamically initiated interactive group communications |
US20070143281A1 (en) * | 2005-01-11 | 2007-06-21 | Smirin Shahar Boris | Method and system for providing customized recommendations to users |
US20060218153A1 (en) * | 2005-03-28 | 2006-09-28 | Voon George H H | Building social networks using shared content data relating to a common interest |
US20070156604A1 (en) * | 2005-06-20 | 2007-07-05 | Stanley James | Method and system for constructing and using a personalized database of trusted metadata |
US20070112761A1 (en) * | 2005-06-28 | 2007-05-17 | Zhichen Xu | Search engine with augmented relevance ranking by community participation |
US20080005064A1 (en) * | 2005-06-28 | 2008-01-03 | Yahoo! Inc. | Apparatus and method for content annotation and conditional annotation retrieval in a search context |
US20070067630A1 (en) * | 2005-09-16 | 2007-03-22 | Dmitry Lenkov | Trusted information exchange based on trust agreements |
US20070124191A1 (en) * | 2005-11-22 | 2007-05-31 | Jochen Haller | Method and system for selecting participants in an online collaborative environment |
US20070124579A1 (en) * | 2005-11-28 | 2007-05-31 | Jochen Haller | Method and system for online trust management using statistical and probability modeling |
US20070136178A1 (en) * | 2005-12-13 | 2007-06-14 | Microsoft Corporation | Trust based architecture for listing service |
US20070185841A1 (en) * | 2006-01-23 | 2007-08-09 | Chacha Search, Inc. | Search tool providing optional use of human search guides |
US20070180078A1 (en) * | 2006-01-30 | 2007-08-02 | Microsoft Corporation | Automated File Distribution |
US20070208613A1 (en) * | 2006-02-09 | 2007-09-06 | Alejandro Backer | Reputation system for web pages and online entities |
US7707192B1 (en) * | 2006-05-23 | 2010-04-27 | Jp Morgan Chase Bank, N.A. | Confidence index for assets |
US20080015910A1 (en) * | 2006-07-11 | 2008-01-17 | Claudia Reisz | Ranking-based method and system for evaluating customer predication models |
US20080016195A1 (en) * | 2006-07-14 | 2008-01-17 | Atul Vijay Tulshibagwale | Router for managing trust relationships |
US20080155644A1 (en) * | 2006-12-26 | 2008-06-26 | Motorola, Inc. | Method and system for communicating in a group of communication devices |
US20090132338A1 (en) * | 2007-11-20 | 2009-05-21 | Diaceutics | Method and system for improvements in or relating to sales and marketing practices |
US8015177B2 (en) * | 2008-07-03 | 2011-09-06 | Microsoft Corporation | Performing a collaborative search in a computing network |
Non-Patent Citations (3)
Title |
---|
Hobson, Arthur and Cheng, Bin-Kang. A Comparison of the Shannon and Kullback Information Measures. Journal of Statistical Physics, Vol. 7, No. 4, 1973 * |
https://web.archive.org/web/20070613035805/http://www.itl.nist.gov/div898/handbook/prc/section2/prc263.htm * |
Wang, Yao and Vassileva, Julita. Trust-Based Community Formation in Peer-to-Peer File Sharing Networks. Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence (WI '04). 2004 * |
Also Published As
Publication number | Publication date |
---|---|
US8015177B2 (en) | 2011-09-06 |
US20100005089A1 (en) | 2010-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Ali et al. | The state of play of blockchain technology in the financial services sector: A systematic literature review | |
US11244396B2 (en) | Crypto-machine learning enabled blockchain based profile pricer | |
US20100004982A1 (en) | Quantifying trust in computing networks | |
US10298597B2 (en) | Collaborative content evaluation | |
US11887115B2 (en) | Systems and methods to validate transactions for inclusion in electronic blockchains | |
US8781984B2 (en) | Techniques for generating a trustworthiness score in an online environment | |
Pasdar et al. | Connect api with blockchain: A survey on blockchain oracle implementation | |
Debe et al. | Blockchain-based decentralized reverse bidding in fog computing | |
JP4772449B2 (en) | Method and system for automatically evaluating participants in trust trust infrastructure | |
US11847698B2 (en) | Token-based entity risk management exchange | |
CN116743768B (en) | Method, apparatus, device and computer readable storage medium for trading computing power resources | |
Koirala et al. | A supply chain model with blockchain-enabled reverse auction bidding process for transparency and efficiency | |
Xin et al. | On trust guided collaboration among cloud service providers | |
Baranwal et al. | BARA: A blockchain-aided auction-based resource allocation in edge computing enabled industrial internet of things | |
Hil et al. | Cryptonight mining algorithm with yac consensus for social media marketing using blockchain | |
Li et al. | Reputation-based trustworthy supply chain management using smart contract | |
US20200265514A1 (en) | Recording medium recording communication program and communication apparatus | |
Srivastava et al. | Performance analysis of hyperledger fabric based blockchain for traceability in food supply chain | |
WO2022151992A1 (en) | Method and apparatus for processing information | |
JP2021530010A (en) | Systems and methods to verify transactions embedded in electronic blockchain | |
US20220374884A1 (en) | Blockchain Secured Transaction Workflows | |
Bedin et al. | A blockchain approach to social responsibility | |
Wu | Cloud trust model in e-commerce | |
Ekström et al. | Accounting for rater credibility when evaluating AEC subcontractors | |
Martins et al. | Recoverable token: Recovering from intrusions against digital assets in Ethereum |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YACOBI, YACOV;KAJIYA, JIM;REEL/FRAME:021908/0347 Effective date: 20081031 |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034564/0001 Effective date: 20141014 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |