WO2013089646A1 - Information content reception and analysis architecture - Google Patents

Information content reception and analysis architecture

Info

Publication number
WO2013089646A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
information content
client
participant
captured
Prior art date
Application number
PCT/SG2012/000475
Other languages
French (fr)
Inventor
Ken Alfred REIMER
Original Assignee
Reimer Ken Alfred
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Reimer Ken Alfred
Publication of WO2013089646A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/18 Speech classification or search using natural language modelling
    • G10L 15/183 Speech classification or search using natural language modelling using context dependencies, e.g. language models

Definitions

  • the present disclosure relates generally to systems and techniques for information content analysis, including semantic analysis.
  • a method for processing information content including capturing the information content through a content source, saving the captured information content in a captured content memory, semantically processing the captured information content with a content processing system, identifying within the processed captured information content a predetermined analysis result condition, and generating and providing a corresponding summary of the identified analysis result condition, wherein the information content corresponds to any of a face-to-face interaction between at least two participants and an occurrence in response to an event identifier.
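The claimed capture, save, process, identify, and summarize flow can be sketched in a few lines of Python. This is an illustrative sketch only; the class and function names below are invented and are not drawn from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class CapturedContent:
    """One unit of captured information content plus its metadata."""
    text: str
    metadata: dict = field(default_factory=dict)

def process_content(item, trigger_terms):
    """Semantically process captured content and report any
    predetermined analysis result condition that is identified."""
    words = item.text.lower().split()
    hits = [t for t in trigger_terms if t in words]
    if hits:
        return f"Condition met: found {', '.join(sorted(hits))} in captured content"
    return None

# Capture -> save to captured content memory -> process -> summarize.
captured_content_memory = []
captured_content_memory.append(CapturedContent("the service was slow and cold"))
summary = process_content(captured_content_memory[0], {"slow", "cold", "rude"})
```

A real content processing system would apply semantic analysis rather than simple word matching; the sketch only mirrors the shape of the claimed steps.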
  • participant content sources can include portable and/or wearable devices that are configured to capture from one or more individuals audio and/or other types of information content associated with an event, situation, product, service, transaction, or client / business under consideration.
  • Participant content sources are configured to communicate captured information content to a content reception and decoding system, which is configured to communicate decoded information content to a content processing and/or analysis system such as a semantic analysis system.
  • FIG. 1A is a schematic illustration of a single-client or multi-client information content analytics architecture according to an embodiment of the disclosure.
  • FIG. 1B is a schematic illustration of the architecture of FIG. 1A, in which participant content sources include a set of participant audio content sources; a set of participant audio / visual content sources; and a set of participant textual content sources.
  • FIG. 2A is a schematic illustration of a printable or distributable physical medium such as a transaction record, sales receipt, or other type of document or tangible item (e.g., an external surface of a product package or container, such as a food or beverage container) that carries or includes an event identifier.
  • FIG. 2B is an illustration of a representative user feedback interface by which a participant can generate or provide one or more types of participant information content corresponding to customer or user feedback.
  • FIG. 3A is a block diagram of a representative portable or wearable audio content capture device in accordance with an embodiment of the disclosure.
  • FIG. 3B is a schematic illustration of a representative docking station configured for communication or coupling with at least one audio content capture device in accordance with an embodiment of the disclosure.
  • FIG. 3C is a schematic illustration of representative docking station functional modules in accordance with an embodiment of the disclosure.
  • FIG. 4A is an illustration of a representative restaurant or food service environment in which a server or staff member (e.g., a waitress) wears an audio capture device that is configured to capture audio signals either continuously or when in the presence of a customer.
  • FIG. 4B is an illustration of a representative medical environment in which a medical professional such as a doctor wears an audio content capture device, which can capture audio signals corresponding to the medical professional's interactions with patients and/or colleagues.
  • FIG. 4C is an illustration of a representative law enforcement or security environment in which a law enforcement officer or security personnel wears an audio content capture device, which can capture audio signals corresponding to the officer's interactions with members of the public and/or colleagues.
  • FIG. 5 is a schematic illustration of portions of a content reception and/or decoding system in accordance with an embodiment of the present disclosure.
  • FIG. 6 is a schematic illustration of portions of a content auto-retrieval system in accordance with an embodiment of the present disclosure.
  • FIG. 7 is a schematic illustration of portions of a content processing and/or analysis system in accordance with an embodiment of the present disclosure.
  • FIG. 8 is a schematic illustration of portions of a client input / output manager in accordance with an embodiment of the present disclosure.
  • FIG. 9 is a schematic illustration of portions of a client result destination in accordance with an embodiment of the present disclosure.
  • the disclosure provides numerous advantages over the prior art.
  • embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure.
  • the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s).
  • any reference to "the invention" shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
  • a method for processing information content including capturing the information content through a content source, saving the captured information content in a captured content memory, semantically processing the captured information content with a content processing system, identifying within the processed captured information content a predetermined analysis result condition, and generating and providing a corresponding summary of the identified analysis result condition, wherein the information content corresponds to any of a face-to-face interaction between at least two participants and an occurrence in response to an event identifier.
  • the face-to-face interaction between at least two participants occurs in any one of a law enforcement environment, a security environment, a retail sales environment, a compliance environment, a hotel environment, a finance environment, an assessment environment, a corporate environment, and a counseling environment.
  • the event identifier is any one of a machine readable image, an email, a Short Message Service (SMS) message, and a Contactless Near Field Communication activation.
  • the information content is any one of an audio content, a visual content, and a textual content.
  • the information content is an audio content, and wherein the content is decoded through a speech-to-text conversion.
  • acoustic and linguistic models are utilized during the speech-to-text conversion to obtain decoded speech results of high accuracy.
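As a hedged illustration of how acoustic and linguistic models can be combined during speech-to-text conversion, the toy rescorer below picks the transcript hypothesis with the highest weighted sum of acoustic and unigram language-model log-probabilities. Real ASR decoders use far richer models; every name and number here is invented.

```python
import math

def rescore(hypotheses, lm, lm_weight=0.8):
    """Pick the transcript whose combined acoustic + language model
    log-probability is highest (a toy stand-in for ASR decoding)."""
    def lm_logprob(words):
        # Unigram language model with a small floor for unseen words.
        return sum(math.log(lm.get(w, 1e-6)) for w in words)
    best = max(
        hypotheses,
        key=lambda h: h["acoustic_logprob"] + lm_weight * lm_logprob(h["words"]),
    )
    return " ".join(best["words"])

# Two acoustically similar hypotheses; the language model breaks the tie.
hyps = [
    {"words": ["the", "food", "was", "grate"], "acoustic_logprob": -4.1},
    {"words": ["the", "food", "was", "great"], "acoustic_logprob": -4.2},
]
lm = {"the": 0.2, "food": 0.05, "was": 0.1, "great": 0.02, "grate": 0.0001}
```

Even though "grate" scores slightly better acoustically, the language model prefers "great", which is the sense in which linguistic modelling raises decoding accuracy.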
  • the method further includes associating a content metadata with the captured information and saving the content metadata with the captured information content in the captured content memory.
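A minimal sketch of associating content metadata with captured information content before saving it in the captured content memory. The field names (source_id, event_id, captured_at) are illustrative assumptions, not terms from the patent.

```python
import datetime

def save_with_metadata(memory, content, *, source_id, event_id=None):
    """Attach content metadata to captured information content and
    save both together in the captured content memory."""
    record = {
        "content": content,
        "metadata": {
            "source_id": source_id,   # which participant content source
            "event_id": event_id,     # optional event identifier
            "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        },
    }
    memory.append(record)
    return record

memory = []
rec = save_with_metadata(memory, "audio-clip-001.wav",
                         source_id="device-17", event_id="receipt-QR-9921")
```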
  • semantically processing the captured information content includes matching and organizing word data in accordance with a predetermined classification to obtain semi-structured data.
  • semantically processing the captured information content further includes running the semi-structured data through a natural language processing builder.
  • semantically processing the captured information content further includes data mining and categorizing the semi-structured data.
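The matching-and-organizing step that yields semi-structured data might look like the toy pass below, assuming a simple word-level classification. The categories and vocabularies are invented for illustration.

```python
def to_semi_structured(text, classification):
    """Match word data against a predetermined classification, yielding
    a semi-structured record: one bucket of matched words per category."""
    words = text.lower().split()
    return {
        category: [w for w in words if w in vocabulary]
        for category, vocabulary in classification.items()
    }

classification = {
    "sentiment": {"excellent", "terrible", "slow"},
    "topic": {"service", "food", "price"},
}
record = to_semi_structured("The service was slow but the food was excellent",
                            classification)
```

The resulting record is "semi-structured" in the sense that it imposes category structure on free text while preserving the raw matched words for later data mining.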
  • the predetermined analysis result condition is associated with multiple occurrences of any of a predetermined word, phrase, or expression within a period of time.
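One plausible implementation of such a condition is a sliding-window counter that fires once a watched term has occurred a threshold number of times within the period. This is an illustrative sketch only; the patent does not specify the mechanism.

```python
from collections import deque

class OccurrenceTrigger:
    """Fires when a watched term occurs at least `threshold` times
    within `window_seconds` (the predetermined condition)."""
    def __init__(self, term, threshold, window_seconds):
        self.term, self.threshold, self.window = term, threshold, window_seconds
        self.timestamps = deque()

    def observe(self, text, timestamp):
        if self.term in text.lower():
            self.timestamps.append(timestamp)
        # Drop occurrences that have aged out of the window.
        while self.timestamps and timestamp - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) >= self.threshold

trigger = OccurrenceTrigger("refund", threshold=3, window_seconds=3600)
fired = [trigger.observe(text, ts) for text, ts in [
    ("I want a refund", 0),
    ("refund please", 1200),
    ("no complaints here", 2000),
    ("another refund request", 2400),
]]
```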
  • the information content is captured through an auxiliary content source, and wherein the auxiliary content source corresponds to any one of an Internet website, an Internet blog, a social media website, a virtual community, and a professional data library.
  • the information content is captured with a content acquisition manager operating a web crawler module in the auxiliary content source.
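A content acquisition manager's web crawler module could, at its core, resemble the breadth-first sketch below. To stay self-contained it crawls an in-memory site map rather than fetching live pages over HTTP, and every name in it is illustrative.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets, the minimal core of a web crawler module."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

def crawl(pages, start, limit=10):
    """Breadth-first crawl over an in-memory site map. A real content
    acquisition manager would fetch each URL over HTTP instead."""
    seen, frontier = set(), [start]
    while frontier and len(seen) < limit:
        url = frontier.pop(0)
        if url in seen or url not in pages:
            continue
        seen.add(url)
        parser = LinkExtractor()
        parser.feed(pages[url])
        frontier.extend(parser.links)
    return seen

site = {
    "/blog": '<a href="/blog/post-1">p1</a><a href="/blog/post-2">p2</a>',
    "/blog/post-1": '<a href="/blog">back</a>',
    "/blog/post-2": "",
}
visited = crawl(site, "/blog")
```

The `limit` parameter stands in for the politeness and scope constraints a production crawler would need when operating in an auxiliary content source.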
  • the method further includes saving a plurality of captured information content with content metadata associated with a client in a captured content memory, determining a client-specific analysis result condition with a client input/output manager, semantically processing captured information content based on the client-associated content metadata and the client-specific analysis result condition, generating a corresponding summary of the identified analysis, and providing the corresponding summary with the client input/output manager.
  • a system for processing information content including a content source for capturing information content, a captured content memory for saving the captured information content, and a content processing system for semantically processing the captured information content, identifying within the processed captured information content a predetermined analysis result condition; and generating and providing a corresponding summary of the identified analysis result condition, wherein the information content corresponds to any of a face-to-face interaction between at least two participants, and as an occurrence in response to an event identifier.
  • the system further includes a participant content source for capturing a participant information content, where the participant content source is any one of a mobile telephone, a personal computing device, a tablet computer, a portable speech capture device, and an audio/visual recording device.
  • the system further includes a docking station for receiving the participant content source, the docking station including an audio content loader for receiving captured information content from the docked participant content source, and a transfer module for transferring the received captured information content to a captured content memory.
  • the system further includes a client input/output manager, including a case definition manager and a results dissemination manager, the client input/output manager configured for communication with the content processing system and at least one client result analysis destination.
  • the system further includes a content auto-retrieval system for automated retrieval of auxiliary information content from an auxiliary content source.
  • the content auto-retrieval system further includes a content acquisition manager operating a web crawler in the auxiliary content source.
  • the depiction of a given element or consideration or use of a particular element number in a particular FIG. or a reference thereto in corresponding descriptive material can encompass the same, an equivalent, or an analogous element or element number identified in another FIG. or descriptive material associated therewith.
  • the term "set" corresponds to or is defined as a non-empty finite organization of elements that mathematically exhibits a cardinality of at least 1 (i.e., a set as defined herein can correspond to a singlet or single element set, or a multiple element set), in accordance with known mathematical definitions (for instance, in a manner corresponding to that described in An Introduction to Mathematical Reasoning: Numbers, Sets, and Functions, "Chapter 11: Properties of Finite Sets" (e.g., as indicated on p. 140), by Peter J. Eccles, Cambridge University Press (1998)).
  • an element of a set can include or be a system, an apparatus, a device, a structure, a structural feature, an object, a process, a physical parameter, a signal, a data element, a value, or a person depending upon the type of set under consideration.
  • various embodiments in accordance with the present disclosure are configured to provide an information content processing and/or analysis architecture that facilitates the analysis of information content provided by or captured from participants, and the provision of information content analysis results to one or more clients for whom semantic and/or other aspects of such information content are of interest.
  • An architecture as described herein includes, corresponds to, or defines a design or implementation model that encompasses hardware; software or program instruction sets; and/or a set of user interfaces.
  • Particular information content analysis architectures described herein correspond to, encompass, include, or provide each of a hardware architecture, a software architecture, and a user interface architecture.
  • Information content as described herein includes one or more of (a) auditory, vocal, verbal, or speech content, information, signals, or data (hereafter auditory content); (b) visual content, information, signals, or data, which can correspond to text, logograms, pictographs, symbols, images, or video (hereafter visual content); and (c) metadata corresponding to such auditory and/or visual content.
  • information content can be categorized as (a) participant information content; or (b) auxiliary or adjunctive information content.
  • An architecture in accordance with an embodiment of the present disclosure facilitates or effectuates the capture, collection, or retrieval of information content from one or more participants; the analysis of such information content; and the generation of corresponding information analysis results that can be provided to one or more clients. More particularly, an individual that directly generates information content of interest or potential interest to a client by way of interacting with one or more other individuals and/or a device is referred to herein as a participant. Participant information content includes information content that is directly generated by one or more individuals interacting with each other and/or a set of information content capture, communication, or recording devices within a particular type of communication environment, situation, scenario, domain, or context. Participant information content can be (a) captured, recorded, or received by a system, apparatus, or device; and (b) subsequently subjected to information content analysis.
  • Auxiliary information content includes structured and/or unstructured information content which can facilitate the analysis of participant information content.
  • Auxiliary information content can be provided, accessed, retrieved, or downloaded by way of one or more external information or media content sources, repositories, libraries, databases, or services.
  • Representative examples of auxiliary information content include information content corresponding to or available, retrievable, downloadable, transferable, or requestable from websites, internet forums, internet blogs, social media, virtual communities, industrial media (e.g., film, television, radio, or print media such as magazines), and/or professional or profession-related data sources or repositories corresponding to business, legal, medical, scientific, technical, or other types of information, literature, or documents.
  • a set of individuals or an organization that can have an interest in semantic and/or other aspects of information content generated by one or more sets of participants is referred to herein as a client. That is, a client can be defined as an end-user of information content analysis results. More particularly, a client can be defined as a set of individuals or entities for which information content analysis results can be generated by way of capturing information content from one or more participants deemed to be relevant or potentially relevant to the client, and subjecting such captured participant information content to one or more types of processing and/or analysis, such as semantic analysis, data mining, and/or knowledge extraction or discovery. In various embodiments, information content analysis results can be generated based upon participant information content in view of one or more types of adjunctive information content.
  • an information content reception and analysis system is selectively or dynamically configurable for analyzing participant and/or auxiliary information content in accordance with multiple client use cases corresponding to multiple distinguishable or distinct clients that are served by the system, for instance, in a seamless, computationally concurrent, virtually simultaneous, parallel, or sequential manner.
  • a single information content reception and analysis system can generate information content analysis results for multiple distinct clients, where information content analysis results corresponding to any given client are generated in accordance with at least one client use case corresponding to this client.
  • Such a single system can be configured to serve a wide variety of client types (e.g., in a computationally concurrent manner), and can be configured to provide one or more types of customer feedback processing and/or analysis services for any given client.
  • an information content reception and analysis system is configurable for analyzing participant and/or auxiliary information content with respect to one or more client use cases corresponding to a single client that is served by the system (e.g., an information content reception and analysis system can be dedicated to serving a single client, providing particular types of information content analysis services to the single client).
  • Multiple embodiments of the present disclosure are directed to an information content processing and/or analysis architecture that can be flexibly or selectively configured for (a) capturing and/or accessing (e.g., receiving or retrieving) participant information content by way of one or multiple participant information content sources; (b) resolving, decoding, and/or parsing captured participant information content, and possibly auxiliary information content; (c) processing and/or analyzing resolved participant information content, possibly in view of one or more types of auxiliary information content; (d) generating information content processing and/or analysis results, signals, or data, including semantic analysis results, in a manner that is selective, adaptive, configurable, or customizable with respect to a set of client use cases under consideration; and (e) storing, outputting, presenting, distributing, and/or publishing such processing and/or analysis results in a manner that is selective, adaptive, configurable, or customizable with respect to the set of client use cases under consideration.
  • FIG. 1A is a schematic illustration of a single-client or multi-client information content analytics architecture 10 according to an embodiment of the disclosure.
  • such an architecture 10 includes a number of participant content sources 100; at least one content reception / decoding system 200; at least one content processing and/or analysis system 500 and an associated content processing and/or analysis database 600; and at least one client analysis result destination 800.
  • the content processing and/or analysis system 500 is configured for performing client-selective or client-adaptive content processing and/or analysis operations.
  • Each participant content source 100 includes a device that is configurable for providing, acquiring, capturing, or recording participant information content.
  • a participant content source 100 can include a device configured for capturing one or more of audio, visual, and textual content.
  • participant content sources 100 include a telephony device, such as a mobile telephone; a personal or portable computing device such as a tablet computer; an audio/visual recording device; and a portable or wearable speech capture device that can capture or record a multi-party conversation independent or exclusive of a telephony system or device.
  • one or more participant content sources 100 are further configurable for making use of, accessing, or capturing an event, condition, transaction, or feedback identifier 102 that facilitates the establishment of an association between particular participant information content and a given client, client related event, or client use case. More particularly, by way of an event identifier 102, a given participant's information content such as verbal feedback relating to an event such as a business transaction can be (a) associated or identified with a specific client corresponding to the event, such as a business, franchise, or store at which the business transaction occurred; and possibly (b) directed to or toward a content reception / decoding system 200 that is associated with or assigned to the client, or which is configured for generating information content analysis results corresponding to the client.
  • An event identifier 102 can correspond to, provide, or be used to generate metadata associated with participant information content under consideration. Representative types of event identifiers 102 and participant content sources 100 that can utilize, access, or capture such event identifiers 102 are further described in detail below with reference to FIGs. 2A - 2D.
  • Each participant content source 100 is further configured for communicating participant information content to at least one content reception / decoding system 200 that is configured for resolving, decoding, and/or parsing participant information content and possibly auxiliary information content. Any given content reception / decoding system 200 is configured for communication with at least one content processing and/or analysis system 500 and/or an associated content analysis database 600.
  • the content processing and/or analysis system 500 is configured for processing and/or analyzing resolved, decoded, or parsed participant information content, possibly in association with processing and/or analysis of resolved, decoded, or parsed auxiliary information content; and generating information content processing and/or analysis results.
  • information analysis results include semantic analysis results.
  • the content analysis database 600 includes a number of databases that facilitate information content analysis, including one or more resolved content databases 610; expression databases 612; at least one client configuration database 620; at least one client use case database 622; one or more analysis indexes 630; and at least one analysis results database 632. Manners in which particular databases are utilized in association with information content analysis processes are described in detail below.
  • Some embodiments in accordance with the present disclosure additionally include a content auto-retrieval system 300, which is configured for performing content auto-retrieval operations that involve automatically or semi-automatically accessing, retrieving, downloading, or requesting auxiliary information content corresponding to one or more auxiliary content sources, repositories, and/or archives 400, and which is further configured for communicating with one or more content reception / decoding systems 200 as well as particular content processing and/or analysis systems 500 and/or one or more associated content analysis databases 600.
  • Auxiliary content sources, repositories, and/or archives 400 can include physical and/or virtual systems (e.g., computer servers), apparatuses, or devices upon which auxiliary information content resides, for instance, corresponding to current or archived information or media associated with websites, internet blogs, internet forums, social media (e.g., Facebook), information uploading, hosting, and/or sharing services (e.g., YouTube), virtual communities, industrial media (e.g., film, television, radio, or print media such as magazines), and/or professional data sources or libraries for business literature, legal documents, scientific or technical literature, or other information.
  • content auto-retrieval operations can be performed in a client use case dependent manner.
  • content auto-retrieval operations can involve the automatic retrieval of information content from a first set of auxiliary content sources 400 in accordance with a first client use case corresponding to a first client (e.g., a first business or organization), and a second set of auxiliary content sources 400 in accordance with a second client use case distinct from the first client use case and corresponding to a second client (e.g., a second business or organization) that is distinct from the first client.
  • Several embodiments include at least one client input / output manager 700 configured for communication with one or more client result analysis destinations 800, at least one content processing and/or analysis system 500, and possibly a set of associated content analysis databases 600.
  • a client input / output manager 700 can include a computing system or device that is configurable for determining or establishing for a given client at least one client use case that references, identifies, defines, specifies, or includes (a) a client identifier; (b) information content reception and/or retrieval parameters that correspond to or define one or more manners in which information content is to be received and/or retrieved from participant content sources 100 and/or auxiliary content sources 400; (c) an anchor, source, original, initial, or default set of query words, phrases, and/or expressions that are relevant to the client, and which facilitate or enable information content analysis processes or operations as further described below; and (d) dissemination and/or notification parameters that indicate how information content analysis results are to be made available, provided, output, stored, distributed, or presented.
  • a client use case can include or be defined as a set of client use case parameters referenced by or stored in a data structure such as a table.
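Such a table-backed client use case might be modeled as a simple record. The field names below mirror the four parameter groups just described (client identifier, reception/retrieval parameters, anchor query expressions, dissemination parameters) but are otherwise invented.

```python
from dataclasses import dataclass, field

@dataclass
class ClientUseCase:
    """One row of the client use case table; field names are
    illustrative, not drawn from the patent."""
    client_id: str
    reception_params: dict = field(default_factory=dict)
    query_expressions: list = field(default_factory=list)
    dissemination_params: dict = field(default_factory=dict)

use_case = ClientUseCase(
    client_id="restaurant-042",
    reception_params={"sources": ["wearable-audio", "sms-feedback"]},
    query_expressions=["slow service", "cold food", "refund"],
    dissemination_params={"notify": "manager@example.com", "frequency": "daily"},
)
```

Storing each use case as a row like this is what lets a single system serve multiple distinct clients, selecting the right parameters per client at analysis time.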
  • particular information content analysis results corresponding to a given client can be published to one or more databases (e.g., a client internal database); and/or transferred to one or more types of client systems or devices such as client computer systems, tablet computers, or mobile telephones.
  • a client input / output manager 700 can provide a set of client or system administrator user interfaces (e.g., visual or graphical user interfaces) to facilitate or enable client use case definition and modification.
  • information content analysis results corresponding to a given client are associated with, made available by way of, or distributed or published in accordance with an Application Program Interface (API), which can facilitate the definition or generation of client-customized or client-specific application software or interfaces (e.g., in accordance with client requests or requirements).
  • a client input / output manager 700 can provide a system administrator or other individual with access to such an API and possibly associated software tools (e.g., client visual interface definition or programming software) or toolkits.
  • An individual may need to be incentivized in order to act as a participant. That is, an individual may need to be provided with some type of motivation to generate participant information content corresponding to the individual's experience relating to an event, situation, product, service, transaction, or client under consideration.
  • Certain embodiments in accordance with the present disclosure include one or more incentive management systems 900 configured to manage incentive, thank-you reward, or similar types of programs by which participants can receive coupons, discounts, reward point accruals, or the like.
  • a given incentive management system 900 can be provided or operated by a particular client, or a third-party.
  • an incentive management system 900 can communicate with one or more content reception and/or decoding systems 200 and/or client input / output managers 700 to facilitate the distribution of incentive awards, coupons, discounts, or the like to participants.
  • the set of networks 80 can include public and/or private networks such as one or more of a Local Area Network (LAN), a Wide Area Network (WAN), the Internet, a telephone network (e.g., a mobile telephone network), and a satellite network.
  • Portions of an architecture 10 in accordance with an embodiment of the disclosure can be configured or reconfigured in various manners, such as in accordance with embodiment details; technological constraints or advances; and/or the type(s) of participants, participant content sources 100, client(s), and/or client analysis result destination(s) 800 under consideration. Additionally, portions of one or more systems, subsystems, apparatuses, devices, or elements within the architecture 10 can be distributed across, combined with, or consolidated within one or more portions of other systems, subsystems, apparatuses, devices, or elements of the architecture 10.
  • participant information content sources 100 are configured for capturing audio content. Depending upon embodiment details, participant information content sources 100 can be configured for capturing additional and/or other types of information content, which can also be subjected to information content analysis.
  • FIG. IB is a schematic illustration of the architecture 10 of FIG. 1A, in which participant content sources 100 include a set of participant audio content sources 100a; a set of participant audio / visual content sources 100b; and a set of participant textual content sources 100c.
  • a single participant information content source 100 such as a mobile telephone can selectively capture or provide audio, audio / visual, or textual participant information content for subsequent analysis, for instance, based upon participant preference or input, and/or a client use case under consideration.
  • participant content sources 100 can be configured to use, access, receive, retrieve, or capture one or more types of event identifiers 102.
  • Representative non-limiting types of event identifiers 102 and participant content sources 100 that can be configured to use, receive, access, and/or capture such event identifiers 102 are described in detail hereafter with respect to FIGs. 2A - 2D.
  • FIG. 2A is a schematic illustration of a printable or distributable physical medium 104 such as a transaction record, sales receipt, or other type of document or tangible item (e.g., an external surface of a product package or container, such as a food or beverage container) that carries or includes an event identifier 102.
  • an event identifier 102 corresponds to or provides a machine-readable, machine-communicable, and/or machine-processable image or symbol pattern that can be captured or received by a participant content source 100, and which can facilitate the provision of participant feedback, viewpoints, or opinions relating to an event, situation, product, service, transaction, or client (e.g., business or organization) under consideration.
  • an event identifier 102 includes a matrix or two-dimensional barcode (e.g., a Quick Response (QR) code, in accordance with International Standard ISO / IEC 18004) configured to encode a set of transaction parameters corresponding to an event, situation, product, service, transaction, or client, such as a purchase of a given client's product or a payment for a client's professional service.
  • transaction parameters can include a client identifier or code (e.g., a business or organization identifier or code, which can correspond to a given business division, location, or site); a date / time stamp; an event, situation, product, package / container, service, or transaction descriptor or code; and/or other information.
  • Transaction parameters can serve as or be used to generate metadata that establishes a correspondence between participant information content and a client related event.
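  • The decoding of transaction parameters into metadata described above can be sketched as follows. This is a minimal illustration only: the pipe-delimited field layout and the field names (client_id, site, timestamp, event_code) are assumptions for this sketch, not the encoding defined by the disclosure.

```python
# Sketch: turning decoded event-identifier text (e.g., from a QR code)
# into metadata that links participant feedback to a client-related event.
# The field layout below is an illustrative assumption.

def parse_event_identifier(qr_text: str) -> dict:
    """Parse a pipe-delimited transaction-parameter string into metadata."""
    keys = ("client_id", "site", "timestamp", "event_code")
    values = qr_text.split("|")
    if len(values) != len(keys):
        raise ValueError("unexpected transaction-parameter layout")
    return dict(zip(keys, values))

metadata = parse_event_identifier("ACME-042|outlet-7|2012-12-14T18:30|SALE-1101")
# metadata now establishes a correspondence between subsequent
# participant information content and the client-related event
```

A real embodiment would derive the layout from the ISO / IEC 18004 payload actually encoded at transaction time.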
  • An appropriate type of participant content source 100 such as a mobile telephone, tablet computer, or other type of electronic or computing system, apparatus, or device can capture an image of or otherwise receive or retrieve an event identifier 102, for instance, by way of a camera carried by the participant content source 100 as indicated in FIG. 2A.
  • a set of program instructions or software such as a downloadable application program or an application plug-in, add-on, or add-in can be executed by a processing or control unit of the participant content source 100, and can decode the event identifier 102 and correspondingly generate, provide access to, and/or manage or control the operation of one or more types of user interfaces by which the participant or the participant content source 100 can direct participant information content to a content reception and/or decoding system 200, where such participant information content corresponds to customer or user feedback pertaining to an event, situation, product, service, or transaction associated with a particular client.
  • the participant content source 100 can provide a user feedback interface 110 by which the participant can generate or provide one or more types of participant information content corresponding to customer or user feedback.
  • participant information content can include one or more of audio content, textual content, and possibly audio / visual content.
  • the participant content source 100 can generate or provide participant access to an appropriate type of user feedback interface, such as an audio feedback interface, a text feedback interface, or an audio / visual feedback interface by which the participant can direct or submit their feedback, comments, opinions, suggestions, complaints, or ratings to a content reception and/or decoding system 200. While some embodiments provide participant access to a predetermined type of user feedback interface, other embodiments provide participant access to multiple types of feedback interfaces, any one of which can be activated in response to participant selection.
  • any given user feedback interface can facilitate the local and/or remote capture of participant feedback. More particularly, in association with or following the generation of participant information content corresponding to an event, situation, product, service, transaction, and/or client under consideration, the participant content source 100 can establish communication with an appropriate content reception and/or decoding system 200 (e.g., by way of one or more networks 80) to facilitate or effectuate the transfer of audio, textual, and/or audio / visual feedback from the participant content source 100 to the content reception and/or decoding system 200.
  • a text feedback interface can be provided by an SMS interface or a text editor application.
  • An audio feedback interface can be provided by a set of local and/or remote voice recording applications, for instance, configured to execute on the participant content source 100 and/or a content reception and/or decoding system 200, an Interactive Voice Response (IVR) system, or a voice messaging system.
  • An audio / visual feedback interface can be provided by a set of local and/or remote audio / visual capture applications (e.g., Skype mobile™), configured to execute on the participant content source 100 and/or a content reception and/or decoding system 200.
  • the content reception and/or decoding system 200 can communicate with an incentive management system 900 to facilitate the transfer of a discount code or coupon, the accrual of thank-you points, or other type of incentive or incentive notification to the participant or a participant incentive account corresponding to received participant information content.
  • Such incentive or incentive notice transfer can occur by way of issuance of an SMS message or e-mail directed to the participant, or appropriate adjustment of an incentive account (e.g., an electronic account such as an online shopping account or a virtual world account) corresponding to the participant.
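  • The incentive accrual and notification step above can be sketched as follows. The account store, point value, and message wording are invented for illustration; an actual incentive management system 900 would apply client-defined or third-party-defined reward rules.

```python
# Sketch of incentive-account adjustment upon receipt of participant
# information content, producing a notification body that could be
# issued by SMS or e-mail. All names and values here are assumptions.

incentive_accounts = {"participant-17": 0}

def credit_incentive(participant_id: str, points: int = 10) -> str:
    """Accrue reward points and return a notification message body."""
    balance = incentive_accounts.get(participant_id, 0) + points
    incentive_accounts[participant_id] = balance
    return (f"Thank you for your feedback! {points} points added; "
            f"balance: {balance}.")

notice = credit_incentive("participant-17")  # e.g., sent as an SMS message
```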
  • a participant content source 100 can be configured to receive an event identifier 102 in another manner, such as by way of an e-mail or Short Message Service (SMS) message.
  • a representative e-mail or SMS message 112 that includes a telephone number event identifier 102a as well as an Internet address event identifier 102b is shown in FIG. 2C.
  • an event identifier 102 can be communicated to a participant content source 100 in association with a mobile payment service.
  • FIG. 2D illustrates a Contactless Near Field Communication (CNFC) device 114 that can be configured to process payments by way of a mobile telephone, and which can also be configured to communicate an event identifier 102 to such a device.
  • communication of an event identifier 102 to a participant content source 100, or the capture of an event identifier 102 thereby, facilitates the automated or semi-automated capture, collection, reception, and/or transfer of a participant's audio and/or other feedback, viewpoints, opinions, or impressions associated with an event, situation, product, service, transaction, or client, independent or exclusive of, in the absence of, or prior to, participant communication of such feedback to another individual such as a customer service representative or call center operator.
  • a participant content source 100 includes a portable or wearable device that is configured to capture audio content such as speech signals in an environment that a set of participants occupies.
  • audio content can be captured from one or more participants, and subsequently be semantically analyzed to generate situational or strategic intelligence data relating to an event, situation, product, service, or transaction involving the participant(s) within the environment in which the wearable participant content source 100 is active.
  • a portable or wearable participant content source 100 that is primarily configured for capturing audio content can also be configured for capturing visual content (e.g., digital images or pictures) at one or more times.
  • particular representative types of portable or wearable participant content sources 100 configured as audio content capture devices 100a are described with reference to FIGs. 3A - 3C; and representative non-limiting examples of environments in which such audio content capture devices 100a can be utilized, activated, or deployed are described with reference to FIGs. 4A - 4C.
  • FIG. 3A is a block diagram of a representative audio content capture device 100a in accordance with an embodiment of the disclosure.
  • an audio content capture device 100a includes a power source 110; a processing unit 112; a set of microphones 114; a user interface 116; a docking station interface 118; an optional Radio Frequency Identification (RFID) unit 120; and a memory 130, which includes an audio signal capture / transfer module 132 and a captured content memory 134.
  • Each element of the audio capture device 100a can be coupled to a common bus or internal communication pathway 140, and the elements of the audio content capture device 100a can be carried by or reside within a common housing 150.
  • the power source 110 can be a battery (e.g., a replenishable or rechargeable battery), and the processing unit 112 can be a microcontroller, microprocessor, or the like.
  • the set of microphones 114 can include one or more microphones configured to capture audio signals on an omnidirectional or semi-directional basis in an environment in which the housing 150 resides.
  • the user interface 116 can include a set of buttons, switches, and/or a display device (e.g., a liquid crystal display (LCD)), and the docking station interface 118 can include a signal transfer interface such as a Universal Serial Bus (USB) interface by which captured audio signals can be transferred to a destination external to the audio content capture device 100a.
  • the docking station interface 118 can also include a signal transfer interface by which the power source 110 can be recharged.
  • the audio signal capture / transfer module 132 can include a set of program instructions configured to manage, control, or direct aspects of the capture and storage of audio signals, and the transfer of captured audio signals to an external destination.
  • the captured content memory 134 can be essentially any type of memory (e.g., a type of Random Access Memory (RAM)) that can receive and transfer digital audio signals and possibly other digital data.
  • the captured content memory 134 can be configured to store at least approximately 30 minutes of audio content, or between approximately 1 - 4 hours of audio content.
  • the audio content capture device 100a can be coupled to a docking station to facilitate audio and/or other content transfer to an external destination and/or power source replenishment.
  • the RFID unit 120 can be configured to receive RFID information corresponding to objects, structures, or devices in the audio content capture device's physical environment. Such RFID information can be stored in the memory 130. For instance, the RFID unit 120 can be configured to receive RFID information or codes corresponding to products positioned in a retail sales environment when the audio content capture device 100a is close or proximate to such products. RFID information can form portions of metadata corresponding to captured audio content.
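  • The step above, in which RFID information received in the device's physical environment forms portions of metadata for captured audio content, can be sketched as follows. The dictionary shape and field names are assumptions for illustration only.

```python
# Sketch: folding RFID codes observed near the audio content capture
# device (e.g., product tags in a retail sales environment) into the
# metadata of a captured audio segment. Field names are illustrative.

def tag_audio_with_rfid(audio_meta: dict, rfid_codes: list) -> dict:
    """Return audio metadata augmented with nearby-product RFID codes."""
    tagged = dict(audio_meta)          # leave the original metadata intact
    tagged["nearby_rfid"] = sorted(set(rfid_codes))  # de-duplicate repeats
    return tagged

meta = tag_audio_with_rfid({"file": "clip-003.wav"}, ["4012A", "4012A", "5530B"])
```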
  • FIG. 3B is a schematic illustration of a representative docking station 160 configured for communication or coupling with at least one audio content capture device 100a in accordance with an embodiment of the disclosure.
  • a docking station 160 can operate in association with or under the control of a computer system (e.g., a desktop or laptop computer), or as a substantially independent device.
  • the docking station 160 includes a number of audio content capture device interfaces, each of which is configured for receiving or mating with an audio content capture device 100a to facilitate the transfer of captured audio content from an audio content capture device 100a to the docking station 160 and/or an external network destination such as a content reception and/or decoding system 200.
  • Audio content capture device interfaces can additionally be configured for the transfer of power signals to an audio content capture device's power source 110.
  • the docking station 160 further includes a processing unit, a memory, and a network interface unit configured to facilitate or enable the reception of captured audio content from audio content capture devices 100a, and the transfer of such captured audio content to one or more network destinations.
  • FIG. 3C is a block diagram illustrating a set of representative docking station functional modules in accordance with an embodiment of the disclosure, where such functional modules can correspond to or include program instruction sets.
  • docking station functional modules include an audio content loader 170; a batch transfer module 175; a device administration manager 180; and a device environment manager 185.
  • the audio content loader 170 is configurable for receiving or retrieving captured audio content files from a set of audio content capture devices 100a, and storing such captured audio files in the docking station memory.
  • the batch transfer module 175 is configured for communicating or transferring captured audio files within the docking station memory to one or more content reception and/or decoding systems 200, and/or other destinations such as a captured speech library or database.
  • the device administration manager 180 is configured for facilitating administrative operations, such as establishing audio content capture device 100a parameters that can include a device carrier or wearer identifier or a captured voice profile of the device carrier or wearer that can be used for device carrier or wearer authentication purposes.
  • the device environment manager 185 provides an interface by which an administrator can establish or adjust environmental settings associated with one or more audio content capture devices 100a, such that the audio content capture device(s) 100a can adequately filter participant speech from other types of environmental sounds such as background noise.
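  • The batch transfer operation performed by the batch transfer module 175 described above can be sketched as follows. The file names, the in-memory queue, and the stand-in upload call are invented for this sketch; an actual docking station 160 would transfer files over its network interface unit.

```python
# Sketch of a docking-station batch transfer pass: captured audio files
# accumulated from docked devices are handed off, one by one, toward a
# content reception and/or decoding system 200. The transport call is a
# stand-in assumption, not a real network API.

captured_files = ["dev01-shift1.wav", "dev02-shift1.wav"]
sent_log = []

def send_to_reception(filename: str) -> None:
    # stand-in for a network upload performed by the network interface unit
    sent_log.append(filename)

def batch_transfer(queue: list) -> int:
    """Transfer every queued capture file; return the number sent."""
    count = 0
    while queue:
        send_to_reception(queue.pop(0))
        count += 1
    return count

sent = batch_transfer(captured_files)
```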
  • An audio content capture device 100a can include or incorporate various types of structural and/or functional aspects in addition or as an alternative to those described above.
  • an audio content capture device 100a can include a digital camera configured for capturing and storing visual content, which can serve as information content to be analyzed, or metadata associated with audio content.
  • an audio content capture device 100a can periodically initiate wireless transfer of captured information content to a docking station 160.
  • an audio content capture device 100a can be powered by wireless power transfer techniques, or an audio content capture device 100a can be (re)charged by way of a powermat.
  • FIG. 4A is an illustration of a representative restaurant or food service environment in which a server or staff member (e.g., a waitress) 20a wears an audio capture device 100a that is configured to capture audio signals either continuously or when in the presence of a customer 20b.
  • An individual wearing or carrying an audio content capture device 100a as well as one or more other individuals with whom communication takes place proximate to the audio content capture device 100a can be considered as a participant.
  • the waitress 20a and the customer 20b can each be participants, and the audio content capture device 100a worn by the waitress 20a can capture verbal communication between the waitress 20a and her customer(s) (e.g., during each interaction between the waitress and her customer(s) during a work shift), thereby facilitating subsequent processing and/or analysis of such verbal communication.
  • FIG. 4B is an illustration of a representative medical environment in which a medical professional such as a doctor 20c wears an audio content capture device 100a, which can capture audio signals corresponding to the medical professional's interactions with patients and/or colleagues.
  • FIG. 4C is an illustration of a representative law enforcement or security environment in which a law enforcement officer or security personnel 20e wears an audio content capture device 100a, which can capture audio signals corresponding to the officer's interactions with members of the public and/or colleagues.
  • Portable or wearable audio content capture devices 100a and one or more docking stations 160 can be deployed in a wide variety of other types of environments, including but not limited to representative environments such as a retail sales environment in which one or more sales associates interact with customers or potential customers; a hotel environment in which hotel personnel interact with hotel guests; a compliance environment such as a finance or banking environment in which a financial professional interacts with one or more individuals; an assessment environment, such as an insurance assessment environment involving an insurance assessor (e.g., undertaking an automobile or home damage assessment); a counseling environment in which a therapist or counselor interacts with one or more patients (e.g., an individual or group counseling or behavioral therapy environment); and a corporate meeting or director environment such as a board meeting in which board members interact with each other and/or company personnel.
  • audio signals and possibly visual information corresponding to or associated with an individual / participant carrying or wearing the audio content capture device 100a and one or more participants with whom they interact can be captured or recorded.
  • Such audio signals and associated metadata can be transferred to a content reception and/or decoding system 200 and subjected to processing and/or analysis by a content processing and/or analysis system 500.
  • portable or wearable participant content sources 100 such as audio content capture devices 100a facilitate or enable the direct or immediate capture, collection, and/or reception of real-time, face-to-face, and/or interpersonal interactions, conversations / discussions, or engagements between participants, in essentially any type of environment for which processing or analysis (e.g., semantic analysis) of real-time, face-to-face, and/or interpersonal participant information content could be useful.
  • a portable or wearable participant content source 100 such as an audio capture device 100a can further facilitate the direct capture, collection, and/or reception of participant information content in a real-time manner that is independent or exclusive of, or prior to, a participant's communication of participant information content to one or more other individuals (e.g., a customer service agent or call center operator) after an interaction, conversation, or engagement with another participant.
  • a portable or wearable participant content source 100 facilitates the capture, collection, and/or reception of participant information content directly from the participant(s) involved in the generation of participant information content, as the generation of the participant information content occurs.
  • systems in accordance with embodiments of the present disclosure can process or analyze one or more participants' real-time, "first-hand," and/or interpersonal experiences or accounts relating to an event, situation, product, service, transaction, or client, thereby reducing, minimizing, or eliminating a likelihood of misinterpreting situational, contextual, or semantic aspects of participant information content or generating inaccurate or erroneous information content analysis results, for instance, that can arise from processing or analyzing participant information content corresponding to separate, non-face-to-face, "second-hand," "third-hand," or other less direct or indirect accounts of an event or situation.
  • a participant content source 100 can generate a participant content file in association with capturing or recording participant information content.
  • Participant content sources 100 can provide, communicate, or transfer one or more participant content files to one or more content reception and/or decoding systems 200.
  • a participant content file includes or references content signals or data that represents or forms portions of participant generated information content itself; plus corresponding metadata that establishes a set of associations or relationships between the content signals or data and one or both of (a) a content generation context such as an event, situation, product, service, transaction, and/or environment; and (b) one or more clients.
  • Content signals or data can include audio and/or visual signals or data (e.g., video, text, and/or graphical data).
  • Metadata can include one or more of a filename; a content creation, reception, and/or communication time and/or date; one or more client identifiers (e.g., a business location, or a staff member identifier or name); one or more participant identifiers such as a participant name, a participant address, a telephone number, and an email address; a participant content source identifier (e.g., a device type or identifier); and other information.
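  • The participant content file structure described above — content signals or data plus metadata relating them to a content generation context and one or more clients — can be sketched as follows. The field names follow the metadata examples above, but the exact layout is an assumption of this sketch.

```python
# Sketch of a participant content file: a content payload together with
# metadata tying it to a client and a participant. Field names and the
# fixed example timestamp are illustrative assumptions.

import datetime

def build_participant_content_file(content: bytes, client_id: str,
                                   participant_id: str,
                                   source_device: str) -> dict:
    """Bundle captured content with metadata for transfer to a
    content reception and/or decoding system 200."""
    return {
        "content": content,  # audio, audio / visual, or textual signals
        "metadata": {
            "filename": f"{client_id}-{participant_id}.dat",
            "created": datetime.datetime(2012, 12, 14, 18, 30).isoformat(),
            "client_id": client_id,
            "participant_id": participant_id,
            "source_device": source_device,
        },
    }

pcf = build_participant_content_file(b"audio-bytes", "ACME-042",
                                     "p-17", "mobile-phone")
```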
  • Each of a content reception / decoding system 200, a content auto-retrieval system 300, and a content processing / analysis system 500 can correspond to or include one or more physical and/or virtual computer systems, apparatuses, and/or devices providing access to or having dedicated or shared processing, network communication, data storage, and memory resources.
  • One or more of such systems can correspond to a server, a server farm, or a set of distributed computing resources such as cloud computing resources.
  • each of such systems, apparatuses, and/or devices can utilize, correspond to, provide access to, and/or include memory-resident or device-resident program instruction sets or software, which are executable by way of processing resources to facilitate, manage, control, or perform content reception, retrieval, processing, and/or analysis processes or services in accordance with particular embodiments of the present disclosure.
  • Representative manners in which particular program instruction sets, software modules, and/or computing layers corresponding to content reception / decoding systems 200, content auto-retrieval systems 300, and content processing / analysis systems 500 in accordance with certain embodiments of the present disclosure can be configured to cooperatively facilitate, manage, control, provide, or perform such processes or services are described in detail hereafter.
  • FIG. 5 is a schematic illustration of portions of a content reception and/or decoding system 200 in accordance with an embodiment of the present disclosure.
  • the content reception and/or decoding system 200 is configured to receive and/or retrieve information content from a number of sources, devices, and/or locations, and perform content decoding operations including speech-to-text conversion operations in which speech is initially decoded by way of matching and converting tone impulses to one or more of terms / words, phrases, portions of sentences, and sentences.
  • the content reception and/or decoding system 200 can include a content reception and transfer manager 210, a speech decoder 220, a set of acoustic and linguistic models 230, a scoring and pruning module 240, and one or more local databases 290.
  • Each of the content reception and transfer manager 210, the speech decoder 220, the acoustic and linguistic model(s) 230, and the scoring and pruning module 240 can correspond to or include one or more program instruction sets.
  • the content reception and transfer manager 210 oversees information content and decoded information content communication operations, and can serve as (a) an information content recipient or destination with respect to participant content sources 100 and one or more content auto-retrieval systems 300; and (b) a decoded information content source or origin with respect to a content processing / analysis system 500 and/or a content analysis database 600.
  • the content reception and transfer manager 210 facilitates, coordinates, oversees, or directs the reception of participant content files; the transfer of information content such as audio content contained therein to an appropriate content decoding, interpretation, or recognition engine, such as the speech decoder 220; and the transfer of decoded participant content files to local and/or remote databases 290, 600.
  • Any given decoded participant content file can include textual data corresponding to decoded audio content, and particular content file metadata.
  • the speech decoder 220 can include or be a speech decoding and/or recognition engine, such as a Weighted Finite State Transducer (WFST) or other type of speech decoding engine, which in association with the scoring and pruning module 240 utilizes the set of acoustic and linguistic models 230 to identify decoded speech results with a highest confidence level and output corresponding textual data in a manner understood by one of ordinary skill in the relevant art.
  • Decoded speech results can include textual data that forms portions of a decoded participant content file.
  • the content reception and/or decoding system 200 is configured to manage or perform operations including the conversion of captured participant speech content and/or acquired auxiliary speech content to text by way of speech-to-text (STT) processes or operations.
  • FIG. 6 is a schematic illustration of portions of a content auto-retrieval system 300 in accordance with an embodiment of the present disclosure.
  • a content auto-retrieval system 300 includes a content acquisition and transfer manager 310, a set of web crawlers or spiders 320, a services retrieval module 330, and a structured data retrieval module 340, each of which can correspond to or include one or more program instruction sets.
  • the content auto-retrieval system 300 can further include one or more local databases 390.
  • the content acquisition and transfer manager 310 manages or oversees the automatic retrieval or receipt of auxiliary information content from one or more sources, and the communication or distribution of retrieved auxiliary information content to one or more destinations.
  • the content acquisition and transfer manager 310 can communicate with a content analysis database 600 to determine for one or more client use cases particular types and/or sources of auxiliary information content to be retrieved.
  • the content acquisition and transfer manager 310 can correspondingly manage, coordinate, define, or select content retrieval processes or operations performed by the web crawler(s) / spider(s) 320, the services retrieval module 330, and the structured data retrieval module 340 to facilitate or enable the acquisition, retrieval, or receipt of auxiliary information content in a manner that depends upon any given client use case under consideration.
  • the content acquisition and transfer manager 310 can issue content acquisition instructions to the web crawler(s) / spider(s) 320, the services retrieval module 330, and the structured data retrieval module 340.
  • Such instructions can include (a) a client use case reference or identifier that forms a portion of metadata corresponding to auxiliary information content; and (b) a reference to an auxiliary information content location or address (e.g., a uniform resource locator (URL)) or another type of information (e.g., a login and/or password) that facilitates or enables access to auxiliary information content.
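  • A content acquisition instruction of the kind described above can be sketched as a small structure carrying the use-case reference plus a location or credentials. The key names are assumptions for this sketch.

```python
# Sketch of a content acquisition instruction issued by the content
# acquisition and transfer manager 310: a client use case reference
# plus a location (and, optionally, access credentials). Key names
# are illustrative assumptions.

def make_acquisition_instruction(use_case_id, url, credentials=None):
    """Build an instruction for a crawler / retrieval module."""
    instruction = {"use_case": use_case_id, "location": url}
    if credentials:
        instruction["credentials"] = credentials  # e.g., login / password
    return instruction

instr = make_acquisition_instruction("uc-9", "http://example.com/reviews")
```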
  • the web crawler(s) / spider(s) 320, the services retrieval module 330, and the structured data retrieval module 340 can correspondingly interpret or decode an acquisition instruction, correspondingly access, acquire, retrieve, or request auxiliary information content, and store such auxiliary information content as portions of an auxiliary content file within a local database 390. Additional metadata and/or predefined objects such as stored procedures or user access or authentication rights corresponding to acquired auxiliary information content can be inherited from auxiliary information content itself.
  • the content acquisition and transfer manager 310 can transfer one or more auxiliary content files to a content processing / analysis system 500, a content analysis database 600, and/or a content reception and/or decoding system 200.
  • the content acquisition and transfer manager 310 can communicate auxiliary content files to particular destinations based upon the type(s) of auxiliary information content within such files.
  • the content acquisition and transfer manager 310 can communicate a raw, original, unstructured, or semi-structured auxiliary content file that includes auxiliary text data and any associated metadata to a content processing / analysis system 500 and/or a content analysis database 600.
  • the content acquisition and transfer manager 310 can communicate an auxiliary content file that includes auxiliary audio data and any associated metadata to a content reception and/or decoding system 200, which can subsequently perform speech-to-text operations upon the auxiliary audio data to generate a decoded auxiliary content file.
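The type-based routing of auxiliary content files described above could be sketched as follows; the string destination tags are hypothetical stand-ins for the content reception and/or decoding system 200, the content processing / analysis system 500, and the content analysis database 600:

```python
def route_content_file(content_file: dict) -> list:
    """Pick destination system(s) for an auxiliary content file based on
    the type of content it carries."""
    kind = content_file.get("type")
    if kind == "audio":
        # audio needs speech-to-text decoding before analysis
        return ["content_reception_decoding_200"]
    if kind in ("text", "semi_structured", "unstructured"):
        # textual content can go straight to processing/analysis and storage
        return ["content_processing_500", "content_analysis_db_600"]
    return []

print(route_content_file({"type": "audio"}))
```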
  • FIG. 7 is a schematic illustration of portions of a content processing and/or analysis system 500 in accordance with an embodiment of the present disclosure.
  • a content processing and/or analysis system 500 includes at least some of a communication module 510; an indexing module 515; a tokenization / parsing module 520; a natural language processing (NLP) and/or expression builder 525; a data mining module 530; a categorization module 535; a relevancing module 540; a knowledge extraction and/or discovery module 545; a set of use case processing agents 550; a set of use case learning modules 555; and a structured data conversion module 560, each of which can correspond to or include one or more program instruction sets.
  • the content processing and/or analysis system 500 additionally includes one or more local databases and/or local caches 590.
  • the communication module 510 manages or coordinates communication between the content processing / analysis system 500 and other systems, apparatuses, or devices. In several embodiments, the communication module manages or coordinates (a) the reception of decoded participant content files and auxiliary content files, which can include decoded auxiliary audio content and/or original text data; (b) communication between the content processing / analysis system 500 and portions of a content analysis database 600; and (c) communication between the content processing / analysis system 500 and a client input / output manager 700.
  • the indexing module 515 can index auxiliary information content, such as original textual data received from a content auto-retrieval system 300.
  • the tokenization / parsing module 520 can organize words within received content files (e.g., which include decoded participant content, decoded auxiliary content, and/or original auxiliary content) in accordance with known word classifications such as noun, name, place, and/or other classifications to generate semi-structured data.
  • the natural language processing and/or expression builder 525 can then query the semi-structured data with one or more anchor query words, phrases, and/or expressions specified by an anchor set of query words, phrases, and/or expressions for a client use case under consideration.
  • the natural language processing and/or expression builder 525 can additionally augment or expand the query using related or modified words, phrases, and/or expressions by way of natural language processing techniques to reflect or collect more natural forms of data input, in a manner understood by one of ordinary skill in the relevant art.
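A minimal sketch of such anchor-query augmentation is shown below; the relation table and confidence weights are invented for illustration, whereas a real system would consult the content analysis database 600 or an external linguistic resource:

```python
# Hypothetical synonym/relation table standing in for a linguistic resource.
RELATED = {
    "guarantee": ["assure", "promise", "warranty"],
    "deadline": ["due date", "delivery date"],
}

def expand_query(anchor_terms):
    """Expand an anchor set of query terms with related words, each
    carrying a confidence weight (1.0 for an anchor term itself, a
    lower, illustrative weight for an expansion)."""
    expanded = {}
    for term in anchor_terms:
        expanded[term] = 1.0
        for related in RELATED.get(term, []):
            expanded.setdefault(related, 0.6)
    return expanded

print(expand_query(["guarantee"]))
```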
  • Natural language processing widens a range or circle of permissible query or search results by including, with varying degrees of confidence, words, phrases, and/or expressions that are grammatically or contextually (e.g., with respect to logical context) related to the anchor word(s), phrase(s), or expression(s).
  • The natural language processor / expression builder 525 can augment or expand words, phrases, and/or expressions in multiple dimensions or directions, such as linguistic, acoustic / phonetic, and temporal dimensions.
  • an anchor expression can be augmented or expanded by linguistic extension of a search phrase.
  • an anchor expression can be augmented or expanded based upon one or more manners in which a phrase was spoken or emphasized, for instance, in accordance with a set of acoustic and/or phonetic measures that can indicate whether the phrase was spoken actively or passively or with positive or negative intonations, such that the search can locate relevant data or differentiate words to detect concepts such as irony or sentiment polarity.
  • an expression can be augmented or expanded based upon a temporal context corresponding to the phrase, such as how often a phrase under consideration was uttered within a given period of time.
  • the content processing / analysis system 500 can thus identify or detect concepts that are of particular importance to a participant, such as identical, similar, or analogous concepts that the participant repeats multiple times (e.g., within shorter time frames) by way of natural language processing, expression building, and associated query augmentation or expansion.
  • captured participant information content can include phrases such as "can you guarantee this?", "you really can guarantee this?", and "are you sure you can guarantee it?", possibly spoken within a short time interval; such phrases indicate persistence on the part of a participant seeking reassurance.
  • the system 500 can identify such spoken activity as highly important to the participant, which in this representative example indicates that the participant rates a client's ability to guarantee a service or product specification (e.g., corresponding to meeting a deadline) as very important.
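One simple way to surface this persistence signal, sketched here with an invented token-overlap similarity measure and illustrative thresholds, is to flag utterances that recur in near-duplicate form within a short time window:

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two utterances (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def persistent_concepts(utterances, window_s=60.0, sim=0.35, min_count=3):
    """Flag utterances whose near-duplicates (by token overlap) occur at
    least `min_count` times within `window_s` seconds -- the persistence
    signal described above. `utterances` is a list of
    (timestamp_seconds, text) pairs; thresholds are illustrative."""
    flagged = []
    for t0, text in utterances:
        similar = [u for (t, u) in utterances
                   if abs(t - t0) <= window_s and jaccard(text, u) >= sim]
        if len(similar) >= min_count:
            flagged.append(text)
    return flagged

convo = [(0.0, "can you guarantee this"),
         (20.0, "you really can guarantee this"),
         (45.0, "are you sure you can guarantee it")]
print(persistent_concepts(convo))
```

All three variants of the guarantee question fall within the window and overlap sufficiently, so each is flagged as part of a persistent concept.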
  • the expression builder 525 can access or reference the content analysis database 600; external or third party libraries or databases; its own local database or library 590 it builds over time; or services to convert expressions into augmented or wider relevant search expressions.
  • the expression builder can augment or extend a query to include words, phrases, or expressions corresponding to or including "dizzy spells" as these two patient states statistically occur together often or relatively often.
  • a source of such statistics can be participant information content captured over time, external databases, or a historical analysis of anchor terms input by or on behalf of one or more clients.
  • Results from the original and any augmented or expanded queries can be mined for patterns by the data mining module 530, and further grouped or categorized by the categorization module 535, such as by way of a statistical data mining and/or categorization technique (e.g., a naïve Bayes classifier or process).
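As a hedged illustration of the categorization step, a minimal naïve Bayes text classifier with Laplace smoothing (the labels and training snippets below are invented) could look like:

```python
from collections import Counter
from math import log

def train_nb(docs):
    """Train a minimal naive Bayes text classifier.
    `docs` is a list of (label, text) pairs."""
    priors, counts, vocab = Counter(), {}, set()
    for label, text in docs:
        priors[label] += 1
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        vocab.update(words)
    return priors, counts, vocab

def classify_nb(model, text):
    """Pick the label maximizing log P(label) + sum log P(word|label),
    with add-one (Laplace) smoothing for unseen words."""
    priors, counts, vocab = model
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = log(priors[label] / total)
        n = sum(counts[label].values())
        for w in text.lower().split():
            lp += log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("complaint", "the food was cold and late"),
        ("complaint", "service was slow and late"),
        ("praise", "the food was great and warm")]
model = train_nb(docs)
print(classify_nb(model, "late again"))
```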
  • the relevancing module 540 can re-assess or evaluate the relevance of the search results (e.g., as categorized by the categorization module 535) in accordance with particular circumstantial criteria or parameters.
  • Such criteria can include one or more of a role of a participant in a conversation; a history of anchor words, phrases, or expressions and corresponding queries or searches performed in a prior or recent time interval (e.g., corresponding to searches performed for or by a client over the past few days or weeks); recent client web browsing history; and the headings and/or bodies of recent client e-mails to identify words, phrases, expressions, or concepts that are currently most relevant or expected to be most relevant to a client under consideration.
  • client-related history information can be made available or accessible to a client input / output manager 700, which can store client-related history information within the content analysis database 600.
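The relevance re-assessment step might be sketched as a re-weighting pass over categorized results; the criteria, boost factors, and context field names below are illustrative assumptions, not taken from the disclosure:

```python
def rescore(results, context):
    """Re-weight categorized search results using circumstantial
    criteria: here, a boost for terms seen in the client's recent
    searches or e-mail subjects. `results` is a list of
    (text, base_score) pairs; weights are illustrative."""
    rescored = []
    for text, base in results:
        score = base
        words = set(text.lower().split())
        if words & context.get("recent_searches", set()):
            score *= 1.5  # strong boost: matches recent client searches
        if words & context.get("email_terms", set()):
            score *= 1.2  # weaker boost: matches recent e-mail subjects
        rescored.append((text, score))
    return sorted(rescored, key=lambda r: r[1], reverse=True)

context = {"recent_searches": {"deadline"}, "email_terms": {"refund"}}
results = [("refund request pending", 0.5), ("missed deadline complaint", 0.5)]
print(rescore(results, context))
```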
  • the knowledge extraction and/or discovery module 545 further analyzes search results to identify subtle, hidden, or previously unrecognized knowledge. For instance, a knowledge extraction and/or discovery module 545 can create or build a relational schema based upon search results.
  • the set of use case processing agents 550 can further refine and reprocess search results to arrive at a generally, reasonably, or fairly accurate set of information content analysis results.
  • the set of use case processing agents 550 analyzes or refines search results in accordance with client workflow definitions or parameters corresponding to business events, scenarios, situations, or transaction types.
  • the set of use case processing agents 550 can identify a high recurrence of a particular phrase in search results, and can fetch this phrase and reprocess it with additional or other attributes or parameters. For instance, if the word or phrase "revenues" is frequently identified, the system 500 can query using words related to revenues.
  • the system 500 can establish a word pair such as "revenue increase” and investigate word patterns found under a search based upon "revenue increase.”
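The recurrence-driven reprocessing described above (finding a frequent term such as "revenues" and forming word pairs like "revenue increase" for follow-up queries) could be sketched as:

```python
from collections import Counter

def frequent_terms(texts, min_count=2):
    """Find terms that recur across search result texts."""
    counts = Counter(w for t in texts for w in t.lower().split())
    return [w for w, c in counts.items() if c >= min_count]

def candidate_pairs(texts, term):
    """Build word pairs around a frequent term (e.g. 'revenue increase')
    by collecting its immediate right-hand neighbours, most common first."""
    pairs = Counter()
    for t in texts:
        words = t.lower().split()
        for i, w in enumerate(words[:-1]):
            if w == term:
                pairs[f"{term} {words[i + 1]}"] += 1
    return pairs.most_common()

texts = ["revenue increase expected next quarter",
         "they discussed revenue increase targets",
         "revenue growth was flat"]
print(frequent_terms(texts))
print(candidate_pairs(texts, "revenue"))
```

The top-ranked pair can then seed a follow-up query, mirroring the "revenue increase" investigation described in the text.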
  • the content analysis database 600 can include a scenario or vertical specific library that can prioritize specific phrases for a client use case under consideration.
  • the use case learning module(s) 555 can be configured to seed priorities of phrases common to any given use case.
  • Information content analysis results can be locally stored in one or more local caches or databases 590, for instance, by the structured data conversion module 560.
  • FIG. 8 is a schematic illustration of portions of a client input / output manager 700 in accordance with an embodiment of the present disclosure.
  • a client input / output manager 700 includes one or more of a client use case definition manager 710; an information content analysis results dissemination manager 720; an application / GUI builder 730; a set of webservers 740; a messaging or notification manager 750; and a set of local databases 790.
  • the client use case definition manager 710 provides a set of user interfaces that facilitate or enable the definition of one or more use cases corresponding to a given client.
  • the results dissemination manager 720 provides at least one user interface that can be accessed by or on behalf of a client, and which facilitates the selection or definition of particular manners in which information content analysis results and/or notifications or alerts corresponding thereto are to be communicated to a client under consideration, such as by way of web publication, storage on a client database, association with a client specific application program, and/or issuance of a message or alert to a client device such as a mobile telephone.
  • the application / GUI builder 730 provides a set of user interfaces by which particular information content analysis results can be associated with or provided to an application program or user interface that can be customized or tailored in accordance with client requirements.
  • the application / GUI builder 730 can access or utilize a webservices architecture such as an API library to facilitate the creation of an appropriate client specific application program or user interface.
  • the webserver(s) 740 can be configured to distribute or publish client relevant content over the Internet in one or more manners.
  • the messaging manager 750 can send portions of information content result sets, statistics or summaries corresponding thereto, and/or messages, notifications, or alerts that indicate certain types of information content analysis result conditions (e.g., which can be defined in accordance with client-specified alert triggers associated with a client or particular client use cases) to one or more client systems or devices.
  • the messaging manager 750 can include an SMS and/or an e-mail gateway.
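A minimal sketch of client-specified alert triggers of the kind mentioned above, with an invented (metric, threshold, channel) trigger shape, might look like:

```python
def check_triggers(summary, triggers):
    """Evaluate client-specified alert triggers against an analysis
    result summary and return the notifications to dispatch. Each
    trigger is a (metric, threshold, channel) tuple; this shape is
    illustrative, not taken from the disclosure."""
    alerts = []
    for metric, threshold, channel in triggers:
        value = summary.get(metric, 0)
        if value >= threshold:
            alerts.append((channel, f"{metric}={value} reached threshold {threshold}"))
    return alerts

summary = {"negative_mentions": 12, "guarantee_requests": 3}
triggers = [("negative_mentions", 10, "sms"),
            ("guarantee_requests", 5, "email")]
print(check_triggers(summary, triggers))
```

Each returned (channel, message) pair would then be handed to the corresponding SMS or e-mail gateway within the messaging manager 750.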
  • FIG. 9 is a schematic illustration of portions of a client result destination 800 in accordance with an embodiment of the present disclosure.
  • a client result destination 800 can include one or more browsers 810, and/or particular message, notification, or alert reception services or processes corresponding to or executing on one or more types of client systems or devices.
  • a client result destination 800 can additionally or alternatively include a client database 890.
  • In addition or as an alternative to the foregoing description with reference to FIGs. 1A - 9, particular representative embodiments detailing structural and functional aspects of content reception and analysis systems are provided in Appendix A to this specification.
  • aspects of particular embodiments of the present disclosure address at least one aspect, problem, limitation, and/or disadvantage associated with existing systems and techniques for information content reception and analysis. While features, aspects, and/or advantages associated with certain embodiments have been described in the disclosure, other embodiments may also exhibit such features, aspects, and/or advantages, and not all embodiments need necessarily exhibit such features, aspects, and/or advantages to fall within the scope of the disclosure. It will be appreciated by a person of ordinary skill in the art that several of the above-disclosed systems, components, processes, or alternatives thereof, may be desirably combined into other different systems, components, processes, and/or applications. In addition, various modifications, alterations, and/or improvements may be made by a person of ordinary skill in the art to various embodiments that are disclosed, within the scope and spirit of the present disclosure.

Abstract

Aspects of the present disclosure are directed to an information content analysis architecture that can be configured for capturing, receiving, retrieving, or accessing information content (e.g., audio content) generated by one or more sets of participants by way of participant content sources, where such information content can be of interest to a client; selectively analyzing information content in accordance with one or more client use cases; generating information content analysis results, such as semantic analysis results, corresponding to each client use case under consideration; and providing information content analysis results to one or more clients in accordance with the client use cases under consideration. Participant content sources can include portable and/or wearable devices that are configured to capture from one or more individuals audio and/or other types of information content associated with an event, situation, product, service, transaction, or client / business under consideration. Participant content sources are configured to communicate captured information content to a content reception and decoding system, which is configured to communicate decoded information content to a content processing and/or analysis system such as a semantic analysis system.

Description

INFORMATION CONTENT RECEPTION AND ANALYSIS ARCHITECTURE
Technical Field
The present disclosure relates generally to systems and techniques for information content analysis, including semantic analysis.
Background
Understanding customers, clients, or end-users has always been considered the key to running a successful business. Customer-related information and feedback, such as verbal communication with customers or clients, is considered key to such understanding. However, many business owners and service providers struggle to obtain such information, and even after obtaining it, to utilize it.
Summary
According to an aspect of the present disclosure, there is provided a method for processing information content, including capturing the information content through a content source, saving the captured information content in a captured content memory, semantically processing the captured information content with a content processing system, identifying within the processed captured information content a predetermined analysis result condition, and generating and providing a corresponding summary of the identified analysis result condition, wherein the information content corresponds to any of a face-to-face interaction between at least two participants, and as an occurrence in response to an event identifier. Further, aspects of the present disclosure are directed to an information content analysis architecture that can be configured for capturing, receiving, retrieving, or accessing information content (e.g., audio content) generated by one or more sets of participants by way of participant content sources, where such information content can be of interest to a client; selectively analyzing information content in accordance with one or more client use cases; generating information content analysis results, such as semantic analysis results, corresponding to each client use case under consideration; and providing information content analysis results to one or more clients in accordance with the client use cases under consideration. In a number of embodiments, participant content sources can include portable and/or wearable devices that are configured to capture from one or more individuals audio and/or other types of information content associated with an event, situation, product, service, transaction, or client / business under consideration. 
Participant content sources are configured to communicate captured information content to a content reception and decoding system, which is configured to communicate decoded information content to a content processing and/or analysis system such as a semantic analysis system.
Brief Description of the Drawings
FIG. 1A is a schematic illustration of a single-client or multi-client information content analytics architecture according to an embodiment of the disclosure. FIG. 1B is a schematic illustration of the architecture of FIG. 1A, in which participant content sources include a set of participant audio content sources; a set of participant audio / visual content sources; and a set of participant textual content sources.
FIG. 2A is a schematic illustration of a printable or distributable physical medium such as a transaction record, sales receipt, or other type of document or tangible item (e.g., an external surface of a product package or container, such as a food or beverage container) that carries or includes an event identifier.
FIG. 2B is an illustration of a representative user feedback interface by which a participant can generate or provide one or more types of participant information content corresponding to customer or user feedback.
FIG. 3A is a block diagram of a representative portable or wearable audio content capture device in accordance with an embodiment of the disclosure. FIG. 3B is a schematic illustration of a representative docking station configured for communication or coupling with at least one audio content capture device in accordance with an embodiment of the disclosure. FIG. 3C is a schematic illustration of representative docking station functional modules in accordance with an embodiment of the disclosure.
FIG. 4A is an illustration of a representative restaurant or food service environment in which a server or staff member (e.g., a waitress) wears an audio capture device that is configured to capture audio signals either continuously or when in the presence of a customer.
FIG. 4B is an illustration of a representative medical environment in which a medical professional such as a doctor wears an audio content capture device, which can capture audio signals corresponding to the medical professional's interactions with patients and/or colleagues.
FIG. 4C is an illustration of a representative law enforcement or security environment in which a law enforcement officer or security personnel wears an audio content capture device, which can capture audio signals corresponding to the officer's interactions with members of the public and/or colleagues.
FIG. 5 is a schematic illustration of portions of a content reception and/or decoding system in accordance with an embodiment of the present disclosure.
FIG. 6 is a schematic illustration of portions of a content auto-retrieval system in accordance with an embodiment of the present disclosure.
FIG. 7 is a schematic illustration of portions of a content processing and/or analysis system in accordance with an embodiment of the present disclosure. FIG. 8 is a schematic illustration of portions of a client input / output manager in accordance with an embodiment of the present disclosure.
FIG. 9 is a schematic illustration of portions of a client result destination in accordance with an embodiment of the present disclosure.
Detailed Description
In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure.
Furthermore, in various embodiments the disclosure provides numerous advantages over the prior art. However, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, any reference to "the invention" shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
According to an aspect of the present disclosure, there is provided a method for processing information content, including capturing the information content through a content source, saving the captured information content in a captured content memory, semantically processing the captured information content with a content processing system, identifying within the processed captured information content a predetermined analysis result condition, and generating and providing a corresponding summary of the identified analysis result condition, wherein the information content corresponds to any of a face-to-face interaction between at least two participants, and as an occurrence in response to an event identifier.
In an embodiment, the face-to-face interaction between at least two participants occurs in any one of a law enforcement environment, a security environment, a retail sales environment, a compliance environment, a hotel environment, a finance environment, an assessment environment, a corporate environment, and a counseling environment.
In an embodiment, the event identifier is any one of a machine readable image, an email, a Short Message Service (SMS) message, and a Contactless Near Field Communication activation.
In an embodiment, the information content is any one of an audio content, a visual content, and a textual content. In a further embodiment, the information content is an audio content, and where the content is decoded through a speech-to-text conversion.
In an embodiment, acoustic and linguistic models are utilized during the speech-to-text conversion to obtain decoded speech results of high accuracy. In an embodiment, the method further includes associating content metadata with the captured information content and saving the content metadata with the captured information content in the captured content memory.
In an embodiment, semantically processing the captured information content includes matching and organizing word data in accordance with predetermined classification to obtain semi-structured data.
In an embodiment, semantically processing the captured information content further includes running the semi-structured data through a natural language processing builder.
In an embodiment, semantically processing the captured information content further includes data mining and categorizing the semi-structured data. In an embodiment, the predetermined analysis result condition is associated with multiple occurrences of any of a predetermined word, phrase, or expression within a period of time.
In an embodiment, the information content is captured through an auxiliary content source, and wherein the auxiliary content source corresponds to any one of an Internet website, an Internet blog, a social media website, a virtual community, and a professional data library.
In an embodiment, the information content is captured with a content acquisition manager operating a web crawler module in the auxiliary content source.
In an embodiment, the method further includes saving a plurality of captured information content with content metadata associated with a client in a captured content memory, determining a client-specific analysis result condition with a client input/output manager, semantically processing captured information content based on the client-associated content metadata and the client-specific analysis result condition, generating a corresponding summary of the identified analysis, and providing the corresponding summary with the client input/output manager. In another aspect of the disclosure, there is provided a system for processing information content, including a content source for capturing information content, a captured content memory for saving the captured information content, and a content processing system for semantically processing the captured information content, identifying within the processed captured information content a predetermined analysis result condition; and generating and providing a corresponding summary of the identified analysis result condition, wherein the information content corresponds to any of a face-to-face interaction between at least two participants, and as an occurrence in response to an event identifier. In an embodiment, the system further includes a participant content source for capturing a participant information content, where the participant content source is any one of a mobile telephone, a personal computing device, a tablet computer, a portable speech capture device, and an audio/visual recording device.
In an embodiment, the system further includes a docking station for receiving the participant content source, the docking station including an audio content loader for receiving captured information content from the docked participant content source, and a transfer module for transferring the received captured information content to a captured content memory. In an embodiment, the system further includes a client input/output manager, including a case definition manager and a results dissemination manager, the client input/output manager configured for communication with the content processing system and at least one client result analysis destination. In an embodiment, the system further includes a content auto-retrieval system for automated retrieval of auxiliary information content from an auxiliary content source.
In an embodiment, the content auto-retrieval system further includes a content acquisition manager operating a web crawler in the auxiliary content source.
In the present disclosure, the depiction of a given element or consideration or use of a particular element number in a particular FIG. or a reference thereto in corresponding descriptive material can encompass the same, an equivalent, or an analogous element or element number identified in another FIG. or descriptive material associated therewith.
As used herein, the term "set" corresponds to or is defined as a non-empty finite organization of elements that mathematically exhibits a cardinality of at least 1 (i.e., a set as defined herein can correspond to a singlet or single element set, or a multiple element set), in accordance with known mathematical definitions (for instance, in a manner corresponding to that described in An Introduction to Mathematical Reasoning: Numbers, Sets, and Functions, "Chapter 11: Properties of Finite Sets" (e.g., as indicated on p. 140), by Peter J. Eccles, Cambridge University Press (1998)). In general, an element of a set can include or be a system, an apparatus, a device, a structure, a structural feature, an object, a process, a physical parameter, a signal, a data element, a value, or a person depending upon the type of set under consideration. As further detailed below, various embodiments in accordance with the present disclosure are configured to provide an information content processing and/or analysis architecture that facilitates the analysis of information content provided by or captured from participants, and the provision of information content analysis results to one or more clients for whom semantic and/or other aspects of such information content are of interest. An architecture as described herein includes, corresponds to, or defines a design or implementation model that encompasses hardware; software or program instruction sets; and/or a set of user interfaces. Particular information content analysis architectures described herein correspond to, encompass, include, or provide each of a hardware architecture, a software architecture, and a user interface architecture.
Information content as described herein includes one or more of (a) auditory, vocal, verbal, or speech content, information, signals, or data (hereafter auditory content); (b) visual content, information, signals, or data, which can correspond to text, logograms, pictographs, symbols, images, or video (hereafter visual content); and (c) metadata corresponding to such auditory and/or visual content. In certain situations, visual content itself can provide or be a type of metadata corresponding to auditory content. In general, information content can be categorized as (a) participant information content; or (b) auxiliary or adjunctive information content. An architecture in accordance with an embodiment of the present disclosure facilitates or effectuates the capture, collection, or retrieval of information content from one or more participants; the analysis of such information content; and the generation of corresponding information analysis results that can be provided to one or more clients. More particularly, an individual that directly generates information content of interest or potential interest to a client by way of interacting with one or more other individuals and/or a device is referred to herein as a participant. Participant information content includes information content that is directly generated by one or more individuals interacting with each other and/or a set of information content capture, communication, or recording devices within a particular type of communication environment, situation, scenario, domain, or context. Participant information content can be (a) captured, recorded, or received by a system, apparatus, or device; and (b) subsequently subjected to information content analysis.
Auxiliary information content includes structured and/or unstructured information content which can facilitate the analysis of participant information content. Auxiliary information content can be provided, accessed, retrieved, or downloaded by way of one or more external information or media content sources, repositories, libraries, databases, or services. Representative examples of auxiliary information content include information content corresponding to or available, retrievable, downloadable, transferable, or requestable from websites, internet forums, internet blogs, social media, virtual communities, industrial media (e.g., film, television, radio, or print media such as magazines), and/or professional or profession-related data sources or repositories corresponding to business, legal, medical, scientific, technical, or other types of information, literature, or documents. A set of individuals or an organization that can have an interest in semantic and/or other aspects of information content generated by one or more sets of participants is referred to herein as a client. That is, a client can be defined as an end-user of information content analysis results. More particularly, a client can be defined as a set of individuals or entities for which information content analysis results can be generated by way of capturing information content from one or more participants deemed to be relevant or potentially relevant to the client, and subjecting such captured participant information content to one or more types of processing and/or analysis, such as semantic analysis, data mining, and/or knowledge extraction or discovery. In various embodiments, information content analysis results can be generated based upon participant information content in view of one or more types of adjunctive information content.
Representative examples of clients include businesses and organizations; and representative examples of participants include customers and potential customers of such clients. Information content analysis results can correspond to situational or strategic intelligence data that can be useful to a given client. As further described below, architectures in accordance with several embodiments of the present disclosure selectively analyze participant and/or auxiliary information content on a client-centric, client-selective, or client-specific basis in accordance with one or more client use cases. In various embodiments, an information content reception and analysis system is selectively or dynamically configurable for analyzing participant and/or auxiliary information content in accordance with multiple client use cases corresponding to multiple distinguishable or distinct clients that are served by the system, for instance, in a seamless, computationally concurrent, virtually simultaneous, parallel, or sequential manner. Thus, a single information content reception and analysis system can generate information content analysis results for multiple distinct clients, where information content analysis results corresponding to any given client are generated in accordance with at least one client use case corresponding to this client. Such a single system can be configured to serve a wide variety of client types (e.g., in a computationally concurrent manner), and can be configured to provide one or more types of customer feedback processing and/or analysis services for any given client. 
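The client-selective analysis described above can be illustrated with a minimal sketch in which a single system groups captured content items under the client identifier carried in their metadata, so that each client's use case can be applied downstream. All field names here (e.g., "client_id") are assumptions for illustration and do not appear in the disclosure.

```python
# Hedged sketch of client-selective routing within a single analysis system:
# each captured item carries metadata identifying the client it pertains to,
# and items are grouped per client before per-client analysis is applied.
# Field names ("metadata", "client_id") are illustrative assumptions.

def route_by_client(captured_items, client_ids):
    """Group captured content items by the client identifier in their metadata."""
    per_client = {cid: [] for cid in client_ids}
    for item in captured_items:
        cid = item["metadata"].get("client_id")
        if cid in per_client:
            per_client[cid].append(item)
    return per_client

captured = [
    {"content": "great service", "metadata": {"client_id": "store-A"}},
    {"content": "menu was confusing", "metadata": {"client_id": "store-B"}},
]
routed = route_by_client(captured, ["store-A", "store-B"])
print(len(routed["store-A"]), len(routed["store-B"]))  # 1 1
```

This grouping is one way a system could serve multiple distinct clients concurrently while keeping each client's results generated under that client's own use case.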
In certain embodiments, an information content reception and analysis system is configurable for analyzing participant and/or auxiliary information content with respect to one or more client use cases corresponding to a single client that is served by the system (e.g., an information content reception and analysis system can be dedicated to serving a single client, providing particular types of information content analysis services to the single client).
Multiple embodiments of the present disclosure are directed to an information content processing and/or analysis architecture that can be flexibly or selectively configured for (a) capturing and/or accessing (e.g., receiving or retrieving) participant information content by way of one or multiple participant information content sources; (b) resolving, decoding, and/or parsing captured participant information content, and possibly auxiliary information content; (c) processing and/or analyzing resolved participant information content, possibly in view of one or more types of auxiliary information content; (d) generating information content processing and/or analysis results, signals, or data, including semantic analysis results, in a manner that is selective, adaptive, configurable, or customizable with respect to a set of client use cases under consideration; and (e) storing, outputting, presenting, distributing, and/or publishing such processing and/or analysis results in a manner that is selective, adaptive, configurable, or customizable with respect to the set of client use cases under consideration.
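The five stages (a) through (e) above can be sketched as a simple processing pipeline applied to one content item. Every function below is a stand-in for the corresponding stage, not an implementation from the disclosure; the "analysis" step is reduced to trivial tokenization purely to show the data flow.

```python
# Illustrative pipeline for stages (a)-(e): capture, resolve/parse, analyze,
# generate results, and store/publish. Each function is a hypothetical stub.

def capture(raw):
    return {"raw": raw}                                   # (a) capture content

def resolve(item):
    item["text"] = item["raw"].strip().lower()            # (b) resolve/parse
    return item

def analyze(item):
    item["tokens"] = item["text"].split()                 # (c) process/analyze
    return item

def results(item):
    return {"n_tokens": len(item["tokens"])}              # (d) generate results

def publish(res, sink):
    sink.append(res)                                      # (e) store/publish
    return sink

sink = []
publish(results(analyze(resolve(capture("  Great SERVICE today  ")))), sink)
print(sink[0]["n_tokens"])  # 3
```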
Various types of communication contexts involving representative examples of participants and clients, and corresponding representative manners of generating information content analysis results in accordance with client use cases, are described in detail below.
Architectural Overview
FIG. 1A is a schematic illustration of a single-client or multi-client information content analytics architecture 10 according to an embodiment of the disclosure. In various embodiments, such an architecture 10 includes a number of participant content sources 100; at least one content reception / decoding system 200; at least one content processing and/or analysis system 500 and an associated content processing and/or analysis database 600; and at least one client analysis result destination 800. In an embodiment configured for generating information content analysis results for multiple distinguishable or distinct clients, the content processing and/or analysis system 500 is configured for performing client-selective or client-adaptive content processing and/or analysis operations. Each participant content source 100 includes a device that is configurable for providing, acquiring, capturing, or recording participant information content. Depending upon embodiment details, a participant content source 100 can include a device configured for capturing one or more of audio, visual, and textual content. As further detailed below, representative examples of participant content sources 100 include a telephony device, such as a mobile telephone; a personal or portable computing device such as a tablet computer; an audio/visual recording device; and a portable or wearable speech capture device that can capture or record a multi-party conversation independent or exclusive of a telephony system or device.
In some embodiments, one or more participant content sources 100 are further configurable for making use of, accessing, or capturing an event, condition, transaction, or feedback identifier 102 that facilitates the establishment of an association between particular participant information content and a given client, client related event, or client use case. More particularly, by way of an event identifier 102, a given participant's information content such as verbal feedback relating to an event such as a business transaction can be (a) associated or identified with a specific client corresponding to the event, such as a business, franchise, or store at which the business transaction occurred; and possibly (b) directed to or toward a content reception / decoding system 200 that is associated with or assigned to the client, or which is configured for generating information content analysis results corresponding to the client. An event identifier 102 can correspond to, provide, or be used to generate metadata associated with participant information content under consideration. Representative types of event identifiers 102 and participant content sources 100 that can utilize, access, or capture such event identifiers 102 are further described in detail below with reference to FIGs. 2A - 2D. Each participant content source 100 is further configured for communicating participant information content to at least one content reception / decoding system 200 that is configured for resolving, decoding, and/or parsing participant information content and possibly auxiliary information content. Any given content reception / decoding system 200 is configured for communication with at least one content processing and/or analysis system 500 and/or an associated content analysis database 600.
The content processing and/or analysis system 500 is configured for processing and/or analyzing resolved, decoded, or parsed participant information content, possibly in association with processing and/or analysis of resolved, decoded, or parsed auxiliary information content; and generating information content processing and/or analysis results. In various embodiments, information analysis results include semantic analysis results. The content analysis database 600 includes a number of databases that facilitate information content analysis, including one or more resolved content databases 610; expression databases 612; at least one client configuration database 620; at least one client use case database 622; one or more analysis indexes 630; and at least one analysis results database 632. Manners in which particular databases are utilized in association with information content analysis processes are described in detail below.
Some embodiments in accordance with the present disclosure additionally include a content auto-retrieval system 300, which is configured for performing content auto-retrieval operations that involve automatically or semi-automatically accessing, retrieving, downloading, or requesting auxiliary information content corresponding to one or more auxiliary content sources, repositories, and/or archives 400, and which is further configured for communicating with one or more content reception / decoding systems 200 as well as particular content processing and/or analysis systems 500 and/or one or more associated content analysis databases 600. Auxiliary content sources, repositories, and/or archives 400 can include physical and/or virtual systems (e.g., computer servers), apparatuses, or devices upon which auxiliary information content resides, for instance, corresponding to current or archived information or media associated with websites, internet blogs, internet forums, social media (e.g., Facebook), information uploading, hosting, and/or sharing services (e.g., YouTube), virtual communities, industrial media (e.g., film, television, radio, or print media such as magazines), and/or professional data sources or libraries for business literature, legal documents, scientific or technical literature, or other information. In some embodiments, content auto-retrieval operations can be performed in a client use case dependent manner. For instance, content auto-retrieval operations can involve the automatic retrieval of information content from a first set of auxiliary content sources 400 in accordance with a first client use case corresponding to a first client (e.g., a first business or organization), and a second set of auxiliary content sources 400 in accordance with a second client use case distinct from the first client use case and corresponding to a second client (e.g., a second business or organization) that is distinct from the first client.
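Client-use-case-dependent auto-retrieval can be reduced to a small lookup: each client's use case lists the auxiliary sources to poll. The mapping, client identifiers, and URLs below are assumptions for illustration only.

```python
# Hypothetical sketch: a per-client map of auxiliary content sources, so the
# auto-retrieval system polls a different source set for each client use case.
# All identifiers and URLs are illustrative placeholders.

AUX_SOURCES = {
    "client-1": ["https://example.com/forum-feed", "https://example.com/blog-feed"],
    "client-2": ["https://example.com/social-feed"],
}

def sources_for(client_id):
    """Return the auxiliary content sources configured for a given client."""
    return AUX_SOURCES.get(client_id, [])

print(len(sources_for("client-1")))  # 2
```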
Several embodiments include at least one client input / output manager 700 configured for communication with one or more client analysis result destinations 800, at least one content processing and/or analysis system 500, and possibly a set of associated content analysis databases 600. A client input / output manager 700 can include a computing system or device that is configurable for determining or establishing for a given client at least one client use case that references, identifies, defines, specifies, or includes (a) a client identifier; (b) information content reception and/or retrieval parameters that correspond to or define one or more manners in which information content is to be received and/or retrieved from participant content sources 100 and/or auxiliary content sources 400; (c) an anchor, source, original, initial, or default set of query words, phrases, and/or expressions that are relevant to the client, and which facilitate or enable information content analysis processes or operations as further described below; and (d) dissemination and/or notification parameters that indicate how information content analysis results are to be made available, provided, output, stored, distributed, or presented. In an embodiment, a client use case can include or be defined as a set of client use case parameters referenced by or stored in a data structure such as a table. In some embodiments, particular information content analysis results corresponding to a given client can be published to one or more databases (e.g., a client internal database); and/or transferred to one or more types of client systems or devices such as client computer systems, tablet computers, or mobile telephones. A client input / output manager 700 can provide a set of client or system administrator user interfaces (e.g., visual or graphical user interfaces) to facilitate or enable client use case definition and modification.
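A client use case holding the four parameter groups (a)-(d) above can be sketched as a simple record. The field names and sample values are assumptions chosen for illustration; the disclosure only specifies that the parameters can be stored in a data structure such as a table.

```python
# Minimal sketch of a client use case as a parameter record: client identifier,
# reception/retrieval parameters, anchor query expressions, and dissemination
# parameters. Field names and values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ClientUseCase:
    client_id: str
    reception_params: dict       # (b) how content is received and/or retrieved
    anchor_queries: list         # (c) default query words/phrases/expressions
    dissemination_params: dict   # (d) how analysis results are delivered

uc = ClientUseCase(
    client_id="acme-retail",
    reception_params={"sources": ["mobile", "qr"], "retrieval_interval_min": 15},
    anchor_queries=["checkout wait", "staff friendliness"],
    dissemination_params={"publish_to": "client-db", "notify": "email"},
)
print(uc.client_id)  # acme-retail
```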
In certain embodiments, information content analysis results corresponding to a given client are associated with, made available by way of, or distributed or published in accordance with an Application Program Interface (API), which can facilitate the definition or generation of client-customized or client-specific application software or interfaces (e.g., in accordance with client requests or requirements). A client input / output manager 700 can provide a system administrator or other individual with access to such an API and possibly associated software tools (e.g., client visual interface definition or programming software) or toolkits.
An individual may need to be incentivized in order to act as a participant. That is, an individual may need to be provided with some type of motivation to generate participant information content corresponding to the individual's experience relating to an event, situation, product, service, transaction, or client under consideration. Certain embodiments in accordance with the present disclosure include one or more incentive management systems 900 configured to manage incentive, thank-you reward, or similar types of programs by which participants can receive coupons, discounts, reward point accruals, or the like. A given incentive management system 900 can be provided or operated by a particular client, or a third-party. In some embodiments, an incentive management system 900 can communicate with one or more content reception and/or decoding systems 200 and/or client input / output managers 700 to facilitate the distribution of incentive awards, coupons, discounts, or the like to participants.
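The reward-point accrual managed by an incentive management system 900 can be sketched as a simple account update triggered when participant content is received. Account structure and point values are assumptions, not from the disclosure.

```python
# Illustrative sketch of incentive accrual: on receipt of participant feedback,
# thank-you points are credited to the participant's incentive account.
# The account mapping and point value are hypothetical.

def credit_incentive(accounts, participant_id, points):
    """Accrue thank-you points for a participant upon received feedback."""
    accounts[participant_id] = accounts.get(participant_id, 0) + points
    return accounts[participant_id]

accounts = {}
balance = credit_incentive(accounts, "participant-7", 50)
print(balance)  # 50
```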
As indicated in FIG. 1A, particular systems, subsystems, apparatuses, devices, or elements of the architecture 10 are configured for communication with other systems, subsystems, apparatuses, devices, or elements by way of a set of networks 80. Depending upon embodiment details, the set of networks 80 can include public and/or private networks such as one or more of a Local Area Network (LAN), a Wide Area Network (WAN), the Internet, a telephone network (e.g., a mobile telephone network), and a satellite network. Portions of an architecture 10 in accordance with an embodiment of the disclosure can be configured or reconfigured in various manners, such as in accordance with embodiment details; technological constraints or advances; and/or the type(s) of participants, participant content sources 100, client(s), and/or client analysis result destination(s) 800 under consideration. Additionally, portions of one or more systems, subsystems, apparatuses, devices, or elements within the architecture 10 can be distributed across, combined with, or consolidated within one or more portions of other systems, subsystems, apparatuses, devices, or elements of the architecture 10.
In various embodiments, participant information content sources 100 are configured for capturing audio content. Depending upon embodiment details, participant information content sources 100 can be configured for capturing additional and/or other types of information content, which can also be subjected to information content analysis. FIG. 1B is a schematic illustration of the architecture 10 of FIG. 1A, in which participant content sources 100 include a set of participant audio content sources 100a; a set of participant audio / visual content sources 100b; and a set of participant textual content sources 100c. In some embodiments, a single participant information content source 100 such as a mobile telephone can selectively capture or provide audio, audio / visual, or textual participant information content for subsequent analysis, for instance, based upon participant preference or input, and/or a client use case under consideration.
Aspects of Representative Event Identifiers and Associated Participant Content Sources
In association with the foregoing, in several embodiments participant content sources 100 can be configured to use, access, receive, retrieve, or capture one or more types of event identifiers 102. Representative non-limiting types of event identifiers 102 and participant content sources 100 that can be configured to use, receive, access, and/or capture such event identifiers 102 are described in detail hereafter with respect to FIGs. 2A - 2D.
FIG. 2A is a schematic illustration of a printable or distributable physical medium 104 such as a transaction record, sales receipt, or other type of document or tangible item (e.g., an external surface of a product package or container, such as a food or beverage container) that carries or includes an event identifier 102. In various embodiments, an event identifier 102 corresponds to or provides a machine-readable, machine-communicable, and/or machine-processable image or symbol pattern that can be captured or received by a participant content source 100, and which can facilitate the provision of participant feedback, viewpoints, or opinions relating to an event, situation, product, service, transaction, or client (e.g., business or organization) under consideration. In some embodiments, an event identifier 102 includes a matrix or two-dimensional barcode (e.g., a Quick Response (QR) code, in accordance with International Standard ISO / IEC 18004) configured to encode a set of transaction parameters corresponding to an event, situation, product, service, transaction, or client, such as a purchase of a given client's product or a payment for a client's professional service. Such transaction parameters can include a client identifier or code (e.g., a business or organization identifier or code, which can correspond to a given business division, location, or site); a date / time stamp; an event, situation, product, package / container, service, or transaction descriptor or code; and/or other information. Transaction parameters can serve as or be used to generate metadata that establishes a correspondence between participant information content and a client related event.
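The transaction parameters named above (client identifier, date / time stamp, event descriptor) can be carried as a structured payload inside an event identifier. The JSON payload layout below is an assumption for illustration; a real QR-code event identifier would additionally apply ISO / IEC 18004 symbol encoding on top of such a payload.

```python
# Hedged sketch: serializing and parsing the transaction parameters of an
# event identifier payload. Field names and the JSON layout are assumptions;
# QR symbol encoding itself (ISO/IEC 18004) is out of scope here.

import json
from datetime import datetime, timezone

def make_event_identifier_payload(client_code, event_code):
    params = {
        "client": client_code,   # business/organization (or division/site) code
        "ts": datetime(2012, 12, 14, tzinfo=timezone.utc).isoformat(),
        "event": event_code,     # event/transaction descriptor or code
    }
    return json.dumps(params)

def parse_event_identifier_payload(payload):
    """Recover transaction parameters, usable as metadata for captured content."""
    return json.loads(payload)

payload = make_event_identifier_payload("store-042", "purchase")
meta = parse_event_identifier_payload(payload)
print(meta["client"])  # store-042
```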
An appropriate type of participant content source 100 such as a mobile telephone, tablet computer, or other type of electronic or computing system, apparatus, or device can capture an image of or otherwise receive or retrieve an event identifier 102, for instance, by way of a camera carried by the participant content source 100 as indicated in FIG. 2A. In several embodiments, a set of program instructions or software such as a downloadable application program or an application plug-in, add-on, or add-in can be executed by a processing or control unit of the participant content source 100, and can decode the event identifier 102 and correspondingly generate, provide access to, and/or manage or control the operation of one or more types of user interfaces by which the participant or the participant content source 100 can direct participant information content to a content reception and/or decoding system 200, where such participant information content corresponds to customer or user feedback pertaining to an event, situation, product, service, or transaction associated with a particular client.
In some embodiments, such as that shown in FIG. 2B, following or in response to the participant content source's capture or reception of an event identifier 102, and/or participant selection of the displayed event identifier 102 or an image or icon associated therewith, the participant content source 100 can provide a user feedback interface 110 by which the participant can generate or provide one or more types of participant information content corresponding to customer or user feedback. Depending upon embodiment details, such participant information content can include one or more of audio content, textual content, and possibly audio / visual content. Correspondingly, in response to participant interaction with the user interface 110, the participant content source 100 can generate or provide participant access to an appropriate type of user feedback interface, such as an audio feedback interface, a text feedback interface, or an audio / visual feedback interface by which the participant can direct or submit their feedback, comments, opinions, suggestions, complaints, or ratings to a content reception and/or decoding system 200. While some embodiments provide participant access to a predetermined type of user feedback interface, other embodiments provide participant access to multiple types of feedback interfaces, any one of which can be activated in response to participant selection.
Any given user feedback interface can facilitate the local and/or remote capture of participant feedback. More particularly, in association with or following the generation of participant information content corresponding to an event, situation, product, service, transaction, and/or client under consideration, the participant content source 100 can establish communication with an appropriate content reception and/or decoding system 200 (e.g., by way of one or more networks 80) to facilitate or effectuate the transfer of audio, textual, and/or audio / visual feedback from the participant content source 100 to the content reception and/or decoding system 200.
In a representative implementation, a text feedback interface can be provided by an SMS interface or a text editor application. An audio feedback interface can be provided by a set of local and/or remote voice recording applications, for instance, configured to execute on the participant content source 100 and/or a content reception and/or decoding system 200, an Interactive Voice Response (IVR) system, or a voice messaging system. An audio / visual feedback interface can be provided by a set of local and/or remote audio / visual capture applications (e.g., Skype mobile™), configured to execute on the participant content source 100 and/or a content reception and/or decoding system 200. As indicated above, individuals may need to be incentivized in order to act as participants and generate participant information content corresponding to an event, situation, product, service, transaction, and/or client. In some embodiments, in association with or following a content reception and/or decoding system's receipt of participant information content corresponding to an event identifier 102, the content reception and/or decoding system 200 can communicate with an incentive management system 900 to facilitate the transfer of a discount code or coupon, the accrual of thank-you points, or other type of incentive or incentive notification to the participant or a participant incentive account corresponding to received participant information content. Such incentive or incentive notice transfer can occur by way of issuance of an SMS message or e-mail directed to the participant, or appropriate adjustment of an incentive account (e.g., an electronic account such as an online shopping account or a virtual world account) corresponding to the participant. In addition or as an alternative to the foregoing, a participant content source 100 can be configured to receive an event identifier 102 in another manner, such as by way of an e-mail or Short Message Service (SMS) message.
A representative e-mail or SMS message 112 that includes a telephone number event identifier 102a as well as an Internet address event identifier 102b is shown in FIG. 2C. In some embodiments, an event identifier 102 can be communicated to a participant content source 100 in association with a mobile payment service. For instance, FIG. 2D illustrates a Contactless Near Field Communication (CNFC) device 114 that can be configured to process payments by way of a mobile telephone, and which can also be configured to communicate an event identifier 102 to such a device.
In view of the foregoing, the provision of an event identifier 102 to a participant content source 100, or the capture of an event identifier 102 thereby, facilitates the automated or semi-automated capture, collection, reception, and/or transfer of a participant's audio and/or other feedback, viewpoints, opinions, or impressions associated with an event, situation, product, service, transaction, or client, independent or exclusive of, in the absence of, or prior to, participant communication of such feedback to another individual such as a customer service representative or call center operator.
Aspects of Other Representative Types of Participant Content Sources
In certain embodiments, a participant content source 100 includes a portable or wearable device that is configured to capture audio content such as speech signals in an environment that a set of participants occupies. In an environment in which at least one wearable participant content source 100 is disposed, audio content can be captured from one or more participants, and subsequently be semantically analyzed to generate situational or strategic intelligence data relating to an event, situation, product, service, or transaction involving the participant(s) within the environment in which the wearable participant content source 100 is active. In certain embodiments, a portable or wearable participant content source 100 that is primarily configured for capturing audio content can also be configured for capturing visual content (e.g., digital images or pictures) at one or more times.
In the description that follows, particular representative types of portable or wearable participant content sources 100 configured as audio content capture devices 100a are described with reference to FIGs. 3A - 3C; and representative non-limiting examples of environments in which such audio content capture devices 100a can be utilized, activated, or deployed are described with reference to FIGs. 4A - 4C.
FIG. 3A is a block diagram of a representative audio content capture device 100a in accordance with an embodiment of the disclosure. In an embodiment, an audio content capture device 100a includes a power source 110; a processing unit 112; a set of microphones 114; a user interface 116; a docking station interface 118; an optional Radio Frequency Identification (RFID) unit 120; and a memory 130, which includes an audio signal capture / transfer module 132 and a captured content memory 134. Each element of the audio capture device 100a can be coupled to a common bus or internal communication pathway 140, and the elements of the audio content capture device 100a can be carried by or reside within a common housing 150. The power source 110 can be a battery (e.g., a replenishable or rechargeable battery), and the processing unit 112 can be a microcontroller, microprocessor, or the like. The set of microphones 114 can include one or more microphones configured to capture audio signals on an omnidirectional or semi-directional basis in an environment in which the housing 150 resides. The user interface 116 can include a set of buttons, switches, and/or a display device (e.g., a liquid crystal display (LCD)), and the docking station interface 118 can include a signal transfer interface such as a Universal Serial Bus (USB) interface by which captured audio signals can be transferred to a destination external to the audio content capture device 100a. The docking station interface 118 can also include a signal transfer interface by which the power source 110 can be recharged.
The audio signal capture / transfer module 132 can include a set of program instructions configured to manage, control, or direct aspects of the capture and storage of audio signals, and the transfer of captured audio signals to an external destination. The captured content memory 134 can be essentially any type of memory (e.g., a type of Random Access Memory (RAM)) that can receive and transfer digital audio signals and possibly other digital data. In a representative implementation, the captured content memory 134 can be configured to store at least approximately 30 minutes of audio content, or between approximately 1 - 4 hours of audio content. When the captured content memory 134 is full or nearly full, or when the power source 110 becomes depleted or requires recharging, the audio content capture device 100a can be coupled to a docking station to facilitate audio and/or other content transfer to an external destination and/or power source replenishment.
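The 30-minute to 4-hour capacity of the captured content memory 134 implies a concrete storage budget. The sketch below sizes uncompressed audio under assumed recording parameters (16 kHz, 16-bit, mono); the disclosure does not specify a sample rate or bit depth, so these figures are illustrative only.

```python
# Back-of-envelope sizing for the captured content memory 134: bytes needed
# to hold a given duration of uncompressed audio. Sample rate, bit depth, and
# channel count are assumptions (16 kHz, 16-bit, mono), not from the source.

def audio_bytes(minutes, sample_rate_hz=16_000, bytes_per_sample=2, channels=1):
    return minutes * 60 * sample_rate_hz * bytes_per_sample * channels

mb = audio_bytes(30) / 1_000_000
print(round(mb, 1))  # 57.6 (MB for 30 minutes under the assumed parameters)
```

Under these assumptions, 4 hours of audio would require roughly 460 MB, which suggests why periodic docking and transfer (or audio compression) would matter for a small wearable device.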
The RFID unit 120 can be configured to receive RFID information corresponding to objects, structures, or devices in the audio content capture device's physical environment. Such RFID information can be stored in the memory 130. For instance, the RFID unit 120 can be configured to receive RFID information or codes corresponding to products positioned in a retail sales environment when the audio content capture device 100a is close or proximate to such products. RFID information can form portions of metadata corresponding to captured audio content.
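Attaching nearby RFID product codes as metadata on captured audio can be sketched as a simple tagging step. The segment structure and field names below are assumptions for illustration.

```python
# Illustrative sketch: RFID codes received near the capture device are stored
# as metadata on a captured audio segment, as the paragraph above describes.
# The segment dict layout is a hypothetical representation.

def tag_with_rfid(segment, rfid_codes):
    """Attach deduplicated, sorted RFID codes as metadata on an audio segment."""
    segment.setdefault("metadata", {})["rfid"] = sorted(set(rfid_codes))
    return segment

seg = tag_with_rfid({"audio": "..."}, ["SKU123", "SKU123", "SKU456"])
print(seg["metadata"]["rfid"])  # ['SKU123', 'SKU456']
```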
FIG. 3B is a schematic illustration of a representative docking station 160 configured for communication or coupling with at least one audio content capture device 100a in accordance with an embodiment of the disclosure. Depending upon embodiment details, a docking station 160 can operate in association with or under the control of a computer system (e.g., a desktop or laptop computer), or as a substantially independent device. In an embodiment, the docking station 160 includes a number of audio content capture device interfaces, each of which is configured for receiving or mating with an audio content capture device 100a to facilitate the transfer of captured audio content from an audio content capture device 100a to the docking station 160 and/or an external network destination such as a content reception and/or decoding system 200. Audio content capture device interfaces can additionally be configured for the transfer of power signals to an audio content capture device's power source 110. The docking station 160 further includes a processing unit, a memory, and a network interface unit configured to facilitate or enable the reception of captured audio content from audio content capture devices 100a, and the transfer of such captured audio content to one or more network destinations.
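The docking station's hand-off of captured content can be sketched as a loop that forwards staged audio files to a network destination. The send callable stands in for the network interface unit; all names are illustrative assumptions.

```python
# Minimal sketch of the docking station's content hand-off: captured audio
# files staged from docked devices are forwarded one by one to a network
# destination such as a content reception / decoding system. The `send`
# callable is a hypothetical stand-in for the network interface.

def forward_captured_content(staged_files, send):
    """Forward every staged captured-audio file; return the number sent."""
    sent = 0
    while staged_files:
        send(staged_files.pop(0))
        sent += 1
    return sent

staged = ["shift1_dev3.wav", "shift1_dev4.wav"]
received = []
count = forward_captured_content(staged, received.append)
print(count, len(received))  # 2 2
```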
FIG. 3C is a block diagram illustrating a set of representative docking station functional modules in accordance with an embodiment of the disclosure, where such functional modules can correspond to or include program instruction sets. In an embodiment, docking station functional modules include an audio content loader 170; a batch transfer module 175; a device administration manager 180; and a device environment manager 185. The audio content loader 170 is configurable for receiving or retrieving captured audio content files from a set of audio content capture devices 100a, and storing such captured audio files in the docking station memory. The batch transfer module 175 is configured for communicating or transferring captured audio files within the docking station memory to one or more content reception and/or decoding systems 200, and/or other destinations such as a captured speech library or database. The device administration manager 180 is configured for facilitating administrative operations, such as establishing audio content capture device 100a parameters that can include a device carrier or wearer identifier or a captured voice profile of the device carrier or wearer that can be used for device carrier or wearer authentication purposes. The device environment manager 185 provides an interface by which an administrator can establish or adjust environmental settings associated with one or more audio content capture devices 100a, such that the audio content capture device(s) 100a can adequately filter participant speech from other types of environmental sounds such as background noise. An audio content capture device 100a can include or incorporate various types of structural and/or functional aspects in addition or as an alternative to those described above. 
For instance, in some embodiments an audio content capture device 100a can include a digital camera configured for capturing and storing visual content, which can serve as information content to be analyzed, or metadata associated with audio content. In certain embodiments, an audio content capture device 100a can periodically initiate wireless transfer of captured information content to a docking station 160. In particular embodiments, an audio content capture device 100a can be powered by wireless power transfer techniques, or an audio content capture device 100a can be (re)charged by way of a powermat.
Aspects of Representative Wearable Audio Content Capture Device Environments

Audio content capture devices 100a in accordance with embodiments of the present disclosure can be utilized or deployed in a very wide variety of individual interaction communication contexts, situations, or environments. For instance, FIG. 4A is an illustration of a representative restaurant or food service environment in which a server or staff member (e.g., a waitress) 20a wears an audio capture device 100a that is configured to capture audio signals either continuously or when in the presence of a customer 20b.
An individual wearing or carrying an audio content capture device 100a, as well as one or more other individuals with whom communication takes place proximate to the audio content capture device 100a, can each be considered a participant. Thus, the waitress 20a and the customer 20b can each be participants, and the audio content capture device 100a worn by the waitress 20a can capture verbal communication between the waitress 20a and her customer(s) (e.g., during each interaction between the waitress and her customer(s) during a work shift), thereby facilitating subsequent processing and/or analysis of such verbal communication.
FIG. 4B is an illustration of a representative medical environment in which a medical professional such as a doctor 20c wears an audio content capture device 100a, which can capture audio signals corresponding to the medical professional's interactions with patients and/or colleagues.
FIG. 4C is an illustration of a representative law enforcement or security environment in which a law enforcement officer or security personnel 20e wears an audio content capture device 100a, which can capture audio signals corresponding to the officer's interactions with members of the public and/or colleagues.
Portable or wearable audio content capture devices 100a and one or more docking stations 160 can be deployed in a wide variety of other types of environments, including but not limited to representative environments such as a retail sales environment in which one or more sales associates interact with customers or potential customers; a hotel environment in which hotel personnel interact with hotel guests; a compliance environment such as a finance or banking environment in which a financial professional interacts with one or more individuals; an assessment environment, such as an insurance assessment environment involving an insurance assessor (e.g., undertaking an automobile or home damage assessment); a counseling environment in which a therapist or counselor interacts with one or more patients (e.g., an individual or group counseling or behavioral therapy environment); and a corporate meeting or director environment such as a board meeting in which board members interact with each other and/or company personnel. In any of the foregoing representative environments, or in any type of environment in which a portable or wearable participant content source 100 such as an audio content capture device 100a can be utilized or deployed, audio signals and possibly visual information corresponding to or associated with an individual / participant carrying or wearing the audio content capture device 100a and one or more participants with whom they interact (e.g., customers, colleagues, subjects, or members of the public) can be captured or recorded. Such audio signals and associated metadata can be transferred to a content reception and/or decoding system 200 and subjected to processing and/or analysis by a content processing and/or analysis system 500.
In view of the foregoing, portable or wearable participant content sources 100 such as audio content capture devices 100a facilitate or enable the direct or immediate capture, collection, and/or reception of real-time, face-to-face, and/or interpersonal interactions, conversations / discussions, or engagements between participants, in essentially any type of environment for which processing or analysis (e.g., semantic analysis) of real-time, face-to-face, and/or interpersonal participant information content could be useful. A portable or wearable participant content source 100 such as an audio capture device 100a can further facilitate the direct capture, collection, and/or reception of participant information content in a real-time manner that is independent or exclusive of, or prior to, a participant's communication of participant information content to one or more other individuals (e.g., a customer service agent or call center operator) after an interaction, conversation, or engagement with another participant. Thus, in accordance with an embodiment of the present disclosure, a portable or wearable participant content source 100 facilitates the capture, collection, and/or reception of participant information content directly from the participant(s) involved in the generation of participant information content, as the generation of the participant information content occurs.
As a result of the capture, collection, reception, transfer, and/or analysis of participant information content corresponding to real-time or face-to-face interactions, systems in accordance with embodiments of the present disclosure can process or analyze one or more participants' real-time, "first hand," and/or interpersonal experiences or accounts relating to an event, situation, product, service, transaction, or client, thereby reducing, minimizing, or eliminating a likelihood of misinterpreting situational, contextual, or semantic aspects of participant information content or generating inaccurate or erroneous information content analysis results, for instance, that can arise from processing or analyzing participant information content corresponding to separate, non-face-to-face, "second-hand," "third-hand," or other less direct or indirect accounts of an event or situation.
Aspects of Participant Information Content File and Associated Metadata Generation

Any given type of participant content source 100 can generate a participant content file in association with capturing or recording participant information content. Participant content sources 100 can provide, communicate, or transfer one or more participant content files to one or more content reception and/or decoding systems 200. In general, a participant content file includes or references content signals or data that represent or form portions of participant generated information content itself, plus corresponding metadata that establishes a set of associations or relationships between the content signals or data and one or both of (a) a content generation context such as an event, situation, product, service, transaction, and/or environment; and (b) one or more clients.
Content signals or data can include audio and/or visual signals or data (e.g., video, text, and/or graphical data). Metadata can include one or more of a filename; a content creation, reception, and/or communication time and/or date; one or more client identifiers (e.g., a business location, or a staff member identifier or name); one or more participant identifiers such as a participant name, a participant address, a telephone number, and an email address; a participant content source identifier (e.g., a device type or identifier); and other information.
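The pairing of content signals with the metadata categories enumerated above can be sketched as a simple data structure; the field names below are illustrative assumptions, not identifiers from the specification:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ParticipantContentFile:
    """Hypothetical container pairing content signals or data with
    associated metadata; field names are invented for illustration."""
    content: bytes                                            # audio and/or visual signal data
    filename: str
    created: str                                              # creation time/date, e.g. ISO 8601
    client_ids: List[str] = field(default_factory=list)       # e.g. business location, staff member
    participant_ids: List[str] = field(default_factory=list)  # e.g. name, address, telephone, email
    source_id: Optional[str] = None                           # capture device type or identifier

pcf = ParticipantContentFile(
    content=b"",                        # captured audio bytes (placeholder)
    filename="shift_042.wav",
    created="2012-12-14T09:30:00",
    client_ids=["restaurant-17", "staff-20a"],
    source_id="audio-capture-100a",
)
```

The metadata travels with the content through the pipeline, so downstream systems can relate decoded text back to a content generation context and one or more clients.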
Aspects of Representative Content Reception, Processing / Analysis, and Other Systems

Each of a content reception / decoding system 200, a content auto-retrieval system 300, and a content processing / analysis system 500 can correspond to or include one or more physical and/or virtual computer systems, apparatuses, and/or devices providing access to or having dedicated or shared processing, network communication, data storage, and memory resources. One or more of such systems can correspond to a server, a server farm, or a set of distributed computing resources such as cloud computing resources. Additionally, each of such systems, apparatuses, and/or devices can utilize, correspond to, provide access to, and/or include memory-resident or device-resident program instruction sets or software, which are executable by way of processing resources to facilitate, manage, control, or perform content reception, retrieval, processing, and/or analysis processes or services in accordance with particular embodiments of the present disclosure. Representative manners in which particular program instruction sets, software modules, and/or computing layers corresponding to content reception / decoding systems 200, content auto-retrieval systems 300, and content processing / analysis systems 500 in accordance with certain embodiments of the present disclosure can be configured to cooperatively facilitate, manage, control, provide, or perform such processes or services are described in detail hereafter.
FIG. 5 is a schematic illustration of portions of a content reception and/or decoding system 200 in accordance with an embodiment of the present disclosure. In an embodiment, the content reception and/or decoding system 200 is configured to receive and/or retrieve information content from a number of sources, devices, and/or locations, and perform content decoding operations including speech-to-text conversion operations in which speech is initially decoded by way of matching and converting tone impulses to one or more of terms / words, phrases, portions of sentences, and sentences. The content reception and/or decoding system 200 can include a content reception and transfer manager 210, a speech decoder 220, a set of acoustic and linguistic models 230, a scoring and pruning module 240, and one or more local databases 290. Each of the content reception and transfer manager 210, the speech decoder 220, the acoustic and linguistic model(s) 230, and the scoring and pruning module 240 can correspond to or include one or more program instruction sets.
The content reception and transfer manager 210 oversees information content and decoded information content communication operations, and can serve as (a) an information content recipient or destination with respect to participant content sources 100 and one or more content auto-retrieval systems 300; and (b) a decoded information content source or origin with respect to a content processing / analysis system 500 and/or a content analysis database 600. In a number of embodiments, the content reception and transfer manager 210 facilitates, coordinates, oversees, or directs the reception of participant content files; the transfer of information content such as audio content contained therein to an appropriate content decoding, interpretation, or recognition engine, such as the speech decoder 220; and the transfer of decoded participant content files to local and/or remote databases 290, 600. Any given decoded participant content file can include textual data corresponding to decoded audio content, and particular content file metadata.
The speech decoder 220 can include or be a speech decoding and/or recognition engine, such as a Weighted Finite State Transducer (WFST) or other type of speech decoding engine, which in association with the scoring and pruning module 240 utilizes the set of acoustic and linguistic models 230 to identify decoded speech results with a highest confidence level and output corresponding textual data in a manner understood by one of ordinary skill in the relevant art. Decoded speech results can include textual data that forms portions of a decoded participant content file. In view of the foregoing, in various embodiments the content reception and/or decoding system 200 is configured to manage or perform operations including the conversion of captured participant speech content and/or acquired auxiliary speech content to text by way of speech-to-text (STT) processes or operations.
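The cooperation of the speech decoder 220 and the scoring and pruning module 240 can be sketched as follows; the hypothesis texts and log scores below are hypothetical stand-ins for a real WFST decoder's lattice, and all names are assumptions for illustration:

```python
def decode_utterance(hypotheses, beam_width=3):
    """Illustrative scoring-and-pruning step. Each hypothesis carries an
    acoustic-model log score and a linguistic-model log score; the decoder
    keeps the top `beam_width` candidates and emits the text with the
    highest combined confidence."""
    scored = [(am_score + lm_score, text) for text, am_score, lm_score in hypotheses]
    pruned = sorted(scored, reverse=True)[:beam_width]   # prune low-confidence results
    best_score, best_text = pruned[0]
    return best_text, best_score

hypotheses = [
    # (candidate text, acoustic-model score, linguistic-model score)
    ("can you guarantee this", -12.0, -3.1),
    ("can you guaranty this",  -12.5, -6.0),
    ("ken you guarantee this", -14.0, -8.2),
]
text, confidence = decode_utterance(hypotheses)
```

Here the linguistic model favors "guarantee" over the homophone "guaranty", so the combined score selects the first hypothesis even though the acoustic scores are close.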
FIG. 6 is a schematic illustration of portions of a content auto-retrieval system 300 in accordance with an embodiment of the present disclosure. In an embodiment, a content auto-retrieval system 300 includes a content acquisition and transfer manager 310, a set of web crawlers or spiders 320, a services retrieval module 330, and a structured data retrieval module 340, each of which can correspond to or include one or more program instruction sets. The content auto-retrieval system 300 can further include one or more local databases 390. The content acquisition and transfer manager 310 manages or oversees the automatic retrieval or receipt of auxiliary information content from one or more sources, and the communication or distribution of retrieved auxiliary information content to one or more destinations. In some embodiments, the content acquisition and transfer manager 310 can communicate with a content analysis database 600 to determine, for one or more client use cases, particular types and/or sources of auxiliary information content to be retrieved. The content acquisition and transfer manager 310 can correspondingly manage, coordinate, define, or select content retrieval processes or operations performed by the web crawler(s) / spider(s) 320, the services retrieval module 330, and the structured data retrieval module 340 to facilitate or enable the acquisition, retrieval, or receipt of auxiliary information content in a manner that depends upon any given client use case under consideration.
In a number of embodiments, the content acquisition and transfer manager 310 can issue content acquisition instructions to the web crawler(s) / spider(s) 320, the services retrieval module 330, and the structured data retrieval module 340. Such instructions can include (a) a client use case reference or identifier that forms a portion of metadata corresponding to auxiliary information content; and (b) a reference to an auxiliary information content location or address (e.g., a uniform resource locator (URL)) or another type of information (e.g., a login and/or password) that facilitates or enables access to auxiliary information content. The web crawler(s) / spider(s) 320, the services retrieval module 330, and the structured data retrieval module 340 can correspondingly interpret or decode an acquisition instruction; access, acquire, retrieve, or request auxiliary information content; and store such auxiliary information content as portions of an auxiliary content file within a local database 390. Additional metadata and/or predefined objects such as stored procedures or user access or authentication rights corresponding to acquired auxiliary information content can be inherited from the auxiliary information content itself. Following auxiliary information content retrieval, or on a periodic or as-needed basis depending upon one or more auxiliary information content update intervals, the content acquisition and transfer manager 310 can transfer one or more auxiliary content files to a content processing / analysis system 500, a content analysis database 600, and/or a content reception and/or decoding system 200. In a number of embodiments, the content acquisition and transfer manager 310 can communicate auxiliary content files to particular destinations based upon the type(s) of auxiliary information content within such files.
For instance, in embodiments in which automatically retrieved auxiliary information content includes text data, the content acquisition and transfer manager 310 can communicate a raw, original, unstructured, or semi-structured auxiliary content file that includes auxiliary text data and any associated metadata to a content processing / analysis system 500 and/or a content analysis database 600. In embodiments in which automatically retrieved auxiliary information content includes audio data, the content acquisition and transfer manager 310 can communicate an auxiliary content file that includes auxiliary audio data and any associated metadata to a content reception and/or decoding system 200, which can subsequently perform speech-to-text operations upon the auxiliary audio data to generate a decoded auxiliary content file.
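The type-based routing just described can be sketched in a few lines; the destination strings are placeholders for the systems shown in FIGs. 5 and 7, not identifiers from the specification:

```python
def route_auxiliary_file(aux_file):
    """Sketch of type-based routing of auxiliary content files.
    Destination names are illustrative placeholders."""
    content_type = aux_file.get("type")
    if content_type == "text":
        # Raw / unstructured text goes straight to processing and analysis.
        return "content_processing_analysis_system_500"
    if content_type == "audio":
        # Audio is first decoded (speech-to-text) before analysis.
        return "content_reception_decoding_system_200"
    raise ValueError(f"unsupported auxiliary content type: {content_type!r}")

dest = route_auxiliary_file({"type": "audio", "metadata": {"use_case": "client-42"}})
```

Routing audio through the decoding system first means every file that reaches the processing / analysis system carries text, which simplifies the downstream tokenization and query stages.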
FIG. 7 is a schematic illustration of portions of a content processing and/or analysis system 500 in accordance with an embodiment of the present disclosure. In an embodiment, a content processing and/or analysis system 500 includes at least some of a communication module 510; an indexing module 515; a tokenization / parsing module 520; a natural language processing (NLP) and/or expression builder 525; a data mining module 530; a categorization module 535; a relevancing module 540; a knowledge extraction and/or discovery module 545; a set of use case processing agents 550; a set of use case learning modules 555; and a structured data conversion module 560, each of which can correspond to or include one or more program instruction sets. The content processing and/or analysis system 500 additionally includes one or more local databases and/or local caches 590.
The communication module 510 manages or coordinates communication between the content processing / analysis system 500 and other systems, apparatuses, or devices. In several embodiments, the communication module 510 manages or coordinates (a) the reception of decoded participant content files and auxiliary content files, which can include decoded auxiliary audio content and/or original text data; (b) communication between the content processing / analysis system 500 and portions of a content analysis database 600; and (c) communication between the content processing / analysis system 500 and a client input / output manager 700.
The indexing module 515 can index auxiliary information content, such as original textual data received from a content auto-retrieval system 300. The tokenization / parsing module 520 can organize words within received content files (e.g., which include decoded participant content, decoded auxiliary content, and/or original auxiliary content) in accordance with known word classifications such as noun, name, place, and/or other classifications to generate semi-structured data. The natural language processing and/or expression builder 525 can then query the semi-structured data with one or more anchor query words, phrases, and/or expressions specified by an anchor set of query words, phrases, and/or expressions for a client use case under consideration. The natural language processing and/or expression builder 525 can additionally augment or expand the query using related or modified words, phrases, and/or expressions by way of natural language processing techniques to reflect or collect more natural forms of data input, in a manner understood by one of ordinary skill in the relevant art. Natural language processing widens a range or circle of permissible query or search results by including, with varying degrees of confidence, words, phrases, and/or expressions that are grammatically or contextually (e.g., with respect to logical context) related to the anchor word(s), phrase(s), or expression(s). In various embodiments, the natural language processing and/or expression builder 525 can augment or expand words, phrases, and/or expressions in multiple dimensions or directions, such as linguistic, acoustic / phonetic, and temporal dimensions. For instance, an anchor expression can be augmented or expanded by linguistic extension of a search phrase.
Additionally or alternatively, an anchor expression can be augmented or expanded based upon one or more manners in which a phrase was spoken or emphasized, for instance, in accordance with a set of acoustic and/or phonetic measures that can indicate whether the phrase was spoken actively or passively or with positive or negative intonations, such that the search can locate relevant data or differentiate words to detect concepts such as irony or sentiment polarity. Furthermore, an expression can be augmented or expanded based upon a temporal context corresponding to the phrase, such as how often a phrase under consideration was uttered within a given period of time.
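A minimal sketch of such query augmentation follows; the association table and confidence weights are invented for illustration (the "nausea" / "dizzy spells" pairing is the specification's own example, introduced below), and a production system would instead consult the content analysis database or external libraries:

```python
# Hypothetical association table; a production system would consult the
# content analysis database, external libraries, or accumulated statistics.
RELATED_TERMS = {
    "nausea": [("dizzy spells", 0.6), ("queasiness", 0.5)],
    "guarantee": [("assurance", 0.6), ("promise", 0.5)],
}

def expand_query(anchor_terms):
    """Widen an anchor query with related words/phrases, each admitted with
    a lower confidence weight than the anchor itself (weights illustrative)."""
    expanded = [(term, 1.0) for term in anchor_terms]   # anchors: full confidence
    for term in anchor_terms:
        expanded.extend(RELATED_TERMS.get(term, []))
    return expanded

query = expand_query(["nausea"])
```

The confidence weights let downstream scoring rank results matched through an expansion term below results matched by the anchor term itself.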
The content processing / analysis system 500 can thus identify or detect concepts that are of particular importance to a participant, such as identical, similar, or analogous concepts that the participant repeats multiple times (e.g., within shorter time frames) by way of natural language processing, expression building, and associated query augmentation or expansion. As a representative example, if captured participant information content includes phrases such as "can you guarantee this?", "you really can guarantee this?", and "are you sure you can guarantee it", where such phrases are possibly spoken in a short time interval, such phrases indicate persistence on the part of a participant seeking reassurance. The system 500 can identify such spoken activity as highly important to the participant, which in this representative example indicates that the participant rates a client's ability to guarantee a service or product specification (e.g., corresponding to meeting a deadline) as very important. Depending upon embodiment details, the expression builder 525 can access or reference the content analysis database 600; external or third party libraries or databases; its own local database or library 590 it builds over time; or services to convert expressions into augmented or wider relevant search expressions. As a representative example, if captured participant content corresponds to a medical environment involving doctor - patient conversations, and an anchor word, phrase, or expression corresponds to or includes "nausea," the expression builder can augment or extend a query to include words, phrases, or expressions corresponding to or including "dizzy spells" as these two patient states statistically occur together often or relatively often. A source of such statistics can be participant information content captured over time, external databases, or a historical analysis of anchor terms input by or on behalf of one or more clients. 
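The repeated-guarantee example above can be approximated with a simple similarity-and-time-window check; this Python sketch uses fuzzy string matching as a crude stand-in for the semantic matching described, and all thresholds are assumptions:

```python
from difflib import SequenceMatcher

def persistent_concepts(utterances, window=60.0, min_repeats=2, similarity=0.5):
    """Flag phrases a participant repeats (in identical or similar form)
    within a short time window, as an indicator of concepts the participant
    rates as important. Thresholds are illustrative assumptions."""
    flagged = []
    for i, (t_i, phrase_i) in enumerate(utterances):
        repeats = 0
        for t_j, phrase_j in utterances[i + 1:]:
            if t_j - t_i > window:
                break                      # utterances are time-ordered
            if SequenceMatcher(None, phrase_i, phrase_j).ratio() >= similarity:
                repeats += 1
        if repeats >= min_repeats:
            flagged.append(phrase_i)
    return flagged

# The specification's own example of participant persistence (times in seconds):
utterances = [
    (0.0,  "can you guarantee this"),
    (15.0, "you really can guarantee this"),
    (40.0, "are you sure you can guarantee it"),
]
important = persistent_concepts(utterances)
```

Three similar phrasings within the window cause the first occurrence to be flagged as a concept of high importance to the participant.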
Results from the original and any augmented or expanded queries can be mined for patterns by the data mining module 530, and further grouped or categorized by the categorization module 535, such as by way of a statistical data mining and/or categorization technique (e.g., a naive Bayes classifier or process). The relevancing module 540 can re-assess or evaluate the relevance of the search results (e.g., as categorized by the categorization module 535) in accordance with particular circumstantial criteria or parameters. Such criteria can include one or more of a role of a participant in a conversation; a history of anchor words, phrases, or expressions and corresponding queries or searches performed in a prior or recent time interval (e.g., corresponding to searches performed for or by a client over the past few days or weeks); recent client web browsing history; and the headings and/or bodies of recent client e-mails to identify words, phrases, expressions, or concepts that are currently most relevant or expected to be most relevant to a client under consideration. Such client-related history information can be made available or accessible to a client input / output manager 700, which can store client-related history information within the content analysis database 600.
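The naive Bayes categorization step named above can be sketched as follows; the category labels and training snippets are invented examples, and a production system would use a full statistical toolkit rather than this minimal multinomial implementation:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesCategorizer:
    """Minimal multinomial naive Bayes sketch for grouping query results
    into categories; labels and training data are invented examples."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)   # category -> word frequencies
        self.doc_counts = Counter()               # category -> training documents

    def train(self, text, category):
        self.doc_counts[category] += 1
        self.word_counts[category].update(text.lower().split())

    def classify(self, text):
        vocab = len({w for counts in self.word_counts.values() for w in counts})
        total_docs = sum(self.doc_counts.values())

        def log_prob(category):
            counts = self.word_counts[category]
            total = sum(counts.values())
            score = math.log(self.doc_counts[category] / total_docs)
            for w in text.lower().split():
                # Laplace smoothing keeps unseen words from zeroing the score.
                score += math.log((counts[w] + 1) / (total + vocab))
            return score

        return max(self.doc_counts, key=log_prob)

nb = NaiveBayesCategorizer()
nb.train("the food was terrible and cold", "complaint")
nb.train("service was slow and rude", "complaint")
nb.train("wonderful meal great service", "praise")
nb.train("delicious food friendly staff", "praise")
category = nb.classify("the food was cold")
```

After categorization, a relevancing step such as that of module 540 could re-weight each category's results against client history and context.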
In certain embodiments, the knowledge extraction and/or discovery module 545 further analyzes search results to identify subtle, hidden, or previously unrecognized knowledge. For instance, a knowledge extraction and/or discovery module 545 can create or build a relational schema based upon search results.
The set of use case processing agents 550 can further refine and reprocess search results to arrive at a generally, reasonably, or fairly accurate set of information content analysis results. In some embodiments, the set of use case processing agents 550 analyzes or refines search results in accordance with client workflow definitions or parameters corresponding to business events, scenarios, situations, or transaction types. The set of use case processing agents 550 can identify a high recurrence of a particular phrase in search results, and can fetch this phrase and reprocess it with additional or other attributes or parameters. For instance, if the word or phrase "revenues" is frequently identified, the system 500 can query using words related to revenues. If the words or phrases "increase" and "decrease" are frequently identified, the system 500 can establish a word pair such as "revenue increase" and investigate word patterns found under a search based upon "revenue increase." The content analysis database 600 can include a scenario or vertical specific library that can prioritize specific phrases for a client use case under consideration. The use case learning module(s) 555 can be configured to seed priorities of phrases common to any given use case.
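The "revenue increase" word-pair refinement can be sketched as a small frequency check; the target term, modifier list, thresholds, and singular/plural normalization are assumptions for illustration:

```python
from collections import Counter

def refine_query_terms(result_texts, target="revenues",
                       modifiers=("increase", "decrease"), min_count=2):
    """Hypothetical refinement step: when a recurring target term and
    recurring modifier terms both appear in search results, establish word
    pairs (e.g. "revenue increase") for a narrower follow-up query."""
    counts = Counter(w for text in result_texts for w in text.lower().split())
    frequent = {w for w, c in counts.items() if c >= min_count}
    if target not in frequent:
        return []
    # Crude singular form for pairing; a real agent would use proper stemming.
    return [f"{target.rstrip('s')} {m}" for m in modifiers if m in frequent]

texts = [
    "quarterly revenues show an increase",
    "the revenues increase was unexpected",
    "analysts expect revenues to decrease next year",
    "a decrease would reverse the revenues trend",
]
follow_up = refine_query_terms(texts)
```

Each returned pair would then seed a fresh search whose results can be mined for word patterns, as described above.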
Information content analysis results can be locally stored in one or more local caches or databases 590, for instance, by the structured data conversion module 560.
FIG. 8 is a schematic illustration of portions of a client input / output manager 700 in accordance with an embodiment of the present disclosure. In an embodiment, a client input / output manager 700 includes one or more of a client use case definition manager 710; an information content analysis results dissemination manager 720; an application / GUI builder 730; a set of webservers 740; a messaging or notification manager 750; and a set of local databases 790. The client use case definition manager 710 provides a set of user interfaces that facilitate or enable the definition of one or more use cases corresponding to a given client. The results dissemination manager 720 provides at least one user interface that can be accessed by or on behalf of a client, and which facilitates the selection or definition of particular manners in which information content analysis results and/or notifications or alerts corresponding thereto are to be communicated to a client under consideration, such as by way of web publication, storage on a client database, association with a client specific application program, and/or issuance of a message or alert to a client device such as a mobile telephone. The application / GUI builder 730 provides a set of user interfaces by which particular information content analysis results can be associated with or provided to an application program or user interface that can be customized or tailored in accordance with client requirements. The application / GUI builder 730 can access or utilize a webservices architecture such as an API library to facilitate the creation of an appropriate client specific application program or user interface. The webserver(s) 740 can be configured to distribute or publish client relevant content over the Internet in one or more manners.
Finally, the messaging manager 750 can send portions of information content result sets, statistics or summaries corresponding thereto, and/or messages, notifications, or alerts that indicate certain types of information content analysis result conditions (e.g., which can be defined in accordance with client-specified alert triggers associated with a client or particular client use cases) to one or more client systems or devices. The messaging manager 750 can include an SMS and/or an e-mail gateway.
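The evaluation of client-specified alert triggers can be sketched as follows; the metric names, trigger fields, and thresholds are illustrative assumptions, and the resulting strings stand in for messages handed to an SMS or e-mail gateway:

```python
def evaluate_alert_triggers(result_summary, triggers):
    """Sketch of client-specified alert trigger evaluation: each trigger
    names a metric in an analysis-result summary and a threshold; matching
    triggers yield notification strings for a messaging gateway."""
    notifications = []
    for trigger in triggers:
        value = result_summary.get(trigger["metric"], 0)
        if value >= trigger["threshold"]:
            notifications.append(
                "ALERT [{client}]: {metric} = {value} (threshold {threshold})".format(
                    client=trigger["client"], metric=trigger["metric"],
                    value=value, threshold=trigger["threshold"]))
    return notifications

summary = {"complaint_mentions": 7, "guarantee_requests": 1}
triggers = [
    {"client": "restaurant-17", "metric": "complaint_mentions", "threshold": 5},
    {"client": "restaurant-17", "metric": "guarantee_requests", "threshold": 3},
]
alerts = evaluate_alert_triggers(summary, triggers)
```

Only the first trigger fires here; the second metric stays below its threshold, so no notification is generated for it.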
FIG. 9 is a schematic illustration of portions of a client result destination 800 in accordance with an embodiment of the present disclosure. A client result destination 800 can include one or more browsers 810, and/or particular message, notification, or alert reception services or processes corresponding to or executing on one or more types of client systems or devices. A client result destination 800 can additionally or alternatively include a client database 890. In addition or as an alternative to the foregoing description with reference to FIGs. 1A - 9, particular representative embodiments detailing structural and functional aspects of content reception and analysis systems are provided in Appendix A to this specification.
Aspects of particular embodiments of the present disclosure address at least one aspect, problem, limitation, and/or disadvantage associated with existing systems and techniques for information content reception and analysis. While features, aspects, and/or advantages associated with certain embodiments have been described in the disclosure, other embodiments may also exhibit such features, aspects, and/or advantages, and not all embodiments need necessarily exhibit such features, aspects, and/or advantages to fall within the scope of the disclosure. It will be appreciated by a person of ordinary skill in the art that several of the above-disclosed systems, components, processes, or alternatives thereof, may be desirably combined into other different systems, components, processes, and/or applications. In addition, a person of ordinary skill in the art may make various modifications, alterations, and/or improvements to the various disclosed embodiments within the scope and spirit of the present disclosure.

Claims

1. A method for processing information content, comprising:
capturing the information content through a content source;
saving the captured information content in a captured content memory;
semantically processing the captured information content with a content processing system;
identifying within the processed captured information content a predetermined analysis result condition; and
generating and providing a corresponding summary of the identified analysis result condition;
wherein the information content corresponds to any one of a face-to-face interaction between at least two participants and an occurrence in response to an event identifier.
2. The method of claim 1, wherein the face-to-face interaction between at least two participants occurs in any one of a law enforcement environment, a security environment, a retail sales environment, a compliance environment, a hotel environment, a finance environment, an assessment environment, a corporate environment, and a counseling environment.
3. The method of claim 1, wherein the event identifier is any one of a machine readable image, an email, a Short Message Service (SMS) message, and a Contactless Near Field Communication activation.
4. The method of claim 1, wherein the information content is any one of an audio content, a visual content, and a textual content.
5. The method of claim 4, wherein the information content is an audio content, and wherein the content is decoded through a speech-to-text conversion.
6. The method of claim 5, wherein acoustic and linguistic models are utilized during the speech-to-text conversion to obtain decoded speech results of high accuracy.
7. The method of claim 1, further comprising associating content metadata with the captured information content and saving the content metadata with the captured information content in the captured content memory.
8. The method of claim 1, wherein semantically processing the captured information content comprises matching and organizing word data in accordance with predetermined classification to obtain semi-structured data.
9. The method of claim 8, wherein semantically processing the captured information content further comprises running the semi-structured data through a natural language processing builder.
10. The method of claim 8, wherein semantically processing the captured information content further comprises data mining and categorizing the semi-structured data.
11. The method of claim 1, wherein the predetermined analysis result condition is associated with multiple occurrences of any one of a predetermined word, phrase, and expression within a period of time.
12. The method of claim 1, wherein the information content is captured through an auxiliary content source, and wherein the auxiliary content source corresponds to any one of an Internet website, an Internet blog, a social media website, a virtual community, and a professional data library.
13. The method of claim 12, wherein the information content is captured with a content acquisition manager operating a web crawler module in the auxiliary content source.
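The web-crawler module of claims 13 and 20 has, at its core, a link-extraction step that discovers further pages within an auxiliary source. As an offline sketch (network fetching, scheduling, and politeness are omitted; the URLs are placeholders), one page's links can be harvested with the standard library alone:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute hyperlinks from one page of an auxiliary source."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                # Resolve relative links against the page's base URL.
                self.links.append(urljoin(self.base_url, href))

def extract_links(base_url, html):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

page = '<a href="/blog/post-1">Post</a> <a href="https://other.example/x">X</a>'
links = extract_links("https://example.com/", page)
```

A content acquisition manager would feed the extracted links back into a fetch queue, repeating until the auxiliary source (website, blog, or data library) is covered.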
14. The method of claim 1, further comprising:
saving a plurality of captured information content with content metadata associated with a client in a captured content memory;
determining a client-specific analysis result condition with a client input/output manager;
semantically processing captured information content based on the client-associated content metadata and the client-specific analysis result condition;
generating a corresponding summary of the identified analysis; and
providing the corresponding summary with the client input/output manager.
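The client-specific flow of claim 14 — select a client's captured content by its metadata, apply that client's analysis condition, and summarize — can be sketched in a few lines. The record layout, client names, and the condition itself are hypothetical illustrations:

```python
# Hypothetical captured content memory: records tagged with client metadata.
CAPTURED = [
    {"client": "acme", "text": "refund refund refund", "case": "C-1"},
    {"client": "acme", "text": "all good today", "case": "C-2"},
    {"client": "beta", "text": "refund now", "case": "C-9"},
]

def summarize_for_client(records, client, condition):
    """Filter by client metadata, apply the client-specific condition,
    and build the corresponding summary for dissemination."""
    hits = [r for r in records
            if r["client"] == client and condition(r["text"])]
    return {"client": client,
            "matching_cases": [r["case"] for r in hits],
            "match_count": len(hits)}

summary = summarize_for_client(
    CAPTURED, "acme",
    condition=lambda text: text.split().count("refund") >= 2)
```

Only the first "acme" record satisfies this client's condition, so the summary handed to the client input/output manager names a single matching case.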
15. A system for processing information content, comprising:
a content source for capturing information content;
a captured content memory for saving the captured information content; and
a content processing system for:
semantically processing the captured information content;
identifying within the processed captured information content a predetermined analysis result condition; and
generating and providing a corresponding summary of the identified analysis result condition;
wherein the information content corresponds to any one of a face-to-face interaction between at least two participants and an occurrence in response to an event identifier.
16. The system of claim 15, further comprising a participant content source for capturing a participant information content, wherein the participant content source is any one of a mobile telephone, a personal computing device, a tablet computer, a portable speech capture device, and an audio/visual recording device.
17. The system of claim 16, further comprising a docking station for receiving the participant content source, the docking station comprising an audio content loader for receiving captured information content from the docked participant content source, and a transfer module for transferring the received captured information content to a captured content memory.
18. The system of claim 15, further comprising a client input/output manager, comprising a case definition manager and a results dissemination manager, the client input/output manager configured for communication with the content processing system and at least one client result analysis destination.
19. The system of claim 15, further comprising a content auto-retrieval system for automated retrieval of auxiliary information content from an auxiliary content source.
20. The system of claim 19, wherein the content auto-retrieval system further comprises a content acquisition manager operating a web crawler in the auxiliary content source.
PCT/SG2012/000475 2011-12-13 2012-12-13 Information content reception and analysis architecture WO2013089646A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161569788P 2011-12-13 2011-12-13
SG61/569,788 2011-12-13

Publications (1)

Publication Number Publication Date
WO2013089646A1 true WO2013089646A1 (en) 2013-06-20

Family

ID=48612958

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2012/000475 WO2013089646A1 (en) 2011-12-13 2012-12-13 Information content reception and analysis architecture

Country Status (1)

Country Link
WO (1) WO2013089646A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040148154A1 (en) * 2003-01-23 2004-07-29 Alejandro Acero System for using statistical classifiers for spoken language understanding
US7099855B1 (en) * 2000-01-13 2006-08-29 International Business Machines Corporation System and method for electronic communication management
US20070250497A1 (en) * 2006-04-19 2007-10-25 Apple Computer Inc. Semantic reconstruction
US20080270380A1 (en) * 2005-05-06 2008-10-30 Aleksander Ohrn Method for Determining Contextual Summary Information Across Documents
US20100169314A1 (en) * 2008-12-30 2010-07-01 Novell, Inc. Content analysis and correlation

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9710539B2 (en) 2014-03-20 2017-07-18 Tata Consultancy Services Limited Email analytics
DE102017215016A1 (en) 2017-08-28 2019-02-28 Henkel Ag & Co. Kgaa Structured washing or cleaning agent with yield point
EP3450530A1 (en) 2017-08-28 2019-03-06 Henkel AG & Co. KGaA Structured washing or cleaning agent having a flow limit

Similar Documents

Publication Publication Date Title
US10445351B2 (en) Customer support solution recommendation system
US11050700B2 (en) Action response selection based on communication message analysis
US20180032612A1 (en) Audio-aided data collection and retrieval
WO2018080781A1 (en) Systems and methods for monitoring and analyzing computer and network activity
US10720161B2 (en) Methods and systems for personalized rendering of presentation content
US20140156341A1 (en) Identifying potential customers using social networks
US11798539B2 (en) Systems and methods relating to bot authoring by mining intents from conversation data via intent seeding
US20210263978A1 (en) Intelligent interface accelerating
US11341337B1 (en) Semantic messaging collaboration system
US20180365552A1 (en) Cognitive communication assistant services
US20190188623A1 (en) Cognitive and dynamic business process generation
CN115053244A (en) System and method for analyzing customer contact
EP3387556B1 (en) Providing automated hashtag suggestions to categorize communication
CN112069409B (en) Method and device based on to-be-done recommendation information, computer system and storage medium
WO2017100010A1 (en) Organization and discovery of communication based on crowd sourcing
US11755848B1 (en) Processing structured and unstructured text to identify sensitive information
US9906611B2 (en) Location-based recommendation generator
WO2013089646A1 (en) Information content reception and analysis architecture
US10877964B2 (en) Methods and systems to facilitate the generation of responses to verbal queries
US11809481B2 (en) Content generation based on multi-source content analysis
US20220207038A1 (en) Increasing pertinence of search results within a complex knowledge base
US10938985B2 (en) Contextual preferred response time alert
JP2022542634A (en) Systems and methods for ethical collection of data
US11943189B2 (en) System and method for creating an intelligent memory and providing contextual intelligent recommendations
US20230409831A1 (en) Information sharing with effective attention management system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 12856983
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 20/08/2014)
122 Ep: pct application non-entry in european phase
Ref document number: 12856983
Country of ref document: EP
Kind code of ref document: A1